Call for Papers
The submission deadline was extended to 9 October 2020 (23:59 AoE).
Authors will be notified by Friday, 30 October 2020 (23:59 AoE).
We welcome all original research papers of up to 4 pages in length,
prepared using the template provided below. This page limit does not
include references or any supplementary materials. Reviewers are not
obliged to read supplementary materials when reviewing a paper.
Submissions should consist of a single file.
We also welcome extended abstracts of up to 2 pages in length that describe open problems, novel applications, or challenges in Topological Data Analysis, Topological Machine Learning, and related areas (using the template provided below). Again, this length does not include references or any supplementary materials.
We also permit papers that have been recently published or are under submission to another venue. Please mark such papers accordingly upon submission. The page limit for these submissions is 4 pages.
Selected papers will be presented as brief three-minute ‘spotlight’ talks. All authors of accepted papers will have the option to present their work in a live session.
This workshop is non-archival; even though all accepted papers will be available on OpenReview, there are no formally published proceedings.
We only accept submissions that have been prepared using LaTeX. Use the following workshop style files for your submission (the PDF file is provided as a preview of the expected style):
Scope and topics
Please find below a list of topics of interest, sorted alphabetically. If you are unsure whether your topic is a good fit for the workshop, feel free to contact us at email@example.com.
Approximations: Can we approximate parts of the computational pipeline, for example when calculating Vietoris–Rips complexes and their associated filtrations, or when calculating distances between persistence diagrams?
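To give a flavour of the computations this topic concerns, the following is a minimal, unoptimised sketch (not any particular library's implementation) of the zero-dimensional part of a Vietoris–Rips filtration: every point is born at filtration value 0, and a connected component dies when an edge first merges it into another one, which a union-find structure over the sorted pairwise distances detects.

```python
import math
from itertools import combinations

def rips_0d_persistence(points):
    """Zero-dimensional persistence pairs (birth, death) of the
    Vietoris-Rips filtration of a small Euclidean point cloud.

    Illustrative only: real TDA software avoids enumerating all
    O(n^2) edges explicitly for large inputs.
    """
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # All pairwise edges, sorted by length (the filtration value).
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )

    pairs = []
    for length, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:  # this edge merges two components
            parent[ri] = rj
            pairs.append((0.0, length))
    pairs.append((0.0, math.inf))  # the final surviving component
    return pairs
```

For instance, `rips_0d_persistence([(0, 0), (1, 0), (5, 0)])` yields two finite pairs, dying at edge lengths 1 and 4, plus one essential pair of infinite persistence.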
Benchmark data sets and software: What are suitable benchmark data sets to quickly compare different TDA methods in a reproducible manner? What is ‘our’ MNIST or CIFAR data set? How do the existing software tools for TDA measure up, and how do we address the software needs?
Beyond persistent homology: What other machinery from topology can be applied to machine learning, beyond persistent homology? How do we address the methodological and computational challenges?
Connections to learning theory: How can topology help in understanding complex models such as neural networks? How can we use this understanding to uncover the underlying principles of generalisation in the context of neural networks? How can we design appropriate regularisation strategies to encourage beneficial properties of learned mappings?
Current and future applications: In which projects and applications do topology-based approaches have a decisive ‘edge’ over alternative approaches, and how can we identify this ‘niche’ methodically?
Feature descriptors: How can we employ persistence diagrams in unsupervised or supervised machine learning scenarios? How can we improve their integration into our frameworks?
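One simple baseline for this question, sketched below under the assumption that a diagram is given as a list of (birth, death) pairs, is to map each persistence diagram to a fixed-length vector of summary statistics that any standard learning pipeline can consume; dropping infinite-persistence points is one common (lossy) convention, not the only one.

```python
import math

def diagram_features(diagram):
    """Map a persistence diagram, given as a list of (birth, death)
    pairs, to a fixed-length feature vector of summary statistics.

    Points with infinite death are discarded before computing the
    statistics over the remaining lifetimes.
    """
    lifetimes = [death - birth for birth, death in diagram
                 if math.isfinite(death)]
    if not lifetimes:
        return [0.0, 0.0, 0.0, 0.0]
    total = sum(lifetimes)
    return [
        float(len(lifetimes)),   # number of finite features
        total,                   # total persistence
        max(lifetimes),          # lifetime of the most persistent feature
        total / len(lifetimes),  # mean persistence
    ]
```

Such fixed-length vectors sidestep the fact that diagrams are multisets of varying size, at the cost of discarding most of their structure; more refined vectorisations are precisely what this topic invites.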
Higher-dimensional features: In many applications, we limit the dimension of topological features. What can we say about higher-dimensional features? When are they suitable? How can we approximate these efficiently?
Scaling and parallel processing: How can we scale TDA to the data set sizes that we encounter in machine learning?
Statistical approaches: What can we say about confidence estimates of topological features, in particular when there is an element of stochasticity in the input data generation?
TDA for evolving machine learning areas: How do we develop and use TDA approaches for evolving fronts of ML such as fair/explainable ML, causality, meta-learning, or adversarial examples (this is not an exhaustive list; TDA methods have cropped up in all of these domains)?
Time-varying topology: What are suitable characterisations of time-varying topological features? What kind of computational tricks can we use to speed up their calculation? What is a good perspective to analyse dynamical systems with TDA?