

We are happy to feature the following awesome speakers (this list is expanded on a rolling basis, stay tuned for more!):

Keynotes will be given by:

Serguei Barannikov

Topological obstructions in neural networks’ learning
We apply topological data analysis to study the loss surfaces of neural networks. We define a neural network's Topological Obstructions score ("TO-score") with the help of robust topological invariants (barcodes of the loss function) that quantify the "badness" of local minima for gradient-based optimization. We have carried out several experiments computing these invariants for small neural networks, and for fully connected, convolutional, and ResNet-like neural networks on different datasets: MNIST, Fashion-MNIST, CIFAR10, and SVHN. Our two principal observations are that 1) the neural network's barcode and TO-score decrease as the network's depth and width increase, and 2) there is an intriguing connection between the length of the minima's segments in the barcode and the minima's generalization error. The talk is based on joint works with G. Sotnikov, A. Korotin, D. Oganesyan, I. Trofimov, D. Emtsev and E. Burnaev.
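As a toy illustration of the barcode construction behind these invariants (our own sketch, not the authors' code), the following computes the 0-dimensional sublevel-set barcode of a loss function sampled along a one-dimensional slice: each local minimum opens a bar, and when two basins merge the younger one closes (the elder rule). The function name and encoding are ours.

```python
def sublevel_barcode_1d(f):
    """0-dimensional sublevel-set barcode of a function sampled on a line.

    Each local minimum gives birth to a basin; when two basins merge,
    the younger one (higher birth value) dies there -- the elder rule.
    The global minimum never dies (death = infinity).
    """
    n = len(f)
    parent = [None] * n      # None: index not yet entered the filtration
    birth = [0.0] * n        # birth value stored at each component root

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    bars = []
    for i in sorted(range(n), key=lambda k: f[k]):   # sweep by value
        parent[i], birth[i] = i, f[i]
        for j in (i - 1, i + 1):
            if 0 <= j < n and parent[j] is not None:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                old, young = (ri, rj) if birth[ri] <= birth[rj] else (rj, ri)
                if birth[young] < f[i]:              # skip zero-length bars
                    bars.append((birth[young], f[i]))
                parent[young] = old
    bars.append((min(f), float('inf')))
    return sorted(bars)
```

On the toy curve [2, 0, 3, 1, 4] this yields an essential bar for the global minimum plus the finite bar (1, 3) for the secondary minimum; the lengths of such finite bars are roughly the kind of quantity a badness score for local minima would aggregate.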
Affiliation Skoltech & Paris Diderot University
Biography Serguei Barannikov received his PhD in Mathematics from the University of California, Berkeley (1999). He is a leading research scientist at Skolkovo Institute of Science and Technology. He also holds a position as a CNRS researcher at Paris Diderot University. He worked as a researcher in mathematics at Institut des Hautes Etudes Scientifiques and at Ecole Normale Supérieure in Paris. At the age of 20, Barannikov wrote a paper on algebraic topology, in which he introduced “canonical form” invariants of filtered complexes, together with an efficient algorithm for their calculation. Ten years later, these invariants became widely used in topological data analysis under the name of “persistence barcodes” and “persistence diagrams”. Dr. Barannikov’s recent interests focus on applying topological data analysis methods to reveal patterns in learning of neural networks.

Ulrich Bauer

The Representation Theory of Filtered Hierarchical Clustering
In recent years, several fields in mathematics have sprung up in order to deal with the analysis of large data sets. Topological data analysis is one such field, providing ways of assigning topological invariants to discrete data sets. Perhaps the main example, persistent homology, is well understood and closely related to the representation theory of the Dynkin quiver A_n. By allowing for an extra parameter, one is led to consider filtered hierarchical clustering methods. These methods give rise to representations of a commutative grid, whose structure maps in one direction are all epimorphisms. This talk will describe this subcategory by means of a cotorsion torsion triple coming from a tilting object. In particular, this allows us to determine the representation type! This talk is based on joint work with M.B. Botnan, S. Oppermann, and J. Steen.
Affiliation Technical University of Munich
Biography Ulrich Bauer is an assistant professor in the department of mathematics at the Technical University of Munich (TUM), leading the Applied & Computational Topology group. His research revolves around application-motivated concepts and computational methods in topology and geometry, popularized by application areas such as topological data analysis. Some of his key research areas are persistent homology, discrete Morse theory, and geometric complexes. Ulrich is the author of Ripser, a leading software for the computation of Vietoris–Rips persistence barcodes. He is also an editor of the Journal of Applied and Computational Topology.

Andrew J. Blumberg

Stability of 2-Parameter Persistent Homology
I will describe recent joint work with Lesnick that establishes probabilistic stability results for certain density-sensitive bifiltrations of point-cloud data.
Affiliation University of Texas at Austin

Peter Bubenik

Topological Data Analysis for Cell Biology Images
In this talk I introduce topological data analysis and show how it may be combined with machine learning to analyze certain biological images. The protein actin is an important component of the cell that is involved in movement, organ development, and wound healing. Using a small collection of high-resolution images of a cell’s actin cytoskeleton, we are able to train a classifier to distinguish between treated and untreated cells. Furthermore, we are able to visualize the parts of the cell that the computer uses for its classification. Our analysis subsamples patches and uses homology relative to a boundary and persistence landscapes.
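The persistence landscapes mentioned at the end have a short pointwise definition: each bar (b, d) contributes a tent function peaking at its midpoint, and the k-th landscape at t is the k-th largest tent value. A minimal sketch (names are ours):

```python
def landscape(diagram, k, t):
    """k-th persistence landscape function (k = 1, 2, ...) evaluated at t.

    Each bar (b, d) contributes the tent max(0, min(t - b, d - t)),
    which peaks at the bar's midpoint; the k-th landscape at t is the
    k-th largest tent value.
    """
    tents = sorted((max(0.0, min(t - b, d - t)) for b, d in diagram),
                   reverse=True)
    return tents[k - 1] if k <= len(tents) else 0.0
```

Sampling the first few landscape functions on a fixed grid of t values turns a diagram into a fixed-length vector, which is what makes landscapes convenient inputs to the standard classifiers used on the actin images.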
Affiliation University of Florida
Biography Peter Bubenik's research is on applied topology and, more specifically, topological data analysis, which studies the shape of data. More precisely, it consists of combining ideas from topology, statistics and machine learning to develop new tools for summarizing and visualizing large, complex, high-dimensional data; developing a general framework which quantifies topological and geometric changes using ideas from topology, algebra, analysis, and category theory; and working with collaborators to use these ideas to analyze data. Peter obtained his Ph.D. at the University of Toronto in 2003 and was a postdoc at the Ecole Polytechnique Federale de Lausanne (EPFL) in Switzerland from 2003 to 2005. From 2005 to 2015 he was at Cleveland State University, and since 2015 he has been at the University of Florida.

Gunnar Carlsson

Affiliation Stanford University

Mathieu Carrière

Probabilistic and Statistical Aspects of Reeb spaces and Mappers
Mappers and Reeb spaces are common data visualization tools of Topological Data Analysis. Despite their frequent use in many applications, not much is known about their statistical properties, such as their speed of convergence or, more importantly, the confidence associated with their features. In this talk, I will present some recent results with a statistical flavor on these objects, as well as a few applications in machine learning and computational biology.
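For readers new to the construction, here is a deliberately bare-bones Mapper sketch: a 1-D filter, an overlapping interval cover, naive single-linkage clustering inside each preimage, and an edge between clusters that share points. All names and parameters are illustrative; real implementations differ in many details (cover shapes, clustering method, parameter selection).

```python
from itertools import combinations

def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def mapper_1d(points, filt, n_intervals, overlap, eps):
    """Bare-bones Mapper: 1-D filter values `filt`, overlapping interval
    cover, single-linkage clustering at scale eps inside each preimage,
    and an edge between two clusters whenever they share a data point."""
    lo, hi = min(filt), max(filt)
    length = (hi - lo) / n_intervals
    nodes = []                       # each node: frozenset of point indices
    for i in range(n_intervals):
        a = lo + i * length - overlap * length
        b = lo + (i + 1) * length + overlap * length
        idx = [j for j, v in enumerate(filt) if a <= v <= b]
        comps = []                   # greedy single-linkage components
        for j in idx:
            near = [c for c in comps
                    if any(dist(points[j], points[m]) <= eps for m in c)]
            merged = set().union({j}, *near)
            comps = [c for c in comps if c not in near] + [merged]
        nodes += [frozenset(c) for c in comps]
    edges = {(u, v) for u, v in combinations(range(len(nodes)), 2)
             if nodes[u] & nodes[v]}
    return nodes, edges
```

On four collinear points with the filter equal to the x-coordinate and two overlapping intervals, this yields two clusters joined by one edge, i.e. a tiny Reeb-graph-like summary of the data.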
Affiliation Inria Sophia Antipolis
Biography Mathieu Carrière is a Research Scientist in the DataShape team at Inria Sophia Antipolis. He did his PhD at Inria Saclay in the DataShape team, under the supervision of Steve Oudot, and a two-year postdoc in the Rabadán Lab, in the Department of Systems Biology at Columbia University, under the supervision of Raúl Rabadán. His research focuses on topological data analysis (TDA) and statistical machine learning (ML), with applications to bioinformatics and genomics. He has contributed to the analysis of topological descriptors and their use in ML methods such as kernel SVMs and deep learning.

Chao Chen

Learning with Topological Information - Image Analysis and Label Noise
Modern machine learning faces new challenges. We are analyzing highly complex data with unknown noise. Topology provides novel structural information to model such data and noise. In this talk, we discuss two directions in which we are using topological information in the learning context. In image analysis, we proposed a topological loss to segment and generate images with not only per-pixel accuracy, but also topological accuracy. This is necessary in the analysis of images of fine-scale biomedical structures such as neurons and vessels; extracting these structures with correct topology is essential for the success of downstream analysis. We will also briefly discuss how to use topological information to train classifiers that are robust to label noise. These results have been published in NeurIPS'19, ECCV'20, ICML'20 and NeurIPS'20.
Affiliation Stony Brook University
Biography Chao Chen is an assistant professor at Stony Brook University. His research interest spans topological data analysis, machine learning and biomedical image analysis. He works on applying topological data analysis tools, such as persistent homology and discrete Morse theory, to biomedical image analysis and to generic machine learning problems.

Lorin Crawford

A Machine Learning Pipeline for Feature Selection and Association Mapping with 3D Shapes
It has been a long-standing challenge to implement an analogue of variable selection with 3D shapes as the covariates in a regression model. Here, we present SINATRA, a statistical pipeline for sub-image selection where the goal is to identify the physical features of 3D shapes that best explain the variation between two trait classes. A key insight is that one can use tools from differential topology to transform objects represented as meshes into a collection of vectors (with little to no loss of information about their natural structure). Since this topological transformation is invertible, SINATRA uses an interpretable machine learning algorithm to map significant vector indices back onto the original shapes, thus highlighting the physical 3D shape characteristics that best distinguish members of the two groups. As an application of our pipeline, we conduct feature selection on a dataset consisting of mandibular molars from five different genera of New World monkeys and examine the physical properties of their teeth that best characterize their dietary regimens.
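The invertible topological transformation referred to here is based on Euler characteristic curves of a mesh computed across many directions (the Euler characteristic transform). A minimal sketch of a single such curve, using our own toy encoding of a mesh as vertex coordinates plus index lists for edges and triangle faces:

```python
def euler_curve(vertices, edges, faces, direction, thresholds):
    """Euler characteristic curve of a mesh filtered by a direction.

    Each simplex enters the sublevel set once all its vertices have
    height <= t, where height is the dot product with `direction`;
    the curve records chi = #V - #E + #F at each threshold.
    """
    h = [sum(v * d for v, d in zip(p, direction)) for p in vertices]

    def entry(simplex):                    # filtration value of a simplex
        return max(h[i] for i in simplex)

    curve = []
    for t in thresholds:
        chi = (sum(1 for i in range(len(vertices)) if h[i] <= t)
               - sum(1 for e in edges if entry(e) <= t)
               + sum(1 for tri in faces if entry(tri) <= t))
        curve.append(chi)
    return curve
```

Roughly speaking, concatenating such curves over many directions produces the kind of vector representation from which the pipeline selects significant indices.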
Affiliation Microsoft Research & Brown University
Twitter @lorin_crawford
Biography Lorin Crawford is a Senior Researcher at Microsoft Research New England. He also holds a position as the RGSS Assistant Professor of Biostatistics at Brown University. His scientific research interests involve the development of novel and efficient computational methodologies to address complex problems in statistical genetics, cancer pharmacology, and radiomics (e.g., cancer imaging). Dr. Crawford has an extensive background in modeling massive data sets of high-throughput molecular information as it pertains to functional genomics and cellular-based biological processes. His most recent work has earned him a place on Forbes 30 Under 30 list, The Root 100 Most Influential African Americans list, and recognition as an Alfred P. Sloan Research Fellow.

Brittany Terese Fasy

Searching in the Space of Persistence Diagrams
The doubling dimension of the space of persistence diagrams is infinite and, as a result, current approaches for finding the nearest neighbor to a persistence diagram among a set of other persistence diagrams are either linear in the number of diagrams or do not offer performance guarantees. This talk will present the first algorithm that supports approximate nearest neighbor search in the space of persistence diagrams under the bottleneck distance.
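For concreteness, the bottleneck distance between two small diagrams can be computed by brute force: pad each diagram with diagonal slots, then minimize over matchings the largest single matching cost. This is exponential in the diagram size and is for illustration only; practical codes use geometric matching algorithms.

```python
from itertools import permutations

def bottleneck(dgm1, dgm2):
    """Brute-force bottleneck distance between two small persistence
    diagrams (lists of (birth, death) pairs).  Points may be matched to
    the diagonal at cost (death - birth) / 2; point-to-point cost is the
    L-infinity distance.  Exponential -- for illustration only."""
    DIAG = None
    a = list(dgm1) + [DIAG] * len(dgm2)
    b = list(dgm2) + [DIAG] * len(dgm1)

    def cost(p, q):
        if p is DIAG and q is DIAG:
            return 0.0
        if p is DIAG:
            p, q = q, p
        if q is DIAG:
            return (p[1] - p[0]) / 2        # distance to the diagonal
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

    return min(max(cost(p, q) for p, q in zip(a, perm))
               for perm in permutations(b))
```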
Affiliation Montana State University
Twitter @topologybee
Biography Brittany Terese Fasy is an assistant professor at Montana State University. She earned her PhD from Duke University in 2012. Her research is in computational topology. She studies topological descriptors (such as the persistence diagram) from both theoretical and applied perspectives. In addition, she studies data on graphs, directed topological spaces, and algorithmic problems in computational topology. Her research is grounded in real-world applications, including road network analysis and prostate cancer prognosis.

Robert Ghrist

The Tarski Laplacian
This talk will introduce a new type of Laplacian that is built for lattice-valued data over networks. Like the graph Laplacian, it can be iterated to build consensus. Like the Hodge Laplacian on manifolds, it computes cohomology. This represents joint work with Hans Riess.
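For readers unfamiliar with the classical picture being generalized: iterating the graph Laplacian L = D - A drives node values to consensus (the average, on a connected graph). A sketch of that standard real-valued iteration, not the lattice-valued Tarski version from the talk:

```python
def consensus(adj, x, alpha=0.2, steps=200):
    """Heat-flow consensus with the graph Laplacian L = D - A:
    x <- x - alpha * L x.  On a connected graph, with alpha small enough
    (alpha < 1 / max degree is safe), every node converges to the
    average of the initial values."""
    n = len(x)
    for _ in range(steps):
        # (L x)_i = sum_j A_ij * (x_i - x_j)
        Lx = [sum(adj[i][j] * (x[i] - x[j]) for j in range(n))
              for i in range(n)]
        x = [x[i] - alpha * Lx[i] for i in range(n)]
    return x
```

Starting from values [0, 0, 3] on a 3-node path, every node converges to the average 1.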
Affiliation University of Pennsylvania
Twitter @robertghrist
Biography Robert Ghrist is the Andrea Mitchell PIK Professor of Mathematics and Electrical & Systems Engineering at the University of Pennsylvania. After earning a BS in Mechanical Engineering (University of Toledo, 1991), and the MS and PhD in Applied Mathematics (Cornell University, 1994, 1995), he held positions in Mathematics departments at the University of Texas (Austin), Georgia Tech, and the University of Illinois (Urbana-Champaign). He has been at Penn since 2008. Ghrist is a recognized leader in the field of Applied Algebraic Topology, with publications detailing topological methods for sensor networks, robotics, signal processing, data analysis, optimization, and more. He is the author of a leading textbook on the subject (Elementary Applied Topology, 2014), and has managed numerous large DoD grants from AFOSR, ASDRE, DARPA, and ONR. His research has been recognized with the NSF CAREER, NSF PECASE, SciAm50, and Vannevar Bush Faculty Fellow awards. Ghrist has been an invited speaker at two International Congresses of Mathematicians: once (Madrid 2006) for research and once (Seoul, 2014) for education. Ghrist is a dedicated expositor and communicator of Mathematics, with teaching awards that include the MAA James Crawford Prize, Penn’s Lindback Award, and the S. Reid Warren award in Engineering at Penn. Ghrist is the author, designer, and animator of popular YouTube video texts (featuring the Calculus BLUE Project), as well as an online course on Coursera featured in the New York Times, BoingBoing, and Gizmodo.

Kathryn Hess Bellwald

Topological insights in neuroscience
I will describe two applications of TDA in neuroscience that provide intriguing insights into the relation between structure and function in networks of neurons.
Affiliation École polytechnique fédérale de Lausanne (EPFL)
Biography Kathryn Hess is a professor of mathematics and life sciences at the EPFL. She received her PhD from MIT and held positions at the universities of Stockholm, Nice, and Toronto before moving to the EPFL. Her research focuses on algebraic topology and its applications, primarily in the life sciences, but also in materials science. On the applied side, she has elaborated methods based on topological data analysis for high-throughput screening of nanoporous crystalline materials, classification and synthesis of neuron morphologies, and classification of neuronal network dynamics. She has also developed and applied innovative topological approaches to network theory, leading to a powerful, parameter-free mathematical framework relating the activity of a neural network to its underlying structure, both locally and globally. In 2016 she was elected to the Swiss Academy of Engineering Sciences, and in 2017 she was named a fellow of the American Mathematical Society and a distinguished speaker of the European Mathematical Society.

Yasu Hiraoka

Characterizing rare events in persistent homology
Indecomposables obtained through decompositions of persistent homology are regarded as topological summaries of real data. However, as is well known, there exist pathologically complicated indecomposables in multi-parameter persistent homology in the purely algebraic setting, and this fact makes it difficult to build a mathematical theory in that setting. Our fundamental question is: how much should we care about such complicated indecomposables in real data, and what is a suitable framework for studying this question? To this aim, we will show several ongoing results, especially (1) a large deviation principle for 1-parameter persistent homology, and (2) a law of large numbers for multi-parameter persistent homology. Then we will discuss how these two results (partially) answer the original question.
Affiliation Kyoto University
Biography Yasu Hiraoka is a professor at the Kyoto University Institute for Advanced Study (KUIAS). He studies the mathematical theory and algorithms of TDA, and also works on applying them to scientific and engineering problems. His primary interest is developing descriptors for characterizing the shape of complex and big data. By combining various mathematical theories such as topology, representation theory, and probability theory, he has succeeded in making TDA powerful and general enough for practical problems. In applications, he has organized the materials TDA team, and his group has achieved several pioneering results on structural analysis in materials science based on TDA. In particular, a series of methods developed by his group using persistent homology, machine learning, and inverse analysis is expected to be a key technology for materials informatics in the future. In addition to materials science, he has recently launched TDA projects in other application areas such as life science and economics.

Lida Kanari

A topological insight on neuronal morphologies
The morphological diversity of neurons supports the complex information-processing capabilities of biological neuronal networks. A major challenge in neuroscience has been to reliably describe neuronal shapes with universal morphometrics that generalize across cell types and species. Inspired by algebraic topology, we have developed a topological descriptor of trees that couples the topology of their complex arborization with their geometric structure, retaining more information than traditional morphometrics. The topological morphology descriptor (TMD) has proved to be very powerful in separating neurons into well-defined groups on morphological grounds. The TMD algorithm led to the discovery of two distinct morphological classes of pyramidal cells in the human cortex that also have distinct functional roles, suggesting the existence of a direct link between the anatomy and the function of neurons. The TMD-based classification also led to the objective and robust morphological clustering of rodent cortical neurons. Recently we proved that the TMD of neuronal morphologies is also essential for the computational generation (i.e., synthesis) of dendritic morphologies. Our results demonstrate that a topology-based synthesis algorithm can reproduce both morphological and electrical properties of reconstructed biological rodent cortical dendrites and generalizes well to a wide variety of different dendritic shapes. Therefore it is suitable for the generation of unique neuronal morphologies to populate the digital reconstruction of large-scale, physiologically realistic networks.
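The core of the TMD is an elder-rule recursion on the tree: each leaf opens a bar at its value (e.g. radial distance from the soma), at every branch point the largest value survives and the other branches close there, and the survivor finally pairs with the root. A minimal sketch of that recursion (our simplification of the published algorithm; the tree encoding is ours):

```python
def tmd_barcode(children, f, root=0):
    """Sketch of the Topological Morphology Descriptor of a rooted tree.

    children[i] lists the children of node i; f[i] is a function on the
    nodes (e.g. radial distance from the soma).  Each leaf starts a bar
    at f[leaf]; at every branch point the largest carried value survives
    and the others die there; the survivor pairs with the root value.
    """
    def carry(v):
        if not children[v]:
            return f[v]                       # leaf: open a bar
        vals = sorted(carry(c) for c in children[v])
        survivor = vals.pop()                 # largest value continues
        bars.extend((b, f[v]) for b in vals)  # the rest close here
        return survivor

    bars = []
    bars.append((carry(root), f[root]))
    return sorted(bars)
```

On a tree whose two leaves sit at distances 3 and 2 below a branch point at distance 1, this produces the bars (2, 1) and (3, 0): births exceed deaths because the radial distance decreases toward the root, which is the usual TMD convention.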
Affiliation École polytechnique fédérale de Lausanne (EPFL)
Biography Lida Kanari is a postdoctoral fellow and Neuromathematics Junior Group Leader in the Simulation Neuroscience Division. The shape of a neuronal morphology constrains, among other things, the connectivity and functionality of a neuronal network. Lida's work during her PhD (2018, EPFL) focused on the mathematical description of neuronal shapes: she has been investigating the fundamental differences between morphologically distinct cell types and how different neuronal shapes are generated. She has been developing algorithms based on mathematical principles, including topology, Markov models, and stochastic processes, to model and reproduce neuronal shapes. Lida holds an MSc in Applied Mathematics and Physics and an MSc in Computational Fluid Mechanics from the National Technical University of Athens, Greece. She was awarded her PhD from EPFL in Computational Neuroscience.

Manohar Kaul

Solving Partial Assignment Problems using Random Simplicial Complexes
This talk will introduce a partial quadratic assignment problem (QAP) posed as finding a bijective-mapping between random simplicial complexes. This problem will be further motivated with examples from computer vision literature. We study the asymptotic behavior of the affinity matrix associated with the QAP in this random setting. Additionally, we show empirical results of our matching algorithm on real-world datasets, containing severe occlusions and distortions.
Affiliation IIT Hyderabad
Twitter @ManuKaul1
Biography Manohar Kaul is an Assistant Professor in the Computer Science and Engineering Department at the Indian Institute of Technology (IIT) Hyderabad, India. He received his Ph.D. from Aarhus University, Denmark and his Masters from Uppsala University, Sweden. He also worked at DIMA, TU Berlin as a Postdoc. His key research interests span machine learning, graph theory, and applied algebraic topology.

Leland McInnes

Combining UMAP and MAPPER to provide richer representations of naturally filtered datasets
Affiliation Tutte Institute for Mathematics and Computing
Twitter @leland_mcinnes
Biography Leland McInnes is a researcher at the Tutte Institute for Mathematics and Computing working on topologically motivated methods in data science. He balances his time between theoretical research, software engineering and implementation, and domain specific problems.

Facundo Mémoli

Spatiotemporal Persistent Homology for Dynamic Metric Spaces
Characterizing the dynamics of time-evolving data within the framework of topological data analysis (TDA) has been attracting increasingly more attention. Popular instances of time-evolving data include flocking/swarming behaviors in animals and social networks in the human sphere. A natural mathematical model for such collective behaviors is a dynamic point cloud, or more generally a dynamic metric space (DMS). We show how to extend the Rips filtration stability result for (static) metric spaces to the setting of DMSs. We identify polynomial-time computable invariants and associated pseudodistances, and provide software implementations which can be used for practical tasks. This is joint work with Woojin Kim (Duke) and Nate Clause (OSU).
Affiliation Ohio State University
Biography Facundo Mémoli is a professor in the Department of Mathematics and in the Department of Computer Science and Engineering at the Ohio State University. His research interests include topics in the intersection of metric geometry, topology, optimal transport, and applications to science and engineering such as topological data analysis and networks.

Liz Munch

Persistent homology of complex networks for dynamic state detection in time series
There has been extensive work performing time series analysis with persistence diagrams via the Takens embedding. In this talk we develop an alternative approach for studying graph representations of time series of dynamical systems. Specifically, we show how persistent homology can be used to yield a compressed, multi-scale representation of the ordinal partition network of the time series that can distinguish between dynamic states such as periodic and chaotic behavior. By replacing the network with its persistence diagram, we then extract existing as well as new geometric and entropy point summaries from the persistence diagram and compare to other commonly used network characteristics. Our results show that persistence-based point summaries yield a clearer distinction of the dynamic behavior and are more robust to noise than existing graph-based scores, especially when combined with ordinal graphs.
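The ordinal partition network itself is simple to construct: embed the series with dimension d and lag tau, map each window to the permutation that sorts it, and join consecutive windows by a directed edge. A sketch (persistent homology would then be computed from this graph, e.g. via shortest-path distances):

```python
def ordinal_partition_network(ts, d=3, tau=1):
    """Ordinal partition network of a time series: each length-d window
    (with lag tau) is mapped to the permutation that sorts it; nodes are
    the permutations that occur, and a directed edge joins each pair of
    consecutive windows."""
    patterns = []
    for i in range(len(ts) - (d - 1) * tau):
        w = [ts[i + k * tau] for k in range(d)]
        patterns.append(tuple(sorted(range(d), key=lambda k: w[k])))
    nodes = sorted(set(patterns))
    edges = {(a, b) for a, b in zip(patterns, patterns[1:])}
    return nodes, edges
```

A period-3 series yields three permutation nodes arranged in a directed 3-cycle, whereas chaotic series visit many more patterns and produce richer graphs.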
Affiliation Michigan State University
Twitter @elizabethmunch
Biography Liz received her PhD from the Department of Mathematics at Duke University in May 2013. Prior to joining the faculty of Michigan State University, she was an Assistant Professor in the Department of Mathematics and Statistics at the University at Albany - SUNY, and a Postdoctoral Fellow at the Institute for Mathematics and its Applications at the University of Minnesota for the 2013-2014 thematic year on applications of topology. She also holds a Master of Arts in Mathematics from Duke University, a Bachelor of Science in Mathematics from the University of Rochester, and a Bachelor of Music in Harp Performance from the Eastman School of Music. When she isn’t thinking about TDA, she’s chasing her two sons around, playing video games, or watching bad sci-fi.

Vidit Nanda

Singularity detection in data
I will describe a new and efficient topological technique to detect manifold-intersections and other singularities directly from finite point samples. The method makes use of local persistent homology computations around tiny annular neighborhoods of sampled data points, and works even when none of those points have been sampled from the singular regions. All of this is joint work with B Stolz, H Harrington and J Tanner.
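A crude, single-scale version of the idea (our own sketch; the actual method computes full local persistent homology over the annulus): restrict the sample to an annulus around a query point and count connected components of its neighborhood graph. On a curve, a regular point sees two components, while an X-shaped crossing sees four.

```python
from collections import deque

def local_branch_count(points, center, r_in, r_out, eps):
    """Count connected components of the points lying in the annulus
    r_in <= |p - center| <= r_out, where two points are joined if they
    are within eps of each other (breadth-first search)."""
    def d(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    ann = [p for p in points if r_in <= d(p, center) <= r_out]
    seen, comps = set(), 0
    for i in range(len(ann)):
        if i in seen:
            continue
        comps += 1
        queue = deque([i])
        seen.add(i)
        while queue:
            j = queue.popleft()
            for k in range(len(ann)):
                if k not in seen and d(ann[j], ann[k]) <= eps:
                    seen.add(k)
                    queue.append(k)
    return comps
```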
Affiliation University of Oxford
Twitter @viditnanda
Biography Vidit Nanda is an Associate Professor of Mathematics at Oxford, working broadly in the field of applied algebraic topology. Before coming to Oxford, he held postdoctoral positions at the Institute for Advanced Study, the Alan Turing Institute, and the University of Pennsylvania.

Jose Perea

TALLEM – Topological Assembly of LocalLy Euclidean Models
We present here a new unsupervised topological data analysis tool for data coordinatization, fusion and distributed nonlinear dimensionality reduction. This new methodology, called TALLEM, assembles a collection of local Euclidean coordinates on the data, and leverages ideas from the theory of fiber bundles to yield a global map consistent with the underlying data topology.
Affiliation Michigan State University
Biography I got my Ph.D. in Mathematics from Stanford University in 2011, and a B.Sc. in Mathematics from Universidad del Valle (Summa cum laude and Valedictorian) in 2006. I was a postdoc in the department of Mathematics at Duke from 2011 to 2015, and a member of the Institute for Mathematics and its Applications (IMA) at the University of Minnesota during the Fall of 2014. In August of 2015 I joined Michigan State University as an Assistant Professor with joint appointments in the department of Computational Mathematics, Science & Engineering (CMSE), and the department of Mathematics. My work has been supported by grants from DARPA, the National Science Foundation, the Center for Business and Social Analytics, and an NSF CAREER award.

Laxmi Parida

TDA on COVID-19 omics data
I will talk about the use of Topological Data Analysis (TDA) in the context of omics. I will also expand on a few different applications of TDA to a range of problems, primarily arising in the COVID-19 pandemic.
Twitter @LaxmiParida16
Affiliation IBM Research
Biography Dr. Laxmi Parida is an IBM Fellow and Master Inventor and heads Computational Genomics at the IBM Thomas J. Watson Research Center, USA. She is a visiting professor at the Courant Institute of Mathematical Sciences, New York. Over the last 10 years, she has led the IBM science team in the Cacao Consortium (with MARS and the USDA), the Genographic Project with National Geographic, the bioinformatics team in the "Sequence the Food Supply Chain" Consortium across multiple global IBM labs, and the science team in the personalized cancer medicine system. Her research areas include population genomics, cancer genomics, metagenomics, algorithms (including AI) and topological data analysis. She has published over 200 peer-reviewed research papers, edited 10 volumes, and authored a monograph on pattern discovery in bioinformatics. She holds over 40 US patents. She is on the advisory board of the NYU Engineering School and the editorial boards of BMC Bioinformatics and the Journal of Computational Biology, and is an Associate Editor of IEEE/ACM Transactions on Computational Biology and Bioinformatics and the SIAM Journal on Discrete Mathematics.

Don Sheehy

A Sparse Delaunay Filtration
In this talk, I will discuss how a filtration of Delaunay complexes can be used to approximate the persistence diagram of the distance to a point set in $\mathbb{R}^d$. Whereas the full Delaunay complex can be used to compute this persistence diagram exactly, it may have size $O(n^{\lceil d/2 \rceil})$. In contrast, our construction uses only $O(n)$ simplices. The central idea is to connect Delaunay complexes on progressively denser subsamples by considering the flips in an incremental construction as simplices in $d+1$ dimensions. This approach leads to a very simple and straightforward proof of correctness in geometric terms, because the final filtration is dual to a $(d+1)$-dimensional Voronoi construction similar to the standard Delaunay complex.
Affiliation North Carolina State University
Biography Don Sheehy is an Associate Professor of Computer Science at North Carolina State University. He completed his undergraduate studies at Princeton University and received a PhD in Computer Science from Carnegie Mellon University. His research focuses on the interplay of geometry, topology, and algorithms.

Katharine Turner

Affiliation Australian National University
Biography Katharine Turner was an undergraduate at the University of Sydney and then did her PhD with Shmuel Weinberger at the University of Chicago (2015). After a postdoc at EPFL working with both Kathryn Hess and Victor Panaretos, she returned down under and joined the Mathematical Sciences Institute at the Australian National University in 2017 as a Lecturer. In 2020 she was awarded a Discovery Early Career Researcher Award by the Australian Research Council.
Wasserstein Stability for Persistence Diagrams
The stability of persistence diagrams is among the most important results in applied and computational topology, but most results are with respect to the bottleneck distance between diagrams. This has two main implications: it makes the space of persistence diagrams rather pathological, and it often provides very pessimistic bounds with respect to outliers. In this talk I will discuss new stability results with respect to the p-Wasserstein distance between persistence diagrams. The foundations are cellular stability results for functions on sufficiently finite spaces, in terms of the p-norm of the perturbations. This has applications to image analysis, persistent homology transforms, and Vietoris-Rips complexes. This is joint work with Primoz Skraba.
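For reference, the distance in question and the shape of the stability bound discussed here (as we understand the setting; the precise hypotheses are in the talk):

```latex
d_{W_p}(D_1, D_2) \;=\; \inf_{\gamma\colon D_1 \to D_2}
  \Big( \sum_{x \in D_1} \lVert x - \gamma(x) \rVert_\infty^{\,p} \Big)^{1/p},
\qquad
d_{W_p}\big(D(f), D(g)\big) \;\le\; \lVert f - g \rVert_p ,
```

where the matchings $\gamma$ may send points to the diagonal, $f$ and $g$ are suitable (e.g. monotone) functions on a sufficiently finite space, and the bottleneck case is recovered as $p \to \infty$.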

Bei Wang

Topology and Neuron Activations in Deep Learning
Deep convolutional neural networks have become ubiquitous in image classification tasks thanks to architectures such as GoogLeNet and ResNet. However, we do not quite understand how these networks achieve their impressive performance. The main challenge in deep learning is interpretability: how can we make the representations learned by these networks human-interpretable? Given a trained deep neural network, we can address the interpretability issue by probing neuron activations, that is, the combinations of neuron firings, in response to a particular input image. With millions of input images, we can obtain a global view of what the neurons have learned by studying neuron activations at a particular layer and across multiple layers. We aim to shed light on the following questions using topological tools: What is the shape of the activation space? What is the organizational principle behind neuron activations?
Affiliation University of Utah
Twitter @beiphillips
Biography Bei Wang is an assistant professor at the School of Computing and a faculty member in the Scientific Computing and Imaging (SCI) Institute, University of Utah. She received her Ph.D. in Computer Science from Duke University. Her research interests include topological data analysis, data visualization, computational topology, machine learning, and data mining. Her work spans both theoretical and applied research. Some of her recent research activities draw inspirations from topology, geometry, and machine learning, in studying brain networks, vector fields, tensor fields, and high-dimensional point clouds that arise from scientific simulations.

Yusu Wang

Discrete Morse-based graph reconstruction and data analysis
In recent years, topological and geometric data analysis (TGDA) has emerged as a new and promising field for processing, analyzing and understanding complex data. Indeed, geometry and topology form natural platforms for data analysis, with geometry describing the “shape” and “structure” behind data, and topology characterizing/summarizing both the domain where data are sampled from, as well as functions and maps associated to them. In this talk, I will show how the topological objects from discrete Morse theory and persistent homology can be used to reconstruct hidden geometric graphs, and how they can then be combined with machine learning pipelines for further data analysis tasks in both low and high dimensions. This talk is based on multiple projects with multiple collaborators, and references will be given during the talk.
Affiliation University of California, San Diego
Biography Yusu Wang obtained her M.S. and Ph.D. degrees from Duke University (receiving the Best PhD Dissertation Award from the CS Department), and her B.S. degree from Tsinghua University (graduating with First Class Honors). From 2004 to 2005, she was a postdoctoral researcher at the Geometric Computing lab at Stanford. Prior to joining UCSD, Yusu Wang was a Professor in the Computer Science and Engineering Department at the Ohio State University. She co-directed the Foundations of Data Science Research Community of Practice (CoP) at the Translational Data Analytics Institute (TDAI@OSU) from 2018 to 2020. In 2006 she received a DOE (Dept. of Energy) Career award, and in 2008 an NSF (National Science Foundation) CAREER award. She is on the editorial boards of SICOMP, JoCG, and CGTA, and she currently serves on the Computational Geometry Steering Committee. Yusu primarily works in the fields of computational geometry, applied topology, and topological and geometric data analysis (TGDA). Her work lies at the intersection of computer science (especially algorithms) and applied mathematics (especially applied topology, discrete and combinatorial geometry). She is particularly interested in developing effective and theoretically justified algorithms for data and shape analysis using geometric and topological ideas and methods, as well as in integrating such methods with machine learning frameworks. Yusu aims both to provide theoretical understanding of the computational methods she develops and to apply them to practical domains, including computational biology, computational neuroanatomy, materials science, computer graphics and visualization.

Yuzuru Yamakage

Industrial Application of TDA-ML Technology: Achievements so far and expectations for the future
As DX (digital transformation) has become a key word across almost all industries, AI practitioners have gained many chances to tackle more complex data than they have encountered before. Topological Data Analysis has recently been attracting the attention of AI practitioners and data scientists, because TDA, combined with ML technologies, has great potential to extract hidden features from complex data that current technologies have not successfully found. In this session, I present several industrial use cases of TDA-ML and show the capabilities of TDA-ML, which can provide great benefit to DX industries.
Affiliation Fujitsu Ltd.
Biography Yuzuru Yamakage received his Ph.D. in 1997 from Tohoku University. He then joined Fujitsu Laboratories Ltd. with a focus on Data Analytics Technology. In 2015, he moved to the AI Service Business Unit at Fujitsu Ltd. and became the Director of the AI Service Dept. of the Software Technology Business Unit of Fujitsu Ltd. in 2020.