in machine learning, statistics and computer vision
Saturday, December 13, 2014
at NIPS 2014 in Montreal, Canada
Traditional machine learning and data analysis methods often assume that the input data can be represented by vectors in a Euclidean space. While this assumption works well for many applications, researchers have increasingly realized that when the data is intrinsically non-Euclidean, ignoring its geometric structure can lead to suboptimal results.
In the existing literature, there are two common approaches for exploiting data geometry when the data is assumed to lie on a Riemannian manifold.
In the first direction, often referred to as manifold learning, the data is assumed to lie on an unknown Riemannian manifold, and the structure of this manifold is inferred from the training data, either labeled or unlabeled. Examples of manifold learning techniques include Manifold Regularization via the graph Laplacian, Locally Linear Embedding (LLE), and Isometric Feature Mapping (Isomap).
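To make the first direction concrete, the graph Laplacian mentioned above can be built directly from a point cloud. The sketch below is illustrative, not tied to any particular paper: it assumes a Gaussian-kernel affinity and uses the unnormalized Laplacian L = D - W, whose quadratic form penalizes functions that vary quickly across edges of the data graph.

```python
import numpy as np

def graph_laplacian(X, sigma=1.0):
    """Unnormalized graph Laplacian L = D - W built from a Gaussian-kernel
    affinity matrix -- the basic object behind Manifold Regularization."""
    # Pairwise squared Euclidean distances between rows of X.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)        # no self-loops
    D = np.diag(W.sum(axis=1))      # degree matrix
    return D - W

# Toy data: noisy samples near a 1-D curve (half circle) embedded in 2-D.
rng = np.random.default_rng(0)
t = rng.uniform(0, np.pi, 30)
X = np.column_stack([np.cos(t), np.sin(t)]) + 0.01 * rng.normal(size=(30, 2))
L = graph_laplacian(X)

# L is symmetric and annihilates constant functions: the smoothness
# penalty is f^T L f = (1/2) * sum_ij W_ij (f_i - f_j)^2.
print(np.allclose(L, L.T))                # symmetric
print(np.allclose(L @ np.ones(30), 0.0))  # rows sum to zero
```

In Manifold Regularization, the term f^T L f is added to a standard supervised loss so that the learned function is smooth with respect to the estimated data geometry, not just the ambient Euclidean space.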
In the second direction, which is gaining increasing importance and success, the Riemannian manifold representing the input data is assumed to be known explicitly. Manifolds that have been widely used for data representation include: the manifold of symmetric positive definite matrices, the Grassmann manifold of subspaces of a vector space, and the Kendall manifold of shapes. When the manifold is known, the full power of the mathematical theory of Riemannian geometry can be exploited both in the formulation of algorithms and in their theoretical analysis.
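As an illustration of working with an explicitly known manifold, the sketch below computes the classical affine-invariant Riemannian distance on the manifold of symmetric positive definite (SPD) matrices. This is one standard choice of metric on that manifold, not the only one; the function name and test matrices are illustrative.

```python
import numpy as np

def spd_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices:
    d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F
            = sqrt(sum_i log(lambda_i)^2),
    where lambda_i are the eigenvalues of A^{-1} B."""
    w, V = np.linalg.eigh(A)
    A_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    M = A_inv_sqrt @ B @ A_inv_sqrt   # SPD, similar to A^{-1} B
    lam = np.linalg.eigvalsh(M)
    return np.sqrt(np.sum(np.log(lam) ** 2))

# Sanity checks on 3x3 SPD matrices.
rng = np.random.default_rng(1)
G = rng.normal(size=(3, 3))
A = G @ G.T + 3 * np.eye(3)
B = A + 0.5 * np.eye(3)

print(np.isclose(spd_distance(A, A), 0.0))  # d(A, A) = 0
# Affine invariance: d(P A P^T, P B P^T) = d(A, B) for invertible P.
P = rng.normal(size=(3, 3)) + 3 * np.eye(3)
print(np.isclose(spd_distance(P @ A @ P.T, P @ B @ P.T),
                 spd_distance(A, B)))
```

Unlike the Euclidean distance ||A - B||_F, this metric respects the curved geometry of the SPD cone, which is one reason it has proved effective in applications such as diffusion tensor imaging and covariance-based computer vision descriptors.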
Successful applications of these approaches are numerous and range from brain imaging and low rank matrix completion to computer vision tasks such as object detection and tracking.
This workshop focuses on the latter direction. We aim to bring together researchers in statistics, machine learning, computer vision, and other areas to discuss and exchange state-of-the-art results, both theoretical and computational, and to identify promising future research directions.