Dimension Reduction and Feature Selection
Linear and nonlinear dimension reduction methods allow high-dimensional data to be visualized in 2 or 3 dimensions, and provide more compact features for a variety of purposes.

Non-Negative Matrix Factorization, Convexity and Isometry
Nikolaos Vasiloglou, Alexander Gray, and David Anderson
SIAM Data Mining (SDM) 2009


A non-negative matrix factorization (NMF) method, based on semidefinite programming, that also preserves distances in the manner of a manifold learning method. We also show, for the first time, how NMF can be performed as a convex optimization. [pdf]

Abstract: Traditional Non-Negative Matrix Factorization (NMF) is a successful algorithm for decomposing datasets into basis functions that have reasonable interpretation. One problem of NMF is that the original Euclidean distances are not preserved. Isometric embedding (IE) is a manifold learning technique that traces the intrinsic dimensionality of a dataset while preserving local distances. In this paper we propose a hybrid method of NMF and IE, IsoNMF. IsoNMF combines the advantages of both NMF and isometric embedding and gives a much more compact spectrum compared to the original NMF.

@inproceedings{vasiloglou2009isonmf,
  author    = "Nikolaos Vasiloglou and Alexander Gray and David Anderson",
  title     = "{Non-Negative Matrix Factorization, Convexity and Isometry}",
  booktitle = "{SIAM International Conference on Data Mining (SDM)}",
  year      = "2009"
}
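
For readers who want to experiment, the sketch below shows the plain (non-isometric) NMF baseline that IsoNMF builds on, using scikit-learn's NMF on a random placeholder matrix; it is not an implementation of IsoNMF or its semidefinite program.

# Minimal sketch of plain (non-isometric) NMF, the baseline that IsoNMF extends.
# The data matrix and rank below are placeholders.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.random((200, 50))           # non-negative data: 200 samples, 50 features

model = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)          # basis coefficients, shape (200, 10)
H = model.components_               # basis functions, shape (10, 50)

# Reconstruction error of the factorization X ~ W @ H
print("Frobenius reconstruction error:", np.linalg.norm(X - W @ H, "fro"))
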
See also


QUIC-SVD: Fast SVD Using Cosine Trees
We have developed a method for fast approximate singular value decomposition and demonstrated it for principal component analysis. [see full entry here]
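
As a point of reference, the snippet below computes the exact truncated SVD that QUIC-SVD approximates, applied to PCA on placeholder data; the cosine-tree approximation itself is described in the full entry.

# Exact truncated SVD used for PCA -- the dense baseline that QUIC-SVD approximates,
# not the cosine-tree algorithm itself.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))         # placeholder data matrix
Xc = X - X.mean(axis=0)                 # center columns for PCA

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 5
scores = U[:, :k] * s[:k]               # top-k principal component scores
print("variance along top components:", (s[:k] ** 2) / (len(X) - 1))
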


FuncICA: ICA for Time Series
We have developed a variant of independent component analysis for functional data such as time series. [see full entry here]
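
For orientation, here is a standard ICA baseline on a toy multichannel time series using scikit-learn's FastICA; FuncICA itself operates on functional representations of the data, which this sketch does not attempt.

# Standard ICA on a toy multichannel time series via scikit-learn's FastICA.
# This is only the conventional baseline, not the functional-data variant.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
sources = np.c_[np.sin(3 * t), np.sign(np.cos(5 * t))]   # two latent signals
A = np.array([[1.0, 0.5], [0.3, 1.0]])                   # mixing matrix
X = sources @ A.T                                         # observed mixtures

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)   # estimated independent components
print(recovered.shape)             # (2000, 2)
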


In preparation


Volumetric Manifold Learning
We have developed a new approach to manifold learning based on ideas from Riemannian geometry.


Decision Trees for Density Estimation
We have developed the analog of classification and regression trees for density estimation, which yields automatic feature selection among other benefits.
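
To convey the flavor of a piecewise-constant tree density estimate, here is a toy one-dimensional sketch; the split rule (median) and stopping rule are placeholders and not the method described above.

# Toy sketch of a piecewise-constant density estimate built by recursive splitting.
import numpy as np

def density_tree_1d(x, lo, hi, n_total, depth=0, max_depth=3, min_pts=20):
    pts = x[(x >= lo) & (x < hi)]        # points in the half-open interval [lo, hi)
    if depth == max_depth or len(pts) < min_pts:
        # Leaf: density = fraction of points divided by interval length
        return [(lo, hi, len(pts) / (n_total * (hi - lo)))]
    mid = float(np.median(pts))          # placeholder split rule, not the paper's criterion
    return (density_tree_1d(x, lo, mid, n_total, depth + 1, max_depth, min_pts)
            + density_tree_1d(x, mid, hi, n_total, depth + 1, max_depth, min_pts))

x = np.random.default_rng(0).normal(size=1000)
for lo, hi, dens in density_tree_1d(x, x.min(), x.max() + 1e-9, len(x)):
    print(f"[{lo:6.2f}, {hi:6.2f})  estimated density {dens:.3f}")
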
Scalable Semidefinite Manifold Learning
Nikolaos Vasiloglou, Alexander Gray, and David Anderson
Machine Learning for Signal Processing (MLSP) 2009


Fast algorithms for maximum variance unfolding and related manifold learning methods. We have recently demonstrated scaling up to 1 million points. [pdf]

Abstract: Maximum Variance Unfolding (MVU) is among the state-of-the-art Manifold Learning (ML) algorithms and has been experimentally proven to be the best method to unfold a manifold to its intrinsic dimension. Unfortunately it does not scale to more than a few hundred points. A non-convex formulation of MVU made it possible to scale up to a few thousand points, with the risk of getting trapped in local minima. In this paper we demonstrate techniques based on the dual-tree algorithm and L-BFGS that allow MVU to scale up to 100,000 points. We also present a new variant called Maximum Furthest Neighbor Unfolding (MFNU), which performs even better than MVU in terms of avoiding local minima.

@inproceedings{vasiloglou2009mvu,
  author    = "Nikolaos Vasiloglou and Alexander Gray and David Anderson",
  title     = "{Scalable Semidefinite Manifold Learning}",
  booktitle = "{IEEE International Workshop on Machine Learning for Signal Processing (MLSP)}",
  year      = "2009"
}
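
The sketch below sets up the basic MVU semidefinite program on a toy curve, using cvxpy and a k-nearest-neighbor graph from scikit-learn; it illustrates the formulation being scaled, not the dual-tree or L-BFGS machinery of the paper, and is only practical for small point counts.

# Basic Maximum Variance Unfolding SDP on a tiny synthetic dataset.
import numpy as np
import cvxpy as cp
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
theta = rng.uniform(0, 3 * np.pi, 40)
X = np.c_[np.cos(theta), np.sin(theta), 0.1 * rng.normal(size=40)]  # noisy arc in 3-D

nbrs = kneighbors_graph(X, n_neighbors=4).tocoo()        # k-nearest-neighbor graph
n = len(X)
K = cp.Variable((n, n), PSD=True)                        # Gram matrix of the embedding
constraints = [cp.sum(K) == 0]                           # centering constraint
for i, j in zip(nbrs.row, nbrs.col):                     # preserve local distances
    i, j = int(i), int(j)
    d2 = float(np.sum((X[i] - X[j]) ** 2))
    constraints.append(K[i, i] + K[j, j] - 2 * K[i, j] == d2)

cp.Problem(cp.Maximize(cp.trace(K)), constraints).solve()  # maximize total variance

w, V = np.linalg.eigh(K.value)                           # spectral embedding of K
Y = V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0))           # 2-D unfolded coordinates
print("embedding shape:", Y.shape)
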
Learning Dissimilarities by Ranking: From SDP to QP
Hua Ouyang and Alexander Gray
International Conference on Machine Learning (ICML) 2008


The first manifold learning method to preserve the ordering of the distances rather than their numerical values. The resulting method can often succeed where the standard approach fails. [pdf]

Abstract: We consider the problem of learning dissimilarities between points via formulations which preserve a specified ordering between points rather than the numerical values of the dissimilarities. Dissimilarity ranking (d-ranking) learns from instances like "A is more similar to B than C is to D" or "The distance between E and F is larger than that between G and H". Three formulations of d-ranking problems are presented, and new algorithms are given for two of them, one by semidefinite programming (SDP) and one by quadratic programming (QP). Among the novel capabilities of these approaches are out-of-sample prediction and scalability to large problems.

@inproceedings{ouyang2008metric,
  author    = "Hua Ouyang and Alexander Gray",
  title     = "{Learning Dissimilarities by Ranking: From SDP to QP}",
  booktitle = "Proceedings of the Twenty-fifth International Conference on Machine Learning",
  year      = "2008"
}
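
As a rough illustration of learning from ordering constraints, the sketch below fits a Mahalanobis matrix with slack variables via a small cvxpy semidefinite program; the constraint set and regularizer are placeholders, and this is not the paper's SDP or QP formulation.

# Learning a Mahalanobis dissimilarity from "pair (a,b) is closer than pair (c,d)" constraints.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))                      # toy points in 5-D
# Placeholder ordering constraints: ((a, b), (c, d)) asserts d(a, b) < d(c, d).
quads = [((i, (i + 1) % 30), (i, (i + 7) % 30)) for i in range(30)]

M = cp.Variable((5, 5), PSD=True)                 # Mahalanobis matrix to learn
xi = cp.Variable(len(quads), nonneg=True)         # slacks for violated orderings

def dist2(p, q):                                  # squared learned dissimilarity
    return cp.quad_form(X[p] - X[q], M)

constraints = [dist2(a, b) + 1 <= dist2(c, d) + xi[k]
               for k, ((a, b), (c, d)) in enumerate(quads)]
objective = cp.Minimize(cp.sum(xi) + 0.1 * cp.trace(M))   # hinge slacks + regularizer
cp.Problem(objective, constraints).solve()
print("learned metric eigenvalues:", np.linalg.eigvalsh(M.value))
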