Awesome! As we all know, clustering is at the heart of machine learning, especially in the unsupervised setting. But given the diverse set of algorithms, it is generally difficult to figure out which one should be applied and when. The authors of the following paper look at different relaxations of the k-means/k-median problems and eventually get a sense of their respective strengths through a visualization of their phase transitions. The map makers' story continues:

Relax, no need to round: integrality of clustering formulations by Pranjal Awasthi, Afonso S. Bandeira, Moses Charikar, Ravishankar Krishnaswamy, Soledad Villar, Rachel Ward

In this paper, we initiate the study of exact recovery conditions for convex relaxations of point cloud clustering problems. We focus on two of the most common optimization problems for unsupervised clustering: k-means and k-medians clustering. Motivations for focusing on convex relaxations are that (a) they come with a certificate of optimality, and (b) they are generic tools not tailored to the specific recovery-guarantee conditions. More precisely, consider the distributional setting where there are k clusters and data from each cluster consists of n points sampled from a symmetric distribution within a ball of unit radius in dimension m. We ask: what is the minimal separation distance between cluster centers needed for various convex relaxations to exactly recover these k clusters as their optimal integral solution? For the k-median linear programming relaxation we show a tight bound: exact recovery is obtained given cluster separation Δ > 2 + ϵ, for any ϵ > 0. Under the same distributional model, the k-means LP relaxation fails to recover such clusters at separation as large as 2 + √2. Yet, if we enforce PSD constraints on the k-means LP, we get exact cluster recovery at separation as low as Δ > 2 + √(2k/m) + ϵ. In contrast, common heuristics such as Lloyd's algorithm (a.k.a. the k-means algorithm) can fail to recover clusters in this setting, even with just three clusters and arbitrarily large cluster separation distance. To complement the theoretical analysis, we provide an experimental study of the recovery guarantees for these various methods. Our work provides new insights into the power of these relaxations and we believe that this line of research will lead to a deeper understanding of such methods in the future.
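To make the k-median LP relaxation discussed in the abstract concrete, here is a small numerical sketch in Python. It builds the standard k-median LP (assignment variables z[p,q], medoid-opening variables y[p]) with `scipy.optimize.linprog` on a hypothetical two-cluster instance. The instance parameters (n = 5 points per cluster, separation Δ = 6, points in unit-radius neighborhoods of the centers) are my own illustrative choices, comfortably above the paper's Δ > 2 + ϵ threshold, so the LP optimum should be integral and recover the planted clusters; this is not the authors' experimental code.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical instance: two clusters of n points each, drawn around centers
# that are sep apart (well above the paper's Delta > 2 + eps threshold).
n, k, sep = 5, 2, 6.0
centers = np.array([[0.0, 0.0], [sep, 0.0]])
pts = np.vstack([c + rng.uniform(-1, 1, size=(n, 2)) * 0.7 for c in centers])
N = len(pts)  # 2n points total

# Pairwise distances d[p, q]
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)

# Variables: z[p, q] at flat index p*N + q (fraction of point q served by
# medoid p), followed by y[p] (medoid p is open). Minimize total service cost.
c_vec = np.concatenate([d.ravel(), np.zeros(N)])

# Equalities: each point fully assigned; exactly k medoids open.
A_eq = np.zeros((N + 1, N * N + N))
for q in range(N):
    A_eq[q, q:N * N:N] = 1.0     # sum_p z[p, q] = 1
A_eq[N, N * N:] = 1.0            # sum_p y[p] = k
b_eq = np.concatenate([np.ones(N), [float(k)]])

# Inequalities: z[p, q] <= y[p]  (a point can only use an open medoid)
A_ub = np.zeros((N * N, N * N + N))
for p in range(N):
    for q in range(N):
        row = p * N + q
        A_ub[row, row] = 1.0
        A_ub[row, N * N + p] = -1.0
b_ub = np.zeros(N * N)

res = linprog(c_vec, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, 1), method="highs")
z = res.x[:N * N].reshape(N, N)

# If the relaxation recovers the planted clustering, no assignment mass
# crosses between the two clusters (first n points vs last n points).
cross = z[:n, n:].sum() + z[n:, :n].sum()
print(f"cross-cluster assignment mass: {cross:.2e}")
```

At this separation the LP solution is integral, which is exactly the "no need to round" phenomenon the title refers to; shrinking `sep` toward the threshold is where the phase transition the authors visualize kicks in.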

__unknown D and X__, solve for XX^T = I and X_i = 0 or 1
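The factorization shorthand above can be read as: cluster assignments form a 0/1 indicator matrix X whose rows, once normalized by cluster size, are orthonormal. A minimal numpy sketch of that constraint (the label vector here is a made-up example, not from the paper):

```python
import numpy as np

# Hypothetical assignment of 5 points to k = 2 clusters
labels = np.array([0, 0, 0, 1, 1])
k, N = 2, len(labels)

# 0/1 indicator matrix: X[c, i] = 1 iff point i belongs to cluster c
X = np.zeros((k, N))
X[labels, np.arange(N)] = 1.0

# Normalizing each row by sqrt(cluster size) makes the rows orthonormal,
# i.e. X X^T = I, which is the constraint quoted above.
Xn = X / np.sqrt(X.sum(axis=1, keepdims=True))
print(np.allclose(Xn @ Xn.T, np.eye(k)))  # True
```

The combinatorial hardness lives entirely in the 0/1 constraint on X; the convex relaxations in the paper replace it with tractable linear or PSD constraints.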

Relevant previous discussion on a related subject:

**Join the CompressiveSensing subreddit or the Google+ Community and post there!**

Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
