Poster

Statistically Optimal $K$-means Clustering via Nonnegative Low-rank Semidefinite Programming

Yubo Zhuang · Xiaohui Chen · Yun Yang · Richard Zhang

Halle B #261
Fri 10 May 1:45 a.m. PDT — 3:45 a.m. PDT
Oral presentation: Oral 7A
Fri 10 May 1 a.m. PDT — 1:45 a.m. PDT

Abstract: $K$-means clustering is a widely used machine learning method for identifying patterns in large datasets. Recently, semidefinite programming (SDP) relaxations have been proposed for solving the $K$-means optimization problem, which enjoy strong statistical optimality guarantees. However, the prohibitive cost of implementing an SDP solver renders these guarantees inaccessible to practical datasets. In contrast, nonnegative matrix factorization (NMF) is a simple clustering algorithm widely used by machine learning practitioners, but it lacks a solid statistical underpinning and theoretical guarantees. In this paper, we consider an NMF-like algorithm that solves a nonnegative low-rank restriction of the SDP-relaxed $K$-means formulation using a nonconvex Burer--Monteiro factorization approach. The resulting algorithm is as simple and scalable as state-of-the-art NMF algorithms while also enjoying the same strong statistical optimality guarantees as the SDP. In our experiments, we observe that our algorithm achieves significantly smaller mis-clustering errors compared to the existing state-of-the-art while maintaining scalability.
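The abstract's key idea can be illustrated with a small sketch. The standard SDP relaxation of $K$-means maximizes $\langle A, Z\rangle$ over matrices $Z \succeq 0$, $Z \geq 0$ with $\mathrm{tr}(Z) = K$; a Burer--Monteiro factorization replaces $Z$ with $UU^\top$ for a nonnegative $n \times K$ factor $U$, which eliminates the expensive positive-semidefiniteness constraint and yields NMF-like updates. The code below is an illustrative symmetric-NMF-style sketch of this factorization idea, not the paper's algorithm: the Gaussian-kernel affinity, the median-heuristic bandwidth, the damped multiplicative update, and all names are assumptions made for the example.

```python
import numpy as np

def bm_kmeans_sketch(X, k, iters=300, seed=0):
    """Cluster the rows of X into k groups by factoring a nonnegative
    affinity matrix A as A ~= U U^T with U >= 0 of rank k, i.e. a
    Burer-Monteiro-style nonnegative low-rank factorization.
    Illustrative sketch only; all parameter choices are assumptions."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Nonnegative affinity: Gaussian kernel with a median-heuristic bandwidth.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.exp(-d2 / (2.0 * np.median(d2)))
    # Small positive random initialization of the n x k factor U.
    U = np.abs(rng.standard_normal((n, k))) * 1e-2
    for _ in range(iters):
        # Damped multiplicative update for min ||A - U U^T||_F^2, U >= 0
        # (the 1/2 damping keeps the symmetric-NMF iteration stable).
        U *= 0.5 + 0.5 * (A @ U) / (U @ (U.T @ U) + 1e-12)
    # Read off cluster labels from each row's dominant factor.
    return U.argmax(axis=1)
```

Each iteration costs only matrix-vector-scale products in the $n \times k$ factor, which is what makes the factorized approach as scalable as NMF, in contrast to an interior-point SDP solver operating on the full $n \times n$ matrix $Z$.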
