Researchers at the Pittsburgh Supercomputing Center and HP Labs achieve unprecedented speedup in a key machine-learning algorithm.

PITTSBURGH, May 23, 2011 — Computational scientists at the Pittsburgh Supercomputing Center (PSC) and HP Labs are achieving speedups of nearly 10 times with GPUs (graphics processing units) versus CPU-only code (and more than 1000 times versus an implementation in a high-level language) in k-means clustering, a critical operation for data analysis and machine learning.

A branch of artificial intelligence, machine learning enables computers to process and learn from vast amounts of empirical data through algorithms that can recognize complex patterns and make intelligent decisions based on them. For many machine-learning applications, a first step is identifying how data can be partitioned into related groups or “clustered.”

Ren Wu, principal investigator of the CUDA Research Center at HP Labs, developed advanced clustering algorithms that run on GPUs, which have advantages for many data-intensive applications. PSC scientific specialist Joel Welling recently applied Wu’s innovations to tackle a real-world machine-learning problem. Working together and using data from Google’s “Books N-gram” dataset, Wu and Welling clustered all five-word sequences (“5-grams”) drawn from the one thousand most common words occurring in all books published in 2005. With this project, representative of many research efforts in natural-language processing and culturomics, the researchers demonstrated an extremely high-performance, scalable GPU implementation of k-means clustering, one of the most widely used approaches to clustering.
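For readers unfamiliar with the algorithm, k-means alternates between two steps: assign each data point to its nearest centroid, then recompute each centroid as the mean of the points assigned to it. The sketch below shows only the assignment step as a straightforward CUDA kernel, one thread per point; it illustrates the textbook algorithm, not the researchers’ optimized code, and the names and row-major data layout are assumptions.

    // Illustrative k-means assignment step: one thread per data point.
    // Textbook version only; not the PSC/HP Labs implementation.
    __global__ void assign_clusters(const float *points,    // n x d, row-major
                                    const float *centroids, // k x d, row-major
                                    int *labels, int n, int d, int k)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        int best = 0;
        float bestDist = 3.4e38f; // ~FLT_MAX
        for (int c = 0; c < k; ++c) {
            float dist = 0.0f;    // squared Euclidean distance to centroid c
            for (int j = 0; j < d; ++j) {
                float diff = points[i * d + j] - centroids[c * d + j];
                dist += diff * diff;
            }
            if (dist < bestDist) { bestDist = dist; best = c; }
        }
        labels[i] = best;
    }

A launch such as assign_clusters<<<(n + 255) / 256, 256>>>(…) covers all n points. Note that every thread reads every centroid from global memory; that traffic is exactly what a tuned implementation works to reduce.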

Wu and Welling ran on the latest “Fermi” generation of NVIDIA GPUs. Using MPI between nodes (three nodes, with three GPUs and two CPUs per node), they observed a speedup of 9.8 times relative to an identical distributed k-means algorithm (written in C+MPI) running on all CPU cores in the cluster, and of thousands of times relative to the purely high-level-language implementation commonly used in machine-learning research. With their GPU implementation, the entire dataset, more than 15 million data points in 1000 dimensions, can be clustered in less than nine seconds. This breakthrough in execution speed will enable researchers to explore new ideas and develop more complex algorithms layered atop k-means clustering.
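The overall pattern of such a distributed k-means iteration is standard: each MPI rank computes, on its GPUs, partial per-cluster coordinate sums and point counts for its local shard of the data, and a single global reduction lets every rank form identical new centroids. A hypothetical host-side sketch of that reduction step, with all names assumed:

    /* Hypothetical host-side reduction for one distributed k-means
     * iteration. Each rank has already computed, on its GPUs, the
     * per-cluster coordinate sums and point counts for its local shard
     * of the data; MPI_Allreduce combines them so that every rank can
     * form the same new centroids. A pattern sketch, not the published
     * code. */
    #include <mpi.h>
    #include <stdlib.h>

    void update_centroids(float *local_sums,   /* k*d partial sums  */
                          int   *local_counts, /* k partial counts  */
                          float *centroids,    /* k*d, overwritten  */
                          int k, int d)
    {
        float *sums = malloc(sizeof(float) * k * d);
        int   *cnts = malloc(sizeof(int) * k);

        MPI_Allreduce(local_sums, sums, k * d, MPI_FLOAT, MPI_SUM,
                      MPI_COMM_WORLD);
        MPI_Allreduce(local_counts, cnts, k, MPI_INT, MPI_SUM,
                      MPI_COMM_WORLD);

        for (int c = 0; c < k; ++c)
            if (cnts[c] > 0)                   /* leave empty clusters as-is */
                for (int j = 0; j < d; ++j)
                    centroids[c * d + j] = sums[c * d + j] / cnts[c];

        free(sums);
        free(cnts);
    }

The data points themselves never leave their nodes; only the small sums-and-counts buffers cross the network, which is what lets the approach scale across nodes.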

“K-means is one of the most frequently used clustering methods in machine learning,” says William Cohen, professor of machine learning at Carnegie Mellon University. “It is often used as a subroutine in spectral clustering and other unsupervised or semi-supervised learning methods. Because some of these applications involve many clustering passes with different numbers of means or different randomized starting points, a greatly accelerated k-means clustering method will be useful in many machine-learning settings.” Cohen co-leads the Never-Ending Language Learning (NELL) and Read the Web projects (http://rtw.ml.cmu.edu/rtw/). The goal of NELL is to automate inferences based on continually “reading” natural-language text from the Web.
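The repeated use Cohen describes might look like the following hypothetical driver, which sweeps the number of clusters and tries several random starts, keeping the lowest-distortion result; run_kmeans() is an assumed stand-in for any accelerated implementation, not an actual API.

    #include <stdlib.h>
    #include <string.h>

    /* Assumed stand-in: clusters n d-dimensional points into k clusters
     * from a seeded random start, writes the centroids, and returns the
     * within-cluster distortion. */
    float run_kmeans(const float *pts, int n, int d, int k,
                     unsigned seed, float *centroids);

    void sweep(const float *pts, int n, int d, int max_k,
               float *best_centroids)
    {
        float *scratch = malloc(sizeof(float) * max_k * d);
        float best_cost = 1e30f;

        for (int k = 2; k <= max_k; k *= 2)     /* different numbers of means */
            for (unsigned s = 0; s < 10; ++s) { /* different random starts   */
                float cost = run_kmeans(pts, n, d, k, s, scratch);
                if (cost < best_cost) {
                    best_cost = cost;
                    memcpy(best_centroids, scratch, sizeof(float) * k * d);
                }
            }
        free(scratch);
    }

At nine seconds per pass over the full 15-million-point dataset, dozens of such passes finish in minutes rather than hours.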

Machine learning is just one example of the exploding field of data analytics, notes PSC scientist Nick Nystrom. Other data-analytic applications range from understanding the results of traditional high-performance computing (HPC) simulations of global climate, engineering, and protein dynamics to emerging fields that need HPC, such as genomics, social network analysis, and mining extensive datasets in the humanities.

“A substantial body of major application codes is already being developed specifically for NVIDIA GPUs,” notes Nystrom, PSC director of strategic applications. “Because NVIDIA GPUs are so widespread, those codes can run well on anything from a supercomputer to a netbook.” Nystrom has been instrumental in PSC’s work with advanced technologies for scientifically important, data-intensive problems. This application of NVIDIA GPUs to k-means clustering, he notes, is one example of how a pervasive technology that leverages broad markets can benefit important algorithms in science and data analysis.

This advanced clustering algorithm, notes Wu, also has the advantage of being easy to use, which facilitated rapid implementation with Welling. “I think that the CUDA programming model is a very nice framework,” says Wu, “well balanced between abstraction and expressive power, easy to learn but with enough control for advanced algorithm designers, and supported by hardware with exceptional performance compared to the alternatives. The key for any high-performance algorithm on modern multi- and many-core architectures is to minimize data movement and to optimize against the memory hierarchy. Keeping this in mind, CUDA is as easy as, if not easier than, any of the alternatives.”
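As a concrete illustration of what minimizing data movement can mean here, the assignment step sketched earlier can stage centroids through fast on-chip shared memory in tiles, so each block reads every centroid from slow global memory once rather than once per thread. Tile size, layout, and names below are assumptions for illustration, not Wu’s code; for vectors as high-dimensional as the 1000-dimension n-gram data, the tile would also need splitting along the dimension axis to fit in shared memory.

    // Illustrative shared-memory tiling of the assignment step.
    // Launch with TILE * d * sizeof(float) bytes of dynamic shared
    // memory, e.g. assign_tiled<<<(n + 255) / 256, 256,
    // TILE * d * sizeof(float)>>>(...). Assumes TILE * d floats fit
    // on-chip.
    #define TILE 32   // centroids staged per pass (assumed value)

    __global__ void assign_tiled(const float *points, const float *centroids,
                                 int *labels, int n, int d, int k)
    {
        extern __shared__ float cent[];          // TILE x d centroid tile
        int i = blockIdx.x * blockDim.x + threadIdx.x;

        int best = 0;
        float bestDist = 3.4e38f;                // ~FLT_MAX

        for (int c0 = 0; c0 < k; c0 += TILE) {
            int tile = min(TILE, k - c0);
            // All threads in the block cooperate to load this tile once.
            for (int t = threadIdx.x; t < tile * d; t += blockDim.x)
                cent[t] = centroids[c0 * d + t];
            __syncthreads();

            if (i < n)
                for (int c = 0; c < tile; ++c) {
                    float dist = 0.0f;
                    for (int j = 0; j < d; ++j) {
                        float diff = points[i * d + j] - cent[c * d + j];
                        dist += diff * diff;
                    }
                    if (dist < bestDist) { bestDist = dist; best = c0 + c; }
                }
            __syncthreads();                     // tile buffer reused next pass
        }
        if (i < n) labels[i] = best;
    }

The point reads themselves could be cut further, for example by processing several points per thread; deciding which of these tricks pays off is exactly the kind of memory-hierarchy tuning Wu describes.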

Nystrom concurs and sees an exciting future for software developers: “There’s a rich software ecosystem supporting NVIDIA’s GPUs, ranging from easy-to-use compiler directives to explicit memory management to powerful performance tools. Add to that the integration of general-purpose processors in this successful line of architectures, and the potential for developing transformative software architectures is extraordinary.”