Qing Qu receives CAREER award to explore the foundations of machine learning and data science

His research develops computational methods for learning succinct representations from high-dimensional data.

Prof. Qing Qu received an NSF CAREER Award to develop a principled and unified mathematical framework for learning succinct data representations in high-dimensional spaces via nonconvex optimization methods. The project is titled "From Shallow to Deep Representation Learning: Global Nonconvex Optimization Theories and Efficient Algorithms."

The five-year grant, administered by the NSF Division of Computer and Information Science and Engineering, will support his exploration of the foundations and frontiers of machine learning and data science. This project will not only enrich the mathematical theory of signal processing, optimization, and machine learning, but also have broad impacts on many other practical areas in engineering and science where representation learning methods have already made significant advances. 

"Today we are in an era of data revolution," says Qu. As engineering and the sciences become increasingly data and computation driven, the importance of seeking succinct data representations and developing efficient optimization methods has expanded to touch almost every stage of the data analysis pipeline, ranging from signal and data acquisition to modeling and prediction. There are fundamental reasons ("the curse of dimensionality") why learning in high-dimensional spaces is challenging. Fortunately, thanks to "the blessing of dimensionality," in practice the intrinsic dimension of high-dimensional data is often much lower than its ambient dimension.
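As a toy illustration of this point (a hypothetical sketch in Python, not drawn from the project itself), consider data that occupy a 1000-dimensional ambient space but are generated from only five intrinsic degrees of freedom; the singular value spectrum immediately reveals the low intrinsic dimension:

```python
import numpy as np

# Illustrative sketch (not from the project): data in a 1000-dimensional
# ambient space whose intrinsic dimension is only 5.
rng = np.random.default_rng(0)
ambient_dim, intrinsic_dim, n_samples = 1000, 5, 500

# Sample points from a random 5-dimensional subspace, plus small noise.
basis = rng.standard_normal((ambient_dim, intrinsic_dim))
coeffs = rng.standard_normal((intrinsic_dim, n_samples))
X = basis @ coeffs + 0.01 * rng.standard_normal((ambient_dim, n_samples))

# Only ~5 singular values rise significantly above the noise floor,
# exposing the low intrinsic dimension despite the huge ambient dimension.
singular_values = np.linalg.svd(X, compute_uv=False)
print(singular_values[:8] / singular_values[0])
```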

Over the past decades, low-dimensional signal modeling has driven developments in both theory and applications across a vast array of areas, from medical and scientific imaging to low-power sensors to the modeling and interpretation of bioinformatics datasets. However, as datasets grow and data collection becomes increasingly uncontrolled, the underlying low-dimensional models of the data are often highly nonlinear, and classical techniques break down completely in this setting. This raises tremendous challenges both in designing efficient optimization methods and in guaranteeing their correctness.

To meet these new challenges, Qu, together with his students, will develop efficient and guaranteed computational methods for learning low-complexity representations from high-dimensional data, leveraging tools from machine learning, numerical optimization, and high-dimensional geometry. More specifically, the project will demystify the fundamental nonconvex optimization problems that arise in learning low-dimensional representations, from extracting low-level features to learning high-level information, and will study the generalization and robustness of these learning procedures through an understanding of the learned low-dimensional representations.
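A classic instance of such a nonconvex problem is low-rank matrix factorization. The minimal Python sketch below (an illustration under simple assumptions, not the project's actual algorithms) recovers a rank-3 matrix by plain gradient descent on a nonconvex objective; factorized problems of this form are known to enjoy benign global geometry under suitable conditions, which is the kind of phenomenon the project aims to explain:

```python
import numpy as np

# Minimal illustrative sketch: learn a low-rank representation by
# gradient descent on the nonconvex loss f(U, V) = 0.5*||U V^T - Y||_F^2.
rng = np.random.default_rng(0)
m, n, r = 50, 40, 3
Y = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r target

U = 0.1 * rng.standard_normal((m, r))  # small random initialization
V = 0.1 * rng.standard_normal((n, r))
step = 0.2 / np.linalg.norm(Y, 2)      # conservative step size

for _ in range(2000):
    R = U @ V.T - Y                    # residual
    # simultaneous gradient step on both factors
    U, V = U - step * (R @ V), V - step * (R.T @ U)

print("relative error:", np.linalg.norm(U @ V.T - Y) / np.linalg.norm(Y))
```

Despite the nonconvexity, simple first-order methods from random initialization reliably find a global solution here; characterizing when and why this happens, for problems far beyond this toy example, is a central goal of the CAREER project.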

In related efforts, Qu has organized annual virtual workshops on Seeking Low-dimensionality in Deep Neural Networks (SLowDNN). He will teach a short course at the ICASSP'22 conference, titled "Low-Dimensional Models for High-Dimensional Data: From Linear to Nonlinear, Convex to Nonconvex, and Shallow to Deep." Qu is also developing a suite of new machine learning courses for ECE students at both the undergraduate and graduate levels, including EECS 453: Principles of Machine Learning and EECS 559: Optimization Methods for Signal & Image Processing and Machine Learning at the University of Michigan.

April 17, 2023:

Qing Qu receives Amazon Research Award

Qu's research project, in the area of machine learning algorithms and theory, is titled "Principles of deep representation learning via neural collapse." Awardees, who represent 54 universities in 14 countries, have access to Amazon public datasets, along with AWS AI/ML services and tools.