Generally, my research interests span the spectrum of information theory, statistical signal processing, and decision theory. In addition to applications to wireless networks and sensor fusion, I study high-dimensional machine learning with an emphasis on the performance limits of classification and learning algorithms.

Active Research Topics

Fundamental Limits of Machine Learning: The availability of massive datasets and the emergence of sophisticated algorithms have precipitated unprecedented success in machine learning, but in many cases the performance of these algorithms is poorly understood. My research characterizes the fundamental limits of machine learning problems: How far can we compress a high-dimensional signal and still classify it reliably? How many training samples do we need to learn a classifier? I use tools from information theory and statistics to derive rigorous answers to these questions. Collaborators on this project include Robert Calderbank at Duke, Miguel Rodrigues at University College London, and Ahmad Beirami at Harvard.

Algebraic Codes for Wireless Networks: Interference is the perennial challenge in designing wireless networks. Recent results in network information theory show that lattice codes allow users to "line up" interference and mitigate its effects. I apply these results to real-world networks with cooperative relays, designing practical, low-complexity lattice coding schemes suitable for cooperative communication. Collaborators on this project include Nuwan Ferdinand at the University of Toronto, Behnaam Aazhang at Rice, and Brian Kurkoski at JAIST.

Multi-scale Distributed Processing in Wireless Sensor Networks: In order to process their data, nodes in a wireless sensor network must propagate their measurements throughout the network, a task that can consume substantial time, bandwidth, and transmit energy. To meet this challenge, I study resource-efficient strategies for distributed signal processing in wireless sensor networks. I have developed a family of multi-scale sensor aggregation schemes that compute a linear functional of data from n sensors with time, bandwidth, and energy costs that scale no faster than log(n). These schemes can be adapted for subspace tracking and spectrum sensing. Collaborators on this project include Waheed Bajwa at Rutgers, Urbashi Mitra at USC, and Nicolo Michelusi at Purdue.
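To illustrate the flavor of the log(n) scaling (this is a minimal toy sketch, not the actual multi-scale scheme or its communication model): if nodes combine their weighted measurements pairwise in a tree, a linear functional of n measurements is assembled in roughly log2(n) communication rounds rather than n.

```python
# Toy sketch: tree-based in-network aggregation of a linear functional
# sum(w_i * x_i). Each round halves the number of active partial sums,
# so n measurements are combined in ceil(log2(n)) rounds. The function
# name and setup are illustrative, not from the published schemes.

def tree_aggregate(weights, measurements):
    """Compute sum(w_i * x_i) by pairwise combining; return (value, rounds)."""
    values = [w * x for w, x in zip(weights, measurements)]
    rounds = 0
    while len(values) > 1:
        # Each pair of neighboring nodes merges into one partial sum.
        values = [values[i] + (values[i + 1] if i + 1 < len(values) else 0)
                  for i in range(0, len(values), 2)]
        rounds += 1
    return values[0], rounds

# Eight sensors, uniform weights: 8 -> 4 -> 2 -> 1 takes 3 rounds.
total, rounds = tree_aggregate([1] * 8, [3, 1, 4, 1, 5, 9, 2, 6])
```

The point of the sketch is only the scaling: the number of rounds grows with log2(n), whereas relaying every raw measurement to a fusion center would grow linearly in n.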

Presentations

[P2] "Information-theoretic Performance Limits in Machine Learning," presented at the University of Illinois at Chicago, Michigan State University, Wayne State University, and the University of Toronto, 2016-2017.

[P1] "Rate-distortion Bounds on the ℓ1 Bayes Risk," presented at MIT, Sept. 2015.