Postdoc Sushrut Karmalkar: In his own words

Postdoc Sushrut Karmalkar has joined UW-Madison this fall as a research associate with Professor Ilias Diakonikolas.

In his own words:

My choice of UW-Madison was driven by its excellent research atmosphere and its very active theory and machine learning community working on robust statistics and related fields. Additionally, the Institute for the Foundations of Data Science at Madison was an important factor in my decision. I see the institute as an excellent environment in which to further explore my research interests. I was also named a 2021 Computing Innovation Fellow by the Computing Research Association (CRA) and the Computing Community Consortium (CCC).

Until this summer, I was a Ph.D. student in the Computer Science Department at the University of Texas at Austin, advised by Adam Klivans. Prior to my work at UT-Austin, I was a student at the Chennai Mathematical Institute in India.

My work falls broadly within the theory of machine learning. Some examples of research questions I have worked on include:

  1. Outlier-robust regression. Can we recover a high-dimensional linear function, or a univariate polynomial, when a constant fraction of the samples has been arbitrarily corrupted? What about when half or more than half of the samples are corrupted? We explore these problems and either show that such recovery is possible or demonstrate that this level of tolerance cannot be achieved, due to information-theoretic or computational constraints. (A sketch of this setup appears after the list.)
  2. Learning one-layer neural networks. Is it possible to learn a one-layer neural network even under the benign assumption that the samples are drawn from a Gaussian distribution? We show that even for the Gaussian distribution, agnostically learning a one-layer neural network is computationally hard. We also show that learning becomes possible if the objective is weakened. (See the sketch of the agnostic objective after the list.)
  3. Inverse problems under generative model assumptions. Can we sample-efficiently recover signals drawn from a “natural-looking” distribution from their linear measurements? We demonstrate an instance-optimal algorithm for this problem when the signals are drawn from the output of a Generative Adversarial Network (GAN). (A sketch of the recovery objective appears below.)
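
To make the first question concrete, here is one standard way the outlier-robust regression setup is often formalized (a sketch with illustrative notation, not taken verbatim from the papers). Clean samples follow a linear model,

\[ y_i = \langle w^*, x_i \rangle + \xi_i, \qquad i = 1, \dots, n, \]

after which an adversary may replace an arbitrary \(\varepsilon\)-fraction of the pairs \((x_i, y_i)\) with arbitrary values. For \(\varepsilon < 1/2\) the goal is to output an estimate \(\hat{w}\) with \(\|\hat{w} - w^*\|_2\) small; when \(\varepsilon \ge 1/2\), a single estimate cannot be identified in general, so one instead asks for a short list of candidates that contains a good estimate (list-decodable regression).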
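The agnostic-learning question in the second item is commonly stated as follows (again a sketch, with \(\mathcal{C}\) standing in for the relevant class of one-layer networks): given samples \((x, y)\) with \(x \sim \mathcal{N}(0, I_d)\) and labels \(y\) that need not come from any network, output a hypothesis \(h\) whose squared error competes with the best network in the class,

\[ \mathbb{E}\big[(h(x) - y)^2\big] \;\le\; \min_{f \in \mathcal{C}} \mathbb{E}\big[(f(x) - y)^2\big] + \epsilon. \]

The hardness result says that achieving this guarantee is computationally intractable even under the Gaussian assumption on \(x\), which is why weakening the objective matters.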
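Finally, the generative-model inverse problem in the third item is typically posed along these lines (a sketch; the symbols \(A\), \(G\), and the choice of norm are illustrative): given linear measurements \(y = A x + \text{noise}\) of a signal \(x\) assumed to lie near the range of a pretrained generator \(G\), recover the signal by (approximately) solving

\[ \hat{z} \in \arg\min_{z} \|A\,G(z) - y\|_2, \qquad \hat{x} = G(\hat{z}). \]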

During my postdoc, I hope to work with Prof. Ilias Diakonikolas to greatly expand the algorithmic toolkit for robust statistics.