Acharya Receives Award for Paper at International Conference on Machine Learning
Cornell ECE Assistant Professor Jayadev Acharya recently received an honorable mention award at the International Conference on Machine Learning (ICML 2017), held in Sydney, Australia, for the paper "A Unified Maximum Likelihood Approach for Estimating Symmetric Properties of Discrete Distributions." His co-authors are Hirakendu Das (Yahoo!), his Ph.D. advisor, Professor Alon Orlitsky (University of California, San Diego), and Ananda Theertha Suresh (Google Research).
ICML is the leading academic conference in machine learning, and the paper was one of only four honored out of nearly 1,700 submissions. Machine learning is the study of algorithms that learn from data to perform tasks such as prediction (weather), recommendation (Netflix), and classification (spam or not spam).
In many modern applications, obtaining large amounts of data is not feasible. Scientists therefore seek to characterize the fundamental limits of how much can be learned from a given amount of data, and in particular whether problems can be solved without ever observing all possibilities. For example, in genetics, researchers want to predict how many rare genetic variants exist across a population without observing most of them. In paleontology, scientists estimate the number of dinosaur species in order to measure ecological diversity. In information theory, entropy and mutual information are estimated to characterize the fundamental limits of data compression and data communication. For these problems and many others, researchers have devised estimators suited to the task, but each problem has required a different method.
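To make the difficulty concrete, here is a hypothetical sketch (not from the paper) of the naive "plug-in" estimators for two of the properties mentioned above, support size and Shannon entropy. With few samples from a distribution over many symbols, both estimates fall well short of the truth, because most symbols are never observed:

```python
import collections
import math
import random

def empirical_support_size(sample):
    """Plug-in estimate of support size: count the distinct symbols observed."""
    return len(set(sample))

def empirical_entropy(sample):
    """Plug-in estimate of Shannon entropy (in bits) from empirical frequencies."""
    counts = collections.Counter(sample)
    n = len(sample)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Draw a small sample from a uniform distribution over many symbols.
random.seed(0)
k = 1000                                            # true support size
sample = [random.randrange(k) for _ in range(200)]  # only 200 samples

print(empirical_support_size(sample))  # far below the true value of 1000
print(empirical_entropy(sample))       # below the true entropy log2(1000) ~ 9.97
```

With 200 samples the plug-in support estimate cannot exceed 200, so it is guaranteed to miss at least 800 of the 1,000 symbols; the entropy estimate is similarly biased downward. Closing this kind of gap with as little data as possible is exactly the question the award-winning paper addresses.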
In their paper, Acharya and his collaborators show that a single estimation framework can handle this wide range of problems. Because the framework has no tuning parameters, one method works for all of them. The approach also has strong statistical underpinnings, motivated by the principle of maximum likelihood. Maximum likelihood is a staple statistical method, but it often suffers from overfitting. The authors analyze a maximum likelihood variant that avoids overfitting. This approach was first proposed by Orlitsky, Santhanam, Viswanathan, and Zhang, but it was unknown whether it actually requires less data than regular maximum likelihood. Acharya and his collaborators prove that the method is sample-optimal, requiring the least amount of data possible, and that the unified technique works for a number of problems.
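The properties in question are "symmetric": they depend only on the multiset of symbol frequencies in the sample, not on which symbols carry which counts. This multiset is called the sample's profile, and it is the object the paper's likelihood-based framework works with. A minimal illustrative sketch (the function name is ours, not the paper's):

```python
import collections

def profile(sample):
    """Return the profile of a sample: its symbol multiplicities, sorted
    in decreasing order, with symbol identities discarded."""
    counts = collections.Counter(sample)
    return tuple(sorted(counts.values(), reverse=True))

# Two samples over entirely different alphabets share the same profile,
# so any symmetric property (entropy, support size, ...) treats them alike.
print(profile(["a", "a", "b", "c"]))  # (2, 1, 1)
print(profile([7, 7, 3, 9]))          # (2, 1, 1)
```

Roughly speaking, the analyzed method chooses the distribution that maximizes the probability of observing the sample's profile, rather than the sample itself, which is what lets it sidestep the overfitting of ordinary maximum likelihood.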
Jayadev Acharya joined the School of Electrical and Computer Engineering at Cornell University in 2016 as an Assistant Professor after spending two years as a postdoctoral researcher at MIT. He obtained a Ph.D. from the University of California, San Diego, where he worked on compression and statistical estimation, with particular emphasis on problems over large domains. His research interests are in information theory, algorithmic statistics, and machine learning. A central focus of his work is understanding and achieving the fundamental trade-offs between the resources (e.g., data, memory, and time) that go into statistical learning and machine-learning problems.