Efficient and Elastic Large Models

Generative LLMs are transforming multiple industries and have proven robust across a multitude of use cases and settings. One of the key impediments to their widespread deployment is the cost of serving and their deployability across multiple devices/settings. In this project, we are developing multiple techniques in this domain, including Matformers, Treeformer, HIRE, and Tandem.

Collaborative Reinforcement Learning

The goal of this project is to enable collaborative exploration and exploitation among multiple users, enabling low-sample-complexity recommendation systems or multi-user RL.
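As an illustrative sketch (not the project's actual method), the sample-complexity benefit of collaboration can be seen in a multi-user bandit where users pool their reward statistics: every pull by any user improves the shared estimates, so each individual user needs far fewer interactions. All names and parameters below are hypothetical.

```python
import random

def collaborative_bandit(true_means, n_users=10, rounds=200, eps=0.1, seed=0):
    """Epsilon-greedy bandit where n_users share pooled reward estimates."""
    rng = random.Random(seed)
    k = len(true_means)
    counts = [0] * k   # pooled pull counts across all users
    est = [0.0] * k    # pooled mean-reward estimates
    for _ in range(rounds):
        for _user in range(n_users):
            if rng.random() < eps or all(c == 0 for c in counts):
                arm = rng.randrange(k)                      # explore
            else:
                arm = max(range(k), key=lambda a: est[a])   # exploit shared stats
            reward = true_means[arm] + rng.gauss(0, 0.1)
            counts[arm] += 1
            est[arm] += (reward - est[arm]) / counts[arm]   # running mean update
    return est

# Usage: ten users jointly identify the best of three items.
est = collaborative_bandit([0.2, 0.5, 0.8])
best = max(range(len(est)), key=lambda a: est[a])
```

Because the estimates are shared, the exploration cost is amortized across all users rather than paid by each one separately.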

End-to-end Adaptive Retrieval

Provable Non-convex Optimization for Machine Learning

Several ML problems can be posed as non-convex optimization problems, which are in general hard to solve. The goal of this project is to exploit certain problem structures to solve these “hard” non-convex optimization problems efficiently. Click here for more information and links to talks, our publications, etc.
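One classic example of such structure (shown here only as a generic sketch, not as this project's specific algorithm) is low-rank matrix completion: although the problem is non-convex, alternating minimization, which solves a small least-squares problem for one factor while holding the other fixed, recovers the matrix under standard low-rank and random-sampling assumptions.

```python
import numpy as np

def alt_min_complete(M, mask, rank=2, iters=50, seed=0):
    """Complete a matrix observed only where mask is True, via alternating
    least squares on a rank-`rank` factorization M ~ U @ V.T."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    for _ in range(iters):
        # Fix V: each row of U solves a small least-squares problem.
        for i in range(m):
            obs = mask[i]
            if obs.any():
                U[i], *_ = np.linalg.lstsq(V[obs], M[i, obs], rcond=None)
        # Fix U: symmetric update for each row of V.
        for j in range(n):
            obs = mask[:, j]
            if obs.any():
                V[j], *_ = np.linalg.lstsq(U[obs], M[obs, j], rcond=None)
    return U @ V.T

# Usage: recover a rank-2 matrix from ~60% of its entries.
rng = np.random.default_rng(1)
M = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
mask = rng.random(M.shape) < 0.6
err = np.linalg.norm(alt_min_complete(M, mask) - M) / np.linalg.norm(M)
```

Each alternating step is a convex subproblem, which is what makes the overall non-convex problem tractable under the right structure.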

Intelligent Tiny Devices

Can we devise tiny ML models that can fit in 2KB of RAM and enable tiny devices to be “intelligent”? Click here for more information.

Robust ML Models

Traditionally, ML models have been designed assuming “benign” i.i.d. data. However, in practice, one often encounters datasets with several outliers (possibly malicious/adversarial). The goal of this project is to develop ML methods that are robust to such outliers while still efficiently learning nearly optimal models.
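A minimal sketch of one such robustness idea, offered as a generic illustration rather than the project's method: iteratively fit least squares only on the points with the smallest residuals (a trimmed loss), so a handful of grossly corrupted labels cannot drag the model. The function name and parameters are illustrative.

```python
import numpy as np

def trimmed_lstsq(X, y, keep_frac=0.8, iters=10):
    """Robust linear regression: refit on the keep_frac fraction of points
    with the smallest residuals, discarding likely outliers each round."""
    n = len(y)
    keep = np.arange(n)   # start by using all points
    w = None
    for _ in range(iters):
        w, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        resid = np.abs(X @ w - y)
        keep = np.argsort(resid)[: int(keep_frac * n)]   # drop largest residuals
    return w

# Usage: linear data where 10% of the labels are grossly corrupted.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
y[:20] += 50.0   # adversarial outliers
w_hat = trimmed_lstsq(X, y)
```

Ordinary least squares would be badly biased by the corrupted 10%; the trimmed refits quickly isolate the outliers because their residuals dominate.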