I work with Prateek and Harsha in the Machine Learning and Optimisation (MLO) group and the Intelligent Devices Expedition group, where I do research on bringing state-of-the-art machine learning to severely resource-constrained devices. I am slated to complete my research fellowship at Microsoft Research in spring 2019.
My current research follows a bottom-up approach: I go back to the drawing board to design machine learning algorithms that achieve state-of-the-art performance while respecting the resource constraints of the target device. For instance:
- With my EMI-RNN work (accepted at NIPS '18), we were able to speed up RNN inference by up to 72x while also improving accuracy. This helped bring RNN-based real-time keyword spotting to devices like the Raspberry Pi 0 and MXChip. A demonstration will be a part of the MLPCD workshop at NIPS '18.
- The ProtoNN-based (prototypes + kNN) machine learning pipeline I developed for GesturePod (in submission, CHI '19) performs robust and accurate real-time gesture recognition with 32 kB of RAM and a 48 MHz processor. GesturePod has been widely covered by the media and will also be Microsoft's demonstration at NIPS '18.
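The core idea behind the EMI-RNN speed-up in the first bullet is that most inputs can be classified long before the full time series has been consumed. A minimal early-exit sketch in that spirit (the cell, readout, and threshold here are illustrative toys, not the actual EMI-RNN implementation):

```python
import numpy as np

class ToyRNNCell:
    """Minimal stand-in for an RNN cell (illustrative only)."""
    def __init__(self, hidden_size):
        self.hidden_size = hidden_size

    def step(self, x_t, h):
        # Toy update rule: accumulate evidence over timesteps.
        return np.tanh(x_t + h)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_predict(x_seq, cell, readout, threshold=0.9):
    """Step the RNN over the input, but stop as soon as the classifier
    is confident. Easy inputs then pay for only a few timesteps, which
    is where the inference speed-up comes from."""
    h = np.zeros(cell.hidden_size)
    steps = 0
    for x_t in x_seq:
        h = cell.step(x_t, h)
        steps += 1
        probs = softmax(readout @ h)
        if probs.max() >= threshold:
            break  # early exit: confident enough to stop
    return int(np.argmax(probs)), steps

# Usage: a strongly positive signal is classified after a single step
# instead of all ten.
cell = ToyRNNCell(hidden_size=1)
readout = np.array([[5.0], [-5.0]])          # 2-class linear readout
x_seq = [np.array([1.0])] * 10
label, steps = early_exit_predict(x_seq, cell, readout)
```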
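ProtoNN inference, as mentioned in the second bullet, is cheap enough for a 32 kB device because it reduces to a low-dimensional projection followed by kernel-weighted voting over a small set of learned prototypes. A rough sketch of that inference path (variable names and shapes are my assumptions for illustration, not the GesturePod code):

```python
import numpy as np

def protonn_predict(x, W, B, Z, gamma):
    """ProtoNN-style inference sketch.

    x:     (d,)        input feature vector
    W:     (d_low, d)  learned low-dimensional projection
    B:     (m, d_low)  m learned prototypes in the projected space
    Z:     (m, L)      learned label-score vector per prototype
    gamma: RBF kernel width
    """
    wx = W @ x                                              # project input
    sim = np.exp(-(gamma ** 2) * np.sum((B - wx) ** 2, axis=1))  # RBF similarity
    scores = sim @ Z                                        # weighted label vote
    return int(np.argmax(scores))

# Usage: with an identity projection and two prototypes, an input near
# the second prototype gets its label.
W = np.eye(2)
B = np.array([[0.0, 0.0], [5.0, 5.0]])
Z = np.array([[1.0, 0.0], [0.0, 1.0]])
pred = protonn_predict(np.array([5.1, 4.9]), W, B, Z, gamma=1.0)
```

Because only W, B, and Z need to be stored, the model footprint is a few small dense matrices, which is what makes this family of models fit in tens of kilobytes.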