Gurshaant Malik

ML Scientist 3 @ Applied Brain Research || PhD @ University of Waterloo

news

Dec 8, 2021 Happy to announce that our “Hoplite-ML” work has been accepted to TRETS, one of the most prestigious venues for research on reconfigurable computing! We reduce pessimism in worst-case routing latency analysis for workloads on timing-predictable interconnects by single-digit factors. We learn interconnect parameters through a novel evolutionary algorithm based on Maximum Likelihood Estimation (MLE) and Covariance Matrix Adaptation (CMA-ES). We also propose nested learning, which learns switch configurations and regulation rates in tandem. Compared to standalone switch learning, this symbiotic nested learning achieves ≈1.5× lower cost-constrained latency, ≈3.1× faster individual rates, and ≈1.4× faster mean rates. We also evaluate improvements to vanilla NoCs’ routing using only standalone rate learning (no switch learning), achieving ≈1.6× lower latency across synthetic and real-world benchmarks.
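The core loop of evolutionary parameter learning can be sketched as follows. This is a minimal illustration only, not the paper's actual implementation: it uses a simplified (μ, λ)-style evolution strategy with a scalar step size (full CMA-ES additionally adapts a covariance matrix), and `latency_cost` is a hypothetical stand-in for the real NoC latency objective.

```python
import random

random.seed(0)

# Hypothetical stand-in for the NoC objective; the real objective in the
# paper is worst-case routing latency under cost constraints.
def latency_cost(params):
    # toy quadratic bowl with its minimum at params = [0.3, 0.7]
    return (params[0] - 0.3) ** 2 + (params[1] - 0.7) ** 2

def evolve(cost, dim=2, pop=20, elite=5, sigma=0.5, generations=50):
    """Simplified evolution strategy: sample candidates around the mean,
    keep the elites, and refit the mean. CMA-ES would additionally adapt
    a full covariance matrix instead of the crude step-size decay here."""
    mean = [0.0] * dim
    for _ in range(generations):
        population = [
            [m + sigma * random.gauss(0, 1) for m in mean]
            for _ in range(pop)
        ]
        population.sort(key=cost)          # rank candidates by objective
        elites = population[:elite]        # keep the best few
        mean = [sum(e[i] for e in elites) / elite for i in range(dim)]
        sigma *= 0.95                      # shrink the search radius
    return mean

best = evolve(latency_cost)
```

In the nested-learning setting described above, one such loop would sit inside another: the outer loop evolves switch configurations while the inner loop evolves regulation rates for each candidate configuration.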
Oct 5, 2021 Happy to announce our latest breakthrough in Language Modelling using Legendre Memory Units (LMUs)! Our new architecture attains the same accuracy as transformers with 10x fewer tokens. For the same amount of training, our model improves the loss over transformers about as much as transformers improve over LSTMs. Additionally, adding global self-attention complements our architecture and the augmented model improves performance even further.
Mar 11, 2021 Happy to announce that our paper on state-of-the-art neural networks for Keyword Spotting applications has been accepted at tinyML 2021! In this work we use hardware-aware training (HAT) to build new KWS neural networks based on the Legendre Memory Unit (LMU) that achieve state-of-the-art (SotA) accuracy and low parameter counts. Furthermore, we outperform other general-purpose and specialized backends for KWS by 16-24x! Congratulations to my colleagues at ABR who were integral to this work. Paper Link!
Jan 1, 2021 Happy new year! With the new year, I have been promoted to Machine Learning Scientist 3 at Applied Brain Research. This company has been a lot of fun to work at, and I am truly proud of the next generation of Machine Learning products that we have been building.
Sep 9, 2020 Happy to announce our breakthrough results on Keyword Spotting using recurrent connections based on Legendre Memory Units! See what I was up to for the past year or so by reading the paper here! Check out the press release here.
Aug 13, 2020 Glad to share that our paper, titled “Learn the Switches: Evolving FPGA NoCs with Stall-Free and Backpressure based Routers”, has been nominated for the “Stamatis Vassiliadis Memorial” Best Paper Award at FPL 2020. Check out our paper here. Update: Congrats to “LogicNets: Co-Designed Neural Networks”, a work by my talented friends and colleagues at Xilinx Ireland, Yaman and Michaela, for winning the award. Well deserved!
May 19, 2020 Happy to announce that our paper on dynamically learning NoC switches, titled “Learn the Switches: Evolving FPGA NoCs with Stall-Free and Backpressure based Routers”, has been accepted at FPL 2020. A big thanks and congratulations to my co-authors: Ian Elmor Land, Prof. Rodolfo Pellizzoni and Prof. Nachiket Kapre.
Mar 20, 2020 Glad to announce that our work on distributed Machine Learning for training DNNs using NeuroEvolution, titled “DarwiNN: Efficient Distributed Neuroevolution under Communication Constraints”, has been accepted at the Genetic and Evolutionary Computation Conference (GECCO) 2020. A big thanks and congratulations to my co-authors: Lucian Petrica and Michaela Blott (both at Xilinx Ireland R&D).

selected publications