Siddharth Swaroop
Machine Learning Postdoctoral Fellow at Harvard University.
I am a postdoctoral fellow in the Data to Actionable Knowledge group, working with Prof Finale Doshi-Velez.
I obtained my PhD in the Computational and Biological Learning lab at the University of Cambridge, supervised by Prof Richard E Turner and advised by Prof Carl Rasmussen. During my PhD, I designed algorithms for large-scale machine learning systems that learn sequentially without revisiting past data, preserve the privacy of user data, and are uncertainty-aware. My PhD was funded by an EPSRC DTP award and a Microsoft Research EMEA PhD Award. I also held an Honorary Vice-Chancellor’s Award from the Cambridge Trust.
Quick Links: Publications, Google Scholar, CV.
Research summary
My goal is to enable human control over AI systems. I work towards this across the full pipeline: from the AI systems themselves, to effective human-AI teaming, to the impact of deployed AI systems on society. This involves (1) understanding and adapting the knowledge in machine learning (ML) models; (2) improving the performance of human-AI teams by personalising AI systems to different humans; and (3) translating AI policies into technical requirements for ML systems.
News
Jun 2024 | Invited speaker at the 2nd Bayes-Duality Workshop in Tokyo, Japan. |
May 2024 | Paper published in International Conference on Intelligent User Interfaces (IUI 2024), Accuracy-Time Tradeoffs in AI-Assisted Decision Making under Time Pressure. |
May 2024 | Paper published in International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2024), Reinforcement Learning Interventions on Boundedly Rational Human Agents in Frictionful Tasks. |
Apr 2024 | Gave talks at the SPIRAL Seminar Series (Northeastern University, USA) and the Tufts CS Colloquium (Tufts University, USA) on “Quick and accurate knowledge adaptation in machine learning”. |
Feb 2024 | Book chapter published in ‘Towards Human Brain Inspired Lifelong Learning’, Lifelong Learning for Deep Neural Networks with Bayesian Principles. |
Nov 2023 | Paper published in Transactions on Machine Learning Research, Improving Continual Learning by Accurate Gradient Reconstructions of the Past. |
Jul 2023 | Paper at the Advances in Approximate Bayesian Inference Symposium 2023, Improving Continual Learning by Accurate Gradient Reconstructions of the Past. Paper at the Duality Principles for Modern ML Workshop at ICML 2023, Memory Maps to Understand Models. Two papers at the AI&HCI Workshop at ICML 2023, Adaptive interventions for both accuracy and time in AI-assisted human decision making, and Discovering User Types: Mapping User Traits by Task-Specific Behaviors in Reinforcement Learning. Paper at the Challenges of Deploying Generative AI Workshop at ICML 2023, Soft prompting might be a bug, not a feature. |
Jun 2023 | Invited speaker at the Bayes-Duality Workshop in Tokyo, Japan. |
Apr 2023 | Paper published in Transactions on Machine Learning Research, Differentially private partitioned variational inference. |
Dec 2022 | Organised the Continual Lifelong Learning Workshop at the Asian Conference on Machine Learning, 2022. As part of the conference, I was also a mentor in the Mentorship Program, chaired an invited-talk session, and chaired a paper-talks session. |
Jul 2022 | I have started as a postdoctoral fellow at Harvard University, with Prof Finale Doshi-Velez in the Data to Actionable Knowledge group. |
Jun 2022 | Invited talk at the Workshop on Continual Learning in Computer Vision at CVPR 2022 on “Knowledge-adaptation priors for continual learning”. |
Mar 2022 | I successfully defended my PhD thesis, “Probabilistic Continual Learning using Neural Networks”, available online here. |
Dec 2021 | Paper at NeurIPS 2021, Knowledge-Adaptation Priors. Second paper at NeurIPS 2021, Collapsed Variational Bounds for Bayesian Neural Networks. Gave part of an invited talk at the Bayesian Deep Learning Workshop (at NeurIPS 2021), jointly with Emtiyaz Khan and Dharmesh Tailor, Adaptive and Robust Learning with Bayes. |
Nov 2021 | I have written two blog posts on natural-gradient variational inference (NGVI). The first part motivates and derives equations for NGVI on neural networks (a schematic form of the updates is sketched after this news list). The second part scales to large datasets and architectures such as ImageNet and ResNets, following Osawa et al. (2019). |
Jul 2021 | Invited oral at the Theory and Foundations of Continual Learning Workshop (ICML 2021), “Continual Deep Learning with Bayesian Principles”. Two invited talks at Microsoft Research Cambridge, UK, at the Machine learning reading group and the Healthcare intelligence reading group, on “Continual Deep Learning with Bayesian Principles”. |
Jun 2021 | Invited talks at University of Toronto, Canada; DtAK lab, Harvard University, USA; and OATML, University of Oxford, UK, on Continual Deep Learning by Functional Regularisation of Memorable Past (Pan et al., 2020). |
Jan 2021 | Paper at ICLR 2021, Generalized Variational Continual Learning. |
Dec 2020 | Oral presentation at NeurIPS 2020, Continual Deep Learning by Functional Regularisation of Memorable Past (top 1% of submissions, 105/10K). Paper at NeurIPS 2020, Efficient Low Rank Gaussian Variational Inference for Neural Networks. |
Jul 2020 | Oral at the LifeLongML Workshop (ICML 2020), Combining Variational Continual Learning with FiLM Layers. Second oral at the LifeLongML Workshop (ICML 2020), Continual Deep Learning by Functional Regularisation of Memorable Past. |
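As a pointer for the Nov 2021 item above: the NGVI updates derived in the blog posts take, schematically, the following form for a Gaussian variational posterior. This is a simplified sketch in the style of Khan et al. (2018) and Osawa et al. (2019); the symbols $\alpha$, $\beta$, $\delta$ are my shorthand here (step sizes and prior precision), not necessarily the notation used in the posts.

$$
S_{t+1} = (1-\beta)\,S_t + \beta\left(\mathbb{E}_{q_t}\!\left[\nabla_w^2 \ell(w)\right] + \delta I\right),
\qquad
\mu_{t+1} = \mu_t - \alpha\,S_{t+1}^{-1}\left(\mathbb{E}_{q_t}\!\left[\nabla_w \ell(w)\right] + \delta\,\mu_t\right),
$$

where $q_t(w) = \mathcal{N}(w \mid \mu_t, S_t^{-1})$ is the variational posterior over network weights $w$, $\ell$ is the (dataset-scaled) mini-batch negative log-likelihood, and $\delta$ is the precision of a zero-mean Gaussian prior. In practice the expectations are estimated with Monte Carlo samples of $w$, and the Hessian is replaced by a diagonal Gauss-Newton-style approximation so that the updates scale to large networks.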