Amit Sharma
Principal Researcher | Microsoft Research India

I’m a machine learning researcher working on improving the reasoning of AI systems.
My work combines two seemingly incompatible ideas: the messy but generalizable capabilities of language models and the principled but rigid capabilities of causal models (or formal reasoning models). Early in 2023, I saw the potential of large language models (LLMs) for inferring causal relationships, a key part of scientific discovery. This has led to LLM-based algorithms that achieve up to 96% accuracy in inferring cause and effect across scientific fields, including medicine (COVID-19), climate science (Arctic sea ice coverage), and engineering. I am now extending this work to build a causal AI assistant for science (see PyWhy-LLM).
At the other end, I work on how causality can help improve the reliability of AI models. This has led to open-source tools such as DoWhy for causal reasoning and DiCE for counterfactual explanations, both widely used around the world. These days, I'm most excited about Axiomatic Training, a framework for building reasoning verifiers that can correct a language model's output in real time. Early results on causal reasoning tasks show that even a small 8-billion-parameter model can achieve nearly double the accuracy of frontier LLMs.
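As a rough flavor of the kind of analysis DoWhy supports, here is a minimal sketch of its model-identify-estimate-refute workflow on one of the library's own synthetic datasets; the specific estimator and refuter chosen here are illustrative, not a recommendation from this page.

```python
# Minimal DoWhy sketch: estimate a treatment effect on a synthetic dataset.
# Assumes `pip install dowhy`; column names come from DoWhy's dataset helper.
import dowhy.datasets
from dowhy import CausalModel

# Generate a toy dataset with a known causal structure and true effect beta=10.
data = dowhy.datasets.linear_dataset(
    beta=10,
    num_common_causes=5,
    num_samples=5000,
    treatment_is_binary=True,
)

# 1. Model: encode causal assumptions as a graph.
model = CausalModel(
    data=data["df"],
    treatment=data["treatment_name"],
    outcome=data["outcome_name"],
    graph=data["gml_graph"],
)

# 2. Identify: derive an estimand (e.g., a backdoor adjustment) from the graph.
estimand = model.identify_effect(proceed_when_unidentifiable=True)

# 3. Estimate: compute the effect with a chosen statistical estimator.
estimate = model.estimate_effect(
    estimand, method_name="backdoor.propensity_score_matching"
)
print(estimate.value)  # should be close to the true effect of 10

# 4. Refute: stress-test the estimate, e.g., with a placebo treatment.
refutation = model.refute_estimate(
    estimand, estimate, method_name="placebo_treatment_refuter"
)
print(refutation)
```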
I’m also passionate about designing technology interventions that can have a positive societal impact (see, e.g., the MindNotes app). If you are interested in working with me at MSR India, drop me an email. We hire interns throughout the year, and postdoctoral positions are also available. Additionally, if you are an undergraduate or a master's student, our lab runs an excellent pre-doctoral Research Fellows program.
-
[2015] Ph.D. in Computer Science, Cornell University
[2010] B.Tech. in Computer Science, IIT Kharagpur
-
Causal inference | Causality and machine learning
AI reasoning | Accelerating scientific discovery
news
- Jul 25, 2022 | Talk on the necessity of causal inference for out-of-distribution generalization in prediction and decision-making at the Technion, Israel. [Slides]
- May 31, 2022 | DoWhy library for causal inference evolves into an independent py-why org to foster wider collaboration. Contributions welcome! [Blog] [Github] [Arxiv]
- Dec 02, 2021 | Talk on Causal Inference for Machine Learning: Generalization, Explanation and Fairness, at the UK Office for National Statistics. [Slides]
- Dec 03, 2020 | Emre and I gave a Microsoft Research webinar on causal inference and its implications for machine learning. [Video]
- Aug 13, 2020 | Featured on the Humans of AI podcast. [Apple Podcasts] [Spotify]
- Jul 23, 2020 | Session on causal machine learning with Elias Bareinboim, Susan Athey, and Cheng Zhang. [Video]
- May 30, 2019 | DiCE: Using counterfactual examples to explain machine learning. [Paper] [Python Library] [Blog]
- Aug 19, 2018 | Emre and I gave a tutorial on causal inference at KDD. [Slides]
selected publications
- [TMLR 2024] Causal reasoning and large language models: Opening a new frontier for causality. Transactions on Machine Learning Research, Aug 2024.
- [FAccT 2020] Explaining machine learning classifiers through diverse counterfactual explanations. Proceedings of the 2020 ACM Conference on Fairness, Accountability and Transparency (FAccT), Aug 2020.
- [Science] Prediction and explanation in social systems. Science, Aug 2017.