Amit Sharma

Principal Researcher | Microsoft Research India


🚀 “Making causal AI a reality”

💻 DoWhy/PyWhy OSS ecosystem

📖 Why Amit Sharma created DoWhy

🎧 Causal Science | Humans of AI

I’m a machine learning researcher focused on improving the reasoning abilities of AI systems.

My work combines two seemingly incompatible ideas: the messy but generalizable capabilities of language models and the principled but rigid machinery of causal reasoning. Early in 2023, I saw the potential of large language models (LLMs) for inferring causal relationships, a key part of scientific discovery. This led to LLM-based algorithms that achieve up to 96% accuracy at inferring cause and effect across scientific fields, including medicine (Covid-19), climate science (Arctic sea ice coverage), and engineering. The work has since expanded into the PyWhy-LLM project, an open-source tool for building a causal AI assistant for science.
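To make the idea concrete, here is a minimal, illustrative sketch of pairwise causal-direction inference by prompting an LLM. It is not the PyWhy-LLM implementation: `call_llm` is a placeholder for whichever chat-completion client you use, and the prompt wording is only an example.

```python
# Sketch only: pairwise causal-direction inference by prompting an LLM.
# `call_llm` is a placeholder, not a real API; wire it to your own client.

PROMPT = (
    "Which cause-and-effect relationship is more likely?\n"
    "A. Changing {x} causes a change in {y}.\n"
    "B. Changing {y} causes a change in {x}.\n"
    "Answer with a single letter: A or B."
)

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text reply."""
    raise NotImplementedError("plug in your own LLM client here")

def infer_causal_direction(x: str, y: str) -> tuple[str, str]:
    """Return (cause, effect) as judged by the model."""
    reply = call_llm(PROMPT.format(x=x, y=y)).strip().upper()
    return (x, y) if reply.startswith("A") else (y, x)

# Hypothetical usage:
# infer_causal_direction("altitude of a city", "average temperature of the city")
```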

In the other direction, I work on how causality can improve the reliability of AI models. My research has led to open-source software such as DoWhy for causal reasoning and DiCE for counterfactual explanations, which are used by millions globally. These days, I’m most excited about Axiomatic Training, a framework for building reasoning verifiers that can correct a language model’s output in real time. This approach has shown that even a small 8B-parameter model can achieve nearly double the accuracy of frontier LLMs on causal reasoning tasks.
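For readers new to DoWhy, the snippet below sketches its standard model → identify → estimate → refute workflow on synthetic data. The variable names, the confounded data-generating process, and the linear-regression estimator are illustrative choices, not part of the text above.

```python
# Minimal DoWhy sketch on synthetic data: a confounder W drives both
# treatment T and outcome Y, and the true effect of T on Y is 2.0.
import numpy as np
import pandas as pd
from dowhy import CausalModel

rng = np.random.default_rng(0)
n = 5_000
w = rng.normal(size=n)
t = (w + rng.normal(size=n) > 0).astype(int)
y = 2.0 * t + 1.5 * w + rng.normal(size=n)
df = pd.DataFrame({"W": w, "T": t, "Y": y})

# 1. Model the problem as a causal graph (here, W is a common cause).
model = CausalModel(data=df, treatment="T", outcome="Y", common_causes=["W"])

# 2. Identify the target estimand (backdoor adjustment on W).
estimand = model.identify_effect()

# 3. Estimate the effect with a chosen statistical method.
estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")

# 4. Refute: check robustness, e.g., by adding a random common cause.
refutation = model.refute_estimate(estimand, estimate,
                                   method_name="random_common_cause")

print(estimate.value)   # should be close to the true effect of 2.0
print(refutation)
```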

I’m also passionate about designing technology interventions that can have a positive societal impact (e.g., the MindNotes app). If you are interested in working with me at MSR India, drop me an email. We hire interns throughout the year, and postdoctoral positions are also available. Additionally, if you are an undergraduate or a master’s student, our lab runs an excellent pre-doctoral Research Fellows program.

  • [2015] Ph.D. in Computer Science, Cornell University
    [2010] B.Tech. in Computer Science, IIT Kharagpur

  • Causal inference | Causality and machine learning
    AI reasoning | Accelerating scientific discovery

news

Feb 22, 2025 Travelling to Barbados for the 4th Bellairs workshop on causality.
Dec 08, 2024 Invited talk at the NeurIPS 2024 workshop on Causality and Large Models [Slides].
Jul 24, 2024 Awarded the NASSCOM AI Gamechangers Award 2023-24 [Details][Paper].
Jul 24, 2024 Nominated as Action Editor for Transactions on Machine Learning Research (TMLR).
Sep 14, 2023 Keynote speaker at the AI, Causality and Medicine Symposium (AICPM 2023) organized by Leibniz AI Lab, Germany [Talk].
Jun 08, 2023 My work on causal reasoning and large language models featured on the Microsoft Research AI Frontiers podcast [Apple Podcasts][Spotify].
May 05, 2023 Notable Top-25% paper at ICLR 2023, International Conference on Learning Representations, for providing “a new understanding of the out-of-distribution generalization problem” [Paper][CACM Algorithm Code].
Dec 06, 2022 Invited talk at the Causal Methods in Environmental Science workshop at University of Cambridge, UK [Slides].

selected publications

  1. TMLR 2024
    Causal reasoning and large language models: Opening a new frontier for causality
    Emre Kıcıman, Robert Ness, Amit Sharma, and Chenhao Tan
    Transactions on Machine Learning Research, Aug 2024
  2. FAccT 2020
    Explaining machine learning classifiers through diverse counterfactual explanations
    Ramaravind K Mothilal, Amit Sharma, and Chenhao Tan
Proceedings of the 2020 ACM Conference on Fairness, Accountability, and Transparency (FAccT), Jan 2020
  3. Science
    Prediction and explanation in social systems
    Jake M Hofman, Amit Sharma, and Duncan J Watts
Science, Feb 2017