Deep reasoning
I study how causality and other formal reasoning systems can improve AI reasoning and help solve complex problems that require exploration over thousands of steps. See Plan*RAG for an example of how solving a complex task can be abstracted as planning over a directed acyclic graph.
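To make the DAG abstraction concrete, here is a minimal sketch (not the Plan*RAG implementation): a complex question is decomposed into sub-questions arranged as a directed acyclic graph, and each node is answered only after its parents, so intermediate answers can feed later steps. The decomposition and the `answer_subquestion` helper are assumptions standing in for LLM planning and retrieval-augmented generation.

```python
# Minimal sketch of planning over a DAG of sub-questions (illustrative only).
from graphlib import TopologicalSorter


def answer_subquestion(subquestion: str, context: dict[str, str]) -> str:
    """Placeholder for an LLM + retrieval call; here it just echoes its inputs."""
    return f"answer({subquestion} | given {sorted(context)})"


def solve_over_dag(subquestions: dict[str, str], parents: dict[str, list[str]]) -> dict[str, str]:
    """Answer sub-questions in topological order so parents are resolved first."""
    order = TopologicalSorter({n: set(parents.get(n, [])) for n in subquestions}).static_order()
    answers: dict[str, str] = {}
    for node in order:
        # Pass the answers of parent nodes as context to the current node.
        context = {p: answers[p] for p in parents.get(node, [])}
        answers[node] = answer_subquestion(subquestions[node], context)
    return answers


if __name__ == "__main__":
    subqs = {
        "q1": "Who founded company X?",
        "q2": "Where was the founder of company X born?",
        "q3": "What is the population of that city?",
    }
    deps = {"q2": ["q1"], "q3": ["q2"]}
    for node, ans in solve_over_dag(subqs, deps).items():
        print(node, "->", ans)
```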
Improving AI reasoning
- A planning algorithm that uses LLMs to complete complex tasks (Verma et al., 2024)
- Axiomatic training framework for teaching compositional reasoning to AI models (Vashishtha et al., 2025); a data-generation sketch follows this list
- LiveDRBench: A benchmark for Deep Research (Java et al., 2025)
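The axiomatic training idea can be illustrated with a small data-generation sketch. The format below is assumed for illustration and is not the paper's exact setup: synthetic demonstrations of the causal transitivity axiom ("A causes B, B causes C, therefore A causes C") are written as text that a transformer can be fine-tuned on, with varying chain lengths to probe generalization.

```python
# Minimal sketch of generating axiomatic (transitivity) training examples.
# The input/target format is an assumption for illustration.
import random
import string


def transitivity_example(chain_length: int, rng: random.Random) -> dict[str, str]:
    nodes = rng.sample(string.ascii_uppercase, chain_length)
    premise = ". ".join(f"{a} causes {b}" for a, b in zip(nodes, nodes[1:])) + "."
    # Ask about the chain's endpoints: forward direction holds, reversed does not.
    if rng.random() < 0.5:
        question, answer = f"Does {nodes[0]} cause {nodes[-1]}?", "Yes"
    else:
        question, answer = f"Does {nodes[-1]} cause {nodes[0]}?", "No"
    return {"input": f"{premise} {question}", "target": answer}


rng = random.Random(0)
train = [transitivity_example(rng.randint(3, 6), rng) for _ in range(5)]
for ex in train:
    print(ex["input"], "->", ex["target"])
```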
Text-based optimization for generative AI
- Optimizing prompts for large language models (Juneja et al., 2025; Srivastava et al., 2024); see the sketch below
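Below is a minimal sketch of a text-based prompt-optimization loop: propose edited candidates of the current prompt, score each on a small labelled dev set, and keep the best. This is a generic hill-climbing loop, not the specific algorithm of either cited paper; `llm` and `propose_edits` are hypothetical stand-ins for real model calls.

```python
# Minimal sketch of text-based prompt optimization (illustrative stubs only).
def llm(prompt: str, example_input: str) -> str:
    """Placeholder for a model call; returns a dummy prediction."""
    return "positive" if "good" in example_input else "negative"


def propose_edits(prompt: str) -> list[str]:
    """Placeholder for LLM-generated rewrites of the current prompt."""
    return [prompt + " Think step by step.", prompt + " Answer with one word."]


def score(prompt: str, dev_set: list[tuple[str, str]]) -> float:
    """Accuracy of the prompt on a small labelled dev set."""
    return sum(llm(prompt, x) == y for x, y in dev_set) / len(dev_set)


def optimize(prompt: str, dev_set: list[tuple[str, str]], rounds: int = 3) -> str:
    """Greedy loop: keep the highest-scoring candidate found so far."""
    best, best_score = prompt, score(prompt, dev_set)
    for _ in range(rounds):
        for candidate in propose_edits(best):
            s = score(candidate, dev_set)
            if s > best_score:
                best, best_score = candidate, s
    return best


dev = [("good movie", "positive"), ("bad plot", "negative")]
print(optimize("Classify the sentiment of the review.", dev))
```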
References
- Plan*RAG: Planning-guided Retrieval Augmented Generation. arXiv preprint arXiv:2410.20753, 2024.
- Teaching Transformers Causal Reasoning through Axiomatic Training. In Forty-second International Conference on Machine Learning (ICML), 2025.
- Characterizing Deep Research: A Benchmark and Formal Definition. arXiv preprint arXiv:2508.04183, 2025.
- Task Facet Learning: A Structured Approach to Prompt Optimization. In Findings of the Association for Computational Linguistics (ACL Findings), 2025.
- NICE: To Optimize In-Context Examples or Not? In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024.