The absolute trainer to light up AI agents.
Join our Discord community to connect with other users and contributors.
- Turn your agent into an optimizable beast with ZERO CODE CHANGE (almost)! 💤
- Build with ANY agent framework (LangChain, OpenAI Agent SDK, AutoGen, CrewAI, Microsoft Agent Framework...); or even WITHOUT an agent framework (e.g., the plain OpenAI Python SDK). You name it! 🤖
- Selectively optimize one or more agents in a multi-agent system. 🎯
- Embraces algorithms like Reinforcement Learning, Automatic Prompt Optimization, Supervised Fine-Tuning, and more. 🤗
Read more on our documentation website.
```bash
pip install agentlightning
```

Please refer to our installation guide for more details.
To start using Agent-lightning, check out our documentation and examples.
- 8/11/2025 Training AI Agents to Write and Self-correct SQL with Reinforcement Learning Medium.
- 8/5/2025 Agent Lightning: Train ANY AI Agents with Reinforcement Learning arXiv paper.
- 7/26/2025 We discovered an approach to train any AI agent with RL, with (almost) zero code changes. Reddit.
- 6/6/2025 Agent Lightning - Microsoft Research Project page.
- DeepWerewolf — A case study of agent RL training for the Chinese Werewolf game built with AgentScope and Agent Lightning.
- AgentFlow — A modular multi-agent framework that combines planner, executor, verifier, and generator agents with the Flow-GRPO algorithm to tackle long-horizon, sparse-reward tasks.
Agent Lightning keeps the moving parts to a minimum so you can focus on your idea, not the plumbing. Your agent keeps running as usual, in whatever framework you already use; you either drop in the lightweight agl.emit_xxx() helpers or let the tracer collect every prompt, tool call, and reward. Those events become structured spans that flow into the LightningStore, a central hub that keeps tasks, resources, and traces in sync.
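For a concrete picture, here is a minimal sketch of an instrumented agent: a plain OpenAI-SDK call followed by a reward report. The `emit_reward` helper name follows the `agl.emit_xxx()` pattern mentioned above; treat the exact helper names, the model name, and the grading logic as illustrative assumptions rather than the definitive API.

```python
# Minimal sketch of an instrumented agent (helper names assumed; see the docs).
import agentlightning as agl
from openai import OpenAI

client = OpenAI()

def answer_question(question: str) -> str:
    # The agent runs as usual; the tracer (or explicit emit calls) turns
    # prompts, completions, and rewards into spans for the LightningStore.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    answer = response.choices[0].message.content or ""

    # Report how well this rollout went, e.g. from a grader or unit test.
    # `emit_reward` is assumed here as one of the agl.emit_xxx() helpers.
    agl.emit_reward(1.0 if "42" in answer else 0.0)
    return answer
```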
On the other side of the store sits the algorithm, which you pick off the shelf or write yourself. The algorithm reads spans, learns from them, and posts updated resources such as refined prompt templates or new policy weights. The Trainer ties it all together: it streams datasets to runners, ferries resources between the store and the algorithm, and updates the inference engine when improvements land. You can stop there, or let the same loop keep turning.
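The outer loop might then look like the sketch below: a Trainer wired to an algorithm rolls the agent out over a dataset while the LightningStore shuttles spans and resources between them. The constructor arguments, `my_prompt_optimizer`, `my_tasks`, and the `fit()` signature are illustrative assumptions; consult the documentation for the exact API.

```python
# Sketch of the training loop (argument names are assumptions, not exact API).
import agentlightning as agl

trainer = agl.Trainer(
    algorithm=my_prompt_optimizer,  # hypothetical algorithm instance
    n_runners=4,                    # assumed: number of parallel rollout workers
)

# Each cycle: runners execute the agent on tasks, spans land in the
# LightningStore, the algorithm reads them and posts updated resources
# (prompt templates, policy weights), and the next round uses the
# improved resources served by the inference engine.
trainer.fit(answer_question, train_dataset=my_tasks)
```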
No rewrites, no lock-in, just a clear path from first rollout to steady improvement.
| Workflow | Status |
|---|---|
| CPU Tests | |
| GPU Tests | |
| Examples Integration | |
| Latest Dependency Compatibility | |
| Legacy Examples Compatibility | |
If you find Agent Lightning useful in your research or projects, please cite our paper:
```bibtex
@misc{luo2025agentlightningtrainai,
      title={Agent Lightning: Train ANY AI Agents with Reinforcement Learning},
      author={Xufang Luo and Yuge Zhang and Zhiyuan He and Zilong Wang and Siyun Zhao and Dongsheng Li and Luna K. Qiu and Yuqing Yang},
      year={2025},
      eprint={2508.03680},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2508.03680},
}
```

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
This project has been evaluated and certified to comply with the Microsoft Responsible AI Standard. The team will continue to monitor and maintain the repository, addressing any severe issues, including potential harms, if they arise.
This project is licensed under the MIT License. See the LICENSE file for details.