An AI safety research lab based in Tokyo.
We are a research lab focused on AI safety, alignment, and governance. Our work spans evolutionary approaches, multi-agent coordination, and building interpretable AI systems.
News
Article
Why We Study Multi-Agent Coordination
A look into why multi-agent coordination is central to AI alignment research.
Preprint
Evolving Interpretable Constitutions for Multi-Agent Coordination
We introduce a method for evolving constitutions in multi-agent systems to improve interpretability and alignment.