My name is Chandler Smith, and I am a Research Engineer at the Cooperative AI Foundation, where I conduct technical research on multi-agent systems. I am delighted to have been selected as a 2025 Cooperative AI PhD Fellow. I am also a Foresight AI Safety Grant recipient, working on multi-agent security, steganography, and AI control, including our recent piece, "Secret Collusion: Will We Know When to Unplug AI?". Additionally, I consult with IQT's applied research and technology architect teams on AI, multi-agent systems, and AI infrastructure. Previously, I was a MATS scholar collaborating with Jesse Clifton on multi-agent systems research, and before that I worked as an engineer at Dimagi on global health and COVID-19 response projects.
As AI systems become increasingly interconnected and autonomous, they introduce unprecedented risks that extend beyond the safety challenges of individual systems. My research focuses on understanding, measuring, and mitigating these risks through benchmarking, oversight, and security work in both language model (LM) and multi-agent reinforcement learning (MARL) environments.
I recently presented "Better Benchmarks: A Roadmap for High-Stakes Evaluation in the Age of Agentic AI" at the IASEAI '25 Safe & Ethical AI Conference. You can watch the presentation on the OECD channel (at 02:53:00).
You can find me on Twitter, LinkedIn, GitHub, and Google Scholar, or reach me by email.
