LIST and ADIA Lab Launch Joint Effort to Stress-Test Multi-Agent AI Systems

Published on 15/05/2025

In a new international collaboration, the Luxembourg Institute of Science and Technology (LIST) and ADIA Lab, the independent Abu Dhabi-based data and computational sciences research institute, are joining forces to take on one of artificial intelligence’s toughest blind spots: how to test, trust, and govern multi-agent AI systems.

From content creation to customer support, agent-based systems are rapidly becoming the norm. But today’s testing methods still focus on individual agents, leaving a critical gap in understanding how these systems behave once multiple agents interact. The ADIA Lab–LIST collaboration aims to close that gap by creating a sandbox that supports safe experimentation, stress-testing, and governance of complex AI workflows.

Commenting on the announcement, Dr. Horst Simon, Director of ADIA Lab, said: “The advent of agentic AI brings enormous promise, but also complex risks that cannot be addressed by current single-agent testing models. This collaboration with LIST allows us to take a critical step toward building the necessary infrastructure — both conceptual and technical — for evaluating multi-agent behavior holistically. Our joint sandbox will serve as a safe, controlled environment for companies and researchers to test these systems before real-world deployment.”

LIST already operates a cutting-edge AI Sandbox — a unique environment designed to stress-test AI models for bias, robustness, multilingual performance, and more.

“We’ve built an operational AI Sandbox that moves beyond lab benchmarks to test how AI behaves in realistic, multilingual, and often unpredictable contexts,” says Francesco Ferrero, Head of LIST’s Flagship Initiative on Artificial Intelligence. “We’ve used it to assess AI models for fairness, transparency, robustness, and performance in languages like Luxembourgish — which are too often overlooked. What’s been missing is a structured, secure environment to test multi-agent AI systems — where agents interact, evolve, and sometimes conflict. That’s what we’re building with ADIA Lab. Together, we’re extending the sandbox to support systemic testing — enabling researchers and companies to experiment safely before real-world deployment.”

The joint sandbox will support research into emergent behaviors, coordination strategies, risk mitigation, and prompt-based control techniques. It will also serve as a neutral, safe environment where companies and institutions can test new models before deployment. Results will be shared openly through publications and joint activities, contributing to global efforts around safe and transparent AI development.
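To make the idea of systemic stress-testing concrete, the sketch below shows one minimal form such a harness could take: agents pass messages in controlled rounds while the harness logs every interaction and checks an invariant. The agent and sandbox classes here are purely illustrative assumptions for this article, not part of the LIST–ADIA Lab platform.

```python
class EchoAgent:
    """Toy agent: appends its name to any message it receives.
    (Illustrative only -- a real agent would wrap a model or policy.)"""
    def __init__(self, name):
        self.name = name

    def act(self, message):
        return f"{message}->{self.name}"


class Sandbox:
    """Minimal test harness: runs agents in rounds, records every
    interaction, and checks a simple safety invariant (messages must
    not grow without bound -- a toy stand-in for emergent-behavior checks)."""
    def __init__(self, agents, max_len=200):
        self.agents = agents
        self.max_len = max_len
        self.log = []

    def run(self, seed_message, rounds):
        message = seed_message
        for r in range(rounds):
            for agent in self.agents:
                message = agent.act(message)
                self.log.append((r, agent.name, message))
                # Invariant check: flag runaway interaction early.
                if len(message) > self.max_len:
                    return {"status": "violation", "round": r, "log": self.log}
        return {"status": "ok", "rounds": rounds, "log": self.log}


agents = [EchoAgent("a"), EchoAgent("b"), EchoAgent("c")]
result = Sandbox(agents, max_len=50).run("seed", rounds=3)
print(result["status"], len(result["log"]))  # → ok 9
```

A production sandbox would replace the toy invariant with richer checks (fairness, robustness, multilingual performance) and the echo agents with real model-backed agents, but the structure — controlled rounds, full interaction logging, early termination on violated invariants — is the core of safe pre-deployment experimentation.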

Jurgen Joossens, Deputy CEO of LIST, concludes: “This collaboration underscores Luxembourg’s ambition to play a constructive role in the global dialogue on trustworthy AI. By joining forces with ADIA Lab, we build bridges across borders, foster scientific exchange, and contribute to shaping international standards for the governance of emerging technologies.”

This initiative reinforces Luxembourg’s growing position as a trusted AI testbed and a leader in applied multilingual AI research — as seen in LIST’s recent work on AI testing in Luxembourgish. The collaboration with LIST also adds to ADIA Lab’s expanding portfolio of global scientific collaborations, which includes institutions such as ETH Zurich, the University of Toronto, the University of Granada, Minsait, and Rigetti Computing.

Contact

Jurgen JOOSSENS
Francesco FERRERO
Jordi CABOT SAGRERA