Kale-ab Tessera
PhD Candidate, University of Edinburgh.
Seeking research internships (Summer/Fall 2026) in multi-agent systems, open-endedness, cooperative AI, and world modelling. kaleabtessera@gmail.com · résumé
I study how multi-agent systems can cooperate robustly in dynamic, open-ended environments, spanning multi-agent reinforcement learning and foundation models. My work addresses a core question: as agents are deployed in increasingly open-ended settings, when and why do they fail to cooperate, and how do we ensure they don’t?
I am a third-year PhD candidate in the Bayesian and Neural Systems group at the University of Edinburgh, advised by Amos Storkey, Tim Rocktäschel (UCL), and Aris Filos-Ratsikas. I’m also affiliated with MARBLE (Multi-Agent, Reinforcement, Behaviour and Learning), where I co-organise the 🤖 RL & Agents Reading Group.
Before my PhD, I spent 2.5 years as a Research Engineer on the MARL team at InstaDeep and 2 years as an ML Engineer (4.5 years total in industry ML), as well as 3 years in software engineering.
Research Interests:
- Multi-agent cooperation in foundation models: evaluating and benchmarking coordination capabilities of LLM-based agents in open-ended settings.
- Adaptive cooperation in MARL: learning adaptive policies that dynamically adjust to changing coordination demands across diverse tasks (HyperMARL, NeurIPS 2025).
- Probing coordination requirements: information-theoretic tools for evaluating whether benchmarks require genuine Dec-POMDP reasoning (Probing Dec-POMDP Reasoning, Oral, AAMAS 2026).
For more information, you can view my résumé.
news
| May, 2026 | 🌟 Presenting Probing Dec-POMDP Reasoning in Cooperative MARL at AAMAS 2026 in Paphos, Cyprus. |
|---|---|
| Dec, 2025 | 🌟 Presented HyperMARL: Adaptive Hypernetworks for Multi-Agent RL at NeurIPS 2025 in San Diego, US. |
| Sep, 2025 | 🗣️ Talk on “Algorithms and Benchmarks for Robust Multi-Agent Coordination” at the RAIL Lab, University of the Witwatersrand. |
| Aug, 2025 | 🏅 Remembering the Markov Property in Cooperative MARL won Best Poster (1st place out of 278 submissions) at the Deep Learning Indaba in Kigali, Rwanda. |
| Aug, 2025 | 📅 Co-Programme Chair and Head of Practicals and Tutorials for the Deep Learning Indaba in Kigali, Rwanda. |
| Aug, 2025 | Our reading group is back – 🤖 RL & Agents Reading Group. |
| Aug, 2025 | 🌟 Presented Remembering the Markov Property in Cooperative MARL and HyperMARL: Adaptive Hypernetworks for Multi-Agent RL at RLC workshops in Edmonton, Canada. |
| Mar, 2025 | 🌟 Attended UK Multi-Agent Systems Symposium 2025 at King’s College London. |
| Aug, 2024 | 🗣️ Taught “Introduction to ML” at DLI. |
| Jul, 2024 | 🏅 Awarded a scholarship to attend the CIFAR Deep Learning and Reinforcement Learning (DLRL) Summer School in Toronto, Canada. |
| Jan, 2024 | 🗣️ Began co-hosting the UOE RL reading group, YouTube. |
| Sep, 2023 | 🎓 Started my PhD at the University of Edinburgh (UOE), through the Informatics Global PhD Scholarship. |
| Aug, 2023 | 🛠️ PC member and Practicals Chair of DLI – notebooks 2023, RL Prac. |
| May, 2023 | 🗣️ Talk on “Introduction to Deep Reinforcement Learning” at the University of Pretoria and Indaba X Ghana. |
| Apr, 2023 | 🌟 Attended ICLR in Kigali, Rwanda. |
| Aug, 2022 | 🛠️ Co-Organiser of the ML Efficiency Workshop at the DLI. |
| Aug, 2022 | 🛠️ Programme committee member and Practicals Chair of Deep Learning Indaba (DLI) – notebooks 2022, ML Prac, RL Prac. |
| Jun, 2022 | 🗣️ Taught an “Introduction to Machine Learning” course at Africa to Silicon Valley. |
| Mar, 2021 | 🤖 Joined the Multi-Agent RL research team at InstaDeep. |
| Dec, 2019 | 🌟 Attended NeurIPS in Vancouver, Canada. |
| Aug, 2019 | 🏆 Won Best Poster (1 out of 194) at the Deep Learning Indaba, sponsored by Microsoft. |