Cooperation between artificial agents and human subjects


The future of AI technology is an ecosystem of many artificial agents acting with autonomy on behalf of human subjects: fleets of self-driving cars, for example, or hordes of automated trading bots in a market. In these scenarios, interactions between artificial agents and human decision makers are abundant [1]. AI research has so far focused on applications in which agents help humans (aligned interests) or completely oppose them (zero-sum games). However, there are many interactions in which agents compete with humans but also have incentives to cooperate or coordinate actions in pursuit of better outcomes for everybody [2]. This social behaviour is best modelled using social preferences [3]. The purpose of this project is to design algorithms that can learn to cooperate and coordinate their actions with human subjects. To do so, we rely on the theory of social preferences and use standard AI techniques such as reinforcement learning and neuroevolution, in the context of multi-agent systems. The project will test the proposed algorithms in computer simulations and, in turn, apply them to specific real-life applications.
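
As a concrete illustration of how social preferences can shape learning, the sketch below reshapes the material payoffs of a repeated prisoner's dilemma with the Fehr-Schmidt inequity-aversion utility [3] and trains a simple tabular Q-learner against a tit-for-tat opponent. This is a minimal illustration only, not the project's methodology; the payoff matrix, the parameter values (alpha, beta, learning rate, discount) and the tit-for-tat opponent are all illustrative assumptions.

```python
import random

# Standard prisoner's dilemma payoffs: (our payoff, opponent's payoff).
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def fehr_schmidt(own, other, alpha=0.8, beta=0.4):
    """Fehr-Schmidt utility [3]: material payoff minus dis-utility from
    disadvantageous (alpha) and advantageous (beta) inequity."""
    return own - alpha * max(other - own, 0) - beta * max(own - other, 0)

# Tabular Q-learning; the state is our own previous action, which (against
# a tit-for-tat opponent) is exactly what the opponent will play next.
Q = {(s, a): 0.0 for s in ("C", "D") for a in ("C", "D")}
LR, GAMMA, EPSILON = 0.1, 0.95, 0.1

def choose(state):
    if random.random() < EPSILON:                        # explore
        return random.choice(("C", "D"))
    return max(("C", "D"), key=lambda a: Q[(state, a)])  # exploit

state = "C"  # assume a cooperative opening move
for _ in range(50_000):
    action = choose(state)
    opponent = state                  # tit-for-tat: copies our last action
    own_pay, opp_pay = PAYOFFS[(action, opponent)]
    reward = fehr_schmidt(own_pay, opp_pay)  # social, not material, reward
    next_state = action
    target = reward + GAMMA * max(Q[(next_state, a)] for a in ("C", "D"))
    Q[(state, action)] += LR * (target - Q[(state, action)])
    state = next_state

print({k: round(v, 2) for k, v in Q.items()})
```

Under these assumed parameters, defecting against a cooperator yields no inequity-adjusted advantage over mutual cooperation, and tit-for-tat punishes defection in subsequent rounds, so the learner tends to settle on sustained cooperation.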

Our research group studies how groups of agents can learn to cooperate. Most of our research focuses on social dilemmas, i.e., situations where poor group outcomes arise from optimal individual choices. We use this framework to study multi-agent systems and AI, social systems, and models in biology and evolution. Please see our publications for more details: http://garciajulian.com

[1] Anastassacos, Nicolas, Julian García, Stephen Hailes, and Mirco Musolesi. “Cooperation and Reputation Dynamics with Reinforcement Learning.” In Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems, 115–23. Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems, 2021.

[2] Dafoe, Allan, Yoram Bachrach, Gillian Hadfield, Eric Horvitz, Kate Larson, and Thore Graepel. “Cooperative AI: Machines Must Learn to Find Common Ground.” Nature 593, no. 7857 (May 6, 2021): 33–36. https://doi.org/10.1038/d41586-021-01170-0.

[3] Fehr, Ernst, and Klaus M. Schmidt. “A Theory of Fairness, Competition, and Cooperation.” The Quarterly Journal of Economics 114, no. 3 (1999): 817–68.

Required knowledge

  • An excellent academic track record in computer science or a cognate field; an Honours degree with HD/H1 or equivalent is essential;

  • A strong interest in the topic of the research;

  • Excellent written and verbal communication skills;

  • An interest in game theory, solid skills in mathematical modelling, and good programming skills.

Project funding

Other

Learn more about minimum entry requirements.