Abstract
Markov games are a framework which can be used to formalise n-agent reinforcement learning (RL). Littman (Markov games as a framework for multi-agent reinforcement learning, in: Proceedings of the 11th International Conference on Machine Learning (ICML-94), 1994) uses this framework to model two-agent zero-sum problems and, within this context, proposes the minimax-Q algorithm. This paper reviews RL algorithms for two-player zero-sum Markov games and introduces a new, simple, fast algorithm, called QL2. QL2 is compared to several standard algorithms (Q-learning, Minimax and minimax-Q) implemented with the Nash library written in Python. The experiments show that QL2 converges empirically to optimal mixed policies, as minimax-Q does, but uses a surprisingly simple and cheap updating rule. (C) 2009 Elsevier B.V. All rights reserved.
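The minimax-Q algorithm mentioned in the abstract replaces the max operator of ordinary Q-learning with the maximin value of a matrix game, computed by linear programming. The sketch below illustrates that idea; it is a minimal illustration based on Littman's (1994) published description, not code from the paper, and the function names (`matrix_game_value`, `minimax_q_update`) are hypothetical.

```python
# Sketch of the minimax-Q backup (Littman, 1994): the value of a state
# is the maximin of the zero-sum matrix game given by Q(s, ., .),
# obtained by linear programming.
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(Q):
    """Return (value, mixed policy) of the zero-sum matrix game Q,
    where Q[a, o] is the payoff to the maximiser for actions (a, o)."""
    n_a, n_o = Q.shape
    # Decision variables: pi(1)..pi(n_a) (mixed policy) and v (game value).
    # linprog minimises, so we minimise -v to maximise v.
    c = np.zeros(n_a + 1)
    c[-1] = -1.0
    # For every opponent action o: v - sum_a pi(a) * Q[a, o] <= 0.
    A_ub = np.hstack([-Q.T, np.ones((n_o, 1))])
    b_ub = np.zeros(n_o)
    # The policy must sum to one (v has coefficient zero here).
    A_eq = np.ones((1, n_a + 1))
    A_eq[0, -1] = 0.0
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0)] * n_a + [(None, None)]  # v is unbounded
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:-1]

def minimax_q_update(Q, s, a, o, r, s_next, alpha=0.1, gamma=0.9):
    """One minimax-Q backup for a transition (s, a, o, r, s'):
    Q[s, a, o] <- (1 - alpha) * Q[s, a, o] + alpha * (r + gamma * V(s'))."""
    v_next, _ = matrix_game_value(Q[s_next])
    Q[s, a, o] = (1 - alpha) * Q[s, a, o] + alpha * (r + gamma * v_next)
```

For matching pennies, `matrix_game_value(np.array([[1., -1.], [-1., 1.]]))` returns the game value 0 with the uniform mixed policy (0.5, 0.5), the optimal mixed policy that deterministic Q-learning cannot represent.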
Original language | English |
---|---|
Title of host publication | Proceedings of the 16th European Symposium on Artificial Neural Networks |
Pages | 137-142 |
Number of pages | 6 |
Publication status | Published - 2009 |
Externally published | Yes |
Event | 16th European Symposium on Artificial Neural Networks - Advances in Computational Intelligence and Learning, ESANN 2008 - Bruges, Belgium Duration: 23 Apr 2008 → 25 Apr 2008 |
Conference
Conference | 16th European Symposium on Artificial Neural Networks - Advances in Computational Intelligence and Learning, ESANN 2008 |
---|---|
Country/Territory | Belgium |
City | Bruges |
Period | 23/04/08 → 25/04/08 |
Keywords
- Reinforcement Learning
- Q-learning
- Markov Games
- Two-player Zero-sum Games
- Multi-agent