Neural MMO 2.0: A Massively Multi-task Addition to Massively Multi-agent Learning

Abstract

Neural MMO 2.0 is a massively multi-agent and multi-task environment for reinforcement learning research. This version features a novel task system that broadens the range of training settings and poses a new challenge in generalization: evaluation on and against tasks, maps, and opponents never seen during training. Maps are procedurally generated, with 128 agents in the standard setting and 1 to 1024 supported overall. Version 2.0 is a complete rewrite of its predecessor with a three-fold performance improvement, effectively addressing simulation bottlenecks in online training. Enhancements to compatibility enable training with standard reinforcement learning frameworks designed for much simpler environments. Neural MMO 2.0 is free and open-source, with comprehensive documentation available at neuralmmo.github.io and an active community Discord. To spark initial research on this new platform, we are concurrently running a competition at NeurIPS 2023.
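The abstract's compatibility claim refers to the environment exposing a standard parallel multi-agent interface, in which observations, rewards, and actions are dictionaries keyed by agent ID. The sketch below illustrates that interaction-loop shape with a self-contained stub; `StubEnv` and its internals are hypothetical stand-ins for illustration, not the actual Neural MMO API.

```python
import random


class StubEnv:
    """Toy stand-in for a parallel multi-agent environment
    (PettingZoo-style API: dicts keyed by agent ID)."""

    def __init__(self, num_agents=8):
        self.agents = list(range(num_agents))

    def reset(self):
        # One observation per agent, keyed by agent ID.
        return {a: 0.0 for a in self.agents}

    def step(self, actions):
        # A real environment would advance the simulation here.
        obs = {a: random.random() for a in self.agents}
        rewards = {a: 0.0 for a in self.agents}
        dones = {a: False for a in self.agents}
        infos = {a: {} for a in self.agents}
        return obs, rewards, dones, infos


env = StubEnv(num_agents=8)
obs = env.reset()
for _ in range(10):
    # A real policy would map each agent's observation to an action.
    actions = {a: 0 for a in env.agents}
    obs, rewards, dones, infos = env.step(actions)
```

Because every framework primitive (observation, reward, done flag) is a per-agent dictionary, single-agent training code generalizes to the many-agent setting with minimal change.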

Publication
In The Thirty-Seventh Annual Conference on Neural Information Processing Systems
Ryan Sullivan