StarCraft AI Competition
Organizers: Cheong-mok Bae, Seonghun Yoon, and Seok Min Hong (Dept. of Computer Engineering, Sejong University, Seoul, South Korea), and Kyung Joong Kim (Sejong University)
The IEEE CIG StarCraft competitions have driven steady progress in the development of StarCraft bots. Participants have applied a wide range of techniques, including hidden Markov models (HMMs), Bayesian models, case-based reasoning (CBR), potential fields, and reinforcement learning, and this work has enriched game AI research. Nevertheless, building AI for the game remains challenging: a bot must control large numbers of units and buildings while handling resource management and high-level tactics. The purpose of this competition is to advance RTS game AI and to tackle its open challenges, such as uncertainty, real-time decision making, and unit management.
Competition deadline: July 20th
Fighting Game AI Competition
Organizers: Ruck Thawonmas, Intelligent Computer Entertainment Lab., Ritsumeikan University
Participants develop an AI controller for FightingICE, a Java-based fighting game. Submitted AIs compete against one another in each of three round-robin tournaments: Zen, Garnet, and Lud. The Zen and Garnet character data are available in advance, while the character data of Lud will not be revealed. However, after submission, each AI in the Lud tournament will be given training time of 10 games (three rounds each) against our sample MCTS AI (to be released soon). In each tournament, the AIs are ranked by the number of rounds they win; ties are broken by their total scores. Once the AIs are ranked in each tournament, tournament points are awarded according to their positions using the 2015 Formula-1 scoring system. The winner is decided by the sum of tournament points across all three tournaments.
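As a rough illustration of the scoring described above, the sketch below ranks AIs by winning rounds (ties broken by total score) and awards 2015 Formula-1 points (25, 18, 15, 12, 10, 8, 6, 4, 2, 1 for positions 1-10). The data structures and function names are hypothetical, not part of the FightingICE API.

```python
# 2015 Formula-1 points for finishing positions 1..10.
F1_POINTS = [25, 18, 15, 12, 10, 8, 6, 4, 2, 1]

def tournament_points(results):
    """results: {ai_name: (winning_rounds, total_score)} for one tournament.
    Returns {ai_name: points} using the 2015 F1 scoring system."""
    # Rank by winning rounds; tuple comparison breaks ties by total score.
    ranking = sorted(results, key=lambda ai: results[ai], reverse=True)
    return {ai: (F1_POINTS[pos] if pos < len(F1_POINTS) else 0)
            for pos, ai in enumerate(ranking)}

def overall_winner(tournaments):
    """tournaments: list of per-tournament result dicts (Zen, Garnet, Lud).
    The winner has the highest sum of tournament points."""
    totals = {}
    for results in tournaments:
        for ai, pts in tournament_points(results).items():
            totals[ai] = totals.get(ai, 0) + pts
    return max(totals, key=totals.get)
```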
- Midterm deadline: July 2, 2016 (23:59 JST)
- Final deadline: September 10, 2016 (23:59 JST)
Geometry Friends Game AI Competition
Organizers: Rui Prada, Francisco Melo
The goal of the competition is to build AI agents for Geometry Friends, a two-player collaborative physics-based puzzle platformer. Each agent controls a different character (circle or rectangle) with distinct capabilities. The agents must collaborate to collect all the diamonds in a set of levels as quickly as possible. The game presents combined task and motion planning problems and promotes collaboration at several levels. Participants can tackle cooperative levels, with the full complexity of the problem, or single-player levels, which pose task and motion planning without the complexity of collaboration.
Competition deadline: September 12th
The General Video Game AI Competition (Single- and Two-Player Planning Tracks)
Organizers: Raluca Gaina, Diego Perez-Liebana, Simon M. Lucas, Spyridon Samothrakis, Julian Togelius, Tom Schaul
The GVG-AI Competition explores the problem of creating controllers for general video game playing. How would you create a single agent that is able to play any game it is given? Could you program an agent that can play a wide variety of games without knowing in advance which games will be played? CIG 2016 will host two tracks of this competition: single-player and two-player planning. The former provides 60 games for training, 10 for validation, and 10 for testing. The latter uses 20 two-player games for training, 10 for validation, and 10 for testing to compute the final rankings.
Competition deadline: September 1st
Angry Birds AI Competition
Organizers: Jochen Renz, Julian Togelius, Lucas Ferreira, Matthew Stephenson, XiaoYu (Gary) Ge
The Angry Birds AI competition is now in its fifth year. Its main goal is to build AI agents that can play new game levels better than the best human players. At CIG'16 our focus will be for the first time on generating new game levels that are both interesting and fun for human players and hard for AI agents to solve. Participants of the level generation track will develop procedural game level generators for Angry Birds that take as input desired level characteristics and output a game level satisfying these characteristics. The winner will be determined through a combination of live play by humans and AI agents. Participants will be provided with a platform that allows level generation and testing as well as with a baseline game level generator.
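To illustrate the input/output contract of the level generation track, the sketch below turns desired level characteristics into a level description. The level format, object names, and function signature are entirely hypothetical; the actual platform and baseline generator define their own interfaces.

```python
import random

# Hypothetical sketch: desired characteristics in, a level description out.
def generate_level(num_pigs, num_blocks, width=800, seed=None):
    rng = random.Random(seed)
    # One bird per two pigs is an arbitrary illustrative choice.
    level = {"birds": ["red"] * max(1, num_pigs // 2), "objects": []}
    for kind, count in (("pig", num_pigs), ("wood_block", num_blocks)):
        for _ in range(count):
            # Place each object at a random x position on the ground plane;
            # a real generator would also enforce structural stability.
            level["objects"].append(
                {"type": kind, "x": rng.uniform(0, width), "y": 0})
    return level
```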
Ms. Pac-Man vs Ghosts
Organizers: Piers R. Williams, Diego Perez-Liebana, Simon M. Lucas
The aim of this competition is to promote research through the development of software controllers for both Ms Pac-Man and the Ghost Team. It revives the previous Ms Pac-Man versus Ghost Team competition, which ran successfully for many years. Three tracks covering partial observability (PO) have been added to the previous two: controlling Ms Pac-Man under PO, the original ghost-team controller under PO, and a track requiring four controllers (one per ghost) under PO. In the last of these, communication is managed by the competition. Ms Pac-Man is already a highly complex game to write AI for, and the addition of PO greatly increases the challenge.
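To give a flavour of what partial observability means here, the sketch below masks a grid-based game state down to a fixed visibility radius around the agent. The grid representation and function are hypothetical illustrations, unrelated to the actual competition API.

```python
# Hypothetical illustration of partial observability: the agent only sees
# cells within a Chebyshev distance `radius` of its own position; everything
# else is replaced by a `hidden` marker.
def observe(grid, agent_pos, radius=2, hidden="?"):
    ar, ac = agent_pos
    return [[cell if max(abs(r - ar), abs(c - ac)) <= radius else hidden
             for c, cell in enumerate(row)]
            for r, row in enumerate(grid)]
```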
Competition deadline: September 1st
Artificial Text Adventurer
Organizers: Tim Atkinson, Sam Devlin, and Jerry Swan (University of York); Tejas Kulkarni (Massachusetts Institute of Technology)
Before graphics displays became widely available, text adventure games such as Colossal Cave Adventure and Zork were popular in the role-playing gaming community. Their rich textual worlds make such games a useful testbed for AI research. Building a fully autonomous agent for an arbitrary text-adventure game is AI-complete; however, we plan to provide a graded series of test cases, allowing competitors to gradually increase the sophistication of their approach to handle increasingly complex games.
One of the most challenging problems in AI is still that of automatic model acquisition. In 1959, Newell and Simon's famous `General Problem Solver' (GPS) solved a variety of planning problems such as "The Monkey and Bananas" and "Tower of Hanoi". However, the domain had to be entirely operationalized beforehand by researchers, leaving GPS only the task of finding the appropriate actions. Of much greater importance, therefore, is the question of how an agent can automatically acquire the knowledge needed to solve even a simple domain. We believe that our testbed is a good step towards such an agent, and it may also shed light on the relative merits of model-based and model-free approaches.
The competition will be scored according to two independent criteria:
- C1: Score on an unseen game instance (objective, built-in to the instance).
- C2: Freedom from a priori bias (subjective decision by the judges).
C1 is the dominant criterion, with C2 deciding in the event of a tie. C2 is intended to motivate participants to favour agents that have no (or less) prior knowledge of the problem domain built in.
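The two-criteria ranking can be sketched in a few lines: C1 (the in-game score) dominates, and C2 (the judges' assessment of freedom from a priori bias) breaks ties. The function and data layout are hypothetical, purely for illustration.

```python
# Hypothetical sketch of the two-criteria ranking described above.
def rank_agents(entries):
    """entries: {agent: (c1_score, c2_score)} -> list of agents, best first.
    Tuple comparison makes C1 dominant, with C2 deciding ties."""
    return sorted(entries, key=lambda a: entries[a], reverse=True)
```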
Competitors will be provided with an API allowing agents implemented in Java or Python to be submitted and some initial training instances. Harder training instances will be released incrementally during the course of the competition. A public leaderboard will be maintained to encourage competitiveness and we are in early discussions with a major commercial partner that may sponsor prizes.
Competition deadline: August 31st
Visual Doom AI Competition
Organizers: Wojciech Jaśkowski, Michał Kempka, Marek Wydmuch and Jakub Toczek
Doom is considered one of the most influential titles in the game industry: it popularized the first-person shooter (FPS) genre and pioneered immersive 3D graphics. Even though more than 20 years have passed since Doom's release, the methods for developing AI bots have not improved significantly in newer FPS productions. In particular, bots still have to "cheat" by accessing the game's internal data, such as maps, object locations, and the positions of (player or non-player) characters. A human, in contrast, can play FPS games using the computer screen as the only source of information. Can AI effectively play Doom from raw visual input?
Participants of the Visual Doom AI competition will submit a controller (in C++, Python, or Java) that plays Doom. The provided software framework gives real-time access to the screen buffer as the only information on which the agent can base its decisions. The winner of the competition will be chosen in a deathmatch tournament.
Although participants may use any technique to develop a controller, the design and efficiency of the Visual Doom AI framework allow and encourage the use of machine learning methods such as deep reinforcement learning.
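The overall shape of such a controller can be sketched as follows: each frame, the agent receives the raw screen buffer as its only observation and must return an action in real time. The class, method, and action names are illustrative assumptions, not the actual framework API.

```python
import random

# Hypothetical controller skeleton for a pixels-in, action-out agent.
class RandomPixelAgent:
    ACTIONS = ["move_forward", "turn_left", "turn_right", "attack"]

    def __init__(self, seed=None):
        self.rng = random.Random(seed)

    def act(self, screen_buffer):
        """screen_buffer: HxWx3 nested lists (or array) of RGB pixel values."""
        # A learned policy would map pixels to an action here, e.g. a
        # convolutional network trained with deep reinforcement learning;
        # this placeholder simply acts at random.
        return self.rng.choice(self.ACTIONS)
```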
Competition deadline: August 15th