There are many efforts in academia that aim at creating universal agent architectures or at developing agent-oriented languages that would speed up the development of agents in various environments. Despite years of effort in this area, there has not been a real breakthrough in any agent language or architecture. The most widely used solutions are still finite state machines and behavior trees. Complex agent architectures/languages from academia were often used only by their creators in proof-of-concept examples, and a more thorough analysis of whether the approach really offers benefits to programmers is lacking. When such an analysis is performed, the results are usually mixed [Pibil, JASON paper; our JAVA vs POSH paper]. In this document we try to point out several reasons for this, based on our experience with the development of intelligent virtual agents.
- The environments available today are not complex enough to really benefit from complex agent languages such as JSoar, Jason, etc.
- Today's environments usually pose a few specific problems; if you solve those well, the agent's behavior is perceived as intelligent even if other problems are not solved that well. A nice example is path finding in FPS games.
- Which behaviors even require a complex decision-making architecture in 3D FPS games? Picture the limited view of the player (for instance in Oblivion, she cannot see the whole world, only a small part of it), who thus interacts with only a few agents at once. How can you display intelligent behavior when all you have are “move”, “say” and “attack” actions? All the player can possibly see is the “execution” of a single intention; she cannot perceive the complex decisions behind it.
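To make the point above concrete, a handful of states is often all the decision making an FPS agent needs. The sketch below is a hypothetical minimal example (the state names, transition rules and action vocabulary are our own illustration, not taken from any particular game):

```python
# Minimal sketch of the kind of finite state machine that drives many FPS
# agents; states and transition rules are hypothetical examples.

class FSMAgent:
    def __init__(self):
        self.state = "patrol"

    def step(self, enemy_visible, health):
        # The whole "decision making" fits in a few transition rules.
        if self.state == "patrol" and enemy_visible:
            self.state = "attack"
        elif self.state == "attack" and health < 20:
            self.state = "flee"
        elif self.state == "flee" and not enemy_visible:
            self.state = "patrol"
        # Each state maps to one primitive action the player can observe.
        return {"patrol": "move", "attack": "attack", "flee": "move"}[self.state]

agent = FSMAgent()
print(agent.step(enemy_visible=False, health=100))  # move (patrolling)
print(agent.step(enemy_visible=True, health=100))   # attack
print(agent.step(enemy_visible=True, health=10))    # move (fleeing)
```

From the outside, the player only ever observes “move” or “attack”; the three-state machine is indistinguishable from a much more complex architecture.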
Research in CS disciplines like machine learning, computer vision, planning or database systems relies heavily on a methodology where one:
- implements a new algorithm
- tests its performance on some STANDARDIZED dataset/task/domain
This enables one to compare the performance of different approaches, evaluate the usefulness of new features and possibly advance the research faster. One common task used in agent programming languages is the “Tower of Hanoi”, but it does not reflect the complexity and constraints of decision making in complex virtual environments.
In order to define such standardised environments/tasks we have to:
- identify the key components of all agent systems - e.g. environment, decision making (DM), middleware connecting the environment and DM
- identify the features of those components - e.g. does the middleware perform some ontology-driven inference on percepts, or is this functionality realised by the DM? …
- define standardised tasks/environments and provide them in an open-source package that will enable others to produce REPRODUCIBLE results - e.g.
- environment1 can be: a house with 6 rooms, 8 people and 50 objects; after 3 minutes a fire breaks out in one of the rooms
- task1: just get out, don't try to rescue the others
- task2: rescue all 8 people
With this environment and standardised tasks we can eventually measure the performance of different components.
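A standardised task like the fire-escape scenario above could be packaged behind a small, seedable interface so that results are reproducible across groups. The sketch below is only an assumption about what such a package might look like; every class name, parameter and metric is hypothetical, not an existing API:

```python
# Hypothetical sketch of a standardised-task package for the fire-escape
# scenario; all names and the scoring rule are our own assumptions.

import random

class StandardisedTask:
    def __init__(self, seed):
        self.rng = random.Random(seed)   # fixed seed -> reproducible runs
        # environment1: house with 6 rooms, 8 people, 50 objects;
        # after 3 minutes a fire breaks out in a randomly chosen room.
        self.fire_room = self.rng.randrange(6)

    def evaluate(self, rescued_people):
        # task2 metric: fraction of the 8 people the agent rescued.
        return rescued_people / 8

# Two runs with the same seed must produce the identical scenario:
a, b = StandardisedTask(seed=42), StandardisedTask(seed=42)
print(a.fire_room == b.fire_room)       # True
print(StandardisedTask(0).evaluate(4))  # 0.5
```

The important design point is the explicit seed: publishing the package plus the seeds used in a paper is what makes the reported numbers reproducible by others.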
Maybe check out http://www.robocuprescue.org/agentsim.html to see what they have.
- One danger is that the set of standardised tasks shapes future research. E.g. in the planning community the majority of published papers use domains from the ICAPS planning competitions, while less research focuses on problems that are not covered by the competition domains.
- Another example of an existing testing environment - http://www.multiagentcontest.org/
- Reproducible science - http://www-stat.stanford.edu/~donoho/Reports/1995/wavelab.pdf
Motto: one or two GOOD agent programming languages are enough; we don't need 20.
Different components of multiagent systems are not easily interchangeable; connecting a new environment to an existing middleware (e.g. UDK to Pogamut) takes several weeks of work for an experienced programmer. The same applies to connecting agent programming languages. It is much easier to build on the work of others in, e.g., mathematics, where you can take a newly proved lemma and start your own theory on it.
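One way to attack this interchangeability problem is to agree on narrow interfaces between the three components identified earlier (environment, middleware, decision making). The sketch below is purely illustrative; the class and method names are our invention and do not correspond to Pogamut's or any existing middleware's API:

```python
# Illustrative sketch of narrow interfaces between environment, middleware
# and decision making; all class and method names are hypothetical.

from abc import ABC, abstractmethod

class Environment(ABC):
    @abstractmethod
    def percepts(self) -> dict: ...
    @abstractmethod
    def act(self, action: str) -> None: ...

class DecisionMaking(ABC):
    @abstractmethod
    def decide(self, percepts: dict) -> str: ...

class Middleware:
    """Connects any Environment to any DecisionMaking component."""
    def __init__(self, env: Environment, dm: DecisionMaking):
        self.env, self.dm = env, dm

    def tick(self) -> str:
        # If everyone codes against these interfaces, swapping the
        # environment (or the DM language binding) is a constructor
        # change rather than weeks of integration work.
        action = self.dm.decide(self.env.percepts())
        self.env.act(action)
        return action

# Minimal concrete stand-ins to show a swap:
class DummyEnv(Environment):
    def percepts(self): return {"enemy_visible": True}
    def act(self, action): pass

class ReflexDM(DecisionMaking):
    def decide(self, p): return "attack" if p["enemy_visible"] else "move"

print(Middleware(DummyEnv(), ReflexDM()).tick())  # attack
```

Whether real middleware can be squeezed into interfaces this narrow is exactly the open question; the point of the sketch is only what "interchangeable components" would have to look like.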