guidelines:universal_agent_architectures [2011/12/22 15:15] (current) michal.bida created
====== Introduction ======
There are many efforts in academia that aim to create universal agent architectures, or to develop agent-oriented languages that would speed up the development of agents in various environments. Despite years of effort in this area, there has been no real breakthrough in agent languages or architectures. The most widely used solutions are still finite state machines or behavior trees. Complex agent architectures/
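To make the contrast concrete, here is a minimal sketch of the kind of finite state machine that dominates game AI in practice; the states, inputs, and thresholds are invented for illustration, not taken from any particular engine:

```python
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    ATTACK = auto()
    FLEE = auto()

class FSMAgent:
    """Minimal hand-written FSM for a hypothetical game bot."""

    def __init__(self):
        self.state = State.PATROL

    def step(self, sees_enemy: bool, health: int) -> State:
        # Transition rules are enumerated by hand, one per situation;
        # this is simple and predictable, but scales poorly.
        if self.state is State.PATROL and sees_enemy:
            self.state = State.ATTACK
        elif self.state is State.ATTACK and health < 30:
            self.state = State.FLEE
        elif self.state is State.FLEE and not sees_enemy:
            self.state = State.PATROL
        return self.state

agent = FSMAgent()
agent.step(sees_enemy=True, health=100)  # PATROL -> ATTACK
agent.step(sees_enemy=True, health=20)   # ATTACK -> FLEE
```

A handful of such rules already yields behavior that players read as intelligent, which is part of why heavier architectures struggle to justify themselves.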

====== Problems ======
  - The environments available nowadays are not complex enough to really benefit from complex agent languages such as JSoar, Jason, etc.
  - Today's environments usually define a few specific problems; if you solve them properly, the agent's behavior is then perceived as intelligent,
  - Which behaviors even require a complex decision-making architecture in 3D FPS games? Picture the limited view of the player (for instance in Oblivion, where she cannot see the whole world, only a small part of it), who thus interacts with only a few agents at once; you can display intelligent behavior there even when all you have is "

====== Methodology ======
Research in CS disciplines like machine learning, computer vision, planning, or database systems relies heavily on a methodology where one:
  - implements a new algorithm
  - tests its performance on some STANDARDIZED dataset/
This enables one to compare the performance of different approaches, evaluate the usefulness of new features, and possibly advance the research faster. One common task used in agent programming languages is "Tower of Hanoi", but it doesn't resemble decision making in complex virtual environments.

In order to define such standardised environments/tasks we should:
  - identify key components of all agent systems - e.g. environment, middleware, decision making (DM), ...
  - identify the features of those components - e.g. does the middleware make some ontology-driven inference on percepts, or is this functionality realised by the DM? ...
  - define standardised tasks/environments, e.g.:
    * environment1 can be: a house with 6 rooms, 8 people and 50 objects; after 3 minutes a fire emerges in one of the rooms
      * task1: just get out, don't try to rescue others
      * task2: rescue all 8 people
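The example above could be written down as machine-readable data, so every team runs against exactly the same specification. The following is only a sketch of one possible encoding; the class and field names are invented here, not part of any existing benchmark:

```python
from dataclasses import dataclass, field

@dataclass
class Environment:
    """Hypothetical standardised environment description."""
    name: str
    rooms: int
    people: int
    objects: int
    events: list = field(default_factory=list)  # (time_in_seconds, description)

@dataclass
class Task:
    """A goal posed in an environment, with an agreed success metric."""
    name: str
    goal: str
    success_metric: str

# The fire-escape scenario from the list above, expressed as data:
environment1 = Environment(
    name="burning house",
    rooms=6, people=8, objects=50,
    events=[(180, "fire emerges in one of the rooms")],
)
task1 = Task("escape", goal="the agent leaves the house",
             success_metric="time to exit")
task2 = Task("rescue", goal="all 8 people leave the house",
             success_metric="people saved")
```

Fixing the metric in the task description is the important part: without it, two papers can run the same scenario and still report incomparable numbers.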

With this environment and standardised tasks we can eventually measure the performance of different components.
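Such measurement amounts to a simple cross product: every agent runs every standardised task and reports a score. A minimal harness might look like the sketch below, where the agent names and the scoring stub stand in for real decision-making systems and a real simulator:

```python
def evaluate(agents, tasks, run):
    """Run every agent on every task.

    `run(agent, task) -> float` is supplied by the benchmark;
    returns a {(agent, task): score} table for comparison.
    """
    return {(a, t): run(a, t) for a in agents for t in tasks}

# Toy stand-in for an actual simulation run (invented numbers):
def run_stub(agent, task):
    scores = {("fsm", "escape"): 0.9, ("fsm", "rescue"): 0.4,
              ("bt", "escape"): 0.8, ("bt", "rescue"): 0.7}
    return scores[(agent, task)]

results = evaluate(["fsm", "bt"], ["escape", "rescue"], run_stub)
best_on_rescue = max(["fsm", "bt"], key=lambda a: results[(a, "rescue")])
```

The table makes claims like "architecture X helps on rescue-style tasks but not on escape-style tasks" checkable rather than anecdotal.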

Maybe check out http://

Note:
  * One danger can be that the set of standardised tasks shapes future research. E.g. in the planning community, the majority of published papers use domains from the ICAPS planning competitions, while less research focuses on problems that are not covered by the planning problems used in the competition.
  * Another example of existing testing environments - http://
  * Reproducible science - http://

Motto: one or two GOOD agent programming languages are enough, we don't need 20.

===== Inherent obstacle =====

Different components of multiagent systems aren't easily interchangeable; e.g. connecting a decision making system to a middleware (e.g. Pogamut and UDK) needs several weeks of work by an experienced programmer. The same applies to connecting agent programming languages. It is much easier to build on the work of others in e.g. math, where you can take a newly proved lemma and start your own theory on it.