A picture instead of a million words

Extending events

So how do we handle reset() or freeze()/unfreeze() of the worldview? The ISensors method reset() is not enough, as we may need to just fade out; moreover, reset() really means "synchronize with logic", so the name is wrong. It should be something like "doLogicEnded" or "worldViewUnfrozen" so the sensor module can listen for it / do something when this event arises. Thus we should provide such methods in general and let people listen for them, doing whatever they need when such an event is received.

Thus we should implement more events in the core in general. And we probably need a new layer for these virtual events / virtual commands.
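A minimal sketch of what such a listener could look like; the interface and method names here are placeholders for illustration, not existing core API:

interface ICoreEventListener {
    public void doLogicEnded();      // placeholder: logic has just finished, the "synchronize with logic" point
    public void worldViewUnfrozen(); // placeholder: the frozen worldview has been resumed
}

A sensor module would implement this interface and register itself in the core, reacting (e.g. fading out its readings) whenever such an event arises.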

MODULES - LATEST PROPOSAL

Memory modules will be simple - almost just sensors. They will work in sync with the doLogic() method. The "memory" will register various listeners on the WorldView and store events between doLogic() calls.

A) doLogic is called
B) doLogic is running and querying the memory modules, which are frozen
C) doLogic ends
D) memory modules are cleared and the new events that came during A/B/C are committed to the modules
E) goto A

We're assuming we have only a SyncThreaded bot (no ASync threaded bot).

interface ISensoryModule {
   public void reset(); // this will wipe the sensory readings in the module
}
ISensoryModule.reset();
// will be called in D, before the WorldView is "resumed" (so as not to wipe fresh information)

This means we also need a registerModule() method that will allow users to subscribe their own modules.
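A sketch of how the core could drive the whole A-E cycle; aside from ISensoryModule, all names (SyncThreadedCore, LockableWorldView with freeze()/resume(), IBotLogic) are placeholders, not settled API:

import java.util.ArrayList;
import java.util.List;

// hypothetical sketch of the A-E cycle; LockableWorldView and IBotLogic are placeholders
public class SyncThreadedCore {
    private final List<ISensoryModule> modules = new ArrayList<ISensoryModule>();
    private final LockableWorldView worldView;
    private final IBotLogic logic;

    public SyncThreadedCore(LockableWorldView worldView, IBotLogic logic) {
        this.worldView = worldView;
        this.logic = logic;
    }

    // lets the user subscribe his/her own module
    public void registerModule(ISensoryModule module) {
        modules.add(module);
    }

    // one iteration of the A-E cycle
    void logicCycle() {
        worldView.freeze();                    // events arriving from now on are buffered
        logic.doLogic();                       // A+B) logic queries the frozen modules
        for (ISensoryModule module : modules) {
            module.reset();                    // D) wipe old readings before resuming
        }
        worldView.resume();                    // buffered events are committed to the modules
    }                                          // E) goto A
}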

BOT types

1) event-driven bot … has a listener on the END message that starts doLogic; doLogic will at first be implemented by us and will check whether the last doLogic didn't take too much time
2) concurrent bot … has a second thread, so it must have a LockableWorldView!
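A sketch of the framework-provided wrapper for the event-driven bot; the class name and the 250 ms budget are invented for illustration:

// hypothetical sketch; names and the time budget are invented
public class EndMessageListener {
    private static final long MAX_LOGIC_MILLIS = 250;
    private final IBotLogic logic;

    public EndMessageListener(IBotLogic logic) {
        this.logic = logic;
    }

    // invoked whenever an END message arrives from GameBots
    public void onEndMessage() {
        long start = System.currentTimeMillis();
        logic.doLogic();
        long elapsed = System.currentTimeMillis() - start;
        if (elapsed > MAX_LOGIC_MILLIS) {
            System.err.println("doLogic took too long: " + elapsed + " ms");
        }
    }
}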

Body module

Questions: ergonomics of names
1) you don't know what you need and want to explore: body.simple.locomotion.move()
2) or you know what you're doing and have just loc.move() or body.move()

interface SimpleModule
interface AdvancedModule extends SimpleModule (only if viable)

IMap, IPathPlanner, IPathExecutor

Executor -- calls --> PathPlanner -- uses information from --> Map

interface IMap {
// already done I think as a specific type of event
}
 
interface IPathPlanner {
    public void getPathTo(ILocation location);
} // can be GameBots / Floyd-Warshall / A* / etc.
// two types of IPathPlanner ... sync (A*, etc.) / async (GB)
interface IPathExecutor {
    public void goTo(ILocation location);
}

- strictly event based - produces events for everything (path started, path can't be obtained, path broken, location reached, etc.) … it doesn't matter whether it uses a sync or an async path planner

IPathExecutor registers itself with the WorldView, watching for events that need to be handled (e.g. steering / dodging ~ hearing noises…).

IPathExecutor defines new events (it is a source of events): path broken / location reached.
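These could be plain event classes delivered through the WorldView; a sketch with invented names:

// hypothetical event classes raised by IPathExecutor; names are invented
class PathBrokenEvent {
    // details of where/why the path broke would go here
}

class LocationReachedEvent {
    private final ILocation location;

    public LocationReachedEvent(ILocation location) {
        this.location = location;
    }

    public ILocation getLocation() {
        return location;
    }
}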

We should also have path executor decorators like "InterruptiblePathExecutor" that will watch for the user-sent commands RunTo and TurnTo and stop itself upon such an action.
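A sketch of such a decorator; the onUserCommand() hook and the stop() method it relies on are assumptions, not settled API:

// assumed addition: lets decorators halt a running executor
interface IStoppablePathExecutor extends IPathExecutor {
    public void stop();
}

// hypothetical decorator sketch; onUserCommand() is an invented hook
class InterruptiblePathExecutor implements IStoppablePathExecutor {
    private final IStoppablePathExecutor inner;

    public InterruptiblePathExecutor(IStoppablePathExecutor inner) {
        this.inner = inner;
    }

    public void goTo(ILocation location) {
        inner.goTo(location);
    }

    public void stop() {
        inner.stop();
    }

    // called by the command layer whenever the user sends RunTo or TurnTo
    public void onUserCommand() {
        stop(); // the executor stops so it does not fight the manual command
    }
}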

RayTracingManager

Implements: ISensoryModule. Upon calling reset(), it clears the sensory readings of all the rays it has.

Methods:

interface IRayTracingModule {
  IRay addRay(Way way, double distance);
  // returns an IRay object that represents the new ray sensor
  // (Way = direction of the ray; the exact type is still open)

  void removeRay(IRay ray);

  void clearRays();
}

interface IRay {
  boolean get();                 // whether the ray bounces (hit something)
  AutoTraceMessage getMessage(); // details of the sensor reading
}
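A usage sketch from the logic's side; the ObstacleAvoidingLogic class and the Way.LEFT_FRONT constant are invented here for illustration:

// hypothetical usage of the ray tracing module inside doLogic()
public class ObstacleAvoidingLogic {
    private final IRay leftFront;

    public ObstacleAvoidingLogic(IRayTracingModule rayTracing) {
        // one ray sensor pointing slightly to the left, 200 UT units long
        this.leftFront = rayTracing.addRay(Way.LEFT_FRONT, 200);
    }

    public void doLogic() {
        if (leftFront.get()) {                          // the ray hit something
            AutoTraceMessage hit = leftFront.getMessage();
            // steer away using the details of the reading
        }
    }
}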

Interface IPickupableObject

- has its own id
- has a type
- extends InfoObject? why?
- IInventory object in the base … creates a new type of event: pickup (lost?)

GesturesManager

Has a second, binary connection to GameBots. Through this connection it can trigger animations for the bot and generate the BMLAct-finished event. It can also send regular GameBots commands through the normal GB connection (such as MESSAGE or TURNTO) - these commands are necessary for more complex BMLActs that would feature synchronization of speech with a gesture, or of a gesture with some simple movement.

It works in two modes - good and evil. In evil mode, when some BMLAct is executed, the module takes over the bot control completely (the logic will freeze, or no commands can be run by the logic); the logic is resumed after the act is finished. In good mode, when some BMLAct is executed, the logic is not stopped; it can ask the module whether the gesture is still running, and the BMLAct-finished event is raised when the act is finished. When some other command is run by the logic while we are executing the act, we will interrupt the BML act and perhaps generate some ActInterrupted event…
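A sketch of good-mode usage from the logic's point of view; the shape of IGestures shown here (playAct(), isActRunning()) is invented for illustration:

// hypothetical shape of the gestures module in good mode; names are invented
interface IGestures {
    public void playAct(String actName);   // trigger a BMLAct, logic keeps running
    public boolean isActRunning();         // poll whether the act is still playing
}

public class GreetingLogic {
    private final IGestures gestures;

    public GreetingLogic(IGestures gestures) {
        this.gestures = gestures;
    }

    public void doLogic() {
        if (!gestures.isActRunning()) {
            gestures.playAct("wave");      // the logic is NOT stopped in good mode
        }
        // ...the rest of the logic runs; a BMLAct-finished event arrives when the
        // act completes, and running another command meanwhile would interrupt
        // the act (possibly raising ActInterrupted)
    }
}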

Module descriptions

You can add/change the description of proposed modules. The image source can be found in the SVN repository in \branches\devel\docs\documentation\architecture\media\ - UT2004Agent.svg.

Questions: Where and how will the good old Pogamut methods be implemented?

The HandShake protocol is a bit messy in our core (as Jakub noted); what will we do with it?

How should we store the packages in the agents - body, sensors, memory, worldview - what should the hierarchy be? Ondra: they should be as independent as possible.

General remark: there are too many methods/commands for a beginner. I would introduce almost everywhere some sort of SimpleMemory, SimpleCommands, SimpleNavigation, etc. and create a few (let's say 2-3) bots using strictly those interfaces, so that the beginner won't be overwhelmed and disgusted by the hundreds of methods he would otherwise have to know about.

Commands module

- in fact two modules - one for beginners, one for experts

IRayTracing

INavigation

IMap

IListeners

ISensors

IMemory

IGestures

IInventory

I would provide the possibility to add listeners for Weapon, Ammo, and other events:
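For example, a sketch of per-type listener registration; all names below (the listener interfaces, method names, and the Weapon/Ammo types) are invented, not final API:

// hypothetical sketch of the IListeners module; all names are invented
interface IWeaponListener {
    public void weaponChanged(Weapon weapon);   // fired on Weapon events
}

interface IAmmoListener {
    public void ammoChanged(Ammo ammo);         // fired on Ammo events
}

interface IListeners {
    public void addWeaponListener(IWeaponListener listener);
    public void addAmmoListener(IAmmoListener listener);
    // ...and so on for the other event types
}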

User defined modules

Episodic Memory / Emotions / etc. We can clearly see in the picture where our kind user can put them :).

Remote Control

remotecontrol.jpg