Damian Isla, Peter Gorniak
Damian Isla and Peter Gorniak gave examples and described ways to put knowledge representation into games – representing the world in a form the AI can use, basically.
Damian’s example of an NPC searching for a player works well with such simple rules, while Peter’s example shows an AI that can complete puzzles and act contextually given limited input from the player (e.g. speech recognition and position).
- GDC Page
- There seems to be some issue with the slides entry at the GDC site – 404 errors for the movies, and no PowerPoints. The videos cover the items I mention above, which are pretty neat to watch.
We spend a lot of time on what the AIs do, but not on what they know and what they base their decisions on.
Really coming to the conclusion that FSMs, HFSMs and behaviour trees are all pretty much the same.
Behavioural knowledge – knowing how to run, when to shoot, how to flank – versus the “knowledge” of where a location is.
Perception of the thing is not equal to the thing itself. The knowledge representation is a layer between the agent’s behaviour and the objects in the world they need to check for information.
Why is KR interesting? It is fun to play against an actor, it is lifelike – the player can make up a lot about what they think the AI is thinking, especially the emotional reactions they make up.
You want better primitives – a comparison to writing in Java or C# versus writing in assembly. You need to be more high-level – the conceptual representation, not the real object.
There are timescales of knowledge – “Dogs are animals” versus instant things like “I have three bullets left”. Different time horizons. There are things true for long periods but not forever – “Bobby is 5 years old”. I don’t know what these representations look like, just that this is one possible way to divide them up.
3 key concepts:
- Confidence – How sure am I about the knowledge?
- Salience – How important is the sensory data I’m getting?
- Prediction – What do I think will happen in the future given what I know?
The demo showed a simple move-to-point agent that can guess where the player is. There is also a short delay when it is confused – if the player isn’t where the AI thought they were.
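A minimal sketch of how those three concepts – confidence, salience and prediction – might look as a percept record. All names and the exponential decay are my assumptions; the talk doesn’t prescribe an implementation:

```python
import math

class PerceivedObject:
    """One entry in an agent's knowledge base about a world object."""

    def __init__(self, position, velocity, decay_rate=0.5):
        self.position = position      # last observed position (x, y)
        self.velocity = velocity      # last observed velocity
        self.confidence = 1.0         # how sure am I this is still true?
        self.salience = 0.0           # how important is this percept right now?
        self.decay_rate = decay_rate  # hypothetical tuning knob
        self.time_since_seen = 0.0

    def tick(self, dt):
        """No new observation this frame: decay confidence over time."""
        self.time_since_seen += dt
        self.confidence = math.exp(-self.decay_rate * self.time_since_seen)

    def predicted_position(self):
        """Prediction: dead-reckoned guess at where the object is now."""
        t = self.time_since_seen
        return (self.position[0] + self.velocity[0] * t,
                self.position[1] + self.velocity[1] * t)

    def observe(self, position, velocity, salience=1.0):
        """Fresh sensory data: overwrite the belief and reset confidence."""
        self.position, self.velocity = position, velocity
        self.confidence = 1.0
        self.salience = salience
        self.time_since_seen = 0.0
```

The confused-pause behaviour then falls out naturally: if confidence was still high but the player isn’t at `predicted_position()`, the agent has grounds to act surprised.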
The expected emotions might well be confusion – something I was confident was true is false – whereas I will be surprised if something happens that I didn’t predict.
Can you put that into a FPS? I don’t know but maybe there are games you can put it into. He is endorsing the interesting properties of the algorithm not necessarily the algorithm itself.
For targeting data, the AI receives a copy of the world data (e.g. position, health) in which confidence decays over time (via some decay function). You can also add derived data like threat level. Shared computation + expressive power.
For instance, attacking a player who hides – the AI might draw its knife near the hiding spot, but the player has moved, so the player surprises the AI, who still has his knife out.
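Derived data like threat level can be computed from the KR copy rather than the live world, so the whole squad shares the cost of perception while each behaviour reads a richer value. The formula below is entirely hypothetical – the point is only that it reads the decayed-confidence entry, not the engine:

```python
def threat_level(entry):
    """Derived datum computed from a KR target-list entry.

    Hypothetical formula: closer, healthier, better-armed enemies are
    more threatening, scaled by how confident we are the data is current.
    """
    weapon_danger = {"knife": 0.3, "pistol": 0.6, "rifle": 1.0}
    base = weapon_danger.get(entry["weapon"], 0.1)   # unknown weapon: low danger
    proximity = 1.0 / (1.0 + entry["distance"])      # falls off with distance
    return base * proximity * (entry["health"] / 100.0) * entry["confidence"]
```

A stale sighting (low confidence) is automatically rated less threatening than a fresh one, which is the expressive-power win.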
On memory, he’s not that knowledgeable about it. Some thoughts: keep a graph of old data for short-term memory. Working memory is super fast. Episodic memory – ??? – but may be “remember that” for a big event.
Two obvious challenges:
- Representation versus utility. Use polymorphism to represent only the relevant data, not everything in the game world, which is wasteful. The object can work out whether it is worth representing or not.
- Performance. Perhaps you can do a shared KR, e.g. for zombies or hive-mind agents. Perhaps split the KR into static and dynamic data with some sharing. Some categories could be shared, like “location of a crate”, but enemies might get an individual representation per AI. Perhaps even a single attribute is shared – like their current weapon.
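The static/dynamic split above could be sketched as a shared store with per-agent overrides – class and attribute names here are mine, not from the talk:

```python
class SharedKR:
    """Facts every agent can share: static or cheap-to-share data."""
    def __init__(self):
        self.facts = {}   # e.g. {"crate_7": {"location": (10, 4)}}

    def get(self, obj_id, attr):
        return self.facts.get(obj_id, {}).get(attr)


class AgentKR:
    """Per-agent view: private beliefs, falling back to the shared store."""
    def __init__(self, shared):
        self.shared = shared
        self.private = {}  # individual beliefs, e.g. about enemies

    def get(self, obj_id, attr):
        # A private belief wins; otherwise read the shared fact.
        if obj_id in self.private and attr in self.private[obj_id]:
            return self.private[obj_id][attr]
        return self.shared.get(obj_id, attr)
```

One `SharedKR` for the crates, one `AgentKR` per enemy-tracking AI, and the memory cost scales with how much each agent actually disagrees with the shared picture.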
There are limitations of target lists:
- Doesn’t do relational information. Where does “behind” live? (there is no way to say “object A is behind object B”).
- Wholes and parts – does a car’s wheel deserve its own representation? What about a mob of guys which is never represented in the game code itself?
Wild speculation – maybe some form of lazy representation – a small semantic network rather than a huge one, built on the fly.
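A toy version of such a network – just (subject, relation, object) triples, asserted on the fly – shows where “behind” could live, which a flat target list can’t express. This is my sketch of the speculation, not anything shown in the talk:

```python
class SemanticNet:
    """Tiny on-the-fly semantic network of (subject, relation, object) triples."""
    def __init__(self):
        self.triples = set()

    def assert_fact(self, subj, rel, obj):
        self.triples.add((subj, rel, obj))

    def query(self, subj=None, rel=None, obj=None):
        """Return all triples matching the fields that are not None."""
        return [t for t in self.triples
                if (subj is None or t[0] == subj)
                and (rel is None or t[1] == rel)
                and (obj is None or t[2] == obj)]
```

Now “object A is behind object B” is `assert_fact("player", "behind", "crate_7")`, and the wholes-and-parts problem gets a home too via a `part_of` relation.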
It is perfectly fine to have different kinds of KR in your game. You can keep them separate, e.g. one for quick reactions and so forth.
Where possible it is a good idea to keep the KR data separate from the game engine itself. KR also isn’t just a bunch of facts – there is reasoning behind it.
Predicates help prove the preconditions for running a behaviour. A depth-first search over the preconditions, with backtracking, can find a valid behaviour. Efficiency is not as good as a non-backtracking A*.
To help with efficiency you can spread plan processing across frames, allocate static memory and use enums. To tame the depth-first search’s cost you can make the search interruptible to spread it out. Not the best for purely reactive AI, but there are a lot of AI problems that are not purely reactive.
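The backtracking precondition search might look like the sketch below – the data shapes (`behaviours` as name → (preconditions, effect)) are my assumption, and making it interruptible would mean turning the recursion into an explicit stack you can resume across frames:

```python
def find_plan(goal, behaviours, facts, depth=4):
    """Depth-first backtracking search for a behaviour chain achieving `goal`.

    `behaviours` maps behaviour name -> (preconditions, effect).
    Returns an ordered list of behaviour names, or None if no plan exists.
    """
    if goal in facts:
        return []                      # precondition already true
    if depth == 0:
        return None                    # depth bound: give up on this branch
    for name, (preconds, effect) in behaviours.items():
        if effect != goal:
            continue
        plan, ok = [], True
        for pre in preconds:           # try to prove each precondition
            sub = find_plan(pre, behaviours, facts, depth - 1)
            if sub is None:
                ok = False             # backtrack: try the next behaviour
                break
            plan.extend(sub)
        if ok:
            return plan + [name]
    return None
```

For example, with `{"open_door": (["have_key"], "door_open"), "grab_key": ([], "have_key")}` and no known facts, `find_plan("door_open", ...)` backtracks through the preconditions and returns `["grab_key", "open_door"]`.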
Moving on to another topic – situated knowledge for companions – is about natural language processing so companions can understand commands. Recognising intention is harder for an AI than for a human, who is already trained in the situation (knows how to play the game).
Knowing about the opportunities for action is key (not just knowing there are objects, colours and spatial relations). You need a plan grammar covering all the kinds of actions, so that high-level events (such as “open chest”, which might mean changing rooms and pulling levers) can be recognised.
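One way to picture a plan grammar: each abstract action expands into the primitive events the engine can actually observe, and intention recognition asks which expansions the observed events are a prefix of. The grammar entries and matching rule here are purely illustrative:

```python
# Hypothetical plan grammar: abstract action -> ordered primitive events.
PLAN_GRAMMAR = {
    "open_chest": ["enter_room", "pull_lever", "lift_lid"],
    "light_fire": ["gather_wood", "strike_flint"],
}

def recognise_intent(observed):
    """Return the abstract actions whose expansion the observed event
    sequence is a prefix of - i.e. what the player might be trying to do."""
    return [action for action, steps in PLAN_GRAMMAR.items()
            if observed == steps[:len(observed)]]
```

Seeing `["enter_room", "pull_lever"]` narrows the hypothesis to `open_chest`, which is exactly the situated guess a companion needs before the player finishes the sequence.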
Representing this knowledge is key to having companions actually work out what to do just as players do, rather than having to click something and hope the companions do the right thing.