Impressive starting slide
A two-person lecture – Alex covers animation such as walking, and the prototyping and work that went into finding a solution; Christian looks at Uncharted, and practical pathfinding and action selection when linked with animation.
Christian and Alex
Part 1: Alex
NPC Locomotion. It can be done in a way that is responsive and looks really good. It is the system that moves characters around in space – part of the layer between AI and animation.
The talk moves from techniques that are quick to implement and simple to understand towards more complex methods. A simple problem is patrolling which is then interrupted. The locomotion itself is simple, but dynamic factors can complicate it.
Sliding and blending – idle, walking and running motions, with blending between them. Three assumptions: the world is given only as a set of things that need to be checked; locomotion is driven by setting a point in space that will be moved to; and a blend tree is used to go between animations.
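The blend-tree idea can be sketched as a 1-D blend: given a desired speed, weight the two clips that bracket it. This is a minimal illustration, not the speakers' system – the clip names and reference speeds are invented.

```python
# Minimal 1-D blend tree sketch: map a desired speed to blend weights
# for the two bracketing clips. Clip names/speeds are illustrative.

def blend_weights(speed, clips=(("idle", 0.0), ("walk", 1.5), ("run", 4.0))):
    """Return {clip_name: weight} blending the two clips bracketing `speed`."""
    if speed <= clips[0][1]:
        return {clips[0][0]: 1.0}
    if speed >= clips[-1][1]:
        return {clips[-1][0]: 1.0}
    for (name_a, s_a), (name_b, s_b) in zip(clips, clips[1:]):
        if s_a <= speed <= s_b:
            t = (speed - s_a) / (s_b - s_a)  # linear interpolation factor
            return {name_a: 1.0 - t, name_b: t}

weights = blend_weights(0.75)  # halfway between idle and walk
```

Real blend trees are often 2-D (speed and turn rate), but the weighting principle is the same.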
Lesson 1: You can use standard AI logic to build a simple locomotion system.
Lesson 2: Everything blends, though it might not look good. Blending gives responsiveness, but sometimes it needs to be used sparingly.
Lesson 3: Procedural steering/slide controllers can move anywhere in space.
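A procedural slide controller of the kind Lesson 3 describes can be sketched as a seek/arrive steering step: turn toward the target within a turn-rate budget, then move at a clamped speed. All constants here are illustrative assumptions.

```python
import math

# Hedged sketch of a procedural "slide" controller: steer toward a target
# point, clamping turn rate and speed. Numbers are made up for illustration.

def steer(pos, heading, target, dt, max_speed=4.0, max_turn=math.pi):
    """Advance (pos, heading) one tick toward target; returns the new state."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    desired = math.atan2(dy, dx)
    # shortest signed angle difference, clamped to the turn-rate budget
    diff = (desired - heading + math.pi) % (2 * math.pi) - math.pi
    heading += max(-max_turn * dt, min(max_turn * dt, diff))
    dist = math.hypot(dx, dy)
    speed = min(max_speed, dist / dt)  # slow to a stop on arrival
    return ((pos[0] + math.cos(heading) * speed * dt,
             pos[1] + math.sin(heading) * speed * dt), heading)
```

Because position comes purely from this math, the character can reach any point in space – which is exactly why the animation can look like sliding.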
With simple sliding, though, animation problems can occur. You can then try movement driven by animation. We assume there is some lining up of animations and distances. You have a cycle of animations that run continually – run another motion once the current one stops. This has some advantages, but the animations need careful checking to make sure they are bullet-proof with respect to spatial awareness.
Transitions can be used to blend animations better – going straight from idle to walking can look really bad. The locomotion system has to be able to get information from the animation system to pick the right transition.
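One way to read this: the locomotion layer queries the animation system's current state and looks up a dedicated transition clip if one exists, falling back to a plain blend otherwise. The state and clip names below are invented for illustration.

```python
# Sketch of transition selection: look up a hand-authored transition clip
# for a (current, desired) state pair. All names are hypothetical.

TRANSITIONS = {
    ("idle", "walk"): "idle_to_walk",
    ("walk", "idle"): "walk_to_idle",
    ("walk", "run"):  "walk_to_run",
}

def pick_transition(current_state, desired_state):
    """Return a transition clip if one is authored, else fall back to a blend."""
    if current_state == desired_state:
        return None  # nothing to do
    return TRANSITIONS.get((current_state, desired_state), "crossfade_blend")
```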
Moving onto parametric animation, where animations take parameters, so steering and changing direction look much better while moving. There are still some issues transitioning from movement to an idle that stops exactly on the target point.
This is a lot of work to implement – you need tools (such as mirroring, so symmetrical animations can be copied), and it is complex to build a graph around fully parametric motions: the different transitions between states are a nightmare to manage.
Adding a step-based planner on top can help choose the logical animation to run. This has advantages and disadvantages – it does a lot of searching, which gets expensive when planning longer paths.
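The step-based planning idea can be shown with a toy: treat each "step" animation as a fixed positional offset and search (A*) for the sequence that reaches a goal. Real systems plan over clip parameters and footfalls; this grid version only illustrates why the search cost grows with path length.

```python
import heapq

# Toy step-based planner: A* over a small set of "step" animations, each
# moving the character by a fixed grid offset. Names/offsets are invented.

STEPS = {"step_fwd": (1, 0), "step_left": (0, -1), "step_right": (0, 1)}

def plan_steps(start, goal, max_depth=20):
    """Return the list of step-animation names reaching goal, or None."""
    def h(p):  # Manhattan-distance heuristic (admissible for unit steps)
        return abs(goal[0] - p[0]) + abs(goal[1] - p[1])
    frontier = [(h(start), 0, start, [])]
    seen = {start}
    while frontier:
        _, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if g >= max_depth:
            continue
        for name, (dx, dy) in STEPS.items():
            nxt = (pos[0] + dx, pos[1] + dy)
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [name]))
    return None
```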
Near-optimal controllers are a method that trades memory – a lookup table – to precompute things so the runtime search is faster. This can be built with reinforcement learning. Full continuous planning is the holy grail beyond this.
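The memory-for-speed trade can be illustrated with a tiny offline table: precompute, for each discretised distance-to-target, which clip minimises the number of steps, then answer runtime queries with a constant-time lookup instead of a search. The clips and costs below are entirely made up.

```python
# Sketch of a near-optimal controller: precompute a lookup table
# (distance bucket -> best clip) offline, query it in O(1) at runtime.

CLIPS = {"shuffle": 1, "walk_cycle": 2, "run_cycle": 4}  # distance covered per clip

def build_table(max_dist=10):
    """table[d] = (clip, steps) minimising steps to cover distance d."""
    table = {0: (None, 0)}
    for d in range(1, max_dist + 1):
        best = min(
            ((clip, 1 + table[max(0, d - cover)][1]) for clip, cover in CLIPS.items()),
            key=lambda c: c[1],
        )
        table[d] = best
    return table

TABLE = build_table()

def best_clip(distance):
    return TABLE[distance][0]  # constant-time lookup instead of planning
```

A learned controller would fill the table with reinforcement learning rather than exhaustive enumeration, but the runtime shape – look up, don't search – is the point.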
Part 2: Christian Gyrling
Doing the practical implementation in a shipped game. “Controlling animation directly from the AI often creates a messy interface and ugly code”. Looking at a practical implementation in Uncharted 1 and 2.
Classically it is puppet and puppet master – the AI tells the locomotion logic what to do directly. The “what” and the “how” are tightly coupled.
A new method is the relationship of drill sergeant and private. The AI is the proactive sergeant while the character is reactive – it carries out the orders. There is no coupling at all between them – the orders are descriptive, not direct.
First the decision engine AI gives the order. The autonomous character gives back request handles – and the AI decision engine can check the status of any particular order. Now there is AI, Character and Animation.
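The order/handle interface can be sketched like this – the AI issues a descriptive order, receives an opaque handle, and polls status later. All class and method names are invented; the talk only describes the shape of the interface.

```python
import itertools

# Sketch of the drill-sergeant interface: AI gives descriptive orders,
# gets back request handles, and polls them. Names are hypothetical.

class Character:
    _ids = itertools.count(1)

    def __init__(self):
        self._orders = {}  # handle -> status

    def give_order(self, order):
        """Accept a descriptive order; return an opaque request handle."""
        handle = next(Character._ids)
        self._orders[handle] = "in_progress"
        return handle

    def status(self, handle):
        return self._orders.get(handle, "unknown")

    def _complete(self, handle):  # called internally when an order finishes
        self._orders[handle] = "done"

npc = Character()
h = npc.give_order({"type": "move_to_cover"})
assert npc.status(h) == "in_progress"
```

The key property is that the AI never learns *how* the order is executed – only whether it is pending, done, or failed.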
The navigation solution is thus very separate. The character handles the second-level work – carrying out orders, world and navigation queries, pathfinding and the animation modules. It also deals with voluntary and involuntary movement – walking vs. a hit reaction.
The necessary building blocks
This has big advantages – behaviour logic is separate, and designers can see nearly word for word what they want. Navigation and animation are taken out of the AI arena, and the AI logic can also be decoupled and update asynchronously (every second versus every 1/30th of a second). The AI logic doesn’t need to worry about how an order is performed.
Integrating scripting is a necessity – designers need it, and it can be part of the system. There are 3 ways: exposed parameters (% chances or whatever), scripted behaviours (do something exact when something specific happens), and taking 100% control over the character – bypassing the AI to animate the character directly, e.g. in cutscenes. The AI will query and the character will say “I’m busy now”, so the AI doesn’t start anything.
Navigation resources – using navmeshes, large polygons of where you can move to. However it only defines static geometry. Also on top use a dynamic navigation map – high resolution axis-aligned grid. The grid contains static objects and dynamic objects – hazards, grenades, the player, blockages, etc.
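The two-layer navigation data can be sketched as: a cell is walkable only if the static navmesh allows it *and* no dynamic blocker occupies the overlaid grid cell. The data shapes below are stand-ins, not Naughty Dog's actual representation.

```python
# Sketch of a dynamic grid layered over static navigation data:
# walkability = static geometry AND no dynamic blocker. Illustrative only.

STATIC_WALKABLE = {(x, y) for x in range(10) for y in range(10)}  # navmesh stand-in
dynamic_blockers = {(3, 3), (3, 4)}  # e.g. a grenade's danger cells, the player

def walkable(cell):
    """Pathfinding queries this per cell on the high-resolution grid."""
    return cell in STATIC_WALKABLE and cell not in dynamic_blockers
```

Because blockers live in their own layer, hazards can appear and disappear every frame without rebuilding the static navmesh.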
The process: an order is given – say “move to cover”. The character then takes over the decision making – it checks whether it can make it to cover using the static navmesh, then does full pathfinding on the dynamic navigation map to find the actual path. The AI just waits for the order to complete during this time.
The character deals with cover resources too. You can’t have two characters in the same cover, so the character part reserves the spot. The character system then moves up close to the cover; the animation system says there is a special jump – so it moves to that point, then (against what Alex says) just does a quick blend into the jump – at that speed it isn’t noticed.
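The reservation idea is simple to sketch: a shared manager hands out each cover spot at most once, so a second character asking for the same spot is refused and must pick another. The API below is invented for illustration.

```python
# Sketch of cover-spot reservation: at most one character per spot.
# Class and spot names are hypothetical.

class CoverManager:
    def __init__(self, spots):
        self._free = set(spots)
        self._taken = {}

    def reserve(self, character_id, spot):
        """Try to claim a spot; returns False if someone already holds it."""
        if spot not in self._free:
            return False
        self._free.discard(spot)
        self._taken[spot] = character_id
        return True

    def release(self, spot):
        owner = self._taken.pop(spot, None)
        if owner is not None:
            self._free.add(spot)

covers = CoverManager({"crate_left", "wall_low"})
assert covers.reserve("npc_1", "crate_left")
assert not covers.reserve("npc_2", "crate_left")  # already taken
```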
The AI can take over while the animations are blending out, so the next action can start immediately and blend in, rather than waiting for the blend-out to finish entirely.
The separation of character versus AI is very helpful – the work is hard, but this allows it to be abstracted more easily, and not just for AI and animation. Scripting is still essential in the game for more control.
What did you investigate, Christian? – Made it good enough that the player is satisfied, on a “need to do” basis.
Is there any way the character can provide feedback that they are doing a bad order? – There are no bad orders; the character does no reasoning.
Can the order be interrupted? – Yes, the character supports changing an order at any point. Good idea to check how long an order will take.
What about non-orthogonal AI – two orders at once, something to do with your hands and with your feet, for instance? – The character feeds back what it is doing. You can order them to look a certain way while moving, for instance, but the character can abort things so it doesn’t look like ass, and tells the AI.
How do you deal with dynamic and moving objects? – Need to replan at every opportunity, since things are changing so frequently. If things are not moving too fast it should look good enough.