The first AI roundtable – tons of topics covered and opinions put forward. I've no idea who said each item, since no one really gave their name.
Is anyone making emotional characters in conversational systems?
I am – building "artificial emotions", an engine that gives AI emotional responses to situations, with back-and-forth information flow simulating the thinking and empathy of characters.
Someone is working on dynamic conversations. When you get down to implementing it, it's not obvious how to make it work. Predicate calculus is not easy ("Are you the son of the terrorist?"), and getting a design interface for it is hard.
There seems to be a lot of topical stuff on things that have legs. I work on things that move around in space.
You can apply the work of Chris Jurney on vehicle movement and path prediction, which could possibly be extended to three dimensions.
From someone else's experience: you steer away along the normal of an obstacle directly in front of you, and constant re-checking gives you smoothing.
For moving to specific combat positions, apply Craig Reynolds-style steering to reach specific points in space.
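The Reynolds-style approach mentioned here can be sketched minimally as an "arrive" behaviour: head toward the target at full speed, slowing down inside a radius so the agent settles on the point instead of overshooting. All parameter names and values (`max_speed`, `slow_radius`, `dt`) are hypothetical tuning choices, not anything stated in the discussion.

```python
import math

def arrive(pos, target, max_speed=5.0, slow_radius=4.0, dt=0.1):
    """Reynolds-style 'arrive': move toward the target, slowing inside slow_radius.
    max_speed/slow_radius/dt are hypothetical tuning values."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist < 1e-6:
        return pos
    # Desired speed ramps down linearly as we close in on the target.
    speed = max_speed * min(1.0, dist / slow_radius)
    return (pos[0] + dx / dist * speed * dt,
            pos[1] + dy / dist * speed * dt)

pos = (0.0, 0.0)
for _ in range(200):
    pos = arrive(pos, (10.0, 3.0))
```

The same idea extends to three dimensions by adding a z component; the slowing radius is what gives the "get to a specific point" behaviour rather than endless orbiting.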
A nitpick on combat with vehicles that move in three dimensions – they either always miss or always hit.
You can hit anything by calculating it exactly, so it needs to be toned down.
For tank combat, a first-shot hit is now realistic post-Gulf War.
Optimality, authenticity and fun for AI.
For sports games: someone who moved away from state machines (with some deficiencies added in for fun) found it just made things harder for the player to master, it seems, since there is nothing to play off – no predictability.
Why can't you add slider bars to categorise some things for the AI?
Fuzzying things up with fuzzy distributions – relates to the earlier point on shooting.
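One way to read "fuzzying up" the always-hit shooting problem is to perturb the perfect firing solution with a random distribution scaled by skill. This is a minimal sketch under that assumption; the spread curve and skill scale are invented for illustration.

```python
import random, math

def fuzzed_shot(shooter, target, skill, rng=random):
    """Perfect-aim angle perturbed by Gaussian noise; higher skill = tighter spread.
    skill in (0, 1]; the 0.25-radian base spread is a hypothetical tuning value."""
    perfect = math.atan2(target[1] - shooter[1], target[0] - shooter[0])
    spread = 0.25 * (1.0 - skill)  # std deviation in radians at skill 0
    return perfect + rng.gauss(0.0, spread)

rng = random.Random(42)
shots = [fuzzed_shot((0, 0), (10, 0), skill=0.8, rng=rng) for _ in range(1000)]
```

Because the noise is centred on the correct answer, the AI still looks competent on average but no longer lands every first shot.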
What things do you see design constraining AI in good and bad ways?
No answer, apart from noting it was discussed at the AI Summit.
How many people are dealing with statistical processes (pathfinding) versus knowledge representation?
He means the difference between the decision making and methods like pathfinding.
Knowledge representation is finally being pulled out of the box – there are architectural reasons to do this. It will be used more in the future.
Had an argument with the boss: why bother building the complex stuff designers won't use?
It only works if the AI programmer and the designer are friends – same office.
You deploy knowledge representation to make the design easier – like Damian's demo of AI which used influence maps.
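The influence-map idea referenced here can be sketched as a grid where source strengths (e.g. enemy positions) are propagated outward with decay, so designers and decision code can query "how contested is this cell?" instead of reasoning about raw entities. This is a generic sketch of the technique, not the demo's actual implementation; the decay factor and iteration count are assumptions.

```python
def influence_map(width, height, sources, decay=0.8, iterations=20):
    """Propagate influence from sources across a grid.
    Each pass, a cell takes the max of its neighbours' values times `decay`."""
    grid = [[0.0] * width for _ in range(height)]
    for (x, y), strength in sources:
        grid[y][x] = strength
    for _ in range(iterations):
        new = [row[:] for row in grid]
        for y in range(height):
            for x in range(width):
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < width and 0 <= ny < height:
                        new[y][x] = max(new[y][x], grid[ny][nx] * decay)
        grid = new
        # Re-stamp sources so they stay at full strength.
        for (x, y), strength in sources:
            grid[y][x] = strength
    return grid

m = influence_map(8, 8, [((0, 0), 1.0)])
```

With 4-connected propagation, influence falls off as `decay` raised to the Manhattan distance from the source, which is cheap to precompute and easy to visualise for designers.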
Someone is building a learning machine to do tactical air combat – stay alive, take down enemies. The anecdote just led to the plane taking off and then immediately landing.
Not much point putting much AI in unless it is visible.
Different from doing dumb things is making the AI do different things.
Informing the player that the AI is being smart is important, so that under fog of war the player still knows you're being intelligent.
Something needs to signal an AI decision – icons, animation – because if something seems odd and can't be guessed at, debugging is needed.
Intelligent failure – one example was an AI that crashed and died when it was near death, which was cool. Any examples of that?
Error in the AI – you invest in the work, get the knowledge, then fuzz it up: a butterfly-chaos model, almost like changing a random seed. You change the seed of the behaviour even though it is deterministic.
Chaos theory – you can duplicate the behaviour if it's done like that (same seed, same outcome).
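The seed idea above can be sketched as deriving a deterministic RNG from stable identifiers, so a replay with the same seed reproduces the exact behaviour while a new seed varies it. The identifiers, mixing constant, and option names here are all hypothetical.

```python
import random

def pick_behaviour(agent_id, encounter_seed, options):
    """Deterministic but varied: the same (agent_id, encounter_seed) pair always
    reproduces the same choice; changing encounter_seed 'fuzzes' the behaviour
    without touching the underlying knowledge. The mixing constant is arbitrary."""
    rng = random.Random(agent_id * 1_000_003 + encounter_seed)
    return rng.choice(options)

options = ["flank", "suppress", "retreat", "charge"]
first = pick_behaviour(7, 1234, options)
replay = pick_behaviour(7, 1234, options)
```

This is what makes the chaos "duplicatable": every run with the same seed walks the same path, which matters for debugging and for replays.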
If you don’t choreograph things too much you can have players ascribe the AI with tons of cool stuff. FEAR faked a lot of the responses and just looked for times to say things.
The dichotomy between cutscenes and the AI can sometimes be big, and traits might be a way to sort that out.
We’re really building behaviour and getting things to behave the right way.
Has anyone used STRIPS in their game? Designers can’t really see the states and state changes.
The designer didn't get it, in my experience – it takes a technical designer. You can get them on board if they don't have much control over the actions, just the goals. Don't allow them to dynamically change the action set; let them alter only the goals.
What about allowing invalidating actions?
No, don't allow that – only goals can be invalidated. If you can cancel actions, it becomes planning for a goal they can never achieve. Failsafes are added in and no rope is given to the designers, so there is no negative feedback.
How would you make AI be stressed?
We have a simple index 1-10, and switch to a different set of priorities when it gets to a certain level.
Can have emotional curves to change the levels of emotions over time.
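The two remarks above – a 1-10 stress index that swaps priority sets past a threshold, plus curves that evolve emotion levels over time – can be sketched together. The threshold, priority names, and decay half-life are all hypothetical tuning values, not anything stated by the speakers.

```python
STRESS_THRESHOLD = 7  # hypothetical switch point on the 1-10 index

CALM_PRIORITIES = ["patrol", "investigate", "engage"]
STRESSED_PRIORITIES = ["take_cover", "call_for_help", "engage"]

def priorities(stress):
    """Simple 1-10 stress index: swap to a different priority set past a threshold."""
    return STRESSED_PRIORITIES if stress >= STRESS_THRESHOLD else CALM_PRIORITIES

def decay_stress(stress, dt, half_life=5.0):
    """One possible 'emotional curve': stress decays exponentially back toward
    the baseline of 1 over time (half_life in seconds is an assumed constant)."""
    return 1 + (stress - 1) * 0.5 ** (dt / half_life)
```

Stimuli would bump the index up; the curve then relaxes it, so behaviour changes are hysteretic rather than flickering at the threshold.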
Idea for bots in a game – personalities who leave if they get bored of dying.
People are dodging around the fact that the game has to streamline into the scripting language. It is a tricky thing to blend, especially if some of the AI is on and some isn't.
Autonomous orders are a good way to do this – having scripted goals and actions be a higher priority than the normal goals.
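The priority idea above can be sketched as a single goal arbiter where scripted goals get an offset that outranks anything the autonomous AI generates, so scripting wins whenever it is present and the AI resumes seamlessly when the script ends. The offset value and goal names are illustrative assumptions.

```python
# A goal is a (priority, name) pair. Scripted goals get a fixed offset so they
# always outrank autonomously generated goals; 100 is a hypothetical constant.
SCRIPT_PRIORITY = 100

def select_goal(autonomous_goals, scripted_goals):
    """Pick the highest-priority goal; scripted goals outrank autonomous ones."""
    candidates = list(autonomous_goals)
    candidates += [(SCRIPT_PRIORITY + p, g) for p, g in scripted_goals]
    return max(candidates)[1] if candidates else None

goal = select_goal([(10, "find_cover"), (5, "reload")],
                   [(1, "walk_to_marker")])
```

When the scripted goal list empties, `select_goal` naturally falls back to the autonomous goals – no explicit mode switch needed, which is the blending the discussion is after.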
LOD for AI to go between scripting and normal AI perhaps?
AI (for sports, for instance) cannot be all reactive – you need a long-term plan, but also the scripted immediate actions, such as when you suddenly get the ball.
It's hard to have scripters turn off the right bits of the AI (e.g. facial animations?).
Let the designers do it but queue it up ready.
What progress has been made in multithreaded AI?
22 players on the pitch (football), locally, distributed across the SPUs. It doesn't thrash the memory and works well.
SPUs are best at single instruction, multiple data (SIMD).
50 guys running around. Every frame it's "you 5 guys get to update" – an LOD for how many updates. Lots of critical sections.
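The "you 5 guys get to update" scheme above can be sketched as round-robin time-slicing: each frame a fixed budget of agents gets a full update, and the window rotates so everyone is serviced within a bounded number of frames. The budget value is the hypothetical "5" from the anecdote.

```python
def schedule_updates(agents, frame, budget=5):
    """Round-robin time-slicing: each frame, only `budget` agents get a full
    AI update; the window advances so every agent updates within n/budget frames."""
    n = len(agents)
    if n == 0:
        return []
    start = (frame * budget) % n
    return [agents[(start + i) % n] for i in range(min(budget, n))]

agents = list(range(50))
picked = schedule_updates(agents, frame=0)
```

An AI LOD just varies `budget` (or per-agent frequency) with distance or importance; agents outside the window keep running cheap extrapolation instead of a full update.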
Someone was recalculating everything every frame – very slow. Do they all need the most up-to-date data all the time?
Another game updated the strategic AI level only every 20-30 seconds.