GDC – AI Summit – Day 2 – Architecture Tricks: Managing Behavior in Time, Space and Depth

Dino Dini

When is better than If

Message handling leads to polling. Would like co-routines instead of messages. Nestable coroutines would allow reusable behaviors

Messages must be able to change the program counter.

Why Game AI is difficult: no direct language support for multitasking, no changing execution state without polling, and complexity issues.

Polling is the heart of the problem.

Light switch with polling – easy for simple case, gets exponentially more complicated very quickly. Even with messages, it turns to polling.

PROC system – light switch with fade. Started as coroutine system in 68000 assembler. Later implemented in C and C++. Most recently implemented in C# in Unity. Implements HFSM with message handling, nested coroutines. Message handling terminates children who can’t handle the message.

At the heart of the difficulty of robust and rich behavior in games is the limitation of popular programming languages. Either make additions to languages, or systems to fake them.

The core problem is how to manage the response to input data that changes over time.

Design your architecture to hide the polling problem as much as possible.

What about Functional Reactive Programming? What about Monads?

Fast? Or Good?

Luke Dicken

Good enough is good enough.

Battlefield – average lifespan of NPC is 5 seconds.

But there are outlier cases. Companions, for one.

GOAP! F.E.A.R.'s AI system, by Jeff Orkin, based on STRIPS. Creates novel solutions to problems. Adopted by other games – Just Cause 2, Deus Ex. Falling out of favor. GOAP only allows for abstract planning about broad strategies.

Search in the best case is NP-Hard.

GOAP was always the lowest hanging fruit of planning. Academic planning has moved on, focuses on large-scale dedicated planning, but uses slow large algorithms.

I2A – Integrated Influence Architecture. Perform a chunk of work during development time. Allow agent to be informed by reasoning while reacting to the world.

Influence mapping allows for simple contingencies. Fuzzy clustering to find focal nodes, structurally significant. These are the nodes that fall outside any cluster.

Avoiding search as much as possible, smart abstraction and horizon management, reaction and deliberation simultaneously inform decisions.

Can merge location and conceptual reasoning together. Navigation problem, but navigating through concepts.

Infinite Axis Utility System

Dave Mark

Behavioral Mathematics for Game AI talks about utility systems.

For each action, there are multiple considerations (axes), each with an input and parameters. A list of axes, so it's easy to add more – even an infinite number.

Actions with targets are scored per target. Push final scores onto a potential action list. Pass inputs through response curves. Curve types: linear, exponential, logistic, logit. Each curve is a tuple (type, m, k, b, c): take the tuple, clamp the input, calculate, clamp the output. Multiply the axis scores to get one output value, normalized 0..1. Define bookends; anything outside is clamped.
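A minimal sketch of that pipeline. The talk only named the curve types and the (type, m, k, b, c) tuple, so the exact curve formulas below are my own plausible guesses, and all names are mine:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// One consideration axis: a normalized input run through a response curve.
enum class Curve { Linear, Exponential, Logistic };

struct Axis {
    Curve type;
    double m, k, b, c;  // slope/steepness, exponent, vertical shift, horizontal shift
};

double clamp01(double v) { return std::max(0.0, std::min(1.0, v)); }

// Clamp input, evaluate the curve, clamp output -- as described in the notes.
// Formulas are assumptions, not Dave Mark's exact equations.
double evaluate(const Axis& a, double input) {
    double x = clamp01(input);
    double y = 0.0;
    switch (a.type) {
        case Curve::Linear:      y = a.m * (x - a.c) + a.b; break;
        case Curve::Exponential: y = a.m * std::pow(x - a.c, a.k) + a.b; break;
        case Curve::Logistic:    y = a.k / (1.0 + std::exp(-a.m * (x - a.c))) + a.b; break;
    }
    return clamp01(y);
}

// Final action score: multiply all axis outputs together, stays in 0..1.
double scoreAction(const std::vector<Axis>& axes, const std::vector<double>& inputs) {
    double score = 1.0;
    for (size_t i = 0; i < axes.size(); ++i)
        score *= evaluate(axes[i], inputs[i]);
    return score;
}
```

The multiply-and-stay-normalized step is what makes adding a new axis cheap: any axis that scores 1.0 is a no-op, and any axis that scores 0.0 vetoes the action.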

Input clearing house to normalize inputs. Run through all actions generically.


GDC – AI Summit – Day 1 – Rant

Rant 1 – Brian Schwab – AI – no I, where is me. Put me in the game, notice me, remember me, be my Moriarty, my best friend, my white whale.

Rant 2 – Feedback. If the player doesn’t notice it, it didn’t happen. Put AI in the spotlight.

Rant 3 – Luke Dicken – Secret Guide to the Ivory Tower. Some academics decide your problem and your solution before they talk to you. Some academics do want to do work that is useful and relevant. Academic AI is not Game AI, they only share two letters. Lots of stuff came from academia and was polished by game developers. Academia matters – game development is a business, academia isn’t. It has scope for crazy ideas that don’t go anywhere, and occasionally ones that do. Caring for your pet academic. Academic projects are warm and fuzzy, last years, work really done by a grad student, different schedule, think in semesters, arrange research around teaching. Redefine the fitness function to optimize fun or believability.

Rant 4 – Dino Dini – Dear compiler, don’t be my nanny. Don’t you ever use goto – sometimes goto is the best thing to use. Regard preprocessors as bad – sometimes macros are good, conditional compilation is good, helps get stuff done. Might we suggest always saying new – programs have to allocate memory, which is slow, but rubbing it in is insane. Fact is OOP should make you glad – didn’t turn out to be the great solution to our ills; data and functionality kept separate. So you want to overload – JavaScript doesn’t support it. La la land’s where weak typing goes – the only reason for strong typing is performance. ’Tis right to make everything private – seen as too easy by the C# designers; public must go in front of everything we want to make public, you get sick of compiles failing, then put public in front of everything; C# has no friend keyword so you need public everywhere anyway. Brings us back to don’t. Don’t think you know my needs better than me. Don’t bring me down.

Rant 5 – Ben – Emergent Behavior. Wanting emergent behavior means you’re not doing your job. You make a mistake, something emerges you like, then you – get proud. We value emergent behavior not because it’s useful but because we say it’s important, and we become beholden to emergent AI. Sometimes we even keep from doing improvements because they break emergent behavior. Less emergent can be more efficient. We feel that emergent is a reward for good AI. Emergent behavior is not your friend. You can’t easily change it. You are at the mercy of emergent behavior. Regular behavior works for you. Your AI is not your child, it is a mindless automaton servant. If you can’t trust it to do something before you test for it, then you didn’t do it, even if it does it by accident – or emergence.

Rant 6 – Andrew Fray – A SOLID Punch right in the Megaclasses. Mega Class versus Giant Hacktopus – whoever wins, we lose! SOLID principles by Robert C Martin. Single Responsibility Principle – one reason to change. Open/Closed Principle – open for extension, closed for modification. Liskov Substitution Principle – code using a base class must be able to use any derived class without knowing it. Interface Segregation Principle – no client should be forced to depend on methods it doesn’t use. Dependency Inversion Principle – high-level systems do not depend on low-level systems; both depend on abstractions.

Rant 7 – Steve Rabin – Why so Fragile? Fragile things don’t like volatility. Fix humans by making linear game play or on rails. Want open games. During development we are antifragile. When something breaks, we fix it. After the game is released? The AI is fragile. We patch it. Why is antifragile so difficult to program? Black swan events. Outlier, extreme impact, but retrospectively predictable. Unknown unknowns – things we don’t know we don’t know. Survive this by being antifragile. Fragile: procedural architecture, state machines. Robust. Antifragile. GDC 2003 sandbox learning in sports games. Problem: players exploit a weakness in the AI. Solution: determine when a weakness occurs, when it happens again alter the AI in a slight way. Butterfly effect. Use a flow field to see it happen and maybe have AI in place to adjust. If you are antifragile it means you are alive – Nassim Taleb. Avoid being Fragile, deliver robustness, aspire to be antifragile.

Rant 8 – Dave Mark – Foxes and Hedgehogs. The fox knows many things, but the hedgehog knows one big thing. Archilochus, adapted by Nate Silver. Fox is multi-disciplinary, hedgehog is stalwart stubborn order-seeking. Hedgehogs make grand confident pronouncements in their area. Foxes are better at analysis, prediction and adaptation, they know about all sorts of things. Political pundits are hedgehogs, more often wrong than the public. Financial managers are hedgehogs. Confirmation Bias – people prefer reassurance to research. Evian backwards is naive. “I can’t wait to hear about the latest advancement in…” hedgehog. “My PhD research for the past 4 years…” where have you been? “Behaviors need to be predictable so QA can test…” A* – why are we hyper-focused on the shortest path, people don’t always follow them. If you want to be a better game designer, get a good liberal arts education. Look around you once in a while, get out of your comfort zone. Don’t be a hedgehog stuck in a tunnel.

GDC – AI Summit – Day 1 – Crowd Simulation through Steering Behaviors and Flow Fields

Graham Pentheny (note to self: subscribe to Game AI Pro)

Weighted component architecture, specialized behaviors, arbitration function to combine behaviors intelligently. Stop when max vector output is found, or all vectors are processed.

Traditional Crowd Simulation. One path per agent (redundant calculations, waypoint fighting). Local collision avoidance (RVO/movement planning, expensive at scale).

RVO – reciprocal velocity obstacles.

Follow path within specified leniency. Breaks for multiple agents following same points; can increase leniency, but too big would mean hitting obstacles, and agents can’t always get to each objective. This creates artificial choke points.

Flow fields can fix this. Discrete approximation of a flow function, best path from every cell to closest goal, agents lookup path direction in flow field. Essentially has pathfinding information baked for whole world. Flow fields need to change only when world changes, or when goals change. Best used in relatively static environment when lots of actors share few goals. Can store palettized (index into palette of vectors), or store as polar coordinates.

Flow field generation via Dijkstra’s – A* optimization interferes with flow-field generation.
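A minimal sketch of that generation step. On a uniform-cost grid, Dijkstra's reduces to a breadth-first flood from the goal; each cell stores the direction pointing back toward the goal, and agents just look their cell up. Names and the 4-connected grid are my own simplifications:

```cpp
#include <queue>
#include <vector>

// Direction an agent in a cell should move; a real system might store this
// palettized or as polar coordinates, as the talk mentions.
struct Flow { int dx, dy; };

// blocked is row-major, 1 = obstacle. Unreachable cells keep flow {0,0}.
std::vector<Flow> buildFlowField(const std::vector<int>& blocked,
                                 int w, int h, int goalX, int goalY) {
    std::vector<int> dist(w * h, -1);
    std::vector<Flow> flow(w * h, Flow{0, 0});
    std::queue<int> open;
    dist[goalY * w + goalX] = 0;
    open.push(goalY * w + goalX);
    const int DX[4] = {1, -1, 0, 0}, DY[4] = {0, 0, 1, -1};
    while (!open.empty()) {
        int cur = open.front(); open.pop();
        int cx = cur % w, cy = cur / w;
        for (int d = 0; d < 4; ++d) {
            int nx = cx + DX[d], ny = cy + DY[d];
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
            int n = ny * w + nx;
            if (blocked[n] || dist[n] != -1) continue;
            dist[n] = dist[cur] + 1;
            flow[n] = {cx - nx, cy - ny};  // point back toward the goal
            open.push(n);
        }
    }
    return flow;
}
```

This also shows why A* interferes: A* only expands cells toward one start, while the flow field needs the best direction from *every* cell, which is exactly what the exhaustive Dijkstra/BFS expansion produces.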

Agents modeled as point masses, collision circles with maximum force, max speed and neighbor radius.

Flocking: separation + cohesion (clustering) and alignment (face in common direction). Separation is sum of separation forces between agents, and cohesion is steering towards center of mass.

Visual debugging mandatory.

Steering has some stigma attached at the moment: unstable, tweaking and bugfixing needed. This is from poor implementations.

“Artificial Intelligence for Games” by Millington and Funge got Seek right. Seek should be defined in terms of max force.
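A minimal sketch of Seek in those terms – always output the maximum force toward the target, rather than a distance-scaled force (struct and names are mine):

```cpp
#include <cmath>

struct V2 { double x, y; };

// Seek defined in terms of max force: full force toward the target,
// regardless of distance. My reading of the Millington & Funge definition.
V2 seek(V2 pos, V2 target, double maxForce) {
    V2 dir{target.x - pos.x, target.y - pos.y};
    double len = std::sqrt(dir.x * dir.x + dir.y * dir.y);
    if (len > 0) { dir.x *= maxForce / len; dir.y *= maxForce / len; }
    return dir;
}
```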

Steering is generic enough to model any vector based system.

Steering with Context Behaviors

Andrew Fray

Steering behaviors are good. Lightweight framework. Simple behaviors, quick to iterate, stateless, fast. Emergent behavior because they compose well.

Drawbacks. No guaranteed movement constraint. Inconsistent decisions. OK when programming a flock, but not as a member of a flock.

Balanced vectors problem – one inconsistency. Chase behavior – pick target and follow it. Avoid behavior – look at obstacles and produce vector to avoid them. Steer – resolve vectors. Sometimes all vectors subtract out and we don’t move.

Solutions. Ignore chase. Chase both targets. Or validate target before chase.

Central design flaw in steering behaviors – asking behaviors to make decisions, but not enough information to evaluate/merge decisions.

Merge contexts, not decisions. Context Behavior System, with Context Controller class, with set of context behaviors and two context maps – Interest Map and Danger Map. Each behavior writes to context maps. Danger maps are filter on interest map, make decision on that filtered interest map.

Can be used for many kinds of multi-dimensional decisions. Project from decision space to 1D context map. Can smooth slots to pick “virtual slots” through interpolation.

Performance is linear in context map size and linear in the number of behaviors. Can use LOD to remove low-priority behaviors. And can vectorize.

Preprocessing context maps – smooth or blend with previous frame.

GDC – AI Summit – Day 1 – Animation and AI

Animgraphs – define a set of steps to get an animation pose, directed graph, leaf nodes are animations, branch nodes are filters etc. Control parameters like blend weights are how animgraphs are controlled.

State machine to control transitions between animgraphs. Transitions between states, the other way to control animgraphs.

Animated actor – interface for behaviors, and scheduler for animation programs. Was single-threaded, one program at a time, one animation output.

NPC Network controller was the wrapper for the animation graph.

Drawback – monolithic design, NPC controller had global control, not data driven, no termination sequences. Unclear responsibilities, heavyweight orders. Animation orders not designed to allow failure. Rigid system. But the division and layering made sense.

If it isn’t broken, fix it until it is.

Made controllers more specific – locomotion, aiming, acting, etc.

Think about responsibilities of components: component role clear? overlap in responsibilities?

decouple animation system from AI – speed up prototyping, minimize code disruption for animgraph changes, localizes bugs/simplifies debugging.

Prepare for change

Layering steering animation (steering and animation working together)

RVO is bad, HRVO is a hack, ORCA is absolutist

Navigation. Stop doing path following (string pulling). It’s inflexible. False corners at bottlenecks. Non-adaptive – if not on the path, it's not clear where to go. Very bad for high-latency animations.

Edge following instead of path following. Traverse list of nav mesh edges in path corridor. Figure out next corner to turn at, direction to go. Let the animation stray from shortest path, even into other areas. More information about upcoming corner.

Character position should be smoothed, tweaked motion track; limit sway, stay to outside of turns, smooth acceleration during idle-to-run.

Character velocity; velocity is communication, and filtering adds noise and lag, noise and lag leads to poor communication. Reporting desired rather than actual eliminates noise, but it does add prediction errors. Last resort, but decent last resort.

The utility of steering – nothing is absolute, will do trade-offs, so use utility functions. Get to goal, avoid conflicts, make animations happy – all are constraints to trade off.

Detect current passing side, penalize velocities that would change side. Weight by time to approach (higher penalties for close agents). Weight by passing distance (higher penalties if we’ve already settled on a side).

Overly high penalties lead to wandering. Under-steering, transition lag, velocity from hip position – address on their own.

We don’t notice penetrations, we notice collision reactions. Physics: bounce off. Steering: steer out – slower to resolve, but smoother, so noticed less. So make physics bounds smaller than steering bounds.

Energy minimization (from biomechanics research). Humans try to minimize energy expenditure, square of speed. So weight by negated squared speed.

If ideal speed is less than maximum speed, we can also avoid collisions by speeding up.

Unit Testing of character animation systems. Steering controlled – hard to see what’s going wrong. Manual controlled – not a good analog for steering. Suite of tests independent of AI steering and user input and character internals.

Three pieces: input generator (feed-forward), measuring tools (statistical), scoring metric. Example: Sideways sway. Input generator is normal speed. Measurement tool is sample root position, fit line, find perpendicular RMS error. Metric is RMS error > radius/4. Only tests for the one problem – unit tests need to be small and separate.
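The measurement tool for the sway example can be sketched as a least-squares line fit over sampled root positions followed by a perpendicular RMS error. This assumes travel roughly along one axis (so the fit `y = a + b*x` is well-conditioned); names are mine:

```cpp
#include <cmath>
#include <vector>

// Fit y = a + b*x to the sampled root positions, then return the RMS of
// perpendicular distances to that line. Per the talk, the scoring metric
// would then be: fail if RMS error > radius/4.
double swayRMS(const std::vector<double>& xs, const std::vector<double>& ys) {
    size_t n = xs.size();
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (size_t i = 0; i < n; ++i) {
        sx += xs[i]; sy += ys[i]; sxx += xs[i] * xs[i]; sxy += xs[i] * ys[i];
    }
    double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);  // slope
    double a = (sy - b * sx) / n;                          // intercept
    double sumSq = 0;
    for (size_t i = 0; i < n; ++i) {
        double d = (ys[i] - a - b * xs[i]) / std::sqrt(1 + b * b);  // perpendicular distance
        sumSq += d * d;
    }
    return std::sqrt(sumSq / n);
}
```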


GDC – AI Summit – Day 1 – Tactical awareness in an open world


Understand the environment – connectivity and visibility.

Spatial memory – persistent (investigated etc), private or shared.

Don’t use nav meshes: they lack visibility information and aren't uniform in coverage. Also no real-time ray-casting, due to false negatives and false positives (unless the ray casting were very fine-grained, which is slow).

Uniform grid for the world – fast and regular. Nodes aren’t always in the center of a cell, to maximize connectivity. Navmesh-based solution to generate nodes and connections; visibility is easy but slow – 15 min to generate visibility. Find the bounding box, fixed cell size, mark cells containing polys, sub-sample cells to rate connectivity. Pick the highest rating that is closest to the center for the node. Then go to each node and connect them. Compare raycasting to A*, and connect nodes even if they are connected indirectly – we just want “close” and pathable. Node-to-node visibility with two source/target points (eye to eye, eye to waist).

Runtime. Query API has to be async, easy to make jobs, easy to query jobs. Need timeslicing because of number of AI agents running.

Job – custom struct for each input, output is either one node, a list of nodes, or a bitfield of all nodes. Use node iterator to hold traversal state, and make it easy to time-slice.

Use chaining to make more complicated operations out of basic blocks, always in the form of input-job-output. Outputs can be inputs for a next step, and several jobs can provide input for the same job.

Combat – uses a lot of chaining; most jobs share data between NPCs, because NPCs have a common target (the player).

Line of sight – node to node versus node to area (list of nodes).

Distance to threat – pathfinding distance and not euclidean distance. Defensive combat happens at a larger distance than aggressive combat.

Occupancy – leave room to maneuver, allow flanking. Allows emergent behavior, no explicit code for this.

Cover – has influence in movement, but no explicit go-to-cover. Just goes to cover if it is available.

Hazards – taint radius into a hazard map, add and remove hazards just adds and removes from hazard map. Defensive areas – level designers can declare defend or guard areas, this is just an input into the decision tree.

Investigation system uses a heatmap. Add heat to the last known position of a threat and when hearing/seeing distractions; heat cools over time, cools faster in line-of-sight; pick the node with the most interest.

Flee – maintain a distance field from the player and line-of-sight from the player; find a node away from line-of-sight and away from the player.

Pre-calculated visibility.

Position Selection in the Sandbox

Middleware: Kythera

Generated on the fly, no precomputation. Also use query language.

Query Language – beyond hard-coded and beyond parameter sets. Fast to change, efficient and powerful.

Generation: "cover from target around agent = 20". Then conditions: "visible from target = true". Then weights: "distance from agent = -1.0".

Syntax – operation object value: "min distance from target = 7". Syntax – operation hideTarget center distance: "cover from target around agent = 20".

Can make new objects, operations and generation techniques. The DSL can be parsed to bytecode; dynamic evaluation order and partial evaluation still get correct results.

Directness – closeness to a goal. Find points near the target. Close isn’t always good for movement. Could just use a repeated query – points near agent to points near target. Want convergence to the goal and obeying constraints at the same time. Directness is (A-B)/C, which asks “if I travel C, how much closer am I to A?” Negative directness means retreat or flee.
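Spelling the directness formula out, with A and B as (path) distances to the goal before and after moving to the candidate point, and C as the distance traveled to get there:

```cpp
// Directness = (A - B) / C: how much closer to the goal each unit of travel
// gets us. 1.0 means moving straight at the goal; 0 means sideways;
// negative means retreating/fleeing. Parameter names are mine.
double directness(double distToGoal, double candidateDistToGoal, double travel) {
    return (distToGoal - candidateDistToGoal) / travel;
}
```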

How do you determine “forward”? Not look, not average velocity – use shortest path to current objective, AI is now gameplay compass, or golden path. Generate golden path every so often from player current location to objective. Then game play adjusts to current golden path.

Companions and squadmates. Idea of the relaxed companion. Runs ahead of us on our golden path to the objective; if we move away from the golden path it will wait a little while before following, and will try to keep visual contact.

Build a language.

GDC – AI Summit – Day 1 – AI Postmortems

Real-time notes at first, then I’ll fill in with analysis and thoughts later.



Procedural level generation and how the AI works with procedurally generated levels.

There are multiple environments, all work the same under the hood. Designers create chunks, procedural level generation picks blocks and connects them together. Blocks are

  • start blocks
  • connector blocks
  • objective blocks (end blocks)

Some levels are linear, others are branching (requiring backtracking in exploration).

No scripting, no triggers, because order of play is not known at game design time.

Tac Area Map – coarse-grained room graph – created automatically at build time, generation is related to Voronoi graph. Find ridges, consolidate areas and corridors. At runtime, tac area maps are connected together by the level generation code.

Distance Map generated, for distance to important points (e.g. how far to objective). Used for agents to simulate intelligent behavior.

Influence maps are used to help spawn enemies ahead of the player’s direction of movement. Player influence increases ahead of them, and decreases behind them. Also, can spawn more enemies in paths towards the objective, to subtly guide players (conflict is an attractor). Tac Area generates a bubble of active areas around the player. Areas behind the player decay towards inactive (thus not taking computation time; they are paused but not removed).

Pacing – want a roller coaster effect, not just infinite battle. So measure intensity of action – should start with small encounters, build up to large encounter, then have mop-up and quiet time. Need measurement metric! Shots fired, special powers used, enemies nearby – too inconsistent. Instead, use normalized damage to player and number of enemies killed nearby; scales as player power level increases.

So spawn, fight, decay/regroup, then spawn again. Also want idea of moving through a ship, so when combat peaks, the area is marked as exhausted, meaning no new spawns for a while. Not infinite, because want the idea of enemy re-supply.

Exterminate missions – linear gauntlet, kill a specific number of enemies; use the distance and tac area maps to make a corresponding population graph, to make sure that enough enemies are encountered and to control pace.

XCOM Enemy Unknown

Capture essence of original game, and have distinct AI behavior for different types.

Two action system – move has a priority for AI

Utility-based scoring system = offense + defense + intangible; normalized and weighted; one score per ability type, unique character abilities weighted higher. Abilities with score > 0 are considered (some factors are negative, so utility function often results in negative scores).
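A sketch of that scoring scheme as noted – per ability, sum offense, defense and intangible factors, apply a weight (higher for unique character abilities), and keep only scores above zero. Field names and weights are my own illustration:

```cpp
#include <string>
#include <vector>

struct Ability {
    std::string name;
    double offense, defense, intangible;  // factors may be negative
    double weight;  // unique character abilities get a higher weight
};

// Returns the names of abilities that stay on the candidate list.
// Negative factors can sink a score below zero, dropping the ability.
std::vector<std::string> viableAbilities(const std::vector<Ability>& abilities) {
    std::vector<std::string> out;
    for (const Ability& a : abilities) {
        double score = (a.offense + a.defense + a.intangible) * a.weight;
        if (score > 0) out.push_back(a.name);
    }
    return out;
}
```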

Movement direction is utility-based too. Destination search based on type. Full cover, half cover, none. Pick “best cover point”, which is subjective, so use utility system – distance, flanking, angle, cover bonus, visibility, proximity, alien-specific behavior.

Only ranged units care about cover, melee don’t – obvious when you think about it. Most flying units don’t care about cover either, but used cover points to do movement anyway.

Assassin’s Creed 3

Ground Navigation, Climbing, Running, and new ability Forest Traversal. Change everything and change nothing.

Ground Navigation – procedural animation. Uneven terrain, foot connectivity. Old system avoided 30 degrees to 70 degrees (0-45 was ground nav, 70+ was climb). Now 45-70 is scramble/slide behavior. Stepping – predictive foot placement. Predictive landing for foot and then combine with swing paths, feed into IK solver.

Front strafing to change the look of how player is navigating. Start by strafing player in direction and then change player direction after a little bit.

Spent 3 years researching procedural animation.

Free Running – variety, more diversity. Short jumps as before, long jumps are one-offs, not blended with other jump animations.

Jump animations – world is now dynamic, changes as player goes through it, or have giant moving platforms (ships). Pose adjustment for head and legs. Dynamic moving anchors (e.g. windmill with constant moving anchors). Didn’t end up using dynamic objects much (production constraints), but helped to make more fluid movement overall.

Tree Navigation.  No hugging-climbing or swinging on vines. Trees as elevator to get above ground level and be a predator. Unclimbable tree, branches as climb anchors, or v-shaped tree for fast ascent. Can run across tree tops, and jump around trunks. Kick-jump moves to get between trees. Lots of injection and reception animations to make sure that any kind of tree transition works and looks good. Also animations called trunk-around that link free running and trunk circling.

C++11 – remove_const, add_const

remove_cv reference

Templates can manipulate types. It’s straightforward to add adornments to types, but you can also remove them. The basic ones to remove are const and volatile. First, let’s look at remove_const:

template <class T> struct remove_const          { typedef T type; };
template <class T> struct remove_const<const T> { typedef T type; };

This works by partial specialization. For a type E that can be written as const T, remove_const&lt;E&gt; matches the const T specialization, and so type is T – that is, E with const removed. Otherwise, the default template matches, and type is just E.

The other qualifier is volatile, and it works exactly the same way.

template <class T> struct remove_volatile             { typedef T type; };
template <class T> struct remove_volatile<volatile T> { typedef T type; };

Combining them together to remove both const and volatile is only slightly tricky, and not because there is any ordering of const and volatile.

You might assume this would work

template <class T>
struct remove_cv {
    typedef remove_volatile<remove_const<T>::type>::type type;
};

This fails, but for the simple reason that the compiler doesn’t know that remove_const::type is actually a type at the point of parsing. A GLR parser could figure this out, but an LL or LR parser has an even harder time than normal; even with ordinary typedefs, there is unavoidable feedback between syntactic parsing and semantic parsing, almost always accomplished by giving the syntactic parser access to the symbol table. Templates complicate this, and most C++ compilers need hinting. This is why the typename keyword has to be employed at times; this tells the compiler that the next element is a type. So we need to put typename in two strategic places, like so:

template <class T>
struct remove_cv {
    typedef typename
        remove_volatile<
            typename remove_const<T>::type
        >::type type;
};

The indentation is to show each piece of the template more clearly.

While the standard says “removes the topmost const, volatile or both”, you can only have one each of const and volatile attached to a type – there is no “const const T”.

Adding const is easier than removing it:

template <class T> struct add_const { typedef const T type; };

Since you can only have one const at any level of a type, adding const to const is harmless, and thus there is no need for multiple specializations of add_const. Similarly, add_volatile and add_cv will look familiar:

template <class T> struct add_volatile { typedef volatile T type; };

template <class T>
struct add_cv {
    typedef typename add_volatile<typename add_const<T>::type>::type type;
};
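As a sanity check, the behavior described above can be verified at compile time with std::is_same. This uses hand-rolled copies of the traits (renamed my_* to avoid colliding with the standard ones in &lt;type_traits&gt;):

```cpp
#include <type_traits>

template <class T> struct my_remove_const          { typedef T type; };
template <class T> struct my_remove_const<const T> { typedef T type; };
template <class T> struct my_add_const { typedef const T type; };

static_assert(std::is_same<my_remove_const<const int>::type, int>::value,
              "const is stripped by the partial specialization");
static_assert(std::is_same<my_remove_const<int>::type, int>::value,
              "non-const passes through the default template");
static_assert(std::is_same<my_add_const<const int>::type, const int>::value,
              "adding const to const is harmless");
```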

C++11 – enable_if

I’m going to try to write readable and yet performant (and conformant) implementations of a few of the most useful templates.

enable_if from cppreference
enable_if Boost documentation

The enable_if template is used with SFINAE – “substitution failure is not an error” – in order to be able to pick which template to expand. This can be used in simple cases to allow overloading on types that would otherwise have implicit conversion (int to float, for example), or to avoid integral promotion (short to int, for example) that would otherwise prohibit template specializations, or even as a return type (allowing specialization on return type that would otherwise be impossible).

template<bool CONDITION,
         class T = void>
struct enable_if {};

template<class T>
struct enable_if<true, T> { typedef T type; };

The specialized version enable_if<true, T> declares a type enable_if<T>::type, which is just T.

The non-specialized enable_if doesn’t declare a type, which means that code that selects the non-specialized version won’t instantiate. Of course, if no other templates match, you’ll get a compile error, but in general the point is to write useful code. I cribbed the “class T = void” bit from the definition of enable_if, and I needed to figure out why it’s necessary. All I can see is that it makes it legal to write enable_if&lt;false&gt; without needing to write enable_if&lt;false, T&gt;; there must be a use case where that is convenient. It seems the answer comes from Boost: once upon a time there were three variants of enable_if, but the C++ committee picked just one, the one Boost called non-lazy. Also note that Boost used enable_if_b for the variant that C++11 calls enable_if, which means Boost.MPL needs the Boost version of enable_if, not the standard C++11 version. Although, is it possible that enable_if&lt;class Cond&gt; can interoperate with enable_if&lt;bool B&gt;?

And so, we find out that the default void parameter allows us to write enable_if as a dummy parameter to a function, which allows for template SFINAE to accept or reject that entire function definition based on whether the dummy parameter compiles or not. That would look like this:

template <class T>
T foo(T t, typename enable_if<BOOLEXPR>::type * dummy = 0);

This lets one omit the type parameter, which is a small thing but might increase clarity. Presumably with a decent compiler, the unreferenced parameter is compiled out, resulting in no extra code.
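As a concrete sketch of that dummy-parameter idiom – the function name and the integral/floating-point split are my own example, using the standard std::enable_if and type traits rather than the hand-rolled version:

```cpp
#include <type_traits>

// Two overloads of the same function template, selected by SFINAE on the
// unnamed dummy parameter. When the condition is false, enable_if<...>::type
// doesn't exist, and that overload silently drops out of overload resolution.
template <class T>
int describe(T, typename std::enable_if<std::is_integral<T>::value>::type* = 0) {
    return 1;  // chosen for int, long, char, ...
}

template <class T>
int describe(T, typename std::enable_if<std::is_floating_point<T>::value>::type* = 0) {
    return 2;  // chosen for float, double, ...
}
```

Without enable_if, these two definitions would be ambiguous; with it, exactly one survives substitution for any given T.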

This is one of the simplest useful template metaprogramming methods, and is the basis of things like type traits. Also, enable_if combined with recursion is what turns templates into a Turing-complete language.


SipHash: a fast short-input PRF

This is a hash function with a 64-bit output that is designed to be resistant to hash-flooding attacks. The authors are Jean-Philippe Aumasson and Daniel J. Bernstein. The algorithm is supposed to be competitive in speed with MurmurHash.

HASH-FLOODING DOS RELOADED: ATTACKS AND DEFENSES – talk on how to attack common hash functions. Video.

Related page: someone implementing it in Python, and re-implementing it in C. SipHash.