
Post

Unit/Group AI In Halo

#1
Interesting interview with Damian Isla, formerly with Bungie, on the AI used (and evolved) in the Halo games: https://www.pcauthority.com.au/feature/ ... alo-477480

Other than a couple of gratingly incorrect uses of the term "strategic," it's an engaging read. There are some sharp insights about the importance of how NPC behaviors are perceived as AI, and even a nuts-and-bolts section that might be of interest to, oh, I don't know, someone making a game with a variety of units? ;)

(Also, no, I will not stop advocating for an accurate use of the meaningful terms "tactical" and "strategic," ever, because that distinction has value in game design. Nyah.)
Post

Re: Unit/Group AI In Halo

#2
Great interview! I was always really fascinated by the tactical (hur hur) decisions that enemies made in these games, and it is cool to read about it. I first played these games when I was much younger, and going back to them now I can appreciate how amazing the AI was, and still is.

I do think that beyond the few really obvious relationships between unit types, the games could have used the subtle relationships they talk about avoiding. Jackals forming shield walls in front of their allies, instead of wandering around in a loose group almost all the time, could have added a lot to the game; it almost feels like they are actively avoiding working with other units half the time. (Specifically, even when they do set up a shield wall to cover weak spots, it feels accidental, and they never do so to provide cover to their allies. Instead, they are often in their own pod tens of meters away from other units, or mixed in with other units haphazardly.) Hunters don't work in pairs as well as they "should", allowing themselves to be separated and enraged easily instead of working as a team.

I think that now that hardware is much more capable, adding more subtle relationships between units is a better idea than the interview suggests, and players would notice those relationships, especially on higher difficulties.
Libertas per Technica
Post

Re: Unit/Group AI In Halo

#3
One of the several interesting observations in that interview is that, if you can blow it up in mere seconds, its AI is irrelevant. Systemic behavior is only visible/interesting if it can last long enough to be perceived by the human player.

I think there's a lesson in practical game design there....
Post

Re: Unit/Group AI In Halo

#4
The biggest problem for (combat-centered) AI is not a technical one, but: "how does the player perceive it?"
And that is strongly correlated with the gameplay the designers have in mind.
What the player notices are obvious failures of the AI (pathfinding problems like getting stuck, or not reacting to obvious threats), because these are things humans clearly wouldn't do.
At best, the AI does everything it is expected to do in the given gameplay situation. But that makes it hard for the player to distinguish a really dynamic, deep and complex AI from a pile of scripted AI events. (Jump through the window when the player enters trigger T323; shout "we have him!" when the player enters the back alley, etc.)

And that's where the gameplay comes in: the faster the gameplay is (shooting/attacking in short succession, moving fast), the less time can be spent observing an AI's behavior, and the larger the number of AI agents is (think of a fish swarm, hiding individuals in a crowd), the less it matters whether an AI is doing things dynamically. Simple triggers, state machines and scripts can simulate a conscious enemy without the risk of losing control over it (bugs).
Linear games (progression through a fixed set of levels) benefit from scripted events; open-world games benefit from simple, deterministic (debuggable) state machines and a large number of opponents.
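
To illustrate what I mean by a simple, debuggable state machine, here is a minimal Python sketch; the states, transitions and thresholds are made-up examples, not taken from any actual game.

# A guard that reads as "conscious" but only follows hand-written rules.
class GuardFSM:
    def __init__(self):
        self.state = "PATROL"

    def update(self, sees_player, heard_noise, health):
        # Each transition is a plain, debuggable rule; no planning involved.
        if self.state == "PATROL":
            if sees_player:
                self.state = "ATTACK"        # e.g. shout "we have him!"
            elif heard_noise:
                self.state = "INVESTIGATE"
        elif self.state == "INVESTIGATE":
            if sees_player:
                self.state = "ATTACK"
            elif not heard_noise:
                self.state = "PATROL"
        elif self.state == "ATTACK":
            if health < 0.25:
                self.state = "FLEE"          # looks like self-preservation, is just a threshold
            elif not sees_player:
                self.state = "INVESTIGATE"
        return self.state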

That's why "slow" games like the original Thief, where the player moves and observes slowly, had to have a much better (perceived) AI than the Doom clones of their time.

The guards' AI in Thief also announced to the player what it was thinking: a neat trick to make the thought process less opaque and give the player a chance to react preemptively to the intended behavior. Similarly, the search teams in Far Cry shouted commands like "swarm out" when heading towards the player's approximate position.

In the end, there is only so much that a combat AI can do to behave plausibly (human-like). The rest is more about having fancier animations, voice samples and action events for the NPCs.

I saw a GDC talk on the GOAP (planning) system in Tomb Raider, presented as fancy dynamic AI. But watching the game itself, the player was either doing quick sneak attacks or shooting at a mass of enemies in short succession. There was simply no time (and no need) to see the AI do anything fancy outside of the combat mechanics. The enemies were just targets to be taken down quickly.

And relating that to Limit Theory: there is only so much that can be represented in a combat scenario, such as turning towards the target and firing the weapon when in range. The combat itself does not need a fancy AI (also because there is no topology to use as a cover system; it's just free, open space and direct movements).
Post

Re: Unit/Group AI In Halo

#5
I am So Ready to be talking actual AI gameplay development in Limit Theory. :D

For now, I really enjoyed your summary, Damocles -- that is some good stuff. I'm not an AI pro or expert, but let me offer a couple of additional thoughts.

1. It seems to me that the optimal point in unit AI is somewhere between predictability and randomness. In a word, good NPC AI demonstrates "intentionality": the perception in players that the NPC has goals but is free to try to achieve those goals in environmentally-reactive ways. That is, NPC AI feels most right when the typical player perceives an NPC's visible behaviors as surprising but still plausible: it's acceptable as a kind of thing that NPC might do, but it's done in a way that doesn't seem hard-coded or scripted.

I'm thinking the way to achieve this is to define NPC tactical AI as a two-level decision system that supports variation within a larger pattern.

The larger pattern could be related to resource-maximizing goals (for strategic play) or well-defined environment-exploiting tactics (for tactical play). Even a solitary NPC ship could have a pattern of hiding behind asteroids and plinking at you, or making fast head-on runs, and so on. But that alone would feel scripted; the perception of intentionality comes when there's individualistic variation within these patterns: this NPC flies straight at you at maximum speed, while that NPC jinks wildly while flying quickly in your direction.
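
To make that concrete, here's a rough Python sketch of the two-level idea; the pattern names and personality parameters are placeholders I just made up, not anything from LT.

import random

# Level 1: broad patterns an NPC can follow (placeholder names).
PATTERNS = ("head_on_run", "hide_and_plink", "circle_strafe")

class PilotAI:
    def __init__(self, seed=None):
        rng = random.Random(seed)
        # Level 2: per-NPC "personality", fixed at creation time.
        self.aggression = rng.uniform(0.3, 1.0)   # how directly it closes in
        self.jink = rng.uniform(0.0, 1.0)         # how wildly it weaves

    def choose_pattern(self, threat_level):
        # The larger pattern comes from goals and the situation...
        if threat_level > 0.7 and self.aggression < 0.5:
            return "hide_and_plink"
        return "head_on_run" if self.aggression > 0.6 else "circle_strafe"

    def steering_offset(self):
        # ...while individual variation happens inside the pattern.
        return random.gauss(0.0, self.jink)

Two NPCs given the same pattern will still fly it differently, which is exactly the "surprising but plausible" feel I'm after.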

The point I'm laboring to make here is that I suspect players judge any computer game -- and will judge LT -- on the degree of intentionality they perceive in its NPCs. So LT's NPC AI ought to be designed to maximize that perception of intentionality for the kind of gamer who's most likely to play LT.

Which is Josh. So... if he's satisfied, I suspect we will be as well.

But we can still talk about it until then. :lol:

2. As another example of great NPC AI, let me offer this document from the developer of the AI for F.E.A.R., which you will still see discussed as having some of the best tactical AI in any single-player FPS ever. If you haven't played it, you should. No game is perfect, and this one isn't, but it's nearly a masterclass in small-unit urban tactics.

You actually reminded me of F.E.A.R. when you mentioned Thief. I also love Thief (the original of that name, not the new... thing), including the guard alert-state barks, but F.E.A.R. went Thief not just one but two better. 1) Its enemies were smarter, both individually and as groups. If you took cover, they'd lob grenades to force you out; if you holed up for a moment, they'd send individuals out to flank you while some stayed behind to draw your fire. 2) You could slow down time. It was pretty much just "bullet time" from The Matrix, but it was implemented so well that every use felt fantastic for the entire duration of the game.

And it was the combination of these two features that really worked. By itself, the slow-time would have been overpowered against the usual "rush the player" AI. And the group tactics would have been hard for a normal human character to survive. But using slowed time against clever opponents working together made one feel like a tactical genius. And -- like NPC guard barks in Thief -- enemies in F.E.A.R. would react audibly to your actions. When you used slowed time, one enemy might yell, "He's too fast!" Which would make sense until you slowed time right in the middle of his yell, and then it would sound like a foghorn while you raced behind him and took him out before he could react.

When I ask Josh about how enemy ships will coordinate their actions so that groups become more dangerous than just the sum of their numbers, it's this feeling from F.E.A.R. that I'm thinking about.

I'm not suggesting LT should play like F.E.A.R. -- I'm saying that if LT's grouped enemy tactics versus the player's researched abilities make me feel as clever as F.E.A.R. did, it will have achieved something special.
Post

Re: Unit/Group AI In Halo

#6
I think LT's AI has a rather different challenge: in addition to being interesting in combat and letting the player peek into its thought processes, LT's AI needs to keep the world running and interesting, at least to a degree where black-box manipulation and regulation are imperceptible. I think delegation and contracts are a great way to do that, and by themselves they represent a free-market wet dream, but more is needed to make a variety of AI mindsets compelling. We've discussed at length superstition, deception, ideology, prejudice, and other key elements in creating realistic AI, and I suspect that at least some of that will be in vanilla, with more coming from mods.

However, as pointed out above, getting a peek into the minds of the various agents will be critical to letting the player know that what they just saw wasn't some bug. For example, if one NPC goes crazy and decides to do something totally irrational, perhaps a different NPC should call them out on it, call them crazy, or give some other signal of disapproval.
Challenging your assumptions is good for your health, good for your business, and good for your future. Stay skeptical but never undervalue the importance of a new and unfamiliar perspective.
Imagination Fertilizer
Beauty may not save the world, but it's the only thing that can
Post

Re: Unit/Group AI In Halo

#7
Thanks, Flatfingers, for the referenced paper.

Here are some more thoughts on combat-related AI for a space sim:

In any case, independent of how the AI chooses what to do (simple state machine or complex dynamic planner), the end result will be a set of actions that translate into the game world.
Any sequence of actions forms a plan (forming that plan, given the currently known information about the world, is the actual work of the AI module).

Here are some categories of actions I can think of:


************************************

Actions: (generic space sim)


# actions using a ships modules and systems
-various attack types (that whole list of pew pew things)
-various active (non permanent) defenses (like flares, cloaking)
-various utility modules (scanners, cargo drop, tractor beams)

# movement in space
-general pathfinding and traversal (goto X, go near Y, orbit around Z)
-taking cover (any obstacle that is large enough; that could be asteroids, big ships, stations, wrecked ships)
-avoiding areas (clusters of enemies, minefields, gas clouds, defense turrets ...)
-hiding in areas (gas clouds, near stations ...)
-retreat to a safe spot when damaged
-observe area (scanning it locally)

# group coordination actions:

(ships that support each other in a role)

-formations (position of ships pre battle)
-realigning formations mid battle
-active support (like healing, restocking, extending shields)
-covering for others (one ship withdraws, another takes point)
-coordinated movements (patrols, or searching/scanning for a cloaked ship)

# communication between ships (between different factions)
-hailing ship
-warning
-extortion of goods
-taunting
-policing ships
-sending SOS signal

************************************

Planner:


In an AI planner based on actions, the actions each have a set of preconditions, activity, resulting state changes and cost.
An example here would be an action "refuel ship".

The preconditions would be:
-docked at fuel-connector (station or tank)
-fuel not filled 100%
-drawing fuel allowed
-supply tank contains at least 1 fuel unit

the action would be:
-fuel tank animation / effect
-time in fueling state: 1 second per unit
-shields lowered

the state changes would be:
-ship: fuel-level increased by X units
-tank: fuel-level decreased by X units

cost of the action:
100 + 500 * enemies nearby supply tank - 200 * own "strength-level"
(some relative value for the planner heuristics)

The cost values are actually variables that can be adjusted through learning. Failed plans increase the estimated costs of the actions involved relative to alternative actions; successful plans reduce them.
An AI with a cautious personality would assign higher costs to actions that pose risks; an explorer-type AI would assign lower costs to travel time, etc.
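
As a rough Python sketch of how such an action could look in code (the field names, state keys and numbers are purely illustrative, not how Limit Theory or any particular engine does it):

from dataclasses import dataclass
from typing import Callable, Dict

State = Dict[str, float]   # flat world state, e.g. {"fuel": 2, "docked": 1, ...}

@dataclass
class Action:
    name: str
    preconditions: Callable[[State], bool]
    effects: Callable[[State], State]   # returns the resulting world state
    base_cost: float
    learned_bias: float = 0.0           # raised by failed plans, lowered by successes

    def cost(self, s: State) -> float:
        # mirrors the heuristic above: riskier context means higher cost
        return (self.base_cost
                + 500.0 * s.get("enemies_near_tank", 0)
                - 200.0 * s.get("own_strength", 0)
                + self.learned_bias)

def refuel_effects(s: State) -> State:
    s = dict(s)
    units = min(s["fuel_max"] - s["fuel"], s["tank_fuel"])
    s["fuel"] += units       # ship gains X units
    s["tank_fuel"] -= units  # supply tank loses X units
    return s

refuel = Action(
    name="refuel_ship",
    preconditions=lambda s: (s["docked"] >= 1 and s["fuel"] < s["fuel_max"]
                             and s["refuel_allowed"] >= 1 and s["tank_fuel"] >= 1),
    effects=refuel_effects,
    base_cost=100.0,
)

A cautious personality would simply start risky actions with a higher base_cost; an explorer type would start travel actions with a lower one.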


Any action or sub-plan (a precompiled list of actions) can be described in this logic (preconditions, state changes, costs).
The planner can then search through the set of available actions and form a sequence of actions to reach a specific goal, or just see which interesting goals/world states are currently attainable.
(Forward planning works from the current situation towards a goal; backward planning works from a goal back to a list of actions that align with the current world state.)
The costs are added up through the plan. When there are multiple possible plans that reach a goal, the one with the lowest total cost is chosen.

To keep the complexity manageable, the AI can be trained in many situations and successful plans can be stored for later. When forming a new plan, it's quicker to first check whether a previously stored plan worked for the same goal and world state.
Plans can also be combined into "meta actions", which further reduce the search space for the planner.
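
And a minimal forward planner over such actions: a uniform-cost search that adds up action costs and returns the cheapest sequence reaching the goal. Again only a sketch; no stored previous plans, no meta actions, no backward planning.

import heapq
from typing import Callable, List

def plan(start: State, goal: Callable[[State], bool],
         actions: List[Action], max_depth: int = 6):
    # Frontier is ordered by accumulated cost; the counter only breaks ties.
    frontier = [(0.0, 0, start, [])]
    counter = 0
    while frontier:
        cost_so_far, _, state, seq = heapq.heappop(frontier)
        if goal(state):
            return seq, cost_so_far            # cheapest plan pops out first
        if len(seq) >= max_depth:
            continue
        for a in actions:
            if a.preconditions(state):
                counter += 1
                heapq.heappush(frontier, (cost_so_far + a.cost(state), counter,
                                          a.effects(state), seq + [a.name]))
    return None, float("inf")

# e.g. plan(current_state, lambda s: s["fuel"] >= s["fuel_max"], [refuel, ...])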

************************************

player feedback: visualization / audio:

Here is where "trickery" comes in, to make the AI more convincing.
The AI ship by itself is a pretty static object (no limbs to wave around), so apart from visual module effects, the intention of the AI must be represented in some other way.
I would suggest some audio feedback, such as: "target in range, take position", "search the area", "taking damage, request backup", "hull breached, cover for me", "need to refuel".
That could be played when the ship is in range, or when it is being "scanned" or focused by the player. The voice samples could be heavily distorted (static, robot-like), just to get the message across and to avoid expensive proper dialog (lore technobabble: a universal automatic translator).
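
As a trivial sketch of how that could hook into the planner, in the same Python style as above (the action names and the mapping are invented):

from typing import Optional

# map the AI's current action to a (distorted) voice line
BARKS = {
    "take_position":  "target in range, take position",
    "search_area":    "search the area",
    "request_backup": "taking damage, request backup",
    "refuel_ship":    "need to refuel",
}

def bark_for(current_action: str, player_can_hear: bool) -> Optional[str]:
    # only broadcast when the player is in range or scanning/focusing the ship
    return BARKS.get(current_action) if player_can_hear else None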


And many more ideas, but that should be enough for one post...
Post

Re: Unit/Group AI In Halo

#8
Just thought I'd give a mention to a series on Video Game AI by Siraj Raval. At time of writing, there's only the introductory episode, but he's planning on making it a 10 week course. He has a lot of good technical material on his channel regarding deep learning, genetic algorithms, data analytics, blockchains and so on as well as numerous links to supplementary sources. Here's the Syllabus for the course:
Each week lists first a shorter, introductory video (released on Friday) and then a longer, in-depth video (released on Wednesday)

Week 1
Markov Decision Processes
Policy evaluation, iteration, value iteration

Week 2
Monte Carlo Prediction
Monte Carlo with Epsilon-Greedy Policies and off-policy control with importance sampling

Week 3
Q Learning
Deep Q Learning

Week 4
Policy Gradients
Actor Critic

Week 5
TRPO
PPO

Week 6
Deep RL experimentation
Stochastic Computation Graphs + SVG + DDP

Week 7
Derivative Free methods
HyperNEAT

Week 8
Model based RL
Inverse RL

Week 9
Imitation learning
Open Problems

Week 10
Future Directions for RL
A technique I'll talk about that hadn't even been created yet when I made this syllabus, because this field moves fast AF
Challenging your assumptions is good for your health, good for your business, and good for your future. Stay skeptical but never undervalue the importance of a new and unfamiliar perspective.
Imagination Fertilizer
Beauty may not save the world, but it's the only thing that can
Post

Re: Unit/Group AI In Halo

#9
Thanks, it's an interesting channel on AI, going into depth without hiding behind the math.

The reinforcement learning methods presented in the video are also applicable to action-planner systems like the one described above.
The reward (utility) would then be the successful application of an action sequence to reach a goal (running simulations against other AIs).
Over time the AI can learn to assign a proper (utility-maximizing) cost to each action, given the world state during planning.
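
A minimal sketch of that cost adjustment, reusing the learned_bias field from the action sketch earlier in the thread (the learning rate is an arbitrary number):

def reinforce(actions_used, succeeded, lr=5.0):
    # After a plan (or a simulation run against other AIs) finishes, nudge the
    # bias of every action involved: failures make them look more expensive,
    # successes make them look cheaper in future planning.
    for a in actions_used:
        a.learned_bias += -lr if succeeded else lr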
Post

Re: Unit/Group AI In Halo

#10
Damocles wrote:
Thu Nov 16, 2017 11:03 am
The biggest problem for (combat-centered) AI is not a technical one, but: "how does the player perceive it?"
And that is strongly correlated with the gameplay the designers have in mind.
What the player notices are obvious failures of the AI (pathfinding problems like getting stuck, or not reacting to obvious threats), because these are things humans clearly wouldn't do.
At best, the AI does everything it is expected to do in the given gameplay situation. But that makes it hard for the player to distinguish a really dynamic, deep and complex AI from a pile of scripted AI events. (Jump through the window when the player enters trigger T323; shout "we have him!" when the player enters the back alley, etc.)
Good points, but I'd like to add that believable weaknesses (ones a player might also have) help create the perception of a "living" opponent instead of a bot: some reaction time on the part of the AI actor, some inaccuracy in aiming its guns, and so on.
And since we're in the technical forum here, I'd like to suggest a technical way to achieve it:
In the devlog from Friday, October 13, 2017, Josh writes about using Proportional-Integral-Derivative (PID) control for AI maneuvering and how it makes AI piloting really good. Now, if it becomes too good at some point (as in the AI outflying every human pilot), it might make sense to add some delay to the input of the PID control, and maybe some overlaid randomness.
That could bring an AI opponent with superhuman reflexes and accuracy back to human levels. It could also be used to implement a skill system for bots - newbie bots would have large penalties, more experienced ones would get faster and more accurate.
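
A rough sketch of what I mean, assuming a plain textbook PID loop; the gains, delay and noise values are arbitrary, and I have no idea how Josh's actual controller is structured.

import collections
import random

class HandicappedPID:
    # Textbook PID whose measured error is delayed and noised to mimic
    # human reaction time and imprecision.
    def __init__(self, kp, ki, kd, delay_steps=0, noise_sigma=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev = 0.0
        self.delay = collections.deque([0.0] * delay_steps)  # reaction-time buffer
        self.noise_sigma = noise_sigma

    def update(self, error, dt):
        # The controller only "sees" the error from delay_steps frames ago,
        # plus some gaussian wobble (aiming imprecision).
        self.delay.append(error)
        seen = self.delay.popleft() + random.gauss(0.0, self.noise_sigma)
        self.integral += seen * dt
        derivative = (seen - self.prev) / dt
        self.prev = seen
        return self.kp * seen + self.ki * self.integral + self.kd * derivative

# crude skill levels: a rookie reacts late and wobbles, an ace barely does
rookie = HandicappedPID(1.0, 0.1, 0.3, delay_steps=15, noise_sigma=0.25)
ace    = HandicappedPID(1.0, 0.1, 0.3, delay_steps=2,  noise_sigma=0.02)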
Post

Re: Unit/Group AI In Halo

#11
Inhumanly capable AI (in terms of reaction speed, accuracy and knowledge about the world) is certainly not fun.
And an action planner should make an AI hard to counter not by making it superhuman, but by having it make less predictable but still logical decisions.

Since the GOAP system was mainly popularized by F.E.A.R., here is a lecture on it.

The play session in the lecture is actually intended to showcase the capabilities of the F.E.A.R. AI, but it also highlights another problem:
the AI is not really given time to impress with smart decisions when playing against an experienced player.
(See the segment at 43:47. I guess the lecturer would have liked to pick a less capable player...)

It's hard to show the AI doing anything here, since another goal of the FPS genre stands in the way: the player should feel almighty, and kills should be fast-paced.
The combat is balanced towards the player being able to take out AIs within seconds.

Slower game genres, where the player can observe the AI for longer, would profit a lot more from smart individual AI.

Another interesting thing: in casual reviews I often hear players say "oh, that AI is stupid", for example because it did not run away quickly enough to avoid getting shot, or because it died after a single shot or sneak attack.
But that has nothing to do with the AI making stupid decisions; it is the balancing or the animation system giving it a slow response or bad aim.
So reviewers should separate what the AI is trying to do (intelligence) from how well it does it (implemented mechanics / balancing).
And getting stuck while moving somewhere is more often an oversight in the level design than by the AI programmers (unless the pathfinding and traversal are terrible in general).
