
Explaining Emergent AI Concepts and Code, Using AI War as an Example (Part 1)

Published: 2013-07-11 11:13:29

Author: Christopher M. Park

A lot of people have been curious about how the AI in AI War: Fleet Command works, since we have achieved much more realistic strategic/tactical results than the AI in most RTS games. In the first part of this series, I will give an overview of the design philosophy we used. (Click here to read Part 2 of this article.)

Decision Trees: AI in Most RTS Games

First, the AI in most games works via a giant decision tree (IF A then C, IF B then D, and so on). This can produce human-like behavior up to a point, but it requires a lot of development and ultimately ends up with exploitable flaws. My favorite example, from pretty much every RTS game since 1998, is how they pathfind around walls: if you leave a small gap in your wall, the AI will almost always try to go through that hole. Human players mass their units at that choke point, effectively "tricking" the AI into using a hole that is really a trap. The AI then sends wave after wave through the hole, dying every time.

AI War: Fleet Command (from gameris)

That rules-based decision-tree approach not only takes a huge amount of programming work, it is also easily exploited by players. Yet to emulate how a human player might play, that sort of approach is generally what's needed. I started out using a decision tree, but soon realized that this was kind of boring even at the basic conceptual level: if I wanted to play against humans, I could just play against another human. What I really wanted was an AI that acted in a new way, different from what another human would do, like playing against Skynet or the Buggers from Ender's Game. An AI that felt fresh and intelligent, but played with scary differences from how a human ever could, since the human brain has different strengths and weaknesses compared to a CPU. There are countless examples of this in fiction and film, but not so many in games.

Decentralized Intelligence

The approach I settled on (and which gave surprisingly quick results early in the game's development) was to simulate intelligence in each individual unit, rather than simulating a single overall controlling intelligence. If you have read Prey by Michael Crichton (the American bestselling author, director, and producer), the AI works vaguely like the swarms of nanobots in that book. The main difference is that my individual units are a lot more intelligent than his nanobots, so an AI swarm in my game is usually 30 to 2,000 ships, rather than millions or billions of nanobots. It also means my units are at zero risk of ever reaching true sentience. The biggest benefit for me is getting much more intelligent results with much less code and far fewer agents.

Strategic Tiers

There are really three levels of thinking in the AI of AI War: strategic, sub-commander, and individual-unit. So this is not a pure swarm intelligence, because it combines swarm intelligence (at the individual-unit level) with more global rules and behaviors. How the AI decides which planets to reinforce, or which planets to send attack waves against, is all handled by the strategic tier of logic, the global commander. How the AI decides to use its ships when attacking or defending at an individual planet comes from a combination of the sub-commander and individual-ship logic.

Sub-Commanders

Here's the cool thing: the sub-commander logic is completely emergent. Based on how the individual-unit logic is coded, each unit does what is best for itself while also taking into account what the rest of the group is doing. It's the idea of flocking behavior, but applied to tactics and target selection rather than movement. So when you see the AI send ships to your planet, break them into two or three groups, and suddenly hit a variety of targets on your planet all at once, that is emergent sub-commander behavior that was never explicitly programmed. There is nothing remotely like it in the game code, yet the AI keeps doing things like that. The AI does some surprisingly intelligent things this way, and it never does the really moronic things that rules-based AIs sometimes do.
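To make the flocking-on-targets idea concrete, here is a small illustrative sketch in Python (not AI War's actual C# code; all names and values are invented): each ship scores targets purely on its own, but discounts targets that many flock-mates have already claimed, and group splitting emerges with no commander logic at all.

```python
from collections import Counter

# Toy sketch of emergent target-splitting: each ship evaluates targets
# individually, halving a target's appeal for every ship already attacking it.

def pick_target(target_values, claims, crowd_penalty=0.5):
    # Best remaining target after discounting for crowding.
    return max(target_values,
               key=lambda n: target_values[n] * crowd_penalty ** claims[n])

def assign_fleet(target_values, num_ships):
    claims = Counter()
    for _ in range(num_ships):
        claims[pick_target(target_values, claims)] += 1
    return claims

fleet = assign_fleet({"command_station": 10.0, "factory": 8.0, "harvester": 3.0}, 12)
# The fleet splits across all three targets instead of piling onto the best one.
```

No unit ever coordinates with another; the split falls out of each ship reacting to what the others have already chosen, which is the essence of the sub-commander behavior described above.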

And the best part is that it is fairly hard to trick. That's not to say the system is perfect, but if players find a way to trick the AI, all they have to do is tell me, and I can usually add a counter to the code quite quickly. As far as I know, there haven't been any ways to trick the AI since the alpha releases. The AI runs on a separate thread on the host computer only, so I can give it some really heavy data crunching to do (using LINQ, actually; my background is in database programming and ERP / financial tracking / revenue-forecasting applications in TSQL, a lot of which carried over to the AI here). Taking many variables into account means it can make highly intelligent decisions without causing any lag on an average dual-core host.

Fuzzy Logic

Fuzzy logic / randomization is another key component of our AI. A big part of making an unpredictable AI system is making sure it always makes a good choice, but not necessarily the 100% best one (since, with repetition, the "best" choice becomes increasingly non-ideal through its predictability). If an AI player only ever made perfect decisions, you could counter it just by figuring out the best decision yourself (or by creating a false weakness in your forces, such as a gap in your wall), and then you could predict what the AI will do with a high degree of accuracy, approaching 100% in certain cases in many other RTS games. With fuzzy logic in place, I'd say you have no better than a 50% chance of predicting what the AI in AI War is going to do, and usually it is far less predictable than even that.
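As an illustration of the good-but-not-always-best idea, here is a hedged Python sketch (invented names and weights, not the game's code): rank the options best-first, then pick from the top few with decreasing probability, so the choice is always reasonable but never certain.

```python
import random

# Illustrative fuzzy choice: pick from the best few options with
# geometrically decreasing weights instead of always taking the top one.

def fuzzy_choice(ranked_options, rng, top_n=3, decay=0.5):
    pool = ranked_options[:top_n]
    weights = [decay ** i for i in range(len(pool))]  # 1.0, 0.5, 0.25, ...
    return rng.choices(pool, weights=weights, k=1)[0]

rng = random.Random(42)
picks = [fuzzy_choice(["best", "good", "okay", "bad"], rng) for _ in range(1000)]
# "best" dominates but is far from guaranteed, and "bad" is never chosen.
```

The opponent stays dangerous (it never picks a truly bad option) while a human can no longer bank on any single response.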

Intelligent Mistakes

Bear in mind that the lower difficulty levels intentionally make some stupid decisions that a novice human might make (such as going for the best target regardless of whatever is guarding it). That makes the lower-level AIs still feel like a real opponent, but a much less fearsome one. Figuring out how to tone the AI down for the lower difficulties was actually one of the big challenges for me. Partly it came down to withholding the best tactics from the lower-level AIs, but I also had to seed some intentionally less-than-ideal assumptions into the decisions they make at those lower levels.

Skipping the Economic Simulation

Lastly, the AI in AI War follows entirely different economic rules from the human players (though all of the tactical and most of the strategic rules are the same). For example, in most games the AI starts with 20,000+ ships, whereas each player starts with just 4 ships. If the AI simply overwhelmed you with everything at once, you would be crushed immediately. It's the same as in Super Mario Bros: if all the bad guys in a level attacked you at the same time, you would die instantly (which is why they are spread out). And if all the enemies in any given level of an FPS ran straight at you and shot with perfect accuracy, you would have no hope of survival.

Think about the average FPS that simulates your involvement in military operations: the enemies are not always aware of what you and your allies are doing, so even when the enemy has overwhelming odds against you, you can still win through limited engagements and by striking key targets. I think the same is true of many real-world situations, but it's not something you encounter in the skirmish modes of other RTS games.

I will discuss this topic in more depth later in this series, because it is probably the most controversial design decision I made in this game. Some people may see it as a form of cheating AI, but I have good reasons for doing it this way. AI ships never get bonuses above the players, the AI does not have undue information about player activities, and the AI does not get bonuses or penalties based on player actions beyond the visible AI Progress number. The strategic and tactical code for the AI uses exactly the same rules that constrain the human players, and that is where the intelligence of our system really shines.

Asymmetrical AI

In AI War, to offer procedural campaigns with a certain David-versus-Goliath feel (the human players are always David to some degree), I made a separate rules system for parts of the AI versus what the humans do. The AI's economy works on internal reinforcement points, wave countdowns, and an overall AI Progress number that increases or decreases based on player actions. This lets the players somewhat set the pace of the game's advancement, which adds another layer of strategy that you would normally only encounter in turn-based games. It's a very asymmetrical system that you couldn't have in a PvP-style skirmish game where the AI stands in for a human, but it works beautifully in a co-op-style game where the AI is always the enemy.

The material above was a hit with a lot of readers, yet some criticized it for being too introductory/broad. Fair enough; as the start of this series, I pitched it at a fairly low level. From here on I get a lot more low-level; if you are not a programmer or an AI enthusiast, you probably won't find much of interest beyond this point.

What Do You Mean, Your AI Code Works Like a Database?

The first question most people ask is how my AI code can possibly be like a database. In a previous article I already explained how I use frequent rollups for performance reasons. That also helps the AI's performance, but it is not the meat of what I'm talking about here.

I use LINQ for target selection and other goal selection in the game's AI, and that really cuts down on the code needed for the first level of a decision (it also cuts down on the amount of code in general; I estimate the entirety of the game's decision-making AI code at under 20,000 lines). Here is one of the LINQ queries for determining target selection:

var targets =
    //30% chance to ignore damage enemy can do to them, and just go for highest-value targets
    ( unit.UnitType.AIAlwaysStrikeStrongestAgainst ||
      AILoop.Instance.AIRandom.Next( 0, 100 ) < 30 ?
        from obj in rollup.EnemyUnits
        where ( unit.GuardingObjectNumber <= 0 || //must not be guarding, or guard target must be within certain range of guard post
            Mat.ApproxDistanceBetweenPoints( unit.GuardingObject.LocationCenter,
                obj.LocationCenter ) < Configuration.GUARD_RADIUS )
        orderby obj.UnitType.ShipType == ShipType.Scout ascending, //scouts are the lowest priority
            obj.GetHasAttackPenaltyAgainstThis( unit ) ascending, //ships that we have penalties against are the last to be hit
            (double)obj.GetAttackPowerAgainstThis( unit, usesSmartTargeting ) / (double)obj.UnitType.MaxHealth descending, //how much damage I can do against the enemy out of its total health
            obj.IsProtectedByForceField ascending, //avoid ships that are under force fields
            obj.NearbyMilitaryUnitPower ascending, //strength of nearby enemies
            Mat.ApproxDistanceBetweenPoints( obj.LocationCenter, unit.LocationCenter ) ascending, //how close am I to the enemy
            obj.UnitType.ShieldRating ascending, //how high are their shields
            unit.UnitType.AttackPower ascending, //go for the lowest-attack target (economic, probably)
            obj.Health ascending //how much health the enemy has left
        select obj
    :
        from obj in rollup.EnemyUnits
        where ( unit.GuardingObjectNumber <= 0 || //must not be guarding, or guard target must be within certain range of guard post
            Mat.ApproxDistanceBetweenPoints( unit.GuardingObject.LocationCenter,
                obj.LocationCenter ) < Configuration.GUARD_RADIUS )
        orderby obj.UnitType.ShipType == ShipType.Scout ascending, //scouts are the lowest priority
            ( chooseWeaklyDefendedTarget ?
                obj.UnitType.TripleBasicFirePower >= obj.NearbyMilitaryUnitPower :
                ( chooseStronglyDefendedTarget ?
                    obj.UnitType.TripleBasicFirePower < obj.NearbyMilitaryUnitPower : true ) ) descending, //lightly defended area
            (double)obj.GetAttackPowerAgainstThis( unit, usesSmartTargeting ) / (double)obj.UnitType.MaxHealth descending, //how much damage I can do against the enemy out of its total health
            obj.IsProtectedByForceField ascending, //avoid ships that are under force fields
            obj.NearbyMilitaryUnitPower ascending, //strength of nearby enemies
            obj.GetHitPercent( unit ) descending, //how likely I am to hit the enemy
            unit.GetAttackPowerAgainstThis( obj, false ) descending, //how much damage the enemy can do to me
            obj.Health ascending //how much health the enemy has left
        select obj
    );

Blogger eats a lot of the formatting there, but hopefully you can see what is going on from the comments in green. In some ways you could call this a decision tree (it does have multiple tiers of sorting), but overall the code is much more brief and (when properly formatted with tabs, etc.) easier to read. Best of all, since these are implemented as a sort rather than as strict if/else conditionals, what you get is a preference for the AI to do one thing rather than another.
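For readers less familiar with LINQ, the same sort-as-preference idea can be sketched in Python (illustrative only; the field names are invented): a multi-key sort where earlier keys dominate and later keys only break ties.

```python
# A Python analogue of expressing AI preferences as a multi-key sort,
# rather than as hard if/else exclusions.

targets = [
    {"name": "scout",     "is_scout": True,  "shielded": False, "dist": 5},
    {"name": "bomber",    "is_scout": False, "shielded": True,  "dist": 3},
    {"name": "harvester", "is_scout": False, "shielded": False, "dist": 9},
]

# Prefer non-scouts first, then unshielded ships, then the nearest.
# False sorts before True, so undesirable traits push a target down the list.
ranked = sorted(targets, key=lambda t: (t["is_scout"], t["shielded"], t["dist"]))
best = ranked[0]["name"]  # "harvester": non-scout and unshielded beats mere nearness
```

Nothing is ever excluded outright; a shielded ship or a scout can still be chosen if nothing better remains, which is exactly the "leaning" behavior the sort-based approach buys.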

The query takes a lot of things into consideration, and there are a few different modes it can run in, which provides a lot of intelligence on its own. But that's not enough. The loop that actually evaluates the above logic adds some more intelligence of its own:

bool foundTarget = false;
foreach ( AIUnit enemyUnit in targets )
{
    if ( enemyUnit.Health <= 0 || enemyUnit.CloakingLevel == CloakingLevel.Full )
        continue; //skip targets that are already dead, or are cloaked
    if ( unit.CloakingLevel == CloakingLevel.Full &&
         enemyUnit.UnitType.ShipType == ShipType.Scout )
        continue; //don't give away the position of cloaked ships to scouts
    if ( unit.CloakingLevel != CloakingLevel.None &&
         enemyUnit.UnitType.TachyonBeamRange > 0 )
        continue; //cloaked ships will not attack tachyon beam sources
    if ( enemyUnit.UnitType.VeryLowPriorityTarget )
        continue; //if it is a very low priority target, just skip it
    if ( enemyUnit.IsProtectedByCounterMissiles && unit.UnitType.ShotIsMissile )
        continue; //if enemy is protected by counter-missiles and we fire missiles, skip it
    if ( enemyUnit.IsProtectedByCounterSnipers && unit.UnitType.ShotIsSniper )
        continue; //if enemy is protected by counter-sniper flares and we fire sniper shots, skip it
    if ( enemyUnit.GetAttackPowerAgainstThis( unit, false ) == 0 )
        continue; //if we are unable to hurt the enemy unit, skip attacking it
    if ( unit.EffectiveMoveSpeed == 0 && !unit.UnitType.UsesTeleportation &&
         enemyUnit.GetHitPercent( unit ) <>
        continue; //stop ourselves from targeting fixed ships onto long-range targets

    gc = GameCommand.Create( GameCommandType.SetAttackTarget, true );
    gc.FGObjectNumber1 = unit.FGObjectNumber;
    gc.FGObjectNumber2 = enemyUnit.FGObjectNumber;
    gc.Integer1 = 0; //Not Attack-Moving
    unit.LastAICommand = gc.AICommand;
    AILoop.Instance.RequestCommand( gc );
    foundTarget = true;
    break;
}

//if no target in range, and guarding, go back to base if out of range
if ( !foundTarget && unit.GuardingObjectNumber > 0 )
{
    Point guardCenter = unit.GuardingObject.LocationCenter;
    if ( Mat.ApproxDistanceBetweenPoints( guardCenter, unit.LocationCenter ) >
         Configuration.GUARD_RADIUS )
        Move( unit, guardCenter );
}

Nothing in that code is really surprising, but it adds a few more decision points (most of them hard rules rather than preferences). Elsewhere, in the pursuit logic that runs once targets are selected, ships have a preference for not all targeting exactly the same thing. This aspect of watching what each other are doing is all that is really needed, at least in the game design I'm using, to make them branch and split efficiently and hit more targets.

Rather than analyzing the above code point by point, I'll mostly let it speak for itself; the comments are largely self-explanatory.

Danger Levels

One important piece of logic in the code above is danger levels: the lines where the AI decides whether to pursue a target based on how well it is defended by nearby ships. All ships have a 30% chance to ignore the danger level and just go for their best targets, and some ship types (like Bombers) do that pretty much all the time, which makes the AI harder to predict.

The main benefit of this approach is that it causes the AI, most of the time, to pick off lightly defended targets (such as scrubbing out all the outlying harvesters or poorly defended constructors in a system), while there is still a chance that those ships, or some of them, will make a run at a really valuable but well-defended target such as your command station or an Advanced Factory. This kind of uncertainty generally comes across as very human. And even if the AI occasionally makes what looks like a suicide run with a batch of ships, such attacks can be quite effective if you are not careful.

Changing the AI Based on Player Feedback

Some readers of the first part of this series criticized my note that I would fix any exploits people find and tell me about. Fair enough; I didn't really explain myself there, so I can understand that criticism. However, as I noted, we haven't seen any exploits against the AI since our alpha versions, so I'm not overly worried this will become a common problem.

But more to the point, what I was really trying to convey is that with an AI system like the one you see above, making tweaks is fairly straightforward. In our alpha versions, whenever someone found a way to trick the AI, I could usually come up with a solution within a few days, sometimes within a few hours. The reason is simple: the LINQ code is short and expressive. All I have to do is decide what rule or preference the AI should start considering, what relative priority that preference should have if it is indeed a preference, and then add a few lines of code. That's it.

With a decision-tree approach, I don't think I could do that; the code gets huge and spread across many classes (I have 10 classes for the AI, some of which are basically just data holders, like AIUnit, and others of which are rollups, like AIUnitRollup). My argument for this style of code is not only that it works well and can produce some nice emergent behavior, but also that it is comparatively easy to maintain and extend. That is worth considering: this style of AI is quick to experiment with, if you want to try it in your next project. (Translated by gamerboom.com; for reprints please contact gamerboom.)

Designing Emergent AI, Part 1: An Introduction

by Christopher M. Park

A lot of people have been curious about how the AI in AI War: Fleet Command works, since we have been able to achieve so much more realistic strategic/tactical results compared to the AI in most RTS games. Part 1 of this series will give an overview of the design philosophy we used, and later parts will delve more deeply into specific sub-topics.

Decision Trees: AI In Most RTS Games

First, the way that AI systems in most games work is via giant decision trees (IF A, then C, IF B, then D, etc, etc). This can make for human-like behavior up to a point, but it requires a lot of development and ultimately winds up with exploitable flaws. My favorite example from pretty much every RTS game since 1998 is how they pathfind around walls; if you leave a small gap in your wall, the AI will almost always try to go through that hole, which lets human players mass their units at these choke points since they are “tricking” the AI into using a hole in the wall that is actually a trap. The AI thus sends wave after wave through the hole, dying every time.
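The wall-gap exploit described above can be boiled down to a toy sketch (purely illustrative, in Python rather than the game's C#): a rigid rule whose output never changes, no matter how obvious the trap.

```python
# Minimal sketch of a rigid rule-based AI: the gap check dominates, and the
# defense situation is ignored unless someone explicitly codes it in. Players
# learn to exploit exactly that predictability.

def rule_based_attack_plan(wall_has_gap, gap_is_defended):
    # Note: gap_is_defended is deliberately unused, mirroring the exploit;
    # the decision tree never considered it.
    if wall_has_gap:
        return "send waves through the gap"  # even straight into a kill zone
    return "besiege the wall"

plan = rule_based_attack_plan(wall_has_gap=True, gap_is_defended=True)
# The AI walks into the trap every time: the defended gap is still chosen.
```

Every patch for such an AI means another branch in the tree, which is why the decision-tree approach balloons in development cost while staying exploitable.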

Not only does that rules-based decision tree approach take forever to program, but it’s also so exploitable in many ways beyond just the above. Yet, to emulate how a human player might play, that sort of approach is generally needed. I started out using a decision tree, but pretty soon realized that this was kind of boring even at the basic conceptual level — if I wanted to play against humans, I could just play against another human. I wanted an AI that acted in a new way, different from what another human could do, like playing against Skynet or the Buggers from Ender’s Game, or something like that. An AI that felt fresh and intelligent, but that played with scary differences from how a human ever could, since our brains have different strengths and weaknesses compared to a CPU. There are countless examples of this in fiction and film, but not so many in games.

Decentralized Intelligence

The approach that I settled on, and which gave surprisingly quick results early in the development of the game, was simulating intelligence in each of the individual units, rather than simulating a single overall controlling intelligence. If you have ever read Prey, by Michael Crichton, it works vaguely like the swarms of nanobots in that book. The primary difference is that my individual units are a lot more intelligent than each of his nanobots, and thus an average swarm in my game might be 30 to 2,000 ships, rather than millions or billions of nanobots. But this also means that my units are at zero risk of ever reaching true sentience — people from the future won’t be coming back to kill me to prevent the coming AI apocalypse. The primary benefit is that I can get much more intelligent results with much less code and fewer agents.

Strategic Tiers

There are really three levels of thinking to the AI in AI War: strategic, sub-commander, and individual-unit. So this isn’t even a true swarm intelligence, because it combines swarm intelligence (at the individual-unit level) with more global rules and behaviors. How the AI decides which planets to reinforce, or which planets to send waves against, is all based on the strategic level of logic — the global commander, if you will. The method by which an AI determines how to use its ships in attacking or defending at an individual planet is based on a combination of the sub-commander and individual-ship logic.

Sub-Commanders

Here’s the cool thing: the sub-commander logic is completely emergent. Based on how the individual-unit logic is coded, the units do what is best for themselves, but also take into account what the rest of the group is doing. It’s kind of the idea of flocking behavior, but applied to tactics and target selection instead of movement. So when you see the AI send its ships into your planet, break them into two or three groups, and hit a variety of targets on your planet all at once, that’s actually emergent sub-commander behavior that was never explicitly programmed. There’s nothing remotely like that in the game code, but the AI is always doing stuff like that. The AI does some surprisingly intelligent things that way, things I never thought of, and it never does the really moronic stuff that rules-based AIs occasionally do.

And the best part is that it is fairly un-trickable. Not to say that the system is perfect, but if a player finds a way to trick the AI, all they have to do is tell me and I can usually put a counter into the code pretty quickly. There haven’t been any ways to trick the AI since the alpha releases that I’m aware of, though. The AI runs on a separate thread on the host computer only, so that lets it do some really heavy data crunching (using LINQ, actually — my background is in database programming and ERP / financial tracking / revenue forecasting applications in TSQL, a lot of which came across to the AI here). Taking lots of variables into effect means that it can make highly intelligent decisions without causing any lag whatsoever on your average dual-core host.

Fuzzy Logic

Fuzzy logic / randomization is also another key component to our AI. A big part of making an unpredictable AI system is making it so that it always make a good choice, but not necessarily the 100% best one (since, with repetition, the “best” choice becomes increasingly non-ideal through its predictability). If an AI player only ever made perfect decisions, to counter them you only need to figure out yourself what the best decision is (or create a false weakness in your forces, such as with the hole in the wall example), and then you can predict what the AI will do with a high degree of accuracy — approaching 100% in certain cases in a lot of other RTS games. With fuzzy logic in place, I’d say that you have no better than a 50% chance of ever predicting what the AI in AI War is going to do… and usually it’s way less predictable than even that.

Intelligent Mistakes

Bear in mind that the lower difficulty levels make some intentionally-stupid decisions that a novice human might make (such as going for the best target despite whatever is guarding it). That makes the lower-level AIs still feel like a real opponent, but a much less fearsome one. Figuring out ways in which to tone down the AI for the lower difficulties was one of the big challenges for me, actually. Partly it boiled down to just withholding the best tactics from the lower-level AIs, but also there were some intentionally-less-than-ideal assumptions that I also had to seed into its decision making at those lower levels.
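One hedged way to picture these "intelligent mistakes" (a Python sketch with invented numbers and scaling, not the game's actual tuning) is to have lower difficulties sometimes evaluate targets under a novice's assumption, instead of just adding random noise:

```python
import random

# Illustrative sketch: at lower difficulty the AI sometimes scores a target
# by raw value alone (a novice's view), rather than discounting its defenses.

def evaluate(target, difficulty, rng):
    value, defense = target
    novice_chance = max(0.0, (10 - difficulty) / 10)  # hypothetical scaling
    if rng.random() < novice_chance:
        return value              # novice view: chase value, ignore the guards
    return value - defense        # expert view: discount well-defended targets

rng = random.Random(3)
targets = [(10, 9), (6, 1)]  # (value, defense): juicy-but-guarded vs modest-but-open
expert_pick = max(targets, key=lambda t: evaluate(t, difficulty=10, rng=rng))  # (6, 1)
novice_pick = max(targets, key=lambda t: evaluate(t, difficulty=0, rng=rng))   # (10, 9)
```

The low-difficulty AI still reasons, it just reasons like a beginner, which feels more like a real opponent than one that simply rolls dice badly.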

Skipping The Economic Simulation

Lastly, the AI in AI War follows wholly different economic rules than the human players (but all of the tactical and most strategic rules are the same). For instance, the AI starts with 20,000+ ships in most games, whereas you start with 4 ships per player. If it just overwhelmed you with everything, it would crush you immediately. Same as if all the bad guys in every level of a Mario Bros game attacked you at once, you’d die immediately (there would be nowhere to jump to). Or if all the enemies in any given level of an FPS game just ran directly at you and shot with perfect accuracy, you’d have no hope.

Think about your average FPS that simulates your involvement in military operations — all of the enemies are not always aware of what you and your allies are doing, so even if the enemies have overwhelming odds against you, you can still win by doing limited engagements and striking key targets, etc. I think the same is true in real wars in many cases, but that’s not something that you see in the skirmish modes of other RTS games.

This is a big topic that I’ll touch on more deeply in a future article in this series, as it’s likely to be the most controversial design decision I’ve made with the game. A few people will likely view this as a form of cheating AI, but I have good reasons for having done it this way (primarily that it allows for so many varied simulations, versus one symmetrical simulation). The AI ships never get bonuses above the players, the AI does not have undue information about player activities, and the AI does not get bonuses or penalties based on player actions beyond the visible AI Progress indicator (more on that below). The strategic and tactical code for the AI in the game uses the exact same rules as constrain the human players, and that’s where the intelligence of our system really shines.

Asymmetrical AI

In AI War, to offer procedural campaigns that give a certain David vs Goliath feel (where the human players are always David to some degree), I made a separate rules system for parts of the AI versus what the humans do. The AI’s economy works based on internal reinforcement points, wave countdowns, and an overall AI Progress number that gets increased or decreased based on player actions. This lets the players somewhat set the pace of game advancement, which adds another layer of strategy that you would normally only encounter in turn-based games. It’s a very asymmetrical sort of system that you totally couldn’t have in a pvp-style of skirmish game with AI acting as human standins, but it works beautifully in a co-op-style game where the AI is always the enemy.
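A minimal sketch of such an asymmetric pacing mechanic might look like this (Python, with entirely hypothetical action names and weights; AI War's real system is more involved):

```python
# Sketch of an "AI Progress" style mechanic: player actions raise or lower a
# single number that drives how aggressive the AI is allowed to become.

class AIProgress:
    def __init__(self, start=10):
        self.value = start

    def on_player_action(self, action):
        # Hypothetical weights: taking territory provokes the AI, while
        # destroying certain AI structures calms it back down.
        deltas = {"capture_planet": +5, "destroy_data_center": -3}
        self.value = max(0, self.value + deltas.get(action, 0))

    def wave_size(self, base=20):
        # Higher AI Progress means bigger attack waves against the players.
        return base + 2 * self.value

p = AIProgress()
p.on_player_action("capture_planet")       # 10 + 5 = 15
p.on_player_action("destroy_data_center")  # 15 - 3 = 12
```

Because the number only moves in response to player choices, the players effectively set the tempo of the campaign, which is the turn-based-like strategic layer described above.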

Next Time

This provides a pretty good overview of the decisions we made and how it all came together. In the next article, which is now available, I delve into some actual code. If there is anything that readers particularly want me to address in a future article, don’t hesitate to ask! I’m not shy about talking about the inner workings of the AI system here, since this is something I’d really like to see other developers do in their games. I play lots of games other than my own, just like anyone else, and I’d like to see stronger AI across the board.

Designing Emergent AI

by Christopher M. Park

The first part of this article series has been a hit with a lot of people, yet criticized by others for being too introductory/broad. Fair enough, starting with this article I’m going to get a lot lower-level. If you’re not a programmer or an AI enthusiast, you probably won’t find much interest beyond this point.

What Do You Mean, It Works Like A Database?

First question that most people are asking is in what way my AI code can possibly be like a database. In a past article (Optimizing 30,000+ Ships In Realtime In C#) I already talked about how I am using frequent rollups for performance reasons. That also helps with the performance of the AI, but that’s really not the meat of what I’m talking about.

I’m using LINQ for things like target selection and other goal selection with the AI in the game, and that really cuts down on the amount of code to make the first level of a decision (it also cuts down on the amount of code needed in general, I’d estimate that the entirety of the decision-making AI code for the game is less than 20,000 lines of code). Here’s one of the LINQ queries for determining target selection:

var targets =
    //30% chance to ignore damage enemy can do to them, and just go for highest-value targets
    ( unit.UnitType.AIAlwaysStrikeStrongestAgainst ||
      AILoop.Instance.AIRandom.Next( 0, 100 ) < 30 ?
        from obj in rollup.EnemyUnits
        where ( unit.GuardingObjectNumber <= 0 || //must not be guarding, or guard target must be within certain range of guard post
            Mat.ApproxDistanceBetweenPoints( unit.GuardingObject.LocationCenter,
                obj.LocationCenter ) < Configuration.GUARD_RADIUS )
        orderby obj.UnitType.ShipType == ShipType.Scout ascending, //scouts are the lowest priority
            obj.GetHasAttackPenaltyAgainstThis( unit ) ascending, //ships that we have penalties against are the last to be hit
            (double)obj.GetAttackPowerAgainstThis( unit, usesSmartTargeting ) / (double)obj.UnitType.MaxHealth descending, //how much damage I can do against the enemy out of its total health
            obj.IsProtectedByForceField ascending, //avoid ships that are under force fields
            obj.NearbyMilitaryUnitPower ascending, //strength of nearby enemies
            Mat.ApproxDistanceBetweenPoints( obj.LocationCenter, unit.LocationCenter ) ascending, //how close am I to the enemy
            obj.UnitType.ShieldRating ascending, //how high are their shields
            unit.UnitType.AttackPower ascending, //go for the lowest-attack target (economic, probably)
            obj.Health ascending //how much health the enemy has left
        select obj
    :
        from obj in rollup.EnemyUnits
        where ( unit.GuardingObjectNumber <= 0 || //must not be guarding, or guard target must be within certain range of guard post
            Mat.ApproxDistanceBetweenPoints( unit.GuardingObject.LocationCenter,
                obj.LocationCenter ) < Configuration.GUARD_RADIUS )
        orderby obj.UnitType.ShipType == ShipType.Scout ascending, //scouts are the lowest priority
            ( chooseWeaklyDefendedTarget ?
                obj.UnitType.TripleBasicFirePower >= obj.NearbyMilitaryUnitPower :
                ( chooseStronglyDefendedTarget ?
                    obj.UnitType.TripleBasicFirePower < obj.NearbyMilitaryUnitPower : true ) ) descending, //lightly defended area
            (double)obj.GetAttackPowerAgainstThis( unit, usesSmartTargeting ) / (double)obj.UnitType.MaxHealth descending, //how much damage I can do against the enemy out of its total health
            obj.IsProtectedByForceField ascending, //avoid ships that are under force fields
            obj.NearbyMilitaryUnitPower ascending, //strength of nearby enemies
            obj.GetHitPercent( unit ) descending, //how likely I am to hit the enemy
            unit.GetAttackPowerAgainstThis( obj, false ) descending, //how much damage the enemy can do to me
            obj.Health ascending //how much health the enemy has left
        select obj
    );

Blogger eats a lot of the formatting there, but hopefully you can see what is going on based on the comments in green. In some ways you could call this a decision-tree (it does have multiple tiers of sorting), but the overall code is a lot more brief and (when properly formatted with tabs, etc) easier to read. And the best thing is that, since these are implemented as a sort, rather than distinct if/else or where clause statements, what you arrive at is a preference for the AI to do one thing versus another thing.

There are a lot of things that it takes into consideration up there, and there are a few different modes in which it can run, and that provides a lot of intelligence on its own. But that’s not enough. The loop that actually evaluates the above logic also adds some more intelligence of its own:

bool foundTarget = false;
foreach ( AIUnit enemyUnit in targets )
{
    if ( enemyUnit.Health <= 0 || enemyUnit.CloakingLevel == CloakingLevel.Full )
        continue; //skip targets that are already dead, or are cloaked
    if ( unit.CloakingLevel == CloakingLevel.Full &&
         enemyUnit.UnitType.ShipType == ShipType.Scout )
        continue; //don't give away the position of cloaked ships to scouts
    if ( unit.CloakingLevel != CloakingLevel.None &&
         enemyUnit.UnitType.TachyonBeamRange > 0 )
        continue; //cloaked ships will not attack tachyon beam sources
    if ( enemyUnit.UnitType.VeryLowPriorityTarget )
        continue; //if it is a very low priority target, just skip it
    if ( enemyUnit.IsProtectedByCounterMissiles && unit.UnitType.ShotIsMissile )
        continue; //if enemy is protected by counter-missiles and we fire missiles, skip it
    if ( enemyUnit.IsProtectedByCounterSnipers && unit.UnitType.ShotIsSniper )
        continue; //if enemy is protected by counter-sniper flares and we fire sniper shots, skip it
    if ( enemyUnit.GetAttackPowerAgainstThis( unit, false ) == 0 )
        continue; //if we are unable to hurt the enemy unit, skip attacking it
    if ( unit.EffectiveMoveSpeed == 0 && !unit.UnitType.UsesTeleportation &&
         enemyUnit.GetHitPercent( unit ) <>
        continue; //stop ourselves from targeting fixed ships onto long-range targets

    gc = GameCommand.Create( GameCommandType.SetAttackTarget, true );
    gc.FGObjectNumber1 = unit.FGObjectNumber;
    gc.FGObjectNumber2 = enemyUnit.FGObjectNumber;
    gc.Integer1 = 0; //Not Attack-Moving
    unit.LastAICommand = gc.AICommand;
    AILoop.Instance.RequestCommand( gc );
    foundTarget = true;
    break;
}

//if no target in range, and guarding, go back to base if out of range
if ( !foundTarget && unit.GuardingObjectNumber > 0 )
{
    Point guardCenter = unit.GuardingObject.LocationCenter;
    if ( Mat.ApproxDistanceBetweenPoints( guardCenter, unit.LocationCenter ) >
         Configuration.GUARD_RADIUS )
        Move( unit, guardCenter );
}

Nothing real surprising in there, but basically it has a few more decision points (most of these being hard rules, rather than preferences). Elsewhere, in the pursuit logic once targets are selected, ships have a preference for not all targeting exactly the same thing — this aspect of them watching what each other are doing is all that is really needed, at least in the game design I am using, to make them do things like branching and splitting and hitting more targets, as well as targeting effectively.

Rather than analyzing the above code point by point, I’ll mostly just let it speak for itself. The comments are pretty self-explanatory overall, but if anyone does have questions about a specific part, let me know.

Danger Levels

One important piece of logic from the above code that I will touch on is that of danger levels, or basically the lines of code above where it is evaluating whether or not to prefer a target based on how well it is defended by nearby ships. All ships have a 30% chance to disregard the danger level and just go for their best targets, and some ship types (like Bombers) do that pretty much all the time, and this makes the AI harder to predict.
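The danger-level gating plus the 30% override could be sketched like this (an illustrative Python model, not the shipped C#):

```python
import random

# Illustrative model of danger-level gating: most of the time a ship only
# considers targets whose nearby defenses it can plausibly beat, but ~30% of
# the time (or always, for bomber-like ships) it ignores danger entirely and
# goes for the highest-value target.

def choose_target(ship_power, targets, rng, always_reckless=False):
    """targets: list of (value, nearby_defense_power); returns the chosen tuple."""
    reckless = always_reckless or rng.random() < 0.30
    pool = targets if reckless else [t for t in targets if t[1] <= ship_power]
    if not pool:
        pool = targets  # nothing looks safe: attack anyway rather than idle
    return max(pool, key=lambda t: t[0])

rng = random.Random(7)
targets = [(10, 50), (4, 1)]  # juicy-but-heavily-defended vs weak-and-exposed
picks = [choose_target(5, targets, rng) for _ in range(1000)]
reckless_pick = choose_target(5, targets, rng, always_reckless=True)  # (10, 50)
```

Most picks go after the exposed target, yet a meaningful minority charge the defended one, which is the mix of caution and occasional boldness described above.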

The main benefit of an approach like that is that it causes the AI to most of the time try to pick off targets that are lightly defended (such as scrubbing out all the outlying harvesters or poorly defended constructors in a system), and yet there’s still a risk that the ships, or part of the ships will make a run at a really-valuable-but-defended target like your command station or an Advanced Factory. This sort of uncertainty generally comes across as very human-like, and even if the AI makes the occasional suicide run with a batch of ships, quite often those suicide runs can be effective if you are not careful.

Changing The AI Based On Player Feedback

Another criticism that some readers had about the first AI article was to do with my note that I would fix any exploits that people find and tell me about. Fair enough, I didn’t really explain myself there and so I can understand that criticism. However, as I noted, we haven’t had any exploits against the AI since our alpha versions, so I’m not overly concerned that this will be a common issue.

But, more to the point, what I was actually trying to convey is that with a system of AI like what you see above, putting in extra tweaks and logic is actually fairly straightforward. In our alpha versions, whenever someone found a way to trick the AI I could often come up with a solution within a few days, sometimes a few hours. The reason is simple: the LINQ code is short and expressive. All I have to do is decide what sort of rule or preference to make the AI start considering, what relative priority that preference should have if it is indeed a preference, and then add in a few lines of code. That’s it.

With a decision tree approach, I don't think I'd be able to do that — the code gets huge and spread out through many classes (I have 10 classes for the AI, including ones that are basically just data holders — AIUnit, etc — and others that are rollups — AIUnitRollup, which is used per player per planet). My argument for using this sort of code approach is not only that it works well and can result in some nice emergent behavior, but also that it is comparably easy to maintain and extend. That's something to consider — this style of AI is pretty quick to try out, if you want to experiment with it in your next project. (source: christophermpark, part 1 & part 2)

 

