
A Long-Form Breakdown of Game AI Design, Using AI War: Fleet Command as a Case Study

Published: 2015-04-03 12:59:23

By Christopher M. Park

A lot of people have been curious about how the AI in AI War: Fleet Command works, since we have been able to achieve far more realistic strategic and tactical results than the AI in most RTS games. This long-form piece walks through the design philosophy we used and then delves into specific sub-topics.

Decision Trees: AI in Most RTS Games

First, the way that AI systems in most games work is via giant decision trees (IF A, then C; IF B, then D; and so on). This can produce human-like behavior up to a point, but it requires a lot of development and ultimately winds up with exploitable flaws. My favorite example from pretty much every RTS game since 1998 is how they pathfind around walls: if you leave a small gap in your wall, the AI will almost always try to go through that hole. Human players mass their units at that choke point, "tricking" the AI into using a hole in the wall that is actually a trap, and the AI sends wave after wave through it, dying every time.
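
To make the contrast concrete, here is a minimal sketch in C# of the rules-based style described above; the types and rules are hypothetical, not from any shipping game. The rigid top branch is exactly the kind of thing players learn to exploit: the gap-in-the-wall rule fires the same way every time.

enum AIAction { AttackThroughGap, SiegeWall, GatherResources }

static class DecisionTreeAI
{
    public static AIAction Decide(bool wallHasGap, bool armyReady)
    {
        if (wallHasGap && armyReady)
            return AIAction.AttackThroughGap; // always takes the bait, wave after wave
        if (armyReady)
            return AIAction.SiegeWall;
        return AIAction.GatherResources;
    }
}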

AI War: Fleet Command (from gameris)

Not only does that rules-based decision-tree approach take a huge amount of programming work, it is also highly exploitable. Yet, to emulate how a human player might play, that sort of approach is generally what's needed. I started out using a decision tree, but pretty soon realized that this was kind of boring even at the basic conceptual level: if I wanted to play against humans, I could just play against another human. I wanted an AI that acted in a new way, different from what another human could do, like playing against Skynet or the Buggers from Ender's Game. An AI that felt fresh and intelligent, but that played with scary differences from how a human ever could, since our brains have different strengths and weaknesses compared to a CPU. There are countless examples of this in fiction and film, but not so many in games.

Decentralized Intelligence

The approach that I settled on, and which gave surprisingly quick results early in the game's development, was simulating intelligence in each individual unit, rather than simulating a single overall controlling intelligence. If you have ever read Prey by Michael Crichton (the best-selling American author and film director/producer), it works vaguely like the swarms of nanobots in that book. The primary difference is that my individual units are a lot more intelligent than his nanobots, so an average swarm in my game might be 30 to 2,000 ships, rather than millions or billions of nanobots. It also means my units are at zero risk of ever reaching true sentience. The primary benefit is that I can get much more intelligent results with much less code and fewer agents.

Strategic Tiers

There are really three levels of thinking to the AI in AI War: strategic, sub-commander, and individual-unit. So this isn't even a true swarm intelligence, because it combines swarm intelligence (at the individual-unit level) with more global rules and behaviors. How the AI decides which planets to reinforce, or which planets to send waves against, is all based on the strategic level of logic: the global commander, if you will. How an AI uses its ships in attacking or defending at an individual planet is based on a combination of the sub-commander and individual-ship logic.

Sub-Commanders

Here's the cool thing: the sub-commander logic is completely emergent. Based on how the individual-unit logic is coded, the units do what is best for themselves, but also take into account what the rest of the group is doing. It's kind of the idea of flocking behavior, but applied to tactics and target selection instead of movement. So when you see the AI send its ships into your planet, break them into two or three groups, and hit a variety of targets all at once, that's emergent sub-commander behavior that was never explicitly programmed. There's nothing remotely like that in the game code, yet the AI does things like that all the time. It does some surprisingly intelligent things that way, things I never thought of, and it never does the really moronic stuff that rules-based AIs occasionally do.

And the best part is that it is fairly un-trickable. Not to say that the system is perfect, but if a player finds a way to trick the AI, all they have to do is tell me and I can usually put a counter into the code pretty quickly. There haven't been any ways to trick the AI that I'm aware of since the alpha releases, though. The AI runs on a separate thread on the host computer only, which lets it do some really heavy data crunching (using LINQ, actually; my background is in database programming and ERP / financial tracking / revenue forecasting applications in TSQL, a lot of which carried over to the AI here). Taking lots of variables into account means it can make highly intelligent decisions without causing any lag on your average dual-core host.

Fuzzy Logic

Fuzzy logic / randomization is another key component of our AI. A big part of making an unpredictable AI system is making sure it always makes a good choice, but not necessarily the 100% best one (since, with repetition, the "best" choice becomes increasingly non-ideal through its predictability). If an AI player only ever made perfect decisions, then to counter it you would only need to figure out the best decision yourself (or create a false weakness in your forces, such as the hole-in-the-wall example), and you could then predict what the AI will do with a high degree of accuracy, approaching 100% in certain cases in a lot of other RTS games. With fuzzy logic in place, I'd say you have no better than a 50% chance of ever predicting what the AI in AI War is going to do, and usually it's far less predictable even than that.
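
A minimal sketch of that idea, with hypothetical types and thresholds: rank the options, then usually, but not always, take the top one, so the "best" move never hardens into a pattern.

using System;
using System.Collections.Generic;
using System.Linq;

record Option(string Name, double Score);

static class FuzzyChooser
{
    public static Option Choose(List<Option> options, Random rng)
    {
        var ranked = options.OrderByDescending(o => o.Score).ToList();
        // 70% of the time take the very best option; otherwise one of the next few good ones.
        int index = ranked.Count == 1 || rng.Next(0, 100) < 70
            ? 0
            : rng.Next(1, Math.Min(4, ranked.Count));
        return ranked[index];
    }
}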

Intelligent Mistakes

Bear in mind that the lower difficulty levels make some intentionally stupid decisions that a novice human might make (such as going for the best target regardless of whatever is guarding it). That makes the lower-level AIs still feel like a real opponent, but a much less fearsome one. Figuring out how to tone down the AI for the lower difficulties was actually one of the big challenges for me. Partly it boiled down to withholding the best tactics from the lower-level AIs, but I also had to seed some intentionally less-than-ideal assumptions into the decisions they make at those lower levels.
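
As a sketch of what that seeding might look like (the formula, cutoff, and names are assumptions for illustration, not AI War's actual tuning):

using System;

static class DifficultyTuning
{
    // Score a candidate target; lower difficulties use a deliberately naive formula.
    public static double ScoreTarget(double targetValue, double nearbyDefense, int difficulty)
    {
        if (difficulty <= 3)
            return targetValue; // novice assumption: chase the juiciest target, ignore its guards
        return targetValue / Math.Max(1.0, nearbyDefense); // weigh value against defenses around it
    }

    // The best tactics are simply withheld below a cutoff (the cutoff here is an assumption).
    public static bool MayUseFlanking(int difficulty) => difficulty >= 7;
}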

Skipping the Economic Simulation

Lastly, the AI in AI War follows wholly different economic rules from the human players (though all of the tactical and most of the strategic rules are the same). For instance, the AI starts with 20,000+ ships in most games, whereas each player starts with 4 ships. If it simply overwhelmed you with everything at once, it would crush you immediately. It's the same as if all the bad guys in every level of a Super Mario Bros game attacked you at once: you'd die immediately (which is why they are spread out). And if all the enemies in any given level of an FPS game just ran directly at you and shot with perfect accuracy, you'd have no hope.

Think about your average FPS that simulates your involvement in military operations: all of the enemies are not always aware of what you and your allies are doing, so even if the enemies have overwhelming odds against you, you can still win through limited engagements, striking key targets, and so on. I think the same is true of real wars in many cases, but it's not something you see in the skirmish modes of other RTS games.

This is a big topic that I'll cover more deeply later in this series, as it's likely the most controversial design decision I made with the game. A few people will view it as a form of cheating AI, but I have good reasons for doing it this way (primarily that it allows for many varied simulations, versus one symmetrical simulation). The AI ships never get bonuses above the players, the AI does not have undue information about player activities, and the AI does not get bonuses or penalties based on player actions beyond the visible AI Progress indicator. The strategic and tactical code for the AI uses the exact same rules that constrain the human players, and that's where the intelligence of our system really shines.

Asymmetrical AI

In AI War, to offer procedural campaigns with a certain David-versus-Goliath feel (where the human players are always David to some degree), I made a separate rules system for parts of the AI versus what the humans do. The AI's economy works on internal reinforcement points, wave countdowns, and an overall AI Progress number that increases or decreases based on player actions. This lets the players somewhat set the pace of the game's advancement, which adds a layer of strategy that you would normally only encounter in turn-based games. It's a very asymmetrical system that you couldn't have in a PvP-style skirmish game where the AI stands in for a human, but it works beautifully in a co-op-style game where the AI is always the enemy.

The content above was a hit with a lot of readers, yet criticized by others for being too introductory/broad. Fair enough; as the opening of this series it was pitched at a low level. From here on, things get much more low-level, and if you're not a programmer or an AI enthusiast, you probably won't find much of interest beyond this point.

What Do You Mean, Your AI Code Works Like a Database?

The first question most people ask is how my AI code can possibly be like a database. In a past article (Optimizing 30,000+ Ships In Realtime In C#), I talked about how I use frequent rollups for performance reasons. That also helps with the AI's performance, but it's not really the meat of what I'm talking about.

I'm using LINQ for target selection and other goal selection in the game's AI, and that really cuts down on the amount of code needed to make the first level of a decision (it also cuts down the code needed in general; I'd estimate that the entire decision-making AI code for the game is less than 20,000 lines). Here's one of the LINQ queries for determining target selection:

var targets =
//30% chance to ignore damage enemy can do to them, and just go for highest-value targets
( unit.UnitType.AIAlwaysStrikeStrongestAgainst ||
AILoop.Instance.AIRandom.Next( 0, 100 ) < 30 ?
from obj in rollup.EnemyUnits
where ( unit.GuardingObjectNumber <= 0 || //must not be guarding, or guard target must be within certain range of guard post
Mat.ApproxDistanceBetweenPoints( unit.GuardingObject.LocationCenter,
obj.LocationCenter ) < Configuration.GUARD_RADIUS )
orderby obj.UnitType.ShipType == ShipType.Scout ascending, //scouts are the lowest priority
obj.GetHasAttackPenaltyAgainstThis( unit ) ascending, //ships that we have penalties against are the last to be hit
(double)obj.GetAttackPowerAgainstThis( unit, usesSmartTargeting ) / (double)obj.UnitType.MaxHealth descending, //how much damage I can do against the enemy out of its total health
obj.IsProtectedByForceField ascending, //avoid ships that are under force fields
obj.NearbyMilitaryUnitPower ascending, //strength of nearby enemies
Mat.ApproxDistanceBetweenPoints( obj.LocationCenter, unit.LocationCenter ) ascending, //how close am I to the enemy
obj.UnitType.ShieldRating ascending, //how high are their shields
unit.UnitType.AttackPower ascending, //go for the lowest-attack target (economic, probably)
obj.Health ascending //how much health the enemy has left
select obj
:
from obj in rollup.EnemyUnits
where ( unit.GuardingObjectNumber <= 0 || //must not be guarding, or guard target must be within certain range of guard post
Mat.ApproxDistanceBetweenPoints( unit.GuardingObject.LocationCenter,
obj.LocationCenter ) < Configuration.GUARD_RADIUS )
orderby obj.UnitType.ShipType == ShipType.Scout ascending, //scouts are the lowest priority
( chooseWeaklyDefendedTarget ?
obj.UnitType.TripleBasicFirePower >= obj.NearbyMilitaryUnitPower :
( chooseStronglyDefendedTarget ?
obj.UnitType.TripleBasicFirePower < obj.NearbyMilitaryUnitPower : true ) ) descending, //lightly defended area
(double)obj.GetAttackPowerAgainstThis( unit, usesSmartTargeting ) / (double)obj.UnitType.MaxHealth descending, //how much damage I can do against the enemy out of its total health
obj.IsProtectedByForceField ascending, //avoid ships that are under force fields
obj.NearbyMilitaryUnitPower ascending, //strength of nearby enemies
obj.GetHitPercent( unit ) descending, //how likely I am to hit the enemy
unit.GetAttackPowerAgainstThis( obj, false ) descending, //how much damage the enemy can do to me
obj.Health ascending //how much health the enemy has left
select obj
);

The blog formatting eats a lot of the indentation there, but hopefully you can see what is going on from the comments. In some ways you could call this a decision tree (it does have multiple tiers of sorting), but the overall code is a lot more brief and (when properly formatted with tabs and so on) easier to read. Best of all, since these are implemented as a sort rather than as distinct if/else statements or where clauses, what you arrive at is a preference for the AI to do one thing versus another.

The query takes a lot of things into consideration, and it has a few different modes in which it can run, and that alone provides a lot of intelligence. But that's not enough. The loop that actually evaluates the above logic adds some more intelligence of its own:

bool foundTarget = false;
foreach ( AIUnit enemyUnit in targets )
{
if ( enemyUnit.Health <= 0 || enemyUnit.CloakingLevel == CloakingLevel.Full )
continue; //skip targets that are already dead, or are cloaked
if ( unit.CloakingLevel == CloakingLevel.Full &&
enemyUnit.UnitType.ShipType == ShipType.Scout )
continue; //don’t give away the position of cloaked ships to scouts
if ( unit.CloakingLevel != CloakingLevel.None &&
enemyUnit.UnitType.TachyonBeamRange > 0 )
continue; //cloaked ships will not attack tachyon beam sources
if ( enemyUnit.UnitType.VeryLowPriorityTarget )
continue; //if it is a very low priority target, just skip it
if ( enemyUnit.IsProtectedByCounterMissiles && unit.UnitType.ShotIsMissile )
continue; //if enemy is protected by counter-missiles and we fire missiles, skip it
if ( enemyUnit.IsProtectedByCounterSnipers && unit.UnitType.ShotIsSniper )
continue; //if enemy is protected by counter-sniper flares and we fire sniper shots, skip it
if ( enemyUnit.GetAttackPowerAgainstThis( unit, false ) == 0 )
continue; //if we are unable to hurt the enemy unit, skip attacking it
if ( unit.EffectiveMoveSpeed == 0 && !unit.UnitType.UsesTeleportation &&
enemyUnit.GetHitPercent( unit ) < Configuration.MIN_HIT_PERCENT ) //right-hand side was eaten by the blog's HTML; a minimum hit-chance constant (hypothetical name) is assumed here
continue; //stop ourselves from targeting fixed ships onto long-range targets

gc = GameCommand.Create( GameCommandType.SetAttackTarget, true );
gc.FGObjectNumber1 = unit.FGObjectNumber;
gc.FGObjectNumber2 = enemyUnit.FGObjectNumber;
gc.Integer1 = 0; //Not Attack-Moving
unit.LastAICommand = gc.AICommand;
AILoop.Instance.RequestCommand( gc );
foundTarget = true;
break;
}

//if no target in range, and guarding, go back to base if out of range
if ( !foundTarget && unit.GuardingObjectNumber > 0 )
{
Point guardCenter = unit.GuardingObject.LocationCenter;
if ( Mat.ApproxDistanceBetweenPoints( guardCenter, unit.LocationCenter ) >
Configuration.GUARD_RADIUS )
Move( unit, guardCenter );
}

Nothing too surprising in there, but it adds a few more decision points (most of them hard rules, rather than preferences). Elsewhere, in the pursuit logic that runs once targets are selected, ships have a preference for not all targeting exactly the same thing. This aspect of them watching what each other are doing is all that is really needed, at least in the game design I'm using, to make them do things like branching and splitting and hitting more targets, as well as targeting effectively.
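
A sketch of that "watch what the others are doing" preference, using hypothetical types: the number of friendly ships that already claimed each target becomes one more sort key on top of the existing ranking.

using System.Collections.Generic;
using System.Linq;

static class TargetSpreader
{
    // rankedBestFirst comes out of the target-selection query, best target first.
    // claims counts how many friendly ships have already picked each target.
    public static T PickSpreadTarget<T>(List<T> rankedBestFirst, Dictionary<T, int> claims)
        where T : notnull
    {
        // OrderBy is a stable sort, so among equally-claimed targets the original
        // ranking still decides; the group naturally fans out across several targets.
        return rankedBestFirst
            .OrderBy(t => claims.TryGetValue(t, out int n) ? n : 0)
            .First();
    }
}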

Rather than analyzing the above code point by point, I'll mostly let it speak for itself; the comments are pretty self-explanatory overall, but if anyone has questions about a specific part, let me know.

Danger Levels

One important piece of logic in the code above is danger levels: the lines that evaluate whether to prefer a target based on how well it is defended by nearby ships. All ships have a 30% chance to disregard the danger level and just go for their best targets, and some ship types (like Bombers) do that pretty much all the time, which makes the AI harder to predict.

The main benefit of this approach is that the AI usually tries to pick off lightly defended targets (such as scrubbing out all the outlying harvesters or poorly defended constructors in a system), yet there's still a risk that the ships, or some of them, will make a run at a really valuable but well-defended target like your command station or an Advanced Factory. This sort of uncertainty comes across as very human, and even if the AI makes the occasional suicide run with a batch of ships, those runs can be quite effective if you aren't careful.

Changing the AI Based on Player Feedback

Another criticism some readers had of the first part of this series concerned my note that I would fix any exploits that people find and tell me about. Fair enough; I didn't really explain myself there, so I can understand the criticism. However, as I noted, we haven't had any exploits against the AI since our alpha versions, so I'm not overly concerned that this will be a common issue.

But, more to the point, what I was actually trying to convey is that with an AI system like the one above, putting in extra tweaks and logic is fairly straightforward. In our alpha versions, whenever someone found a way to trick the AI, I could usually come up with a solution within a few days, sometimes a few hours. The reason is simple: the LINQ code is short and expressive. All I have to do is decide what sort of rule or preference the AI should start considering, what relative priority that preference should have (if it is indeed a preference rather than a hard rule), and then add a few lines of code. That's it.
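
As a sketch of what such a counter might look like (the exploit and the property names are hypothetical): the fix is one extra where-filter or one extra orderby key in the existing query, not a new branch grafted onto a tree.

using System.Collections.Generic;
using System.Linq;

record Candidate(double Score, bool LooksLikeBaitTrap, double DistanceIntoChokePoint);

static class ExploitCounter
{
    public static IEnumerable<Candidate> Rank(IEnumerable<Candidate> candidates) =>
        from c in candidates
        where !c.LooksLikeBaitTrap                   // new hard rule: never chase into the trap
        orderby c.DistanceIntoChokePoint ascending,  // new soft preference: stay out of kill zones
                c.Score descending                   // the existing ranking still applies
        select c;
}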

With a decision-tree approach, I don't think I'd be able to do that; the code gets huge and spread out through many classes. (I have 10 classes for the AI, some of which are basically just data holders, like AIUnit, and others of which are rollups, like AIUnitRollup, which is used per player per planet.) My argument for this style of code is not only that it works well and can produce some nice emergent behavior, but that it is comparatively easy to maintain and extend. That's worth considering: this style of AI is pretty quick to try out, if you want to experiment with it in your next project.

This Approach Is Limited?

Well, of course. I don't think any one AI approach can be used to the exclusion of all others. Our AI design is already heavily hybridized, so it's not as if I'm using any single approach in AI War anyway. I used the techniques and approaches that were relevant to the game I was making, and in some respects (such as pathfinding) I blended in highly traditional approaches.

The purpose of what follows is to discuss the limits of the new approaches I've proposed (the emergent aspects and the LINQ aspects), and thereby show how these techniques can be blended with the techniques used in other games. Let's get started:

Where Emergence Quickly Breaks Down

To get emergent behavior, you need a sufficient number of agents. And to keep them from doing truly crazy things, you need a relatively simple set of bounds on their emergence (in AI War, those bounds are choosing what to attack or where to run away to; basically, tactics). Here are some examples of places where I think trying for emergent behavior would be massively ineffective:

1. A competitive puzzle game like Tetris, because there's only one agent (the cursor).

2. Economic simulation in RTS games. It's all too interrelated: any one decision has potentially massive effects on the rest of the economy (build one building and the opportunity cost is high, so some sort of ongoing coordination is very much needed).

3. Any sort of game that requires historical knowledge to predict player actions. For example, card games that involve bluffing would not fare well without a lot of historical knowledge being analyzed.

4. Any turn-based game with a limited number of moves per turn would probably not do well with this, because of the huge opportunity cost per move. Civilization IV could probably use emergent techniques despite being turn-based, but Chess or Go are right out.

AI War (from games.softpedia)

Why Emergence Works for AI War

1. There are a lot of units, which gives opportunities for compound thinking built on the intelligence of each unit.

2. Though the decision space is very bounded as noted above (what to attack, or where to retreat to), that same decision space is also very complex. Just look at all the decision points in the LINQ logic earlier. There are often fifty ways any given attack could succeed to some degree, and a thousand ways it could fail. This sort of subtlety lets the AI fuzzify its choices among the nearly-best options, and the result is tactics that are effectively unpredictable and feel relatively human.

3. There is a very low opportunity cost per decision. The AI can make a hundred decisions in a second for a group of ships, and as soon as those orders are carried out, it can make a whole new set of decisions if the situation has changed unfavorably.

4. In tactics, unpredictability is valuable above almost everything else, and that works well with emergence. As long as the AI doesn't do anything really stupid, having it choose effectively at random among the available pretty-good choices produces surprise tactics and apparent strategies that are really just happenstance. If there were fewer paths to success (as in a racing game, for example), the decisions would be too tightly bounded to leave any real opportunity for desirable emergent behavior.

Learning Over Time

One intentional design choice I made with AI War was to keep it focused on the present. I remember seeing grandmasters at chess tournaments playing against 50 regular ranked players at once, walking around the room and analyzing each position in turn. On most boards they could glance at the setup and make their move almost instantly; on a few they would pause a bit longer. They would still typically win every game.

I figured it would be nice to have an AI with basically those capabilities. It would look at the current board, disregarding anything it had learned in the past (in fact, it would just be a subject-matter expert, not a learning AI in any capacity), and make one of the better choices based on what it saw. This is particularly interesting when players do something unexpected, because the AI then does something equally unexpected in response, reacting instantly to the new, unusual "board state."

I feel this works quite well for AI War, but it's not something that will evolve over time unless the players learn and change (thus forcing the AI to react to different situations), or unless I extend and tweak the AI myself. This is the system I consciously designed, but in a game with fully symmetrical rules for the AI and the humans, the approach would probably be somewhat limited.

The better solution, in those cases, would be to combine emergent decision-making with data that is collected and aggregated/evaluated over time. That collected data simply becomes more decision points in the central LINQ queries for each agent. Of course, this requires more storage space and more processing power, but the benefits could be immense if someone pulls it off with properly bounded evaluations (so the AI does not "learn" something really stupid and counterproductive).
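
A sketch of that combination, under the assumption that some aggregate such as a per-target-type success rate is collected between games: the learned figure becomes just one more orderby key, ahead of the present-state score, rather than a separate learning subsystem.

using System.Collections.Generic;
using System.Linq;

record Enemy(string TypeId, double Score);

static class LearnedPreferences
{
    // successRateByType would be aggregated across past engagements and saved;
    // here it is just one more sort key ahead of the present-state ranking.
    public static IEnumerable<Enemy> Rank(IEnumerable<Enemy> enemies,
                                          IReadOnlyDictionary<string, double> successRateByType) =>
        from e in enemies
        orderby successRateByType.GetValueOrDefault(e.TypeId, 0.5) descending, // learned prior
                e.Score descending                                             // current-board score
        select e;
}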

Isn't That LINQ Query Just a Decision Tree?

One complaint a few AI programmers have had is that the LINQ queries I've shared aren't really that different from a traditional decision tree. To which I reply: you're absolutely right, that aspect is not too different. The main advantage is an increase in readability (assuming you know LINQ, complex statements are much more efficiently expressed).

The other advantage is that soft "preferences" can easily be expressed. Rather than a huge, branching set of IF/ELSE statements, you have the ORDER BY clause in LINQ, which makes the tree evaluation much more flexible. In other words, if you have just this:

ORDER BY comparison 1,
comparison 2,
comparison 3

you can have a case where comparison 1 is true, 2 is false, and 3 is true, just as easily as all three being true, or false/true/false, or any other combination. I can't think of a way to get that flexibility in traditional code without a lot of duplication (the same checks in multiple branches of the tree) or the use of dreaded, hard-to-read gotos.

So while the idea of the LINQ query is basically the same as a decision tree in concept, in practice it is not only more readable but can be vastly more effective, depending on how complex your tree would otherwise be. You don't even have to use LINQ: any sorting algorithm of sufficient complexity can do the same thing. The novelty of the approach is not LINQ itself, but using a tiered sorting algorithm in place of a decision tree. You could also express the above LINQ code in C# as:

someCollection.Sort( delegate( SomeType o1, SomeType o2 )
{
int val = o1.property1.CompareTo( o2.property1 );
if ( val == 0 )
val = o1.property2.CompareTo( o2.property2 );
if ( val == 0 )
val = o1.property3.CompareTo( o2.property3 );
return val;
} );

In fact, statements of that sort are quite common throughout AI War's code. They execute more efficiently than the equivalent LINQ statement, so on the main thread, where performance is key, this is the approach I use. In the alpha versions I had it directly in LINQ, which was excellent for prototyping, but I then converted everything except the AI thread to these sorts instead of LINQ, purely for performance reasons. If you're working in another language, or don't know SQL-style code, you could easily start and end with this kind of sort.

Combining Approaches

In AI War, I use basically the following approaches:

1. Traditional pathfinding.

2. Simple decision-engine-style logic for central strategic decisions.

3. Per-unit decision-engine-style logic for tactics.

4. Fuzzy logic (in the form of minor randomization in decisions) throughout, to reduce predictability and encourage emergence given #3.

5. Soft-preferences-style code (expressed as LINQ and sorts) instead of hard-rules-style IF/ELSE decision-tree logic.

If I were writing an AI for a traditional symmetrical RTS, I'd also want to combine in the following (which are not needed, and thus not present, in AI War):

6. A decision tree for economic management (using the sorting/LINQ approach).

7. A learning-over-time component, to support AI at the level of really great human players in a symmetrical system.

If I were writing an AI for an FPS or a racing game, I'd probably take an approach very similar to what those genres already do; I can't think of a better way to do AI there than the better games in each genre already manage. You could potentially combine emergence with larger groups of enemies in war-style games, which could produce some very interesting results, but for an individual AI player I can't think of much to do differently.

For a puzzle game, or a turn-based affair like Chess, I would again use the current approaches from other games: deep analysis of future moves and their consequences, and selection of the best one (with some heuristics thrown in to speed up processing, perhaps).

Platformers and adventure games could also use mild bits of swarm behavior to interesting effect, but I think a lot of players (myself included) really do like the convention of heavily rules-based enemies in those genres. These games don't really aim for AI that mimics humans; rather, their AI just follows simple rules that human players can learn and remember.

The sort-style decision tree might be easier to code and might bring results slightly more efficiently in all of the above cases, but the end result for the player wouldn't be much different. Still, making AI code easier to program and read is a good thing for everyone (lower cost and development time, fewer bugs from poor readability, and so on).

Now, let's look at the asymmetrical system used for things like resource management in AI War.

This topic is a very challenging one for me to approach, because game designers have such heated feelings on both sides of it. Recently I remembered that I had already written an article on this topic, aimed at my alpha testers, to explain the conceptual shift I was planning at the time. They were skeptical of the idea (as I would have been had someone else suggested it to me), and that article went a long way toward convincing them that the concept might have some merit. The most convincing thing of all, of course, was when they could see how well the AI actually worked in practice.

The game had been in development for less than three months when I originally wrote that piece, yet amazingly, two months after the game's public release, almost all of it still holds true. (What no longer holds true is how some of the AI types played out in the end, and the division of AI ships into offensive and defensive groups. These are minor points; existing AI War players will notice the discrepancies, but they make no difference to the overall ideas presented here.)

About the AI in Most RTS Games

Most real-time strategy (RTS) games use a style of AI that tries to mimic what a human player might do. The AI in these games has all the same responsibilities and duties as the human players, and its success is predicated on properly simulating what a human might do.

These AIs rely heavily on exceedingly complex finite state machines (FSMs); in other words, "if the situation is X, then do Y, then Z." This sort of deterministic algorithm takes a long time to design and program, and is overall pretty predictable. Worst of all, these algorithms tend not to produce very interesting results: facing this sort of AI is like playing against another human, only stupider yet faster. Clever players can trick the AI by finding patterns and weaknesses in the algorithms, and the AI tends to be slow to respond to what players are doing, if it responds at all. And that's after months of work by some poor AI programmer(s).

Nondeterministic AI in Other Games

In most modern AI design, nondeterminism is an important goal: given the same inputs, the AI shouldn't always produce exactly the same output, or it becomes inhumanly predictable. The chief ways of combating this predictability are fuzzy logic (inputs are "fuzzified," so that outputs are less precise) or some variation of a learning AI, which grows and changes over time (its accumulated knowledge makes it act differently over the course of its existence).

The problem with a standard learning AI is that it can easily learn the wrong lessons and start doing something strange. Debugging is very difficult, because it's hard to know what the AI thinks it is doing, and why. Also, until it has some base amount of historical data, it will likely be either wildly unpredictable and unproductive, or falling back on deterministic methods that make it predictable. It can be like teaching an amoeba to tap-dance: instead it starts setting things on fire, and you wonder what made it think that was part of what it should do.

Therefore, even with a learning AI, you're likely to have a pretty predictable early game. Moreover, if the game supports saving, the entire historical knowledge of the AI has to be saved if the AI is to keep track of what it was doing (and keep its learned intelligence). Among other disadvantages, this can make save files absolutely huge.

Semi-Stateless AI in AI War

For AI War, I wanted a nondeterministic AI that did not depend on any historical knowledge. Of course, that means a fair amount of fuzzy logic by definition, and that is indeed in place, but I also wanted some of the characteristics of a learning AI. Essentially, I wanted to model the data-mining practices of modern business applications (something I'm intimately familiar with from my day job). My rule of thumb was this: at any given point in time, the AI should be able to look at a set of meaningful variables, apply some rules and formulas, and come to the best possible conclusion (fuzzified, of course).

A human example: at chess tournaments, grandmasters often play against 40 or so normal-ranked players at once (purely for fun or publicity). The 40 lower-ranked players sit in a ring around the room, each with their own board and their own game against the grandmaster, who walks slowly around the room making a move at each board in sequence. There is no way the grandmaster is remembering the state of all 40 games; rather, he analyzes each board as he comes to it and makes the best possible move at that time. He has to think harder at boards where there is a particularly clever opponent, but by and large the grandmaster wins all 40 games because of the skill gap. Often the grandmaster will pick the cleverest of the other players and let them win on purpose (if there is a prize at stake; if he is showing off, he'll just beat everyone).

The AI in AI War is a lot like that grandmaster: when facing a position it has never seen before, it makes the best possible choices for that moment. Minor bits of data are accumulated over time and factored in (much as the grandmaster might remember details about his cleverer opponents), but overall this is not necessary. The AI also remembers some data about its past actions for two purposes: to help it follow through on past decisions unless there is a compelling reason to break off, and to keep its behavior from falling into patterns that opponents could take advantage of.

Decentralization of Command in Other RTS Games

One key factor in creating an interesting AI opponent is that it must be able to do multiple things at once. In many games, the AI moves only one military arm around the board at a time, which is not what a human player would typically do. Human players run diversions, multi-front attacks, and flanking maneuvers all at once.

In AI War, it was important to me that the tactical and strategic commanders be able to perform as many different activities as make sense given the units at hand. Typically this would be accomplished by creating multiple independent "agents" per AI player and assigning control of each unit to a particular agent. You then run into the problem of the agents having to negotiate and coordinate among themselves, and in AI War's semi-stateless environment it's even worse: how do you intelligently divide units among these arbitrary agents when you have to keep reassigning them? How do you know when to spawn more agents, or to consolidate useless ones? These problems can all be solved, but don't underestimate them, and they are not kind to the CPU. The central problem is resource management: not just delegating control of existing units, but balancing the tactical/strategic elements against resource generation and the production of new units.

Resource Management in AI War

I struggled with the command-decentralization issue for several days, trying to find a solution that met all my criteria, and ultimately came to a realization: what matters is not what the AI is actually doing, but what the human players see it doing. If the AI has to grind through all of this invisibly, burning programmer hours and CPU cycles, only to arrive at result A, wouldn't it be better to shortcut some of that and simply arrive at result A? Specifically, I realized that the economic simulation had a very low payoff as far as the players were concerned. If I took out the human-style economic model for the AI (no resources, no techs, just a generalized, linear ship-creation algorithm), what would the impact be?

First of all, this change means the AI players do not use builders, factories, reactors, or any of the other economy-related ships. That changes the gameplay to a mildly significant degree, because human players cannot use the strategy of targeting economic buildings to weaken the AI. That is definitely a drawback, but in most RTS games the AI tends to have so many resources that attacking its economy is not a viable strategy anyway.

Having decided that this gameplay shift was acceptable, I set about designing a ship-creation algorithm. That is harder than it sounds: each AI has to know what ships it is allowed to build at any given point in the game, how much material it is allowed to spend per "resource event," and other factors that keep things fair. Note that this is NOT the typical "cheating AI" that can do whatever it wants. The AI players follow a different set of rules here, but they are strict rules that essentially simulate what the AI would otherwise be doing anyway.
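
A sketch of what such a linearized ship-creation budget could look like; the names, constants, and scaling are assumptions for illustration, not AI War's actual rules.

static class AIBudget
{
    // Points granted to an AI player at each "resource event", replacing a real economy.
    public static int PointsForResourceEvent(double hoursElapsed, int difficulty)
    {
        const double basePoints = 100.0;
        double timeRamp = 1.0 + 0.25 * hoursElapsed; // waves slowly grow as the game goes on
        double difficultyScale = difficulty / 5.0;   // strictly rule-bound, not "build anything"
        return (int)(basePoints * timeRamp * difficultyScale);
    }
}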

Decentralization of Command in AI War

Now that the economy is handled, we're back to the issue of decentralization. Each AI is allowed to build a certain number of ships of specific types during each resource event, but how does it intelligently choose what to build? And how does it decide what to do with the ships that already exist?

First, the AI divides its units into two categories: offensive and defensive. Most games don't do this, but in AI War it works very effectively. Each AI decides it wants a certain number of ships of certain types defending each planet or capital ship it controls, and its first priority in production is to meet those defensive goals.

Any units not needed for defense are considered offensive units; these get handed to the strategic-routing algorithm described next. Each unit-producing ship controlled by an AI player builds offensive units based on a complex algorithm of fuzzy logic and weighting. The result is a fairly mixed army that trends toward the strongest units the AI can produce (and the favored units of a given AI type), but never fully stops building the weaker units (which always remain useful to some degree in AI War).
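
A sketch of that weighted fuzzy production choice, with hypothetical weights: a roulette-wheel pick in which stronger and type-favored designs win more often, but no unlocked design's weight ever drops to zero.

using System;
using System.Collections.Generic;
using System.Linq;

record ShipDesign(string Name, double Weight);

static class ProductionPicker
{
    // Roulette-wheel selection: higher-weight designs win more often, but any
    // design with nonzero weight is still built now and then.
    public static ShipDesign Pick(List<ShipDesign> unlocked, Random rng)
    {
        double roll = rng.NextDouble() * unlocked.Sum(d => d.Weight);
        foreach (var d in unlocked)
        {
            roll -= d.Weight;
            if (roll <= 0.0)
                return d;
        }
        return unlocked[^1]; // floating-point edge case: fall back to the last design
    }
}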

Strategic Routing in AI War

Strategic planning in AI War consists of deciding which units to send to which planets. Defensive units tend not to leave their planet unless the thing they are protecting also leaves, so this mostly means routing the offensive units around.

This takes the form of two general activities: 1) attacking, i.e., sending ships to the planet where they should be able to do the most damage; and 2) retreating from overwhelming defeats when possible (no other RTS game AI I have encountered has ever bothered with this).

The AI does not use cheap factors such as player score, or any other non-gameplay variables, in these decisions. Everything it decides is based on what units are where, and on what it currently calculates the most likely outcome of a conflict to be.
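
A sketch of those two activities, with hypothetical aggregates standing in for the game's real calculations: attack where the expected damage is best relative to the resistance there, and retreat when the local fight is calculated to be an overwhelming loss.

using System;
using System.Collections.Generic;
using System.Linq;

record PlanetView(string Name, double TargetValue, double DefensePower);

static class StrategicRouting
{
    // Attack where expected damage is best relative to the resistance there.
    public static PlanetView ChooseAttackTarget(List<PlanetView> reachable, double fleetPower) =>
        reachable
            .OrderByDescending(p => p.TargetValue * fleetPower / Math.Max(1.0, p.DefensePower))
            .First();

    // Retreat when the local fight looks like an overwhelming loss.
    public static bool ShouldRetreat(double friendlyPower, double enemyPower) =>
        enemyPower > friendlyPower * 3.0; // the "overwhelming" multiplier is an assumption
}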

Tactical Decisions in AI War

Conceptually, tactical decision-making is actually one of the simpler parts of the AI. Each unit tries to get to its optimal targeting range and targets the ships it is best able to hurt. It stays and fights until it dies, all its enemies are dead, or the strategic-routing algorithm tells it to retreat.

AI Types in AI War

When I think about the human-emulating AI found in most RTS games, it strikes me as extremely limiting. In most game genres, you face a variety of enemies with powers that differ from those of the human players: some opponents are vastly stronger, some are weaker, and each has its own strengths and flaws. In RTS games, everyone is pretty much the same, or at least balanced to the point of near-fairness. I think that limits the longevity of these games.

I felt it would be more interesting to offer a much wider variety of AI types, each with its own strengths, weaknesses, and unique behaviors, and that's what I did in AI War. Some AI types have warships the players can never obtain; some control multiple planets from the start; some never capture planets at all, instead roaming around neutral planets like lawless raiders. The possibilities are endless, especially when players face multiple AI types in a single game.

Endless possibility usually means deliberate unfairness, much like real war. You might simulate an assault on a star system held by evil aliens, or conversely be the one invaded; I think of it as akin to Ender's Game. There are cowardly AIs, and AIs that strike opportunistically right after you've fought a big battle. Some use tactics so strange you can hardly defend against them; others habitually use misdirection to hide their true intentions. Some start with technology far beyond yours; some have enormous numbers but poor technology…

The overall goal of AI War is to give players an interesting, endlessly varied experience. If all the AIs were basically identical except for their behaviors (as in most games), player choice would be very limited. The wide range of AI types in AI War is not "cheating AI" in the traditional sense: each type has its own specific rules it must follow; those rules simply aren't the same rules the human players follow.

An analogy is offense and defense in football: the two sides have different sets of goals, one to score and one to keep the opponent from scoring, and a team's overall success depends on how well it executes each. Football teams also divide their players into offense and defense, which resembles the unit split in AI War. A closer analogy for AI War would be a team that always attacks but keeps three players back as defenders, where the defenders can only score off turnovers and passes; with more players committed forward, though, scoring presumably becomes much easier.

The AI types in AI War deliberately break the balance of the rules, not by cheating, but by offering something genuinely fresh and varied.

The Future of the AI in AI War

Right now, the AI doesn't yet make some of the more interesting tactical decisions: flanking, concentrating fire on priority targets, and the like. In a future rework of the AI we will add these and other "behaviors." In an engagement, the AI will evaluate the conditions for each behavior and carry it out when the conditions are met.

In fact, the "behaviors" I'm describing here apply to every AI type. At the moment the AI is heavily parameter-driven, which would make it predictable if not for the randomness added through fuzzy logic. That is about the limit of many other RTS AIs (though reaching that level took me days rather than months or years), but with AI War's structure I can keep adding new behaviors to every aspect of the AI, making it ever stronger, more varied, and more human-like: scouting, sniping, mine-laying, retreating, attacking from favorable positions, economic targeting, using transports, abandoning planets, and so on. All of these behaviors can be added to the existing structure with fairly simple programming; the complexity lies in the behaviors themselves, not in how they interact with the larger elements of the AI's state machine. It is a very object-oriented approach to AI design, which fits the way I think.

Computer Requirements

This is a very modular AI system that can be extended indefinitely, yet it is also extremely efficient: it can already control tens of thousands of ships without seriously taxing a dual-core machine. That said, we designed with dual-core machines in mind, so the game may well not run as smoothly on a single-core machine.

On the other hand, the AI logic runs only on the game's host machine (unusual for an RTS, and perhaps another "first"), which means single-core machines can still join someone else's multiplayer game. Given that this is the largest RTS around in terms of the number of active units allowed in play, that's fairly remarkable.

Conclusion

As noted at the start of this series, the body of this article was written during the game's alpha, months earlier, when the mechanics, networking, graphics, and the rest had only been tested in pure PvP. I had spent about a week building an AI the traditional way, found it fell short of my hopes, and decided to return to first principles and achieve my goals by simpler means.

After a few more days of thought, I spent three days coding a working prototype of the basic AI. I wrote that article so the alpha testers would know what I intended, then spent another three months refining, extending, and completing the AI (along with the rest of the game), all before the beta in April. We spent a month testing, fixing bugs, and balancing, then released the game on the Arcen website, followed by a May release on Stardock's Impulse platform.

This asymmetrical AI is the aspect of the game's design most criticized by programmers and game designers who have never actually played it. While the game's audience keeps growing, reviews are still few (publicity has been limited, though thanks to its popularity on Impulse, player reviews keep coming in), and people who have genuinely played the game do not seem to consider this a big problem. Which brings me back to my core belief about game AI: what the AI is actually doing doesn't matter; what matters is what players see it doing, and how its behavior affects them.

This style of AI produces a lot of unique gameplay, challenges that feel fair, and a variety you don't usually see from other AIs. It is a long way from a sentient machine, but it's more than enough to make an interesting game. And isn't that the goal: making a game people find fun, interesting, and challenging? We really should experiment more with different AI designs, because I think AI design matters to whether a game in any genre is fun.

(This article was translated and compiled by Gamerboom; please credit the source when reposting, or contact WeChat: zhengjintiao)

Designing Emergent AI, Part 1: An Introduction

by Christopher M. Park

A lot of people have been curious about how the AI in AI War: Fleet Command works, since we have been able to achieve so much more realistic strategic/tactical results compared to the AI in most RTS games. Part 1 of this series will give an overview of the design philosophy we used, and later parts will delve more deeply into specific sub-topics.

Decision Trees: AI In Most RTS Games

First, the way that AI systems in most games work is via giant decision trees (IF A, then C, IF B, then D, etc, etc). This can make for human-like behavior up to a point, but it requires a lot of development and ultimately winds up with exploitable flaws. My favorite example from pretty much every RTS game since 1998 is how they pathfind around walls; if you leave a small gap in your wall, the AI will almost always try to go through that hole, which lets human
players mass their units at these choke points since they are “tricking” the AI into using a hole in the wall that is actually a trap. The AI thus sends wave after wave through the hole, dying every time.

Not only does that rules-based decision tree approach take forever to program, but it’s also so exploitable in many ways beyond just the above. Yet, to emulate how a human player might play, that sort of approach is generally needed. I started out using a decision tree, but pretty soon realized that this was kind of boring even at the basic conceptual level — if I wanted to play against humans, I could just play against another human. I wanted an AI that acted in a new way, different from what another human could do, like playing against Skynet or the Buggers from Ender’s Game, or something like that. An AI that felt fresh and intelligent, but that played with scary differences from how a human ever could, since our brains have different strengths and weaknesses compared to a CPU. There are countless examples of this in fiction and film, but not so many in games.

Decentralized Intelligence

The approach that I settled on, and which gave surprisingly quick results early in the development of the game, was simulating intelligence in each of the individual units, rather than simulating a single overall controlling intelligence. If you have ever read Prey, by Michael Crichton, it works vaguely like the swarms of nanobots in that book. The primary difference is that my individual units are a lot more intelligent than each of his nanobots, and thus an average swarm in my game might be 30 to 2,000 ships, rather than millions or billions of nanobots. But this also means that my units are at zero risk of ever reaching true sentience — people from the future won’t be coming back to kill me to prevent the coming AI apocalypse. The primary benefit is that I can get much more intelligent results with much less code and fewer agents.

Strategic Tiers

There are really three levels of thinking to the AI in AI War: strategic, sub-commander, and individual-unit. So this isn’t even a true swarm intelligence, because it combines swarm intelligence (at the individual-unit level) with more global rules and behaviors. How the AI decides which planets to reinforce, or which planets to send waves against, is all based on the strategic level of logic — the global commander, if you will. The method by which an AI determines how to use its ships in attacking or defending at an individual planet is based on a combination of the sub-commander and individual-ship logic.

Sub-Commanders

Here’s the cool thing: the sub-commander logic is completely emergent. Based on how the individual-unit logic is coded, the units do what is best for themselves, but also take into account what the rest of the group is doing. It’s kind of the idea of flocking behavior, but applied to tactics and target selection instead of movement. So when you see the AI send its ships into your planet, break them into two or three groups, and hit a variety of targets on your planet all at once, that’s actually emergent sub-commander behavior that was never explicitly programmed. There’s nothing remotely like that in the game code, but the AI is always doing stuff like that. The AI does some surprisingly intelligent things that way, things I never thought of, and it never does the really moronic stuff that rules-based AIs occasionally do.

And the best part is that it is fairly un-trickable. Not to say that the system is perfect, but if a player finds a way to trick the AI, all they have to do is tell me and I can usually put a counter into the code pretty quickly. There haven’t been any ways to trick the AI since the alpha releases that I’m aware of, though. The AI runs on a separate thread on the host computer only, so that lets it do some really heavy data crunching (using LINQ, actually — my background is in database programming and ERP / financial tracking / revenue forecasting applications in TSQL, a lot of which came across to the AI here). Taking lots of variables into effect means that it can make highly intelligent decisions without causing any lag whatsoever on your average dual-core host.

Fuzzy Logic

Fuzzy logic / randomization is also another key component to our AI. A big part of making an unpredictable AI system is making it so that it always make a good choice, but not necessarily the 100% best one (since, with repetition, the “best” choice becomes increasingly non-ideal through its predictability). If an AI player only ever made perfect decisions, to counter them you only need to figure out yourself what the best decision is (or create a false weakness in your forces, such as with the hole in the wall example), and then you can predict what the AI will do with a high degree of accuracy — approaching 100% in certain cases in a lot of other RTS games. With fuzzy logic in place, I’d say that you have no better than a 50% chance of ever predicting what the AI in AI War is going to do… and usually it’s way less predictable than even that.

Intelligent Mistakes

Bear in mind that the lower difficulty levels make some intentionally-stupid decisions that a novice human might make (such as going for the best target despite whatever is guarding it). That makes the lower-level AIs still feel like a real opponent, but a much less fearsome one. Figuring out ways in which to tone down the AI for the lower difficulties was one of the big challenges for me, actually. Partly it boiled down to just withholding the best tactics from the lower-level AIs, but also there were some intentionally-less-than-ideal assumptions that I also had to seed into its decision making at those lower levels.

Skipping The Economic Simulation

Lastly, the AI in AI War follows wholly different economic rules than the human players (but all of the tactical and most strategic rules are the same). For instance, the AI starts with 20,000+ ships in most games, whereas you start with 4 ships per player. If it just overwhelmed you with everything, it would crush you immediately. Same as if all the bad guys in every level of a Mario Bros game attacked you at once, you’d die immediately (there would be nowhere to jump to). Or if all the enemies in any given level of an FPS game just ran directly at you and shot with perfect accuracy, you’d have no hope.

Think about your average FPS that simulates your involvement in military operations — all of the enemies are not always aware of what you and your allies are doing, so even if the enemies have overwhelming odds against you, you can still win by doing limited engagements and striking key targets, etc. I think the same is true in real wars in many cases, but that’s not something that you see in the skirmish modes of other RTS games.

This is a big topic that I’ll touch on more deeply in a future article in this series, as it’s likely to be the most controversial design decision I’ve made with the game. A few people will likely view this as a form of cheating AI, but I have good reasons for having done it this way (primarily that it allows for so many varied simulations, versus one symmetrical simulation). The AI ships never get bonuses above the players, the AI does not have undue information about player activities, and the AI does not get bonuses or penalties based on player actions beyond the visible AI Progress indicator (more on that below). The strategic and tactical code for the AI in the game uses the exact same rules as constrain the human players, and that’s where the intelligence of our system really shines.

Asymetrical AI

In AI War, to offer procedural campaigns that give a certain David vs Goliath feel (where the human players are always David to some degree), I made a separate rules system for parts of the AI versus what the humans do. The AI’s economy works based on internal reinforcement points, wave countdowns, and an overall AI Progress number that gets increased or decreased based on player actions. This lets the players somewhat set the pace of game advancement, which adds another layer of strategy that you would normally only encounter in turn-based games. It’s a very asymmetrical sort of system that you totally couldn’ t have in a pvp-style of skirmish game with AI acting as human standins, but it works beautifully in a co-op-style game where the AI is always the enemy.

Next Time

This provides a pretty good overview of the decisions we made and how it all came together. In the next article, which is now available, I delve into some actual code. If there is anything that readers particularly want me to address in a future article, don’t hesitate to ask! I’m not shy about talking about the inner workings of the AI system here, since this is something I’d really like to see other developers do in their games. I play lots of games other than my own, just like anyone else, and I’d like to see stronger AI across the board.

Designing Emergent AI

by Christopher M. Park

The first part of this article series has been a hit with a lot of people, yet criticized by others for being too introductory/broad. Fair enough, starting with this article I’m going to get a lot lower-level. If you’re not a programmer or an AI enthusiast, you probably won’t find much interest beyond this point.

What Do You Mean, It Works Like A Database?

First question that most people are asking is in what way my AI code can possibly be like a database. In a past article (Optimizing 30,000+ Ships In Realtime In C#) I already talked about how I am using frequent rollups for performance reasons. That also helps with the performance of the AI, but that’s really not the meat of what I’m talking about.

I’m using LINQ for things like target selection and other goal selection with the AI in the game, and that really cuts down on the amount of code to make the first level of a decision (it also cuts down on the amount of code needed in general, I’d estimate that the entirety of the decision-making AI code for the game is less than 20,000 lines of code). Here’s one of the LINQ queries for determining target selection:

var targets =
//30% chance to ignore damage enemy can do to them, and just go for highest-value targets
( unit.UnitType.AIAlwaysStrikeStrongestAgainst ||
AILoop.Instance.AIRandom.Next( 0, 100 ) < 30 ?
from obj in rollup.EnemyUnits
where ( unit.GuardingObjectNumber <= 0 || //must not be guarding, or guard target must be within certain range of guard post
Mat.ApproxDistanceBetweenPoints( unit.GuardingObject.LocationCenter,
obj.LocationCenter ) < Configuration.GUARD_RADIUS )
orderby obj.UnitType.ShipType == ShipType.Scout ascending, //scouts are the lowest priority
obj.GetHasAttackPenaltyAgainstThis( unit ) ascending, //ships that we have penalties against are the last to be hit
(double)obj.GetAttackPowerAgainstThis( unit, usesSmartTargeting ) / (double)obj.UnitType.MaxHealth descending, //how much damage I can do against the enemy
out of its total health
obj.IsProtectedByForceField ascending, //avoid ships that are under force fields
obj.NearbyMilitaryUnitPower ascending, //strength of nearby enemies
Mat.ApproxDistanceBetweenPoints( obj.LocationCenter, unit.LocationCenter ) ascending, //how close am I to the enemy
obj.UnitType.ShieldRating ascending, //how high are their shields
unit.UnitType.AttackPower ascending, //go for the lowest-attack target (economic, probably)
obj.Health ascending //how much health the enemy has left
select obj
:
from obj in rollup.EnemyUnits
where ( unit.GuardingObjectNumber <= 0 || //must not be guarding, or guard target must be within certain range of guard post
Mat.ApproxDistanceBetweenPoints( unit.GuardingObject.LocationCenter,
obj.LocationCenter ) < Configuration.GUARD_RADIUS )
orderby obj.UnitType.ShipType == ShipType.Scout ascending, //scouts are the lowest priority
( chooseWeaklyDefendedTarget ?
obj.UnitType.TripleBasicFirePower >= obj.NearbyMilitaryUnitPower :
( chooseStronglyDefendedTarget ?
obj.UnitType.TripleBasicFirePower < obj.NearbyMilitaryUnitPower : true ) ) descending, //lightly defended area
(double)obj.GetAttackPowerAgainstThis( unit, usesSmartTargeting ) / (double)obj.UnitType.MaxHealth descending, //how much damage I can do against the enemy
out of its total health
obj.IsProtectedByForceField ascending, //avoid ships that are under force fields
obj.NearbyMilitaryUnitPower ascending, //strength of nearby enemies
obj.GetHitPercent( unit ) descending, //how likely I am to hit the enemy
unit.GetAttackPowerAgainstThis( obj, false ) descending, //how much damage the enemy can do to me
obj.Health ascending //how much health the enemy has left
select obj
);

Blogger eats a lot of the formatting there, but hopefully you can see what is going on based on the comments in green. In some ways you could call this a decision-tree (it does have multiple tiers of sorting), but the overall code is a lot more brief and (when properly formatted with tabs, etc) easier to read. And the best thing is that, since these are implemented as a sort, rather than distinct if/else or where clause statements, what you arrive at is a preference for the AI to do one thing versus another thing.

There are a lot of things that it takes into consideration up there, and there are a few different modes in which it can run, and that provides a lot of intelligence on its own. But that’s not enough. The loop that actually evaluates the above logic also adds some more intelligence of its own:

bool foundTarget = false;
foreach ( AIUnit enemyUnit in targets )
{
if ( enemyUnit.Health <= 0 || enemyUnit.CloakingLevel == CloakingLevel.Full )
continue; //skip targets that are already dead, or are cloaked
if ( unit.CloakingLevel == CloakingLevel.Full &&
enemyUnit.UnitType.ShipType == ShipType.Scout )
continue; //don’t give away the position of cloaked ships to scouts
if ( unit.CloakingLevel != CloakingLevel.None &&
enemyUnit.UnitType.TachyonBeamRange > 0 )
continue; //cloaked ships will not attack tachyon beam sources
if ( enemyUnit.UnitType.VeryLowPriorityTarget )
continue; //if it is a very low priority target, just skip it
if ( enemyUnit.IsProtectedByCounterMissiles && unit.UnitType.ShotIsMissile )
continue; //if enemy is protected by counter-missiles and we fire missiles, skip it
if ( enemyUnit.IsProtectedByCounterSnipers && unit.UnitType.ShotIsSniper )
continue; //if enemy is protected by counter-sniper flares and we fire sniper shots, skip it
if ( enemyUnit.GetAttackPowerAgainstThis( unit, false ) == 0 )
continue; //if we are unable to hurt the enemy unit, skip attacking it
if ( unit.EffectiveMoveSpeed == 0 && !unit.UnitType.UsesTeleportation &&
enemyUnit.GetHitPercent( unit ) <>
continue; //stop ourselves from targeting fixed ships onto long-range targets

gc = GameCommand.Create( GameCommandType.SetAttackTarget, true );
gc.FGObjectNumber1 = unit.FGObjectNumber;
gc.FGObjectNumber2 = enemyUnit.FGObjectNumber;
gc.Integer1 = 0; //Not Attack-Moving
unit.LastAICommand = gc.AICommand;
AILoop.Instance.RequestCommand( gc );
foundTarget = true;
break;
}

//if no target in range, and guarding, go back to base if out of range
if ( !foundTarget && unit.GuardingObjectNumber > 0 )
{
Point guardCenter = unit.GuardingObject.LocationCenter;
if ( Mat.ApproxDistanceBetweenPoints( guardCenter, unit.LocationCenter ) >
Configuration.GUARD_RADIUS )
Move( unit, guardCenter );
}

Nothing real surprising in there, but basically it has a few more decision points (most of these being hard rules, rather than preferences). Elsewhere, in the pursuit logic once targets are selected, ships have a preference for not all targeting exactly the same thing — this aspect of them watching what each other are doing is all that is really needed, at least in the game design I am using, to make them do things like branching and splitting and hitting more targets, as well as targeting effectively.

Rather than analyzing the above code point by point, I’ll mostly just let it speak for itself. The comments are pretty self-explanatory overall, but if anyone does have questions about a specific part, let me know.

Danger Levels

One important piece of logic from the above code that I will touch on is that of danger levels, or basically the lines of code above where it is evaluating whether or not to prefer a target based on how well it is defended by nearby ships. All ships have a 30% chance to disregard the danger level and just go for their best targets, and some ship types (like Bombers) do that pretty much all the time, and this makes the AI harder to predict.

The main benefit of an approach like that is that it causes the AI to most of the time try to pick off targets that are lightly defended (such as scrubbing out all the outlying harvesters or poorly defended constructors in a system), and yet there’s still a risk that the ships, or part of the ships will make a run at a really-valuable-but-defended target like your command station or an Advanced Factory. This sort of uncertainty generally comes across as very human-like, and even if the AI makes the occasional suicide run with a batch of ships, quite often those suicide runs can be effective if you are not
careful.

Changing The AI Based On Player Feedback

Another criticism that some readers had about the first AI article was to do with my note that I would fix any exploits that people find and tell me about. Fair enough, I didn’t really explain myself there and so I can understand that criticism. However, as I noted, we haven’t had any exploits against the AI since our alpha versions, so I’m not overly concerned that this will be a common issue.

But, more to the point, what I was actually trying to convey is that with a system of AI like what you see above, putting in extra tweaks and logic is actually fairly straightforward. In our alpha versions, whenever someone found a way to trick the AI I could often come up with a solution within a few days, sometimes a few hours. The reason is simple: the LINQ code is short and expressive. All I have to do is decide what sort of rule or preference to make the AI start considering, what relative priority that preference should have if it is indeed a preference, and then add in a few lines of code. That’s it.

With a decision tree approach, I don’t think I’d be able t to do that — the code gets huge and spread out through many classes (I have 10 classes for the AI, including ones that are basically just data holders — AIUnit, etc — and others that are rollups — AIUnitRollup, which is used per player per planet). My argument for using this sort of code approach is not only that it works well and can result in some nice emergent behavior, but also that it is comparably easy to maintain and extend. That’s something to consider — this style of AI is pretty quick to try out, if you want to experiment with it in your next project.

The first part of this article series was basically an introduction to our AI design, and the second part of this article series took a look at some of the LINQ code used in the game, as well as discussing danger levels and clarifying a few points from the first article. The second article was basically just for programmers and AI enthusiasts, as is this third one. The topic, this time, is the manner in which this approach is limited.

This Approach Is Limited?

Well, of course. I don’t think that there is any one AI approach that can be used to the exclusion of all others. And our AI designs are already heavily hybridized (as the first article in the series mentions), so it’s not like I’m using any single approach with AI War, anyway. I used the techniques and approaches that were relevant for the game I was making, and in some respects (such as pathfinding), I blended in highly traditional approaches.

The purpose of this article is to discuss the limits of the new approaches I’ve proposed (the emergent aspects, and the LINQ aspects), and thereby expose the ways in which these techniques can be blended with other techniques in games beyond just AI War. Let’s get started:

Where Emergence Quickly Breaks Down

To get emergent behavior, you need to have a sufficient number of agents. And to keep them from doing truly crazy stuff, you need to have a relatively simple set of bounds for their emergence (in the case of AI War, those bounds are choosing what to attack or where to run away to. Basically: tactics). Here are some examples of places that I think it would be massively ineffective to try for emergent behavior:

1. A competitive puzzle game like tetris. There’s only one agent (the cursor).

2. Economic simulations in RTS games. That’s all too interrelated, since any one decision has potentially massive effects on the rest of the economy (i.e., build one building and the opportunity cost is pretty high, so some sort of ongoing coordination would be very much needed).

3. Any sort of game that requires historical knowledge in order to predict player actions. For example, card games that involve bluffing would not fare well without a lot of historical knowledge being analyzed.

4. Any game that is turn-based, and which has a limited number of moves per turn, would probably not do well with this because of the huge opportunity cost. Civilization IV could probably use emergent techniques despite the fact that it is turn-based, but Chess or Go are right out.

Why Emergence Works For AI War

1. There are a lot of units, so that gives opportunities for compound thinking based on intelligence in each unit.

2. Though the decision space is very bounded as noted above (what to attack, or where to retreat to), that same decision space is also very complex. Just look at all of the decision points in the LINQ logic in the prior article. There are often fifty ways any given attack could succeed to some degree, and there are a thousand ways they could fail. This sort of subtlety lets the AI fuzzify its choices between the nearly-best choices, and the result is effective unpredictableness to the tactics that feels relatively human.

3. There is a very low opportunity cost per decision made. The AI can make a hundred decisions in a second for a group of ships, and then as soon as those orders are carried out it can make a whole new set of decisions if the situation has changed unfavorably.

4. Unpredictableness is valuable above most all else in the tactics, which works well with emergence. As long as the AI doesn’t do something really stupid, just having it choose effectively randomly between the available pretty-good choices results in surprise tactics and apparent strategies that are really just happenstance. If there were fewer paths to success (as in a racing game, for example), the boundedness of decisions to make would be too tight to make any real opportunities for desirable emergent behavior.

Learning Over Time

One intentional design choice that I made with AI War was to keep it focused on the present. I remember seeing Grandmaster Chess players at Chess tournaments, where they would play against 50 regular ranked players at once, walking around the room and analyzing each position in turn. On most boards they could glance at the setup and make their move almost instantly. On a few they would pause a bit more. They would still typically win every game, just
choosing whoever put up the best fight to cede a victory to (that person typically got a prize).

I figured it would be nice to have an AI with basically those capabilities. It would look at the board at present, disregarding anything it learned in the past (in fact, it would be just a subject matter expert, not a learning AI in any capacity), and then it would make one of the better choices based on what it saw. This is particularly interesting when players do something unexpected, because then the AI does something equally unexpected in response (responding
instantly to the new, unusual “board state”).

I feel like this works quite well for AI War, but it’s not something that will evolve over time without the players learning and changing (thus forcing the AI to react to the different situations), or without my making extensions and tweaks to the AI. This was the sort of system I had consciously designed, but in a game with fully symmetrical rules for the AI and the humans, this approach would probably be somewhat limited.

The better solution, in those cases, would be to combine emergent decision making with data that is collected and aggregated/evaluated over time. That collected data becomes just more decision points in the central LINQ queries for each agent. Of course, this requires a lot more storage space, and more processing power, but the benefits would probably be immense if someone is able to pull it off with properly bounded evaluations (so that the AI does not
“learn” something really stupid and counterproductive).

Isn’t That LINQ Query Just A Decision Tree?
One complaint that a few AI programmers have had is that the LINQ queries that I’ve shared in the second article aren’t really that different from a traditional decision tree. And to that, I reply: you’re absolutely right, that aspect is not really too different. The main advantage is an increase in readability (assuming you know LINQ, complex statements are much more efficiently expressed).

The other advantage, however, is that soft “preferences” can easily be expressed. Rather than having a huge and branching set of IF/ELSE statements, you have the ORDER BY clause in LINQ which makes the tree evaluation a lot more flexible. In other words, if you have just this:

ORDER BY comparison 1,
comparison 2,
comparison 3

You would be able to have a case where you have 1 as true, 2 as false, and 3 as true just as easily as all three being true, or having false, true, false, or any other combination. I can’t think of a way to get that sort of flexibility in traditional code without a lot of duplication (the same checks in multiple branches of the tree), or the use of the dreaded and hard-to-read gotos.

So, while the idea of the LINQ query is basically the same as the decision tree in concept, in practice it is not only more readable, but it can be vastly more effective depending on how complex your tree would otherwise be. You don’t even have to use LINQ — any sorting algorithm of sufficient complexity could do the same sort of thing. The novelty of the approach is not LINQ itself, but rather using a tiered sorting algorithm in place of a decision tree. You could also express the above LINQ code in C# as:

someCollection.Sort( delegate( SomeType o1, SomeType o2 )
{
int val = o1.property1.CompareTo( o2.property1 );
if ( val == 0 )
val = o1.property2.CompareTo( o2.property2 );
if ( val == 0 )
val = o1.property3.CompareTo( o2.property3 );
return val;
} );

In fact, throughout the code of AI War, statements of that very sort are quite common. These are more efficient to execute than the equivalent LINQ statement, so on the main thread where performance is key, this is the sort of approach I use. In alpha versions I had it directly in LINQ, which was excellent for prototyping the approaches, but then I converted everything except the AI thread into these sorts instead of LINQ, purely for performance reasons. If you’re working in another language or don’t know SQL-style code, you could easily start and end with this kind of sorting.

Combining Approaches

In AI War, I use basically the following approaches:

1. Traditional pathfinding

2. Simple decision-engine-style logic for central strategic decisions.

3. Per-unit decision-engine-style logic for tactics.

4. Fuzzy logic (in the form of minor randomization for decisions) throughout to reduce predictability and encourage emergence given #3.

5. Soft-preferences-style code (in the form of LINQ and sorts) instead of hard-rules-style IF/ELSE decision tree logic.

If I were to write an AI for a traditional symmetrical RTS AI, I’d also want to combine in the following (which are not needed, and thus not present, in AI War):

6. A decision tree for economic management (using the sorting/LINQ approach).

7. A learning-over-time component to support AI at a level for really great human players in a symmetrical system.

If I were to write an AI for a FPS or racing game, I’d probably take an approach very similar to what those genres already do. I can’t think of a better way to do AI in those genres than they already are in the better games in each genre. You could potentially combine in emergence with larger groups of enemies in war-style games, and that could have some very interesting results, but for an individual AI player I can’t think of much different to do.

For a puzzle game, or a turn-based affair like Chess, there again I would use basically the current approaches from other games, which are deep analysis of future moves and their consequences and results, and thus selection of the best one (with some heuristics thrown in to speed processing, perhaps.).

Performers or adventure games could also use mild bits of swarm behavior to interesting effect, but I think a lot of players (myself included) really do like the convention of heavily rules-based enemies in those genres. These games don’t really tend to feature AI that is meant to mimic humans, rather AI that is meant to just follow simple rules that the human players can learn and remember.

The sort-style decision tree might be easier to code and might bring results slightly more efficiently in all of the above cases, but the end result for the player wouldn’t be much different. Still, making AI code easier to program and read is a good thing for everyone (lower costs/time to develop, fewer bugs based on poor readability, etc).

Next Time

Part 4 of this series talks about the asymmetry in this AI design.

Designing Emergent AI, Part 4: Asymetrical Goals

The first part of this article series was basically an introduction to our AI design, and the second part of this article series took a look at some of the LINQ code used in the game, as well as discussing danger levels and clarifying a few points from the first article. The third part of this series covered the limitations of this approach. The topic, this time, is the asymmetrical system used for things like resource management in AI War.

This topic is a very challenging one for me to approach, because there are such heated feelings on both sides of it amongst game designers. Just recently I remembered that I had actually already written an article on this topic, targeted at my alpha testers in order to explain to them the conceptual shift that I was planning at that time. They were skeptical of the idea at the time (as I would have been if someone else had suggested it to me), and this article went a
long way toward convincing them that the concept might have some merit — the most convincing thing of all, of course, was when they could actually see how well the AI worked in practice.

The game had been in development for less than three months at the time I originally wrote this, but amazingly almost all of it still holds true now, 2 months after the game’s public release (the things that no longer hold true are how some of the AI types played out in the end, and the division of AI ships into offensive/defensive groups — these are minor points that existing AI War players will notice the discrepancy in, but otherwise this makes no difference to the overall ideas being presented here).

ABOUT THE ARTIFICIAL INTELLIGENCE IN AI WAR

About the AI in most RTS Games

Most Real-Time Strategy (RTS) games use a style of AI that tries to mimic what a human player might do. The AI in these games has all the same responsibilities and duties as the human players, and the success of the AI is predicated on it properly simulating what a human might do.

These sorts of AI rely heavily on exceedingly complex Finite State Machines (FSMs) — in other words, “if the situation is X, then do Y, then Z.” This sort of deterministic algorithm takes a long time to design and program, and is overall pretty predictable. Worst of all, these algorithms tend not to have very interesting results — invariably, facing this sort of AI is sort of like playing against another human, only stupider-yet-faster. Clever players are able to trick the AI by finding patterns and weaknesses in the algorithms, and the AI tends to be slow to respond to what the players are doing — if it responds at all. This is after months of work on the part of some poor AI programmer(s).

Nondeterministic AI in other Games

In most modern AI design, nondeterminism is an important goal — given the same inputs, the AI shouldn’t always give EXACTLY the same output. Otherwise the AI is inhumanly predictable. The chief ways of combating this predictability are fuzzy logic (where inputs are “fuzzified,” so that the outputs are also thus less precise) or some variation of a learning AI, which grows and changes over time (its historical knowledge makes it act differently over the course of its existence).

The problem with a standard learning AI is that it can easily learn the wrong lessons and start doing something strange. Debugging is very difficult, because it’s hard to know what the AI thinks it is doing and why. Also, until it has some base amount of historical data, it seems that it will either be a) wildly unpredictable and unproductive, or b) using deterministic methods that make it predictable. It can be like teaching an amoeba to tap-dance — but instead it starts setting things on fire, and you wonder what made it think that was part of what it should do.

Therefore, even with a learning AI, you’re likely to have a pretty predictable early game. Plus, if the game supports saving, the entire historical knowledge of the AI would have to be saved if the AI is to keep track of what it was doing (and keep its learned intelligence). This can make save files absolutely huge, among other various disadvantages.

Semi-Stateless AI in AI War

For AI War, I wanted a nondeterministic AI that was not dependent on having any historical knowledge. Of course, this means a pretty fair amount of fuzzy logic by definition — that is indeed in place — but I also wanted some of the characteristics of a learning AI. Essentially, I wanted to model data mining practices in modern business applications (something I’m imminently familiar with from my day job). My rule of thumb was this: At any given point in time,
the AI should be able to look at a set of meaningful variables, apply some rules and formulas, and come to the best possible conclusion (fuzzified, of course).

A human example: At chess tournaments, often the grandmasters will play against the normal-human players 40 or so at a time (purely for fun/publicity). The 40 lower-ranked players sit at tables in a ring around the room, each with their own chess game against the grandmaster. The grandmaster walks slowly around the room, making a move at each game in sequence. There is no way that the grandmaster is remembering the state of all 40 games; rather, he analyzes each
game as he comes to it, and makes the best possible move at the time. He has to think harder at games where there is a particularly clever player, but by and large the grandmaster will win every game out of the 40 because of the skill gap. The general outcome is that the grandmaster picks the cleverest of the other players, and lets them win on purpose (if there is a prize — if the grandmaster is showing off, they’ll just beat everyone).

The AI in AI War is a lot like that grandmaster — it is capable of coming to the game “blind” about once per second, and making the best possible choices at that time. Minor bits of data are accumulated over time and factored in (as the grandmaster might remember details about the cleverer opponents facing him), but overall this is not necessary. The AI also remembers some data about its past actions to a) help it follow through on past decisions unless there is a compelling reason not to, and b) to help prevent it from falling into patterns that the opponents can take advantage of.

Decentralization of Command In Other RTS Games

One important factor in creating an interesting AI opponent is that it must be able to do multiple things at once. In all too many games, the AI will just have one military arm moving around the board at a time, which is not what a human player would typically do. Where are the diversions, the multi-front attacks, the flanking?

In AI War, it was important to me that the tactical and strategic commanders be able to perform as many different activities as makes sense given the units at hand. Typically this would be accomplished by creating multiple independent “agents” per AI player, and assigning control each unit to a particular agent. You then run into issues of the agents having to negotiate and coordinate among themselves, but in AI War’s semi-stateless environment it is even
worse — how do you intelligently divide up the units among these arbitrary agents if you have to keep reassigning them? And how to know when to spawn more agents, versus consolidate useless agents? These problems can all be solved, but they are not something to be attempted lightly, and nor will they be kind to the CPU. The central problem is that of resource management. Not only the delegation of control of existing units, but trying to balance tactical/strategic
elements with the generation of resources and the production of new units.Which brings me to my next point…

Resource Management In AI War

I struggled with the command decentralization issue for several days, trying to come up with a solution that would meet all of my many criteria, and ultimately came to a realization: what matters is not what the AI is actually doing, but what the visible effect is to the human players. If the AI struggles with all these things invisibly, burning up programmer hours and CPU cycles, and then comes to result A, wouldn’t it be better to just shortcut some of that and have it arrive at result A? Specifically, I realized that the economic simulations had a very low payoff as far as the players were concerned. If I took out the human-style economic model for the AI — no resources, no techs, just a generalized, linearized ship-creation algorithm — what would the impact be?

First of all, this change makes it so that the AI players do not use builders, factories, reactors, or any of the other economy-related ships. This changes the gameplay to a mildly significant degree, because the human players cannot use the strategy of targeting economic buildings to weaken the AI. This is definitely a drawback, but in most RTS games the AI tends to have so many resources that this is generally not a viable strategy, anyway.

Having decided that this gameplay shift was acceptable, I set about designing a ship-creation algorithm. This is harder than it sounds, as each AI has to know a) what ships it is allowed to build at any given point in the game, b) how much material it is allowed to spend per “resource event,” and other factors that keep things fair. Note that this is NOT your typical “cheating AI” that can just do whatever it wants: the AI plays by different rules here, but they are strict rules that simulate essentially what the AI would otherwise be doing, anyway.
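
As a rough sketch of what one such rule-bound “resource event” might look like (the types, costs, and schedule here are my own illustration, not the shipped code):

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch of a rule-bound "resource event": the AI never mines or
// researches, but it is held to a strict budget and an unlock schedule that
// approximate what a real economy would have produced.
class ShipCreation
{
    public record ShipType(string Name, int Cost, int UnlockMinute);

    public static List<ShipType> SpendBudget(
        List<ShipType> allTypes, int gameMinute, int budget)
    {
        var rng = new Random();
        // Only ship types already "unlocked" at this point in the game.
        var allowed = allTypes
            .Where(t => t.UnlockMinute <= gameMinute && t.Cost > 0)
            .ToList();

        // Spend the fixed per-event budget; no carryover, no cheating extras.
        var built = new List<ShipType>();
        while (true)
        {
            var affordable = allowed.Where(t => t.Cost <= budget).ToList();
            if (affordable.Count == 0) break;
            var pick = affordable[rng.Next(affordable.Count)];
            built.Add(pick);
            budget -= pick.Cost;
        }
        return built;
    }
}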

Decentralization of Command In AI War

Now that the economy is handled, we are back to the issue of decentralization. Each AI is now allowed to build a certain number of ships of specific types during each resource event, but how to intelligently choose which to build? Also, how to decide what to do with the ships that already exist?

First off, the AI needs to divide its units into two categories: offensive and defensive. Most games don’t do this, but in AI War it works very effectively. Each AI decides that it wants a certain number of ships of certain types defending each planet or capital ship that it controls. Its first priority is production to meet those defensive goals.

Any units that aren’t needed for defense are considered offensive units. These get handed over to the strategic-routing algorithm, which is described next. Each unit-producing ship controlled by the AI player will build offensive units based on a complex algorithm of fuzzy logic and weighting: the result is a fairly mixed army that trends toward the strongest units the AI can produce (and the favored units of a given AI type), but which never fully stops building the weaker units (which are always still useful to some degree in AI War).
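
A minimal sketch of that kind of weighting (hypothetical numbers and names; the real algorithm is more involved): each buildable type gets a weight that rises with its strength and with the AI type’s preferences, but never falls to zero, and a weighted random draw picks what to build.

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch of fuzzy-weighted production: stronger (and favored)
// ship types get higher weights, but every type keeps a minimum weight, so
// the army trends strong while never fully abandoning the weaker ships.
class OffensiveProduction
{
    private static readonly Random Rng = new Random();

    public static string PickShipToBuild(
        Dictionary<string, double> strengthByType, HashSet<string> favored)
    {
        var weights = strengthByType.ToDictionary(
            s => s.Key,
            s => Math.Max(1.0, s.Value)                    // floor: never zero
                 * (favored.Contains(s.Key) ? 2.0 : 1.0)); // AI-type bias

        // Weighted random draw across all buildable types.
        double roll = Rng.NextDouble() * weights.Values.Sum();
        foreach (var w in weights)
        {
            roll -= w.Value;
            if (roll <= 0.0) return w.Key;
        }
        return weights.Keys.Last();
    }
}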

Strategic Routing in AI War

The strategy planning in AI War consists of deciding what units to send to which planets. Defensive units tend not to leave their planet unless the thing they are protecting also leaves, so strategic routing pretty much just means deciding where to send the offensive units.

This takes the form of two general activities: 1) attacking, which means sending ships to the planet where they should be able to do the most damage; and 2) retreating from overwhelming defeats when possible (no other RTS game AI that I have encountered has ever bothered with this).

The AI does not use cheap factors such as player score, who the host is, or any other non-gameplay variables in these sorts of decisions. All of its decisions are based on what units are where, and on what it currently calculates the most likely outcome of a conflict to be.
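
A simplified sketch of that style of decision, using raw firepower totals as a stand-in for the game’s real outcome calculation (my own simplification, with hypothetical names):

using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch of strategic routing: pure gameplay math, with raw
// firepower totals standing in for the game's real outcome calculation.
class StrategicRouting
{
    public record Planet(string Name, double OurFirepower, double EnemyFirepower);

    // Attack wherever the projected net outcome is best for us.
    public static string ChooseTarget(IEnumerable<Planet> planets)
        => planets
            .OrderByDescending(p => p.OurFirepower - p.EnemyFirepower)
            .First().Name;

    // Retreat from overwhelming defeats; the 3:1 threshold is arbitrary.
    public static bool ShouldRetreat(Planet battle)
        => battle.EnemyFirepower > battle.OurFirepower * 3.0;
}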

Tactical Decisions in AI War

When it comes to tactical decision-making, this is actually one of the conceptually simpler parts of the AI. Each unit tries to get to its optimal targeting range, and targets the ships it is best able to hurt. It will stay and fight until such time as it dies, all its enemies are dead, or the Strategic Routing algorithm tells it to run away.
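
Here is a hypothetical per-unit sketch of that rule in LINQ (the Enemy fields are my own illustration, not the shipped query):

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical per-unit tactical targeting: prefer the enemy this particular
// ship can hurt the most, breaking ties by closeness to our optimal range.
class Tactics
{
    public record Enemy(string Name, double Distance, double DamageWeCanDo);

    public static Enemy PickTarget(IEnumerable<Enemy> enemies, double optimalRange)
        => enemies
            .OrderByDescending(e => e.DamageWeCanDo)
            .ThenBy(e => Math.Abs(e.Distance - optimalRange))
            .FirstOrDefault();
}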

AI Types in AI War

When I think about the human-emulating AI found in most RTS games, it strikes me that this is extremely limiting. In most genres of game, you face off against a variety of different enemies that have powers that differ from those of the human players. Some opponents are vastly stronger, others are weaker, and all are distinctive and interesting in their own way. In RTS games, everyone is pretty much the same — or at least things are balanced to the point of approximate fairness — and I think this harms the longevity of these games.

What seems more interesting to me, and what I’ve decided to do in AI War, is to provide a wide variety of types of AI, each with their own strengths, weaknesses, and genuinely unique behaviors. Some have ships that the player can never get, others start with many planets instead of one, some never capture planets but instead roam around the neutral planets as lawless raiders. The possibilities are endless, especially when playing against multiple AI types in a single game.

The result is situations that are often intentionally unfair, just as they would be in real war. You can simulate being the invading force in a galaxy controlled by evil aliens, or simulate the opposite: they’re invading you. I think of this as being rather like the situation in Ender’s Game. You can have AIs that are timid and hide, or others that come after you with unbridled aggression (Terminators, or Borg, or whatever you want to call them). Some will use strange, alien combat tactics to throw you off guard, while others will routinely use confusing diversions to mask their true objective. Still others will outclass you in technology from the very start, or field vastly more manpower with inferior technology; the list goes on and on.

The whole goal of a game like this is to provide an interesting, varied play experience for the players. If all the AIs are essentially the same except for their general demeanor (as in most other games), that doesn’t provide a lot of options. AI War’s style of varied AIs is not “AI cheating” in the traditional sense: each type of AI has specific rules that it must follow, which are simply different from the rules the human players are held to.
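
One way to picture those per-type rules (purely illustrative; the real game defines far more knobs than this) is as a bundle of rule overrides that the rest of the AI consults:

// Purely illustrative: an AI type as a bundle of rule overrides. The point is
// that each type follows strict, inspectable rules rather than cheating ad hoc.
class AiType
{
    public string Name;
    public int StartingPlanets = 1;          // some types start with many
    public bool CapturesPlanets = true;      // raiders roam instead
    public double TechMultiplier = 1.0;      // >1.0 outclasses you early
    public double ShipCountMultiplier = 1.0; // manpower-heavy, tech-poor types

    public static readonly AiType Raider = new AiType
    {
        Name = "Lawless Raider",
        CapturesPlanets = false,
        ShipCountMultiplier = 1.5
    };
}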

Think of this as the offense and defense in a football game: each team has wildly different goals — one to score, one to not let the other team score — but the overall success of one team versus another is determined based on how well they do in their respective goals. In football, the teams also routinely switch off who is on offense and who is on defense, and that’s where the analogy to AI War starts to break down a bit. In AI War, it would be more like if one football team always played offense, but the defenders were allowed to have an extra three guys on the field. Maybe the defenders could only score by forcing a turnover and running the ball all the way back, but that would be significantly easier due to the extra players.

The AI Types in AI War are like that — they unbalance the rules on purpose, not to cheat, but to provide something genuinely new and varied.

The Future of AI in AI War

At present, the AI does not yet make very interesting tactical decisions: flanking, concentrating firepower on key targets, and so on. These and other “behaviorlets” will be added in future iterations of the AI, which will evaluate each behaviorlet’s conditions during tactical encounters and employ it when it deems that makes sense.

In fact, this concept of “behaviorlets” is the main thing that is left to do with the AI across the board. Right now the AI is very by-the-numbers, which makes it predictable except where the fuzzy logic adds some randomization. This is comparable to the end state of many other RTS game AIs (yet it took me days instead of months or years), but with the architecture of AI War, I can continue to add new “behaviorlets” to all aspects of the AI, to make it progressively more formidable, varied, and human-seeming. Example behaviorlets include scouting, sniping, mine laying and avoidance, using staging areas for attacks, economy targeting, use of transports, planet denial, and more. All of these things can be programmed into the existing architecture with relative ease; the complexity is bounded to the behaviorlet itself, rather than having to worry about how the behaviorlet will interact with larger elements of (for example) your typical AI finite state machine. This makes for a very object-oriented approach to the AI, which fits my thinking style.
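
That bounded complexity is easiest to see as an interface sketch (hypothetical, but in the spirit of the approach described): each behaviorlet only has to say when it applies and what it does, and the core loop consults each one independently.

using System.Collections.Generic;

// Hypothetical behaviorlet plumbing: each behaviorlet is self-contained, so
// its complexity never leaks into a larger finite state machine.
interface IBehaviorlet
{
    bool Applies(TacticalContext context);  // e.g. "enemy flank is exposed"
    void Execute(TacticalContext context);  // e.g. issue flanking orders
}

class TacticalContext { /* snapshot of the local battle */ }

class TacticalAI
{
    private readonly List<IBehaviorlet> behaviorlets = new List<IBehaviorlet>();

    public void Register(IBehaviorlet b) => behaviorlets.Add(b);

    public void RunTick(TacticalContext context)
    {
        // Consult each behaviorlet independently; adding a new one (scouting,
        // mine laying, transports...) never requires touching the others.
        foreach (var b in behaviorlets)
            if (b.Applies(context))
                b.Execute(context);
    }
}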

Computer Requirements

This is a very modular AI system that can be extended over time, and yet it is also extremely efficient: it is already able to control tens of thousands of ships without significant slowdown on a dual-core computer. That said, it is designed with dual-core machines in mind, and it is highly unlikely to run well on a single-processor computer.

On the other hand, the AI logic is only run on the game host (also unusual for an RTS game — possibly another first), which means that single-threaded computers can join someone else’s multiplayer game just fine. Given that this is the largest-scale RTS game in existence at the moment in terms of active units allowed during gameplay (by at least a factor of two), this is also pretty notable.
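
A simplified sketch of that arrangement (real lockstep networking is considerably more involved, and these names are my own): only the host spins up the AI worker thread, and the commands it produces join the same deterministic command stream that every machine executes.

using System.Collections.Concurrent;
using System.Threading;

// Simplified sketch: only the host runs the AI thread, and the commands it
// produces join the same deterministic command stream that every player's
// simulation executes, so clients never pay the AI's CPU cost.
class HostOnlyAI
{
    private readonly ConcurrentQueue<string> commandStream;

    public HostOnlyAI(bool isHost, ConcurrentQueue<string> commandStream)
    {
        this.commandStream = commandStream;
        if (!isHost) return; // clients just consume the command stream

        new Thread(RunAiLoop) { IsBackground = true }.Start();
    }

    private void RunAiLoop()
    {
        while (true)
        {
            // ...the once-per-second "grandmaster" evaluation would go here...
            commandStream.Enqueue("ai-command");
            Thread.Sleep(1000);
        }
    }
}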

In Conclusion (We Return To The Present)

As noted at the start of this article, the main body of this article was written when the game was still in alpha stages, with the prior few months having been spent testing out the mechanics, the networking, the graphics pipeline, etc, in pure player-vs-player (pvp) modes. I had spent a week or so with the AI in a more traditional model, and felt like that was not working the way I wanted on any level, and so I decided to go back to the underlying principles of what I was trying to accomplish to see if there was a simpler approach.

After a few more days of thought, and then maybe three days of coding, I had a basic prototype of the AI working. I wrote this article then to let my alpha testers know what I was planning, and then spent the next three months refining, expanding, and completing the AI (and the rest of the game) before a beta in April. The game spent a month in beta, ironing out bugs and balance issues, and then went on sale on the Arcen site and then Stardock’s Impulse in May.

This asymmetrical AI is the aspect of the game design that is most commonly criticized by other programmers or game designers who have not actually played the game. From my growing list of players, and the few reviews that the game has received so far (our exposure is still low, but growing due to the game’s huge popularity on Impulse and elsewhere), this hasn’t seemed to be an issue for anyone who actually plays the game. That comes back to my core realization
with the AI: it doesn’t matter what the AI is really doing, it matters what the AI seems to be doing, and what it makes the player do.

This style of AI creates a lot of unique gameplay situations, provides a good, fair-seeming challenge, and in general provides a degree of variety you don’t encounter with other AIs. This isn’t going to power sentient robots anytime soon, but in terms of creating an interesting game it seems to have paid off. That’s really the goal, isn’t it? To create a game that people find fun, challenging, and interesting? Getting too hung up on the semantics of one approach versus another, and which is more “authentic,” is counterproductive in my book. We really ought to have more games experimenting with various approaches to AI design, because I think it could make for some really fun games in all sorts of genres.

Next Time?

I have a number of other articles planned about the game, most notably a series on game design ideas that flopped for one reason or another and thus didn’t make it into the final game — an exploration of the failures-along-the-way is always fun and illuminating. But as far as the topic of AI goes, I’ve covered all of the points I set out to cover. Unless readers bring up more topics that they’d like me to address, this will probably be the final article in my AI series.

