

Game AI: The State of the Industry

by Steven Woodcock

In the first of this two-part report on the state of game AI, Steven Woodcock shares what issues came up while moderating the AI roundtables at the 2000 Game Developers Conference. Next week, in Part Two, John E. Laird will discuss how academics and developers can better share information with each other, and Ensemble Studios’ Dave Pottinger will peer into the future of game AI.

One thing was made clear in the aftermath of this year’s Game Developers Conference: game AI has finally “made it” in the minds of developers, producers, and management. It is recognized as an important part of the game design process. No longer is it relegated to the backwater of the schedule, something to be done by a part-time intern over the summer. For many people, crafting a game’s AI has become every bit as important as the features the game’s graphics engine will sport. In other words, game AI is now a “checklist” item, and the response to both our AI roundtables at this year’s GDC and various polls on my game AI web site (www.gameai.com) bear witness to the fact that developers are aggressively seeking new and better ways to make their AI stand out from that of other games.

The technical level and quality of the GDC AI roundtable discussions continues to increase. More important, however, was that our “AI for Beginners” session was packed. There seem to be a lot of developers, producers, and artists that want to understand the basics of AI, whether it’s so they can go forth and write the next great game AI or just so they can understand what their programmers are telling them.

As I’ve done in years past, I’ll use this article to touch on some of the insights I gleaned from the roundtable discussions that Neil Kirby, Eric Dybsand, and I conducted. These forums are invaluable for discovering the problems developers face, what techniques they’re using, and where they think the industry is going. I’ll also discuss some of the poll results taken over the past year on my web site, some of which also provided interesting grist for the roundtable discussions.

Resources: The Big Non-issue

Last year’s article (Game AI: The State of the Industry) mentioned that AI developers were (finally) becoming more involved in the game design process and using their involvement to help craft better AI opponents. I also noted that more projects were devoting more programmers to game AI, and AI programmers were getting a bigger chunk of the overall CPU resources as well.

This year’s roundtables revealed that, for the most part, the resource battle is over (Figure 1). Nearly 80 percent of the developers attending the roundtables reported at least one person working full-time on AI on either a current or previous project; roughly one-third of those reported that two or more developers were working full-time on AI. This rapid increase in programming resources has been evident over the last few years in the overall increase in AI quality throughout the industry, and is probably close to the maximum one could reasonably expect a team to devote to AI given the realities of the industry and the marketplace.

Even more interesting was the amount of CPU resources that developers say they’re getting. On average, developers say they now get a whopping 25 percent of the CPU’s cycles, which is a 250 percent increase over the average amount of CPU resources developers said they were getting at the 1999 roundtables. When you factor in the increase in CPU power year after year, this trend becomes even more remarkable.

Many developers also reported that general attitudes toward game AI have shifted. In prior years the mantra was “as long as it doesn’t affect the frame rate,” but this year people reported that there is a growing recognition by entire development teams that AI is as important as other aspects of the game. Believe it or not, a few programmers actually reported the incredible luxury of being able to say to their team, “New graphics features are fine, so long as they don’t slow down the AI.” If that isn’t a sign of how seriously game AI is now being taken, I don’t know what is.

Developers didn’t feel pressured by resources, either. Some developers (mostly those working on turn-based games) continued to gleefully remind everyone that they devoted practically 100 percent of the computer’s resources for computer-opponent AI, but they also admitted that this generally allowed deeper play, but not always better play. (It’s interesting to note that all of the turn-based developers at the roundtables were doing strategy games of some kind — more than other genres, that market has remained the most resistant to the lure of real-time play.) Nearly every developer was making heavy use of threads for their AIs in one fashion or another, in part to better utilize the CPU but also often just to help isolate AI processes from the rest of the game engine.

AI developers continued to credit 3D graphics chips for their increased use of CPU resources. Graphics programmers simply don’t need as much of the CPU as they once did.

Trends Since Last Year

A number of AI technologies noted at the 1998 and 1999 GDCs have continued to grow and accelerate over the last year. The number of games released in recent months that emphasize interesting AI — and which actually deliver on their promise — is a testament to the rising level of expertise in the industry. Here’s a look at some trends.

Artificial life. Perhaps the most obvious trend since the 1999 GDC was the wave of games using artificial life (A-Life) techniques of one kind or another. From Maxis’s The Sims to CogniToy’s Mind Rover, developers are finding that A-Life techniques provide them with flexible ways to create realistic, lifelike behavior in their game characters.

The power of A-Life techniques stems from their roots in the study of real-world living organisms. A-Life seeks to emulate that behavior through a variety of methods that can use hard-coded rules, genetic algorithms, flocking algorithms, and so on. Rather than try to code up a huge variety of extremely complex behaviors (similar to cooking a big meal), developers can break down the problem into smaller pieces (for example, open refrigerator, grab a dinner, put it in the microwave). These behaviors are then linked in some kind of decision-making hierarchy that the game characters use (in conjunction with motivating emotions, if any) to determine what actions they need to take to satisfy their needs. The interactions that occur between the low-level, explicitly coded behaviors and the motivations/needs of the characters cause higher-level, more “intelligent” behaviors to emerge without any explicit, complex programming.

The simplicity of this approach combined with the amazing resultant behaviors has proved irresistible to a number of developers over the last year, and a number of games have made use of the technique. The Sims is probably the best known of these. That game makes use of a technique that Maxis co-founder and Sims designer Will Wright has dubbed “smart terrain.” In the game, all characters have various motivations and needs, and the terrain offers various ways to satisfy those needs. Each piece of terrain broadcasts to nearby characters what it has to offer. For example, when a hungry character walks near a refrigerator, the refrigerator’s “I have food” broadcast allows the character to decide to get some food from it. The food itself broadcasts that it needs cooking, and the microwave broadcasts that it can cook food. Thus the character is guided from action to action realistically, driven only by simple, object-level programming.
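
To make the idea concrete, here is a minimal sketch of such an advertisement scheme in C++. All of the names (Advertiser, Need, Character) and the scoring formula are invented for illustration — this is not Maxis’s actual code — but it shows the essential loop: objects broadcast which need they satisfy, and a character picks the strongest broadcast, weighted by its own need and the distance to the object.

```cpp
// Minimal "smart terrain"-style advertisement sketch. Names and formulas are
// illustrative assumptions, not Maxis's implementation.
#include <cmath>
#include <iostream>
#include <string>
#include <vector>

enum class Need { Hunger, Fun, Sleep };

// A world object broadcasts which need it can satisfy and how well.
struct Advertiser {
    std::string name;
    Need satisfies;
    float strength;   // how well it satisfies the need
    float x, y;       // position; distance attenuates the broadcast
};

struct Character {
    std::string name;
    float x, y;
    float hunger;     // 0 = satisfied, 1 = desperate

    // Pick the advertiser whose attenuated broadcast best matches our need.
    const Advertiser* choose(const std::vector<Advertiser>& objects) const {
        const Advertiser* best = nullptr;
        float bestScore = 0.0f;
        for (const auto& obj : objects) {
            if (obj.satisfies != Need::Hunger) continue;  // only hunger modeled here
            float dist = std::hypot(obj.x - x, obj.y - y);
            float score = hunger * obj.strength / (1.0f + dist);
            if (score > bestScore) { bestScore = score; best = &obj; }
        }
        return best;
    }
};

int main() {
    std::vector<Advertiser> objects = {
        {"fridge",    Need::Hunger, 0.9f, 10.0f, 0.0f},
        {"candy bar", Need::Hunger, 0.3f,  2.0f, 0.0f},
    };
    Character sim{"Bob", 0.0f, 0.0f, 0.8f};
    if (const Advertiser* target = sim.choose(objects))
        std::cout << sim.name << " heads for the " << target->name << "\n";
}
```

Adding new behaviors then becomes mostly a matter of adding new advertisers to the world, rather than touching the character’s decision logic at all.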

Developers were definitely taken with the possibilities of this approach, and there was much discussion about it at the roundtables. The idea has obvious possibilities for other game genres as well. Imagine a first-person shooter, for example, in which a given room that has seen lots of frags “broadcasts” this fact to the NPCs assisting your player’s character. The NPC could then get nervous and anxious, and have a “bad feeling” about the room — all of which would serve to heighten the playing experience and make it more realistic and entertaining. Several developers took copious notes on this technique, so we’ll probably be seeing even more A-Life in games in the future.

Pathfinding. In a remarkable departure from the roundtables of previous years, developers really didn’t have much to ask or say about pathfinding at this year’s GDC roundtables. The A* algorithm (for more details, see Bryan Stout’s excellent article Smart Moves: Intelligent Path-Finding) continues to reign as the preferred pathfinding algorithm, although everybody has their own variations and adaptations for their particular project. Every developer present who had needed pathfinding in their game had used some form of the A* algorithm. Most had also used influence maps, attractor-repulsor systems, and flocking to one degree or another. Generally speaking, the game community has this problem well in hand and is now focusing on particular implementations for specific games (such as pathfinding in 3D space, doing real-time path-granularity adjustments, efficiently recognizing when paths were blocked, and so on).
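
For readers who haven’t implemented it themselves, the sketch below is a bare-bones grid version of A*. The 4-connected grid, uniform step cost, and Manhattan heuristic are illustrative simplifications, not anyone’s shipping pathfinder.

```cpp
// Bare-bones grid A*: 4-connected grid, uniform step cost, Manhattan heuristic.
// An illustrative sketch only; real games layer hierarchy, smoothing, etc. on top.
#include <algorithm>
#include <climits>
#include <cstdlib>
#include <functional>
#include <iostream>
#include <queue>
#include <string>
#include <utility>
#include <vector>

struct Cell { int x, y; };

// Returns the path from start to goal (inclusive), or an empty vector if blocked.
std::vector<Cell> astar(const std::vector<std::string>& grid, Cell start, Cell goal) {
    const int w = (int)grid[0].size(), h = (int)grid.size();
    auto idx   = [w](int x, int y) { return y * w + x; };
    auto hcost = [&](int x, int y) { return std::abs(x - goal.x) + std::abs(y - goal.y); };

    std::vector<int> gScore(w * h, INT_MAX), cameFrom(w * h, -1);
    using QItem = std::pair<int, int>;  // (f = g + h, cell index)
    std::priority_queue<QItem, std::vector<QItem>, std::greater<QItem>> open;

    gScore[idx(start.x, start.y)] = 0;
    open.push({hcost(start.x, start.y), idx(start.x, start.y)});

    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!open.empty()) {
        int cur = open.top().second;
        open.pop();
        int cx = cur % w, cy = cur / w;
        if (cx == goal.x && cy == goal.y) break;          // goal reached
        for (int d = 0; d < 4; ++d) {
            int nx = cx + dx[d], ny = cy + dy[d];
            if (nx < 0 || ny < 0 || nx >= w || ny >= h || grid[ny][nx] == '#') continue;
            int tentative = gScore[cur] + 1;              // uniform step cost
            if (tentative < gScore[idx(nx, ny)]) {
                gScore[idx(nx, ny)]   = tentative;
                cameFrom[idx(nx, ny)] = cur;
                open.push({tentative + hcost(nx, ny), idx(nx, ny)});
            }
        }
    }

    std::vector<Cell> path;
    int cur = idx(goal.x, goal.y);
    if (gScore[cur] == INT_MAX) return path;              // unreachable
    for (; cur != -1; cur = cameFrom[cur]) path.push_back({cur % w, cur / w});
    std::reverse(path.begin(), path.end());
    return path;
}

int main() {
    std::vector<std::string> grid = {
        "....",
        ".##.",
        ".#..",
        "....",
    };
    for (Cell c : astar(grid, {0, 0}, {3, 3}))
        std::cout << "(" << c.x << "," << c.y << ") ";
    std::cout << "\n";
}
```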

As developers become more comfortable with their pathfinding tools, we are beginning to see complex pathfinding coupled with terrain analysis. Terrain analysis is a much tougher problem than simple pathfinding in that the AI must study the terrain and look for various natural features — choke-points, ambush locations, and the like. Good terrain analysis can provide a game’s AI with multiple “resolutions” of information about the game map that are well tuned for solving complex pathfinding problems. Terrain analysis also helps make the AI’s knowledge of the map more location-based, which (as we’ve seen in the example of The Sims) can simplify many of the AI’s tasks. Unfortunately, terrain analysis is made somewhat harder when randomly generated maps are used, a feature which is popular in today’s games. Randomly generating terrain precludes developers from “pre-analyzing” maps by hand and loading the results directly into the game’s AI.

Several games released in the past year have made attempts at terrain analysis. For example, Ensemble Studios completely revamped the pathfinding approach used in Age of Empires for its successor, Age of Kings, which uses some fairly sophisticated terrain-analysis capabilities. Influence maps were used to identify important locations such as gold mines and ideal locations for building placement relative to them. They’re also used to identify staging areas and routes for attacks: the AI plots out all the influences of known enemy buildings so that it can find a route into an enemy’s domain that avoids any possible early warning.
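
An influence map itself is a very simple structure. The toy sketch below spreads influence from known enemy buildings across a grid with a distance falloff; the strengths, falloff formula, and threshold are assumptions for illustration, but the output is exactly the kind of “stay out of the marked cells” data a route planner can consult.

```cpp
// Toy influence map: known enemy buildings project influence over a grid with a
// simple distance falloff. Strengths, falloff, and threshold are illustrative.
#include <cmath>
#include <iostream>
#include <vector>

int main() {
    const int W = 8, H = 8;
    std::vector<float> influence(W * H, 0.0f);

    struct Building { int x, y; float strength; };
    std::vector<Building> enemies = {{2, 2, 5.0f}, {6, 5, 3.0f}};  // known enemy buildings

    // Each building adds strength / (1 + distance) to every cell.
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            for (const auto& b : enemies) {
                float dist = std::hypot(float(x - b.x), float(y - b.y));
                influence[y * W + x] += b.strength / (1.0f + dist);
            }

    // Print the map; '#' marks cells a route planner would try to avoid.
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x)
            std::cout << (influence[y * W + x] > 1.5f ? '#' : '.');
        std::cout << "\n";
    }
}
```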

Another game that makes interesting use of terrain analysis is Red Storm’s Force 21. The developers used a visibility graph (see “Visibility Graphs” sidebar) to break down the game’s terrain into distinct but interconnected areas; the AI can then use these larger areas for higher-level pathfinding and vehicle direction. By cleanly dividing maps into “areas I can go” and “areas I can’t get to,” the AI is able to issue higher-level movement orders to its units and leave the implementation issues (such as not running into things, deciding whether to go over the bridge or through the stream, and so on) to the units themselves. This in turn has an additional benefit: the units can make use of the A* algorithm to solve smaller, local problems, thus leaving more of the CPU for other AI activity.

Formations. Closely related to the subject of pathfinding in general is that of unit formations — techniques used by developers to make groups of military units behave realistically. While only a few developers present at this year’s roundtables had actually needed to use formations in their games, the topic sparked quite a bit of interest (probably due to the recent spate of games with this feature). Most of those who had implemented formations had used some form of flocking with a strict overlying rules-based system to ensure that units stayed where they were supposed to. One developer, who was working on a sports game, said he was investigating using a “playbook” approach (similar to that used by a football coach) to tell his units where to go.
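
A flocking rule set for keeping a group together can be quite small. The sketch below applies just two of the classic rules — cohesion toward the group’s center and separation from close neighbors — to a handful of units; the weights, radii, and damping are made-up values, and a real formation system would layer the strict rules described above on top of something like this.

```cpp
// Two-rule flocking step (cohesion + separation) for keeping a group together.
// Weights, radii, and damping are made-up values for illustration.
#include <iostream>
#include <vector>

struct Unit { float x, y, vx, vy; };

void flockStep(std::vector<Unit>& units, float dt) {
    // Group center, used by the cohesion rule.
    float cx = 0.0f, cy = 0.0f;
    for (const auto& u : units) { cx += u.x; cy += u.y; }
    cx /= units.size(); cy /= units.size();

    for (auto& u : units) {
        float ax = (cx - u.x) * 0.5f;                 // cohesion: pull toward center
        float ay = (cy - u.y) * 0.5f;
        for (const auto& other : units) {             // separation: push off close neighbors
            float dx = u.x - other.x, dy = u.y - other.y;
            float d2 = dx * dx + dy * dy;
            if (&other != &u && d2 < 4.0f && d2 > 1e-6f) { ax += dx / d2; ay += dy / d2; }
        }
        u.vx = (u.vx + ax * dt) * 0.95f;              // mild damping so the group settles
        u.vy = (u.vy + ay * dt) * 0.95f;
        u.x += u.vx * dt;
        u.y += u.vy * dt;
    }
}

int main() {
    std::vector<Unit> squad = {{0, 0, 0, 0}, {1, 0, 0, 0}, {0, 1, 0, 0}, {8, 8, 0, 0}};
    for (int step = 0; step < 200; ++step) flockStep(squad, 0.1f);
    for (const auto& u : squad) std::cout << "(" << u.x << "," << u.y << ") ";
    std::cout << "\n";   // the straggler ends up near the rest of the squad
}
```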

State machines and hierarchical AIs. The simple rules-based finite- and fuzzy-state machines (FSMs and FuSMs) continue to be the tools of choice for developers, overshadowing more “academic” technologies such as neural networks and genetic algorithms. Developers find that their simplicity makes these approaches far easier to understand and debug, and they work well in combination with the types of encapsulation seen in games using A-Life techniques.
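
As a reminder of why FSMs remain the tool of choice, here is a complete (if trivial) guard-NPC state machine. The states and thresholds are invented for illustration; a fuzzy-state variant would let several states be partially active at once instead of switching cleanly between them.

```cpp
// A complete, if trivial, finite-state machine for a guard NPC.
// States and thresholds are invented for illustration.
#include <iostream>

enum class State { Patrol, Attack, Flee };

State nextState(State current, float enemyDistance, float health) {
    switch (current) {
        case State::Patrol:
            if (enemyDistance < 10.0f) return State::Attack;   // spotted an enemy
            return State::Patrol;
        case State::Attack:
            if (health < 0.25f)        return State::Flee;     // badly hurt: run away
            if (enemyDistance > 15.0f) return State::Patrol;   // enemy escaped
            return State::Attack;
        case State::Flee:
            if (health > 0.5f)         return State::Patrol;   // recovered
            return State::Flee;
    }
    return current;  // unreachable, but keeps the compiler happy
}

int main() {
    State s = State::Patrol;
    s = nextState(s, 8.0f, 1.0f);   // enemy close    -> Attack
    s = nextState(s, 8.0f, 0.2f);   // badly wounded  -> Flee
    std::cout << "final state: " << static_cast<int>(s) << "\n";   // prints 2 (Flee)
}
```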

Developers are looking for new ways to use these tools. For many of the same reasons A-Life techniques are being used to break down and simplify complex AI decisions into a series of small, easily defined steps, developers are taking more of a layered, hierarchical approach to AI design. Interplay’s Starfleet Command and Red Storm’s Force 21 take such an approach, using higher-level strategic “admirals” or “generals” to issue general movement and attack orders to tactical groups of units under their command. In Force 21 these units are organized at a tactical level into platoons; each platoon has a “tactician” who interprets the orders the platoon has received and turns them into specific movement and attack orders for individual vehicles.
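
The sketch below shows the general shape of such a layered design: a strategic “general” thinks only in terms of platoons and objectives, while each platoon expands a single order into per-vehicle moves. The class names and the line-abreast formation are illustrative assumptions, not the actual Force 21 or Starfleet Command code.

```cpp
// Layered command sketch: a strategic "general" orders platoons around,
// and each platoon turns one order into per-vehicle moves. Class names and
// the line-abreast formation are illustrative assumptions.
#include <cstddef>
#include <iostream>
#include <vector>

struct Vec2 { float x, y; };

struct Vehicle {
    Vec2 pos{0.0f, 0.0f};
    void moveTo(Vec2 target) { pos = target; }   // a real game would pathfind here
};

// The platoon-level "tactician": expands one objective into individual orders.
struct Platoon {
    std::vector<Vehicle> vehicles;
    void orderMove(Vec2 objective) {
        const float spacing = 5.0f;               // simple line-abreast formation
        for (std::size_t i = 0; i < vehicles.size(); ++i) {
            float offset = (float(i) - (vehicles.size() - 1) / 2.0f) * spacing;
            vehicles[i].moveTo({objective.x + offset, objective.y});
        }
    }
};

// The strategic "general": thinks only in platoons and objectives.
struct General {
    std::vector<Platoon>& platoons;
    void attack(Vec2 objective) {
        for (auto& p : platoons) p.orderMove(objective);
    }
};

int main() {
    std::vector<Platoon> army(1);
    army[0].vehicles.resize(3);
    General hq{army};
    hq.attack({100.0f, 50.0f});
    for (const auto& v : army[0].vehicles)
        std::cout << "(" << v.pos.x << "," << v.pos.y << ") ";
    std::cout << "\n";   // (95,50) (100,50) (105,50)
}
```

The point of the split is that the strategic layer never needs to know how vehicles avoid obstacles or pick paths — that detail stays inside the platoon.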

Most developers at the roundtables who were working on strategy games reported that they were either planning to implement or already had used this type of layered approach to their AI engines. Not only was it a more realistic representation, but it made debugging simpler. Most of those who used this design also liked the way it allowed them to add hooks at the strategic level to allow for user customization of AIs, building strategies, and so on, while isolating the lower-level “get the job done” AI from anything untoward that the user might accidentally do to it. This is another trend we’re seeing in strategy games that players find quite enjoyable — witness the various “empire mods” for games such as Stars, Empire of the Fading Suns and Alpha Centauri.

Can AI SDKs Help?

The single biggest topic of discussion at the GDC 2000 roundtables was the feasibility of AI SDKs. There are at least three software development kits currently available to AI developers:

* Mathématiques Appliquées’ DirectIA, an agent-based toolkit that uses state machines to build up emergent behaviors.

* Louder Than A Bomb’s Spark!, a fuzzy-logic editor intended for AI engine developers.

* The Motion Factory’s Motivate, which can provide some fairly sophisticated action/reaction state machine capabilities for animating characters. It was used in Red Orb’s Prince of Persia 3D, among others.

Many developers (especially those at the “AI for Beginners” session) were relatively unaware of these toolkits and hence were very interested in their capabilities. It didn’t seem, however, that many of the more experienced developers thought these toolkits would be all that useful, though a quick poll did reveal that one or two developers were in the process of evaluating the DirectIA toolkit. Most expressed the opinion that one or more SDKs would come to market that would prove them wrong.

In discussing possible features, most felt that an SDK that provided simple flocking or pathfinding functions might best meet their needs. One developer said he’d like to see some kind of standardized “bot-like” language for AI scripts, though there didn’t seem to be any widespread enthusiasm for this idea (probably because of fears it would limit creativity). Also discussed briefly in conjunction with this topic was the matter of what developers would be willing to pay for such an SDK, should a useful one actually be available. Most felt that price was not a particular object; developers today are used to paying (or convincing their bosses to pay) thousands of dollars for toolkits, SDKs, models, and the like. This indicates that if somebody can develop an AI SDK flexible enough to meet the demands of developers, they should be able to pay the rent.

Technologies on the Wane

It’s become clearer since last year’s roundtables that the influence of the more “nontraditional” AI techniques, such as neural networks and genetic algorithms (GAs), is continuing to wane. Whereas in previous years developers had many stories to tell of exploring these and other technologies during their design and development efforts, at this year’s sessions there was much more focus on making the more traditional approaches (state machines, rules-based AIs, and so on) work better. The reasons for this varied, but essentially boiled down to variations on the fact that these approaches are better understood and work “well enough.” Developers seemed to want to focus much more on how to make them work better and leave exploration of theory to the academic field.

Genetic algorithms have taken a particularly hard hit in the past year. There wasn’t a single developer at any of the roundtables that reported using them in any current projects, and most felt that their appeal was overrated. While last year’s group had expressed some interest in experimenting with using GAs to help with game tuning, the developers who had tried reported this year that they hadn’t found this to be very useful. Nobody could think of much use for GAs outside of the well-known “life simulators” such as the Creatures and Petz series.

The one exception to this, as previously noted, is the continued use of A-Life techniques. From flocking algorithms that help guide unit formations (Force 21, Age of Kings, Homeworld) to object-oriented desire/satisfaction approaches (The Sims), developers are finding that these techniques make their games much more lifelike and “predictably unpredictable” than ever before.

Where We’re Headed

Always interesting at the roundtables are the inevitable discussions of where the industry in general, and game AI in particular, is headed. As usual, we got almost as many opinions as there were attendees, but some common trends could be seen emerging down the road.

Everybody thought that game AI would continue to be an important part of most games. The recent advances were unlikely to be lost to a new wave of “gee-whiz” 3D graphics engines, and the continued increase in CPU and 3D card capabilities was only going to continue to give AI developers more horsepower. There was the same feeling as last year that the industry would continue to move slowly away from monolithic and rigid rules-based approaches to more purpose-oriented, flexible AIs built using a variety of approaches. It seems safe to assume that extensible AIs will continue to enjoy some popularity and support among developers, mostly in the first-person shooter arena but also in more sophisticated strategy games.

Academia and the defense establishment continue to influence the game AI field (see John Laird’s “Bridging the Gap Between Developers and Researchers” to be published in Part Two next week), though it sometimes seems that the academic world learns more from game developers than the other way around. For the most part, developers seem to feel that the academic study of AI is interesting but won’t really help them ship their product, while researchers from the academic field find the rapid progress of the game industry enviable even if the techniques aren’t all that well documented.

There can be no doubt that the game AI field continues to be one of the most innovative areas of game development. We know what works and tools are beginning to appear to help us do our jobs. With CPU constraints essentially eliminated and the possibilities of good game AI now part of the design process, AI developers can look forward to a bright future of innovation and experimentation.

Visibility Graphs

One of the interesting areas that game AI is beginning to explore is the realm of terrain analysis. Terrain analysis takes the relatively simple task of path-finding across a map to its next logical step, which is to get the AI to recognize the strategic and tactical value of various terrain features such as hills, ridges, choke-points, and so on, and incorporate this knowledge into its planning. One tool that offers much promise for dealing with this task is the visibility graph.

Visibility graphs are fairly simple constructs originally developed for the field of robotics motion. They work as follows: Assume you are looking down at a map that has a hill in the center and a pasture with clumps of trees all around it. Let appropriately shaped polygons represent the hill and the trees. The visibility graph for this scene uses the vertices of the polygons for the vertices in the graph, and builds the edges of the graph between the vertices wherever there is a clear (unobstructed) path between the corresponding polygon vertices. The weight of each connecting line equals the distance between the two corresponding polygon vertices. This gives you a simplified map against which you can run a pathfinding algorithm to traverse the map while avoiding the obstacles.
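
The sketch below builds a tiny visibility graph along these lines: polygon vertices become nodes, and an edge is emitted whenever the straight segment between two nodes is not blocked by an obstacle edge. The geometry is deliberately simplified — one convex obstacle, a proper-crossing test only, no handling of segments that graze vertices or cut through an obstacle’s interior — so treat it as an illustration of the idea rather than production code.

```cpp
// Tiny visibility-graph builder: polygon vertices are nodes; an edge is emitted
// wherever the segment between two nodes does not cross an obstacle edge.
// Deliberately simplified geometry (single convex obstacle, proper crossings only).
#include <cmath>
#include <cstddef>
#include <iostream>
#include <utility>
#include <vector>

struct Pt { double x, y; };

// z-component of the cross product (a - o) x (b - o).
double cross(Pt o, Pt a, Pt b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// True if segments ab and cd properly cross (intersect at a single interior point).
bool segmentsCross(Pt a, Pt b, Pt c, Pt d) {
    double d1 = cross(a, b, c), d2 = cross(a, b, d);
    double d3 = cross(c, d, a), d4 = cross(c, d, b);
    return (d1 * d2 < 0) && (d3 * d4 < 0);
}

int main() {
    // Two free-standing waypoints (nodes 0, 1) plus a square obstacle (vertices 2..5).
    std::vector<Pt> nodes = {{0, 0}, {10, 0}, {4, -2}, {6, -2}, {6, 2}, {4, 2}};
    std::vector<std::pair<int, int>> obstacleEdges = {{2, 3}, {3, 4}, {4, 5}, {5, 2}};

    // Connect every pair of nodes whose joining segment is not blocked.
    for (std::size_t i = 0; i < nodes.size(); ++i)
        for (std::size_t j = i + 1; j < nodes.size(); ++j) {
            bool blocked = false;
            for (auto [u, v] : obstacleEdges) {
                // Skip obstacle edges that share an endpoint with the candidate segment.
                if (u == (int)i || v == (int)i || u == (int)j || v == (int)j) continue;
                if (segmentsCross(nodes[i], nodes[j], nodes[u], nodes[v])) { blocked = true; break; }
            }
            if (!blocked) {
                double w = std::hypot(nodes[i].x - nodes[j].x, nodes[i].y - nodes[j].y);
                std::cout << i << " -- " << j << "   weight " << w << "\n";
            }
        }
}
```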

There are some problems with visibility graphs, however. They only give raw connection information, and paths built using them tend to look a little mechanical. Also, the developer needs to do some additional work to prevent all but the smallest units from colliding with polygon (graph) edges as they move, since the path generated from a visibility graph doesn’t take into account unit size at all. Still, they’re a straightforward way to break down terrain into simplified areas, and they have uses in pathfinding, setting up ambushes (the unobstructed graph edges are natural ambush points), and terrain generation. (Source: Gamasutra)

