
Three Approaches in Contemporary Game Audio Production, Illustrated with Examples

Published: 2011-05-30 18:01:12

Author: Damian Kastbauer

gamerboom note: The original article was published on January 28, 2010; all dates, events, and figures in the text reflect that time.

The Next Big Steps In Game Sound Design

Damian Kastbauer

It’s a great time in game audio these days. As we move forward in the current console generation, several emerging examples of best practices in audio implementation have been exposed through articles, demonstrations, and video examples.

Even though in some ways it feels like the race towards next gen has just begun, some of the forward-thinking frontrunners in the burgeoning field of Technical Sound Design have been establishing innovative techniques and pulling off inspirational audio since the starting gun was fired over four years ago with the release of the Xbox 360.

It’s a good feeling to know that there are people out there doing the deep thinking in order to bring you some of the richest audio experiences in games available today. In some ways, everyone working in game audio is trying to solve a lot of the same problems.

Whether you’re implementing a dynamic mixing system, interactive music, or a living, breathing ambient system, the chances are good that your colleagues are slaving away trying to solve similar problems to support their own titles.

In trying to unravel the mystery of what makes things tick, I’ll be taking a deeper look at our current generation of game sound and singling out several pioneers and outspoken individuals who are leaving a trail of interactive sonic goodness (and publicly available information) in their wake. Stick around for the harrowing saga of the technical sound designer in today’s multi-platform maelstrom.

Reverb

Reverb is one area that has been gaining ground since the early days of EAX on the PC platform, and more recently thanks to its omnipresence in audio middleware toolsets.

It has become standard practice to enable reverb within a single game level, and apply a single preset algorithm to a subset of the sound mix. Many developers have taken this a step further and created reverb regions that will call different reverb presets based on the area the player is currently located. This allows the reverb to change based on predetermined locations using predefined reverb settings.

Furthermore, these presets have been extended to areas outside of the player region, so that sounds coming from a different region can use the region and settings of their origin in order to get their reverberant information. Each of these scenarios is valid in an industry where you must carefully balance all of your resources, and where features must play to the strengths of your game design.
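
The region-and-preset approach described above is simple enough to sketch. Below is a minimal illustration, with hypothetical types and preset names rather than any particular engine or middleware API, of how a voice might look up the reverb preset of the region it originates in:

    #include <string>
    #include <vector>

    // Hypothetical axis-aligned reverb region carrying a named preset.
    struct ReverbRegion {
        float minX, minZ, maxX, maxZ;   // 2D footprint of the region
        std::string preset;             // e.g. "Cave", "Hangar", "Street"
        bool contains(float x, float z) const {
            return x >= minX && x <= maxX && z >= minZ && z <= maxZ;
        }
    };

    // Pick the preset for a position; later regions override earlier ones,
    // and anything outside every region falls back to the level-wide default.
    std::string presetAt(const std::vector<ReverbRegion>& regions,
                         float x, float z,
                         const std::string& levelDefault = "Outdoor") {
        std::string result = levelDefault;
        for (const auto& r : regions)
            if (r.contains(x, z)) result = r.preset;
        return result;
    }

    // A voice asks for the preset of the region it was spawned in,
    // not the region the listener happens to be standing in.
    struct Voice {
        float originX, originZ;
        std::string reverbPreset;
    };

    void startVoice(Voice& v, const std::vector<ReverbRegion>& regions) {
        v.reverbPreset = presetAt(regions, v.originX, v.originZ);
        // ...route the voice to the aux send configured for that preset...
    }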

While preset reverb and reverb regions have become a standard and are a welcome addition to a sound designer’s toolbox, there is still the potential to push further into realtime by calculating the reverb of a sound in the game at runtime, either through the calculation of geometry at the time a sound is played or through the use of reverb convolution.

Leading the charge in 2007 with Crackdown, Realtime Worlds set out to bring the idea of realtime convolution reverb to the front line.

“When we heard the results of our complex Reverb/Reflections/Convolution or ‘Audio-Shader’ system in Crackdown, we knew that we could make our gunfights sound like that, only in realtime! Because we are simulating true reflections on every 3D voice in the game, with the right content, we could immerse the player in a way never before heard.” – Raymond Usher, to Team Xbox

So, what is realtime Reverb using ray tracing and convolution in the context of a per-voice implementation? Here’s a quick definition of ray tracing as it applies to physics calculation:

“In physics, ray tracing is a method for calculating the path of waves or particles through a system with regions of varying propagation velocity, absorption characteristics, and reflecting surfaces. Under these circumstances, wavefronts may bend, change direction, or reflect off surfaces, complicating analysis. Ray tracing solves the problem by repeatedly advancing idealized narrow beams called rays through the medium by discrete amounts. Simple problems can be analyzed by propagating a few rays using simple mathematics. More detailed analysis can be performed by using a computer to propagate many rays.” – Wikipedia

On the other side of the coin you have the concept of convolution: “In audio signal processing, convolution reverb is a process for digitally simulating the reverberation of a physical or virtual space. It is based on the mathematical convolution operation, and uses a pre-recorded audio sample of the impulse response of the space being modeled. To apply the reverberation effect, the impulse-response recording is first stored in a digital signal-processing system. This is then convolved with the incoming audio signal to be processed.” – Wikipedia

What you end up with is a pre-recorded impulse response of a space being modified (or convoluted) by the ray-traced calculations of the surrounding physical spaces. What this allows the sound to communicate in realtime is a greater sense of location and dynamics as sound is triggered from a point in 3D space, and sound is reflected off of the geometry of the immediate surrounding area.
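
As a toy illustration of how the two pieces fit together (this is only the underlying math, not Realtime Worlds’ “Audio-Shader” system), one could build a sparse impulse response from a handful of ray-traced reflection paths and then convolve the dry signal with it:

    #include <algorithm>
    #include <vector>

    // One reflection path found by casting rays against the level geometry:
    // how long the bounce takes to arrive and how much energy survives it.
    struct Reflection {
        float delaySeconds;
        float gain;
    };

    // Build a sparse impulse response from the traced reflections.
    std::vector<float> buildImpulseResponse(const std::vector<Reflection>& refl,
                                            int sampleRate) {
        float longest = 0.f;
        for (const auto& r : refl) longest = std::max(longest, r.delaySeconds);
        std::vector<float> ir(static_cast<std::size_t>(longest * sampleRate) + 1, 0.f);
        ir[0] = 1.f;                                   // direct path
        for (const auto& r : refl)
            ir[static_cast<std::size_t>(r.delaySeconds * sampleRate)] += r.gain;
        return ir;
    }

    // Direct-form convolution of the dry signal with the impulse response.
    // (A realtime implementation would use partitioned FFT convolution.)
    std::vector<float> convolve(const std::vector<float>& dry,
                                const std::vector<float>& ir) {
        if (dry.empty() || ir.empty()) return {};
        std::vector<float> wet(dry.size() + ir.size() - 1, 0.f);
        for (std::size_t n = 0; n < dry.size(); ++n)
            for (std::size_t k = 0; k < ir.size(); ++k)
                wet[n + k] += dry[n] * ir[k];
        return wet;
    }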

You can hear the results of their effort in every gunshot, explosion, physics object, and car radio as you travel through the concrete jungle of Crackdown’s Pacific City. It’s worth noting that Ruffian Games’ Crackdown 2 will be hitting shelves soon, as will Realtime Worlds’ new MMO All Points Bulletin.

With a future for convolution reverb implied by recent news of Audiokinetic’s Wwise toolset, let’s hope the idea of realtime reverb continues to play an integral part in the next steps towards runtime spatialization.

Ambient

Listen, the snow is falling… In addition to that, my computer is humming, traffic is driving by outside, birds are intermittently chirping, not to mention the clacking of my “silent” keyboard. Life is full of sound. We’ve all spent time basking in the endless variation and myriad ways in which the world around us conspires to astound and delight with the magic of its soundscape.

Whether it is the total randomness of each footstep, or the consistency of our chirping cell phones, the sound of the world lends a sense of space to your daily life and helps ground you in the moment.

We are taking steps in every console generation toward true elemental randomization, positional significance, and orchestrated and dynamic ambient sounds. Some of the lessons we have learned along the way are being applied in ways that empower the sound designer to make artistic choices in how these sounds are translated into the technical world of game environments.

We are always moving the ball forward in our never-ending attempts at simulating the world around us… or the world that exists only in our minds.

The world of Oblivion can be bustling with movement and life or devoid of presence, depending on the circumstances. The feeling of “aliveness” is in no small part shaped by the rich dynamic ambient textures that have been carefully orchestrated by the Bethesda Softworks sound team. Audio Designer Marc Lambert provided some background on their ambient system in a developer diary shortly before launch:

“The team has put together a truly stunning landscape, complete with day/night cycles and dynamic weather. Covering so much ground — literally, in this case — with full audio detail would require a systematic approach, and this is where I really got a lot of help from our programmers and the Elder Scrolls Construction Set [in order to] specify a set of sounds for a defined geographic region of the game, give them time restrictions as well as weather parameters.” – Marc Lambert, Bethesda Softworks Newsletter
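
A bare-bones version of the lookup Lambert describes, using hypothetical region, time, and weather fields rather than the Elder Scrolls Construction Set’s actual data layout, might read:

    #include <string>
    #include <vector>

    enum class Weather { Clear, Rain, Snow };

    // One ambient entry: where it may play, when, and in what weather.
    struct AmbientEntry {
        std::string region;        // e.g. "GreatForest", "JerallMountains"
        float startHour, endHour;  // time-of-day window, may wrap midnight
        Weather weather;
        std::string soundSet;      // loop or random container to schedule
    };

    std::vector<std::string> ambientFor(const std::vector<AmbientEntry>& table,
                                        const std::string& region,
                                        float hour, Weather weather) {
        std::vector<std::string> active;
        for (const auto& e : table) {
            bool inTime = (e.startHour <= e.endHour)
                ? (hour >= e.startHour && hour <= e.endHour)
                : (hour >= e.startHour || hour <= e.endHour); // wraps midnight
            if (e.region == region && inTime && e.weather == weather)
                active.push_back(e.soundSet);
        }
        return active;
    }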

In a game where you can spend countless hours collecting herbs and mixing potions in the forest or dungeon crawling while leveling up your character, one of the keys to extending the experience is the idea of non-repetitive activity. Introducing dynamic ambiance from the sound side can help offset some of the grind the player experiences when tackling the more repetitive and unavoidable tasks.

“[The ambient sound] emphasizes what I think is another strong point in the audio of the game — contrast. The creepy quiet, distant moans and rumbles are a claustrophobic experience compared to the feeling of space and fresh air upon emerging from the dungeon’s entrance into a clear, sunny day. The game’s innumerable subterranean spaces got their sound treatment by hand as opposed to a system-wide method.” – Marc Lambert, Bethesda Softworks Newsletter

It should come as no surprise that ambiance can be used to great effect in communicating the idea of space. When you combine the use of abstracted soundscapes and level-based tools to apply these sound ideas appropriately, the strengths of dynamics and interactivity can be leveraged to create a constantly changing tapestry that naturally reacts to the environment and parameters.

Similarly, in Fable II, the sound designers were able to “paint ambient layers” directly onto their maps. In a video development diary, Lionhead audio director Russel Shaw explains: “I designed a system whereby we could paint ambient layers onto the actual Fable II maps. So that as you’re running through a forest for instance, we painted down a forest theme, and the blending from one ambiance to another is quite important, so the technology was lain down first of all.” – Russel Shaw, posted by Kotaku
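
Shaw’s “painting” amounts to storing per-layer weights in the map data and easing toward them as the player moves. A simplified sketch of that blend step, with made-up structures rather than Lionhead’s tools, could look like this:

    #include <algorithm>
    #include <string>
    #include <vector>

    // An ambient layer painted onto the map, sampled as a 0..1 weight
    // at the player's current map coordinate.
    struct PaintedLayer {
        std::string loopName;                      // e.g. "forest_bed"
        float (*sampleWeight)(float x, float z);   // painted weight lookup
        float currentGain = 0.f;
    };

    // Each frame, ease every layer's gain toward its painted weight so that
    // leaving the forest fades the forest bed out instead of cutting it.
    void updateAmbientBlend(std::vector<PaintedLayer>& layers,
                            float x, float z, float dt,
                            float fadePerSecond = 0.5f) {
        for (auto& layer : layers) {
            float target = std::clamp(layer.sampleWeight(x, z), 0.f, 1.f);
            float step   = fadePerSecond * dt;
            if (layer.currentGain < target)
                layer.currentGain = std::min(layer.currentGain + step, target);
            else
                layer.currentGain = std::max(layer.currentGain - step, target);
            // setLoopVolume(layer.loopName, layer.currentGain);  // engine call
        }
    }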

In what could be seen as another trend in the current console cycle, enabling the sound designers to handle every aspect of sound and the way it is used by the game is just now becoming common. The ability to implement with little to no programmer involvement outside of the initial system design, setup, and toolset creation is directly in contrast to what was previously a symbiotic relationship requiring a higher level of communication between all parties.

In the past, it was not uncommon to create sound assets and deliver them with a set of instructions to a programmer. A step removed from the original content creator, the sounds would need to be hand coded into the level at the appropriate location and any parametric or transition information hard coded deep within the engine.

It is clearly a benefit to the scope of any discipline to be able to create, implement, and execute a clear vision without a handoff between departments to accomplish the task. In this way I feel like we are gaining in the art of audio implementation and sound integration — by putting creative tools in the hands of the interactive-minded sound designers and implementation specialists who are helping to pave the way for these streamlined workflows.

As we continue to move closer towards realistically representing a model of reality in games, so should our worlds react and be influenced by sound and its effect on these worlds. In Crysis, developer Crytek has made tremendous leaps towards providing the player with a realistic sandbox in which to interact with the simulated world around them. In a presentation at the Game Developers Conference in 2008 Tomas Neumann and Christian Schilling explained their reasoning: “Ambient sound effects were created by marking areas across the map for ambient sounds, with certain areas overlapping or being inside each other, with levels of priority based on the player’s location. ‘Nature should react to the player,’ said Schilling, and so the ambiance also required dynamic behavior, with bird sounds ending when gunshots are fired.” – Gamasutra
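
A toy version of that reactive behavior, assuming hypothetical layer handles rather than anything from CryEngine, might duck the close wildlife layer when a shot is fired and let it creep back after a quiet interval:

    // Minimal reactive ambience: the close wildlife layer ducks out when a
    // shot is fired, the distant beds (wind, ocean, far animals) keep playing,
    // and the wildlife slowly returns once things have stayed quiet.
    struct ReactiveAmbience {
        float closeWildlifeGain = 1.f;   // birds and insects near the player
        float distantBedGain    = 1.f;   // deliberately untouched by gunfire
        float quietTimer        = 0.f;   // seconds since the last shot

        void onGunshot() {
            closeWildlifeGain = 0.f;     // birds scatter; the near layer falls silent
            quietTimer = 0.f;
        }

        void update(float dt) {
            quietTimer += dt;
            if (quietTimer > 8.f && closeWildlifeGain < 1.f) {
                closeWildlifeGain += 0.1f * dt;              // wildlife creeps back
                if (closeWildlifeGain > 1.f) closeWildlifeGain = 1.f;
            }
        }
    };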

In a game where everything is tailored towards immersing the player in a living, breathing world, this addition was a masterstroke of understatement from the team and brings a level of interactivity that hadn’t been previously experienced.

Audio Lead Christian Schilling went on to explain the basic concept and provide additional background when contacted:

“Sneaking through nature means you hear birds, insects, animals, wind, water, materials. So everything — the close and the distant sounds of the ambiance. Firing your gun means you hear birds flapping away, and silence.

“Silence of course means, here, wind, water, materials, but also — and this was the key I believe — distant sounds (distant animals and other noises). We left the close mosquito sounds in as well, which fly in every now and then — because we thought they don’t care about gun shots.

“So, after firing your gun, you do hear close noises like soft wind through the leaves or some random crumbling bark of some tree next to you (the close environment), all rather close and crispy, but also the distant layer of the ambiance, warm in the middle frequencies, which may be distant wind, the ocean, distant animals — [it doesn't] matter what animals, just distant enough to not know what they are — plus other distant sounds that could foreshadow upcoming events.

“In Crysis we had several enemy camps here and there in the levels, so you would maybe hear somebody dropping a pan or shutting a door in the distance, roughly coming from the direction of the camp, so you could follow that noise and find the camp.

“It was a fairly large amount of work, but we thought, ‘If the player chooses the intelligent way to play — slowly observing and planning before attacking — he would get the benefits of this design.’”

In this way, they have chosen to encourage a sense of involvement with the environment by giving the ambient soundscape an awareness of the sounds the player is making. The level of detail they attained is commendable, and has proven to be a forward thinking attempt at further simulating reality through creative audio implementation.

Parameters

If we really are stretching to replicate a level of perceived reality with video games, then we must give consideration to every aspect of an activity and attempt to model it realistically in order to convey information about what the gameplay is trying to tell us. When we can effectively model and communicate the realistic sounds of the actions portrayed on screen, then we can step closer towards blurring the line between the player and their interactions.

What we are starting to see pop up more frequently in audio implementation is an attempt to harness the values of the underlying simulation and use them to take sound to a level of subtlety and fidelity that was previously either very difficult or impossible to achieve due to memory budget or CPU constraints.

It’s not uncommon for someone in game audio to comment and expound on the “tiny detail” that they enabled with sound to enhance the gameplay in ways that may not be obvious to the player. While previously encumbered by RAM allocation, streaming budgets, and voice limitations, we are now actively working to maximize the additional resources available to us on each platform. Part of utilizing these resources is the ability to access runtime features and parameters to modify the existing sample based content using custom toolsets and audio middleware to interface with the game engine.
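
Stripped of any particular middleware, the core of that idea is a curve that maps a game-side value onto an audio property every frame. A minimal stand-in for an RTPC-style curve (names and values invented for illustration) might be:

    #include <vector>

    // A piecewise-linear curve from a game parameter (engine RPM, distance,
    // cooking progress...) to an audio property (volume, pitch, filter cutoff).
    struct RtpcCurve {
        struct Point { float param, value; };
        std::vector<Point> points;   // assumed sorted by param, non-empty

        float evaluate(float p) const {
            if (p <= points.front().param) return points.front().value;
            if (p >= points.back().param)  return points.back().value;
            for (std::size_t i = 1; i < points.size(); ++i) {
                if (p <= points[i].param) {
                    const Point& a = points[i - 1];
                    const Point& b = points[i];
                    float t = (p - a.param) / (b.param - a.param);
                    return a.value + t * (b.value - a.value);
                }
            }
            return points.back().value;
        }
    };

    // Usage: map a 0..1 "goggle energy" value onto the volume of a hidden bus.
    //   RtpcCurve energyToVolume{{{0.f, 0.f}, {0.6f, 0.2f}, {1.f, 1.f}}};
    //   float vol = energyToVolume.evaluate(goggleEnergy);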

In the Wii version of Ghostbusters, the Gl33k audio team handled the content creation and implementation. Some of the ways that they were able to leverage the real time parameter control functionality involved changing the mix based on various states:

“The ‘in to goggle’ sound causes a previously unheard channel to rise to full volume. This allowed us to create much more dramatic flair without bothering any programming staff.” The PKE Meter was “driven by the RTPC, which also slightly drives the volume of the ambient bus.

“Ghost vox were handled using switch groups, since states would often change, but animations did not. Many of the states and sounds associated with them that we wanted to happen and come across, did not actually have any specific animations to drive them, so we actually ended up going in and hooking up state changes in code to drive whatever type of voice FX we wanted for the creature. This helped give them some more variety without having to use up memory for specific state animations.” – Jimi Barker, Ghostbusters and Wwise, on Vimeo
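
Hooking a state change up in code and letting it choose the voice treatment, as Barker describes, reduces to a small state-to-event mapping. A hypothetical sketch, with invented state and event names:

    #include <map>
    #include <string>

    enum class GhostState { Idle, Alerted, Enraged, Captured };

    // One switch group: the current state chooses which vox variation plays,
    // even when the animation underneath never changes.
    class GhostVoxSwitch {
    public:
        GhostVoxSwitch() {
            byState_ = {
                {GhostState::Idle,     "ghost_vox_idle"},
                {GhostState::Alerted,  "ghost_vox_alert"},
                {GhostState::Enraged,  "ghost_vox_rage"},
                {GhostState::Captured, "ghost_vox_trapped"},
            };
        }

        // Called from gameplay code whenever the creature's state changes.
        void setState(GhostState s) { state_ = s; }

        // Called when the creature vocalizes; returns the sound event to post.
        const std::string& eventForNextVox() const { return byState_.at(state_); }

    private:
        GhostState state_ = GhostState::Idle;
        std::map<GhostState, std::string> byState_;
    };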

In Namco Bandai’s Cook or Be Cooked, says Barker via email, “I tied [RTPC] in with the cooking times, so when a steak sizzles, it actually sounds more realistic than fading in a loop over time. This allowed me to actually change the state of the sound needed over time to give a more realistic representation of the food cooking as its visual state changed. It’s totally subtle, and most people will never notice it, but there’s actually a pretty complicated process going on behind that curtain.

“I (had) roughly four states per cookable object that went from beginning, all the way through burned. There were loops for each of those states that fed into each other. These were also modified with one-shots — for example, flipping an object or moving it to the oven. We tried to provide as much variation as we could fit into the game, so almost every sound has a random container accompanied with it.”
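
Barker’s four cooking states feeding into one another can be modeled as loops whose gains follow a single progress parameter. A rough sketch with invented loop names, not the shipped Wwise setup:

    #include <algorithm>
    #include <array>
    #include <cmath>
    #include <string>

    // Four looping states crossfaded by a 0..1 "doneness" value that the
    // cooking simulation advances over time: raw -> sizzling -> done -> burned.
    struct CookingSizzle {
        static constexpr int kStates = 4;
        std::array<std::string, kStates> loops{
            "steak_raw", "steak_sizzle", "steak_done", "steak_burned"};
        std::array<float, kStates> gains{};

        void update(float doneness) {
            doneness = std::clamp(doneness, 0.f, 1.f);
            float pos = doneness * (kStates - 1);   // 0..3 across the four loops
            for (int i = 0; i < kStates; ++i) {
                float distance = std::abs(pos - static_cast<float>(i));
                gains[i] = std::max(0.f, 1.f - distance);   // overlap neighbours
                // setLoopVolume(loops[i], gains[i]);        // engine call
            }
        }
    };

    // One-shots (flipping the steak, moving it to the oven) layer on top and
    // would each pull from a random container of variations.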

Similarly, with the FMOD Designer toolset on Nihilistic Software’s Conan, the developers were able to use the distance parameter to adjust DSP settings based on the proximity of an object to the player. In one example, a loop was positioned at the top of a large waterfall far across a valley with a shallow LPF that gradually (over the course of 100 meters) released the high frequencies.

As the player approaches, the filter gradually opens up on the way toward two additional waterfall sources, placed underneath a bridge directly in front of the waterfall. The additional sources had a smaller rolloff with a steeper LPF applied and were meant to add diversity to the “global” sound.

The shifting textures and frequencies of the three sounds combined sound massive as you battle your way across the bridge, which helps add a sense of audio drama to the scenario (you can view it here).
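
The distance parameter driving a DSP setting, as in the waterfall example, is essentially a mapping from listener distance to filter cutoff. A simplified sketch follows; the specific frequencies are assumptions, since the article only says the filter opens over roughly 100 meters:

    #include <algorithm>
    #include <cmath>

    // Map listener distance to a low-pass cutoff: far away only the low
    // rumble of the waterfall is audible; the filter opens on approach.
    float waterfallCutoffHz(float distanceMeters,
                            float openDistance   = 0.f,     // fully open here
                            float closedDistance = 100.f,   // fully muffled here
                            float minHz = 400.f,
                            float maxHz = 16000.f) {
        float t = (closedDistance - distanceMeters) / (closedDistance - openDistance);
        t = std::clamp(t, 0.f, 1.f);
        // Interpolate in log-frequency so the sweep sounds even to the ear.
        return minHz * std::pow(maxHz / minHz, t);
    }

    // Each frame: lowpass.setCutoff(waterfallCutoffHz(distanceToWaterfall));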

Whereas parameter values have always existed behind the screen, they have not always been as readily available to be harnessed by audio. The fact that we are at a place in the art of interactive sound design where we can make subtle sound changes based on gameplay in an attempt to better immerse the player is a testament to the power of current generation audio engines and the features exposed from within toolsets.

In 2008’s Spider-Man: Web of Shadows, Shaba Games lead sound designer Brad Meyer was able to use the player character’s good/evil affinity, in addition to the dynamic “mood” of each level, to determine the sound palette used, as well as the sound of the effects, using the Wwise toolset.

By tying the transition between Spider-Man and Venom to a switch/state in Wwise, a DSP modification of the sounds triggered could be applied. The change could be easily auditioned with the flip of a switch within the Wwise toolset, allowing for prototyping in parallel and outside the confines of the game engine and gameplay iteration. This ability to mock up features is a key component in the current generation, where iteration and polish allow for the development of robust audio systems and highly specialized sound design.

“To explain what I [ended up doing] on the implementation side… was drop the pitch of Spider-Man’s sounds by a couple semitones when he switched to the Black Suit, and also engaged a parametric EQ to boost some of the low-mid frequencies. The combination of these two effects made the Black Suit sound stronger and more powerful, and Red Suit quicker and more graceful.

“The effect was rather subtle, in part because it happens so often I didn’t want to fatigue the player’s ears with all this extra low frequency information as Black Suit, but I think it works if nothing else on a subliminal level.” – Brad Meyer, via email
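
Meyer’s description boils down to a two-way state that swaps in a pitch offset and an EQ boost. A minimal stand-in is sketched below; apart from the couple-of-semitones drop he mentions, the values are invented:

    #include <cmath>

    enum class Suit { Red, Black };

    // DSP settings applied to the character's sound palette for each suit.
    struct SuitVoiceSettings {
        float pitchRatio;      // playback-rate multiplier
        float lowMidBoostDb;   // parametric EQ gain around the low-mids
    };

    SuitVoiceSettings settingsFor(Suit suit) {
        if (suit == Suit::Black) {
            float semitones = -2.f;                        // "a couple semitones" down
            return { std::pow(2.f, semitones / 12.f),      // ~0.89x playback rate
                     3.f };                                // assumed +3 dB low-mid boost
        }
        return { 1.f, 0.f };                               // Red Suit left untouched
    }

    // Applied to the character bus on the suit-switch event; the same switch is
    // what makes the change easy to audition inside the authoring tool.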

It makes sense that with a powerful prototyping toolset at the sound designer’s disposal, the ability to try out various concepts in realtime without the aid of a fully developed game engine can be a thing of beauty.

By enabling the rapid iteration of audio ideas and techniques during development, we can continue to reach for the best possible solution to a given problem, or put differently, we can work hard towards making sure that the sound played back at runtime best represents the given action in-game.

In the field of Technical Sound Design there is a vast array of potential at the fingertips of anyone working in game audio today. Under the surface and accessed directly through toolsets, the features available help bring sample based audio closer towards interactivity. In what can sometimes be a veiled art, information on implementation techniques has at times been difficult to come by.

We are truly standing on the shoulders of the giants who have helped bring these ideas out in the open for people to learn from. It is my hope that by taking the time to highlight some of the stunning examples of interactive audio, we can all continue to innovate and drive game audio well into the next generation. (Source: Gamasutra. Chinese translation by gamerboom.com; please contact gamerboom for reprint permission.)

