
有关游戏引擎的过去,现在与未来

发布时间:2016-04-15 17:00:49

作者:Kevin Normann

在今天的产业中出现了许多非常有趣的进展,特别是围绕电子游戏引擎的技术进步,这也是我迫切想要与你们分享的内容。但现在我还不打算这么做。相反,我将努力做到自律,先思考为什么作为在产业中待了22年多的资深人士,我会由衷地受到这些全新进展的鼓舞。而为了理解这一问题为何重要,我们需要先谈谈电子游戏引擎的开发历史。

简要的历史回顾

对于我们这些电子游戏产业中的开发者来说,一直有许多致力于全新且有趣的游戏的机会。早期一切都围绕着游戏本身,但随着硬件性能的提高和游戏复杂性的增加,我们需要投入越来越多的时间于游戏背后的技术中。于是在90年代和2000年代,许多拥有成功作品并以技术成就为傲的公司将他们的引擎授权给其它工作室去创造其它产品。这是一种自然的发展过程,并让许多游戏开发者能够专注于我们工作中最重要的一部分:游戏。

在2004年秋天,我们的产业迎来了Xbox 360和PS3的消息。电子游戏硬件通过转向多核心处理器(这要求游戏工程师学习"并发性"这一新术语)而在性能方面迈出了巨大的一步。几乎在一瞬间,市场上现有的电子游戏引擎全都过时了。实际上没有一种受欢迎的引擎,甚至是最新的技术引擎,其架构能够在全新硬件上高效运行,更别说去挑战新硬件的性能极限了。我们当时常开玩笑说软件效能将以"Giga-NOPS"来衡量,即每秒数十亿次的"无操作",因为我们没有办法让这些处理器始终处于忙碌状态。实际上,从这两台主机最初发布,到出现能够真正展示其性能的游戏,中间花费了超过5年的时间。但当游戏产业一直在努力追赶当时的硬件性能时,各大硬件制造商仍在继续推动性能向前发展。

软件已经落后了

今天我们仍然可以看到,随着越来越多处理器的使用,硬件性能仍在快速增长。这种情况不只发生在主CPU上,同时也发生在图形卡(GPU)以及像专用物理芯片、视频压缩等其它处理器和协同处理器上。甚至今天的手机也普遍拥有4个处理器,并且很可能跟随笔记本电脑和台式电脑的发展路径。所有这些处理能力都随着处理器数量的增加而增长,但是用于驱动这些技术的软件引擎却并未跟上它们的脚步。

对此存在许多原因,而且仔细想想,这些原因其实相当明显。

1.从2009年起游戏引擎未能跟上硬件性能与处理能力发展的主要原因,是引擎开发者没有感受到来自产业的压力。为什么他们未能感受到这样的压力呢?因为整个产业都忙于应对由全新的、易于上手的个人技术(即移动设备和网页/Facebook)所带来的不断涌入的休闲玩家,开发者的关注点也因此从视觉震撼的体验转向了休闲/社交体验。

2.此外,像免费游戏、增值服务和应用内购货币等全新盈利模式也占用了开发者的许多注意力。而事实证明,那些原本擅长开发高端游戏引擎的工作室往往是最难成功整合全新模式或适应新用户需求的,这导致了许多工作室的倒闭和收益损失,高端创新也因此减少。

3.少数致力于硬核游戏的开发者(游戏邦注:通常是那些致力于现有热门系列续作的开发者)乐于在安全参数范围内做出适当的完善。较少的竞争也意味着他们缺少推动革命性理念的理由。他们可以坐享硬件的整体提速,而无需挑战逼近新硬件性能极限的难题。总之,游戏开发者把适应新硬件性能的工作留给了游戏引擎中间件,但是游戏引擎中间件公司却正忙于满足不断增长的休闲游戏开发者的需求。

总之,满足不断增长的全新且多样的平台和玩家、全新的盈利和市场营销策略以及全新游戏风格的种种需求,让电子游戏产业的软件端忙于除硬件性能之外的一切事务。与此同时,硬件制造商也在一如既往地推动性能向前发展,所以硬件的真正性能与软件所能有效利用的性能之间的鸿沟被逐渐扩大。我敢肯定,对于那些清楚自己这些年的成就、却发现我们大多数人对此毫无察觉的硬件制造商来说,这一阶段一定令人沮丧。

这将我们带到了哪里?

我上面提到的要点是,消费者花了许多年时间去探索那些不需要软件向更强技术性能进化的游戏。所以这将我们带向了哪里呢?玩家是否对今天的游戏体验感到满意,不再想看到技术极限被突破?我们是否已经创造出了玩家一直想要的游戏,而他们从此只需满足于适度迭代的内容?如果只关注了这个领域3年,并看到像《我的世界》等游戏的持续成功,有人可能会这么想。但作为产业资深人士的我可以告诉你们,尽管这几年我们迎来了一些有趣且吸引人的新玩具,但推动我们产业变革的仍是同一股根本力量。特别是,人们对于全新体验的渴望将始终推动着开发者利用新技术去革新消费者体验。

如今最热门的议题是虚拟现实和增强现实,而这两种技术都需要全新级别的处理能力和全新的工程方式才能做好。游戏的规模和复杂性也在再次增长:2009年的休闲"软核"游戏在2012年进化成了"中核"游戏,而现在它们又将把这些全新的消费者带向一个很难与传统硬核玩家区分开来的复杂级别。如今的电子游戏市场变得比以往任何时候都大,而这里的玩家也开始渴望下一个"全新的"内容。他们中的很多人可能一开始只是休闲玩家,但现在已经做好了与所有人一起迎接全新技术的准备。

虽然这是一种好现象,但是仍然存在一个不幸的现实。

在某些关键方面,电子游戏引擎技术仍然落后。产业中真正的创新者发现,自己要么不得不在现有的中间件引擎选择面前妥协自己的愿景,要么只能自己投入时间和金钱去实现目标。

以下便是现代电子游戏引擎(游戏邦注:如Unity,Unreal和Hero引擎等)的不足之处:

Unity(from chinadaily)

1.比起电子游戏引擎所能利用的,显卡的性能要强大得多,因为这些引擎所依赖的软件范式已经过时多年(也许有十年了)。要现代化现有引擎的核心架构,需要规模巨大且成本高昂的重构。此外,这种工程需要能够"跳出框框"思考的专业人才,而即便在这个技术驱动的产业中,这样的人才也并不多见。

2.不断增长的云计算和云存储能力,加上惊人的数据带宽,正将可能性的边界推向看似无限的水平。而创造游戏去利用这种强大的能力,便需要全新层次的组织、全新的工具链以及美术和技术管线,才能构建出精妙复杂的世界和体验。如果工具和流程不够强大,未来的复杂项目将在自身重量下坍塌。这种情况在以往的重大变革时刻便出现过。例如Midway曾试图利用2008年最出色的"现代"技术进军跨平台"开放世界"游戏,而这一努力是毁掉这家公司的重要因素。过去我曾在艺电给新员工上过一堂课:我在黑板上画出PS1的性能图,以处理器能力对内存大小作图,得到一个比屏幕像素大不了多少的小圆点;然后我在它旁边画了一张约一英寸见方的PS2性能图;最后我展示的PS3性能图填满了三张8.5×11英寸的页面。在这个例子中我解释了机器的性能如何在处理器能力与内存这两个维度中增长,但在游戏端,这些性能将被花费在6个维度上,即世界大小、世界细节、角色数量、角色细节、角色智能,以及之前提到的Giga-NOPS。"当你能够做任何事的时候你会做些什么?"是电子游戏开发者越来越需要面对的问题,并且在今后几年里这将变成一个越发难以回答的问题。游戏引擎必须在复杂性管理方面提供远多于以往的帮助,而现代游戏开发工具还无法胜任摆在我们面前的挑战。

3.现代引擎已在改善团队协作方式上做出了初步尝试,并取得了不同程度的成功,但对于那些身处异地、致力于将运行在云端的技术的大型项目团队来说,它们离真正应该达到的水平还差很远。与开发者交谈,你就能听到他们遭遇挑战的故事,以及因工具限制而丢失工作成果的抱怨。即使开发者工具和引擎支持团队协作,它们也往往充满陷阱,可能导致一个开发者破坏另一个开发者的工作。推动游戏向前发展需要大量中层管理工作以及用于保护现有进度的思考周期,而这会削弱个人的生产力。如果今天的游戏便已如此,这些引擎又该如何应对未来游戏(或者该称之为游戏生态系统)的开发呢?而因为最受欢迎的现代引擎Unity 3D在这方面表现得尤其糟糕,问题也变得更加严重。Unity凭借让小型团队轻松且廉价地开发跨平台游戏并同时面向网页而登上榜首,但是对于下一代的创新内容(如VR、AR以及基于分布式云端的游戏)来说,Unity的核心架构将无法满足开发者的需求。它将被降级,在它过去所服务的那些小型游戏的开发中扮演更小的角色。

为什么现代电子游戏引擎难以适应改变?

现有的引擎不能适应如今市场压力的这些改变吗?简短的回答是"最终不能",较详细的回答则是它们当然会尝试,但存在其它阻碍它们的力量。在这里,历史是我们最好的老师。电子游戏引擎拥有一个我们反复见证过的自然生命周期。成功的引擎之所以成功,是因为它们为当时开发者所面临的痛点提供了解决方法。如果这些痛点非常棘手,那么第一个解决它的引擎通常便能获得巨大的成功(游戏邦注:就像Unity解决了当时大多数游戏开发者想要将游戏带向网页的烦恼)。但是最终会出现两种情况。首先,引擎越受欢迎,便会有越多团队将其用于自己的游戏中。越多团队使用同一引擎,引擎开发者就需要花越多时间去处理支持问题。重构将变得更加困难,重新设计架构则变得成本高昂、充满挑战且耗时。可以说,技术的根基会像混凝土一样凝固。其次,与此同时,游戏与技术的创新以及驱动它们的范式仍在继续向前发展。那些拥有稳定资金的小型团队往往是以或大或小的方式推动边界前进的人,随着时间推移,这在领先游戏引擎的起点与产业当前探索的前沿之间造成了巨大的技术鸿沟。所以尽管最受欢迎的引擎的开发者从未停止寻找创新的脚步,但他们拥有如此多客户的事实本身就在对抗这种创新。从金钱的角度来看,他们通过快速发布小更新能赚到更多钱,而大型架构重构需要投入大量时间和金钱,并会给他们和用户带来更多支持挑战。当然,更不用提当你当前的技术风头正劲、又似乎没有明显更好的技术来挑战它时,自满是多么容易了。在找到任何让人信服的理由之前,没有一个电子游戏引擎开发者会愿意承担认真重构现有产品的挑战。

所以我们处于当前引擎技术的哪个发展阶段?

想想现代FPS之王Unreal、中层游戏引擎Unity以及MMO中间件Hero,你便会发现今天的软件技术所能提供的与真正的潜能之间存在着一道巨大的鸿沟。这些引擎的核心架构都已过时多年,需要巨大的重建努力才能充分利用现代硬件和基于云端的计算潜力。再一次地,最受欢迎的游戏引擎Unity需要的重构工作最多。即使这些公司拥有动机并愿意付出成本,做出这些改变所需的时间也将以年计,实际上等于再造一个全新的引擎。这些引擎最强大的资产在于它们都拥有插件式架构,能让开发者去定制并强化功能,但这种做法的作用也仅限于此。最终,对于那些想要创造更让人兴奋的全新体验的开发者来说,这些引擎的核心架构将显得越来越具有限制性。

尽管如此,我们仍有理由乐观!针对我所列出的种种痛点,产业中要求改变的力量正在多条战线上增长。那些在云端开发解决方案、平台架构、线上服务和并行计算等方面拥有全新思维范式的创新公司,已经在默默打造新技术并展现出巨大的潜力。而这些公司通常都是由像你我一样的开发者在运营,他们厌倦了现有方案的低效,并希望帮助各种水平和背景的开发者充分利用有限预算去实现真正具有创造性的理念。

本文为游戏邦/gamerboom.com编译,拒绝任何不保留版权的转发,如需转载请联系:游戏邦

Game Engines: Past, Present and Future

by Kevin Normann

There are some very interesting developments happening in our industry today, particularly around video game engine advances, that I am eager to talk about. But I am not going to. Well, not yet anyway… Instead I will practice some self-discipline, or “Zen” as my college karate instructor once taught me, and first reflect on why I, a 22+ year industry “veteran”, find myself genuinely excited and encouraged by these new developments. To fully understand why this is meaningful, we need to talk a little about the history of video game engine development. I hope my readers will find this personal reflection both familiar and useful in understanding why we should be optimistic about the future of the craft that we love so much.

A brief history

For some of us developers in the video game industry, there have been many opportunities to work on new and interesting games. In the early days, it was all about the games, but as hardware grew in capability and games grew in complexity, more and more time was spent wrestling with the technology behind the games. This led to a phase in the '90s and 2000s where companies with successful products, particularly proud of their technical achievements, would license their engines out to other studios to be used on other products. It was a natural progression and it allowed many game developers to stay focused on the most important part of our jobs: the games.

In the fall of 2004 the industry received word of the Xbox 360 and PS3. Video game hardware took another leap in capability by moving strongly to multi-core processors, requiring game engineers to learn a new term: "concurrency". Instantly, existing video game engines on the market became outdated. In fact, none of the popular engines, or even the latest tech engines, were architected to perform well on the new hardware, let alone push the hardware to its limits. We used to joke at the time that software efficiency was going to be measured in "Giga-NOPS", meaning billions of "no operations" per second, because there was no way to keep all those processors busy all the time. In fact, it took over half a decade from their initial release before any title was published that arguably demonstrated the capability of either of these machines. But while the industry labored all that time to catch up to the hardware performance of the day, the hardware manufacturers pushed performance forward.

Software has fallen behind

Today, we are still seeing hardware capability grow tremendously through the use of more and more processors. We see this happening not just with the main CPU, but also with graphics cards (GPUs) and other processors and co-processors such as dedicated physics chips, video compression hardware, etc. Even phones today commonly have 4 processors and will likely follow the same path as laptops and desktops. All of this processing power has grown through increased processor counts, and yet the software engines used to drive this technology have not kept up.

There are many reasons why software technology has fallen behind, and I think they are rather obvious when you think about it.

1) The primary reason why game engines have not kept up with hardware performance and capability since 2009 is that the engine developers did not feel any pressure from the industry to do so. Why not? Because the industry was busy with an influx of new, casual gamers driven by approachable, personal technology (i.e. mobile devices and the web/Facebook), which shifted the focus to casual/social experiences over visually stunning ones.

2) Additionally, new monetization models such as free-to-play, freemium and in-app currencies demanded a lot of attention from developers, and it turns out the studios that were traditionally good at high-end game engine development were usually some of the slowest to successfully incorporate the new models or adapt to new consumer expectations, which led to many closures and lost revenue, and in turn less high-end innovation.

3) The few developers working on hard-core games (usually those working on sequels to existing popular franchises) were happy to make modest improvements within safe parameters. Less competition meant less reason to push for revolutionary ideas. General hardware speed-ups could be enjoyed without taking on the challenge of approaching the capability of the new hardware. In short, game developers left it to game engine middleware to adapt to new hardware capabilities, but the game engine middleware companies were busy addressing the needs of the growing number of casual game developers.

Overall, the challenges of meeting the needs of a growing number of new and diverse platforms, new and diverse gamers, new monetization and marketing strategies, and new and innovative gameplay styles have kept the software arm of the video game industry quite busy on everything except hardware performance and capability. Meanwhile, over this same time, the hardware manufacturers have continued to push capability forward as they have always done, widening the gap between the actual capability of the hardware and the capability that the software can efficiently tap into. I am sure this phase has been frustrating for hardware manufacturers, who are keenly aware of their accomplishments over this time while most of the rest of us are not.

Where does that leave us?

The primary point I made above is that consumers spent many years exploring games that didn't require software to evolve toward greater technical capability. So where does that leave us? Is it the case that gamers are happy with today's experiences and are no longer interested in seeing technical limits pushed? Have we now produced the games that gamers have always wanted, and will they finally be content to play only modest iterations of what they've already seen? Some might think so if they had only been watching the space for 3 years and seen the growing success of games like Minecraft. But veterans of the industry can tell you that while we have had some interesting and distracting new toys to play with in recent years, the same root force continues to drive change in our industry as it always has. Specifically, the hunger for new experiences will always drive developers to push technology to revolutionize customer experiences.

Today there is a lot of talk about virtual reality and augmented reality, both of which require new levels of processing and new ways of engineering to pull off well. Games are also growing in size and complexity again: the casual "softcore" games of 2009 moved into the "mid-core" games of 2012, and are now bringing all of those new consumers to a level of sophistication that will be harder to distinguish from that of traditional hard-core gamers. The grandma of tomorrow might be as "hard-core" about her games as her granddaughter and grandson are today! Today the video game market is larger than ever, and these gamers are starting to grow hungry for the next "new" thing. Many of them may have started as casual gamers, but are now plugged in and ready to adopt awesome technology along with everyone else.

This is all well and good, however there is still one unfortunate reality…

Video game engine technology is still lagging behind in several key ways. True innovators in the industry are finding that they must either compromise their vision to make it work with the middleware engine options available, or find the money and time to move their own mountains to reach their goals.

Here are some of the ways in which modern video game engines such as Unity, Unreal, and HeroEngine (to name a few) fall short…

1) Video cards are far more capable than video game engines can tap into, because the software paradigms these engines were built on are now many years (perhaps a decade) out of date. Modernizing the core architecture of existing engines would require a huge and costly re-architecture. Further, this kind of engineering requires specialized, capable "out-of-the-box" thinkers who are uncommon even in this tech-driven industry.

2) The growing power of cloud computing and cloud storage, combined with incredible data bandwidth, is pushing the boundaries of what is possible to seemingly limitless levels. Building games to take advantage of this power requires an entirely new level of organization, with a new level of tool chain, art and technology pipeline, to be able to construct amazingly intricate worlds and experiences. If the tools and processes aren't robust enough, complex projects of the future will collapse under their own weight. This has happened before at moments of big change. As an example of how challenging (and dangerous) these migrations can be, consider what happened to Midway when they made a serious effort to move into cross-platform "open world" games using the best of the "modern" technology of 2008. The effort was a big factor in killing the company. I used to teach a class to new hires at EA where I graphed the power of the PS1 on the board, measuring processor power versus amount of RAM, to produce a small dot not much larger than a pixel on the screen; then I placed a similar graph of PS2 power next to it that was about one inch square; then finally I showed the power of the PS3, which filled three 8.5×11″ pages. In this example I explained how power in the machines tends to grow in two dimensions, processor power versus RAM, but that power is spent in 6 dimensions on the game side, namely world size, world detail, character count, character detail, and character intelligence, with the 6th dimension being the aforementioned Giga-NOPS. The question "what do you do when you can do anything?" is one that video game developers have to contend with more and more, and it will only get more challenging to answer in the years to come. Game engines must provide far more help for managing complexity than ever before. Modern game development tools are not yet up to the challenges before us.

3) Modern engines have made first efforts to improve the way teams work together, with varying levels of success, but they fall far short of where they should be for the large teams working on larger projects from remote locations, on technology that will live and operate in the cloud. Talk to developers and you will hear stories of their challenges, and complaints about lost work and effort because their tools restrict them from being as productive as they feel they could be. Even when developer tools and engines provide for group collaboration, they are often full of gotchas that allow one developer to stomp on the work of another. For games to make forward progress requires a tremendous amount of middle management and thought cycles spent protecting existing progress, which takes away from individual productivity. If this is true for the games of today, how are these engines going to fare developing the games (or should I call them gaming ecosystems) of tomorrow? The problem is even worse than this, because the most popular modern engine, Unity3D, is particularly bad in this department. It was propelled to number one status by allowing small teams to develop cross-platform easily and cheaply while targeting the web at the same time, but for the next generation of innovation (VR, AR, and distributed cloud-based games) Unity's core architecture will fall short of meeting developers' needs. It will be relegated to playing a smaller role in the development of the same small games it has been used for in the past.

Why can’t modern video game engines adapt?

Can't existing engines adapt to changes in market pressures such as these? The short answer is "ultimately, no", and the long answer is that of course they will try, but there are other forces acting on them that will hold them back. Here is where history is our best teacher. Video game engines have a natural life cycle that we've seen over and over. The ones that succeed do so because they offer solutions to pain points that developers are facing at the time. If the pain points are particularly challenging, then the first engine to answer the challenge usually succeeds in big ways (such as Unity solving the pain that most game developers at the time faced in trying to move their games to the web). However, eventually two things happen. First, the more popular an engine is, the more teams license it for their games. More game teams using an engine causes the engine's developer to spend more and more time dealing with support issues. Refactoring becomes much harder, and re-architecting becomes costly, challenging, and time-consuming. In a real way, the foundation of the technology solidifies into concrete. Second, innovations in games, technologies, and the paradigms driving them continue to move forward. Smaller teams with solid funding are most often the ones to push the boundaries forward in small and sometimes big ways, which over time creates quite a large technology gap between where a leading game engine started out and where the industry is currently exploring. So, while the developers of the most popular engines never stop looking for ways to innovate and provide for their clients, the fact that they have so many clients works against that innovation. On the money side, they can make more money by releasing small updates quickly, while large re-architecting efforts cost a lot of time and money and bring increased support challenges for them and their customers. Not to mention how easy it is to become complacent when your current technology is riding high and there doesn't seem to be any significantly better technology ready to challenge it. Until they have a compelling reason, no video game engine developer will take on the challenge of seriously re-architecting their products.

So where do we stand with current engine technology?

Just thinking in terms of the modern FPS king Unreal, the mid-level game engine Unity, and the MMO middleware HeroEngine, it is clear that there is a big gap between what the software technology of today offers and what the potential really is. The core architecture within each of these engines is now years out of date and would require a tremendous effort to rebuild to take full advantage of modern hardware and cloud-based computing. And, again, the most popular engine for most games, Unity, requires the greatest amount of re-architecture of them all. Even if these companies had the motivation and were willing to pay the cost, the time to make such changes would be measured in years, effectively generating an almost entirely new engine. Their greatest asset is that they all have plug-in architectures that allow developers to customize and enhance features, but this can only get you so far. Ultimately their core architectures will continue to feel more and more restrictive to developers seeking to produce new and exciting experiences.

All of this said, there is reason for optimism! Industry forces for change are growing on many fronts in response to the increasing pains I’ve outlined. Innovative emerging companies with new paradigms of thought around cloud based development solutions, platform architectures, live services, and parallel computing have been quietly building technology and are showing great promise. These companies are being run by developers just like you and me who are tired of the inefficiencies of the existing solutions and are intent on helping developers of all levels and backgrounds get the most out of limited budgets to realize their creative ideas. I will go into more detail on these innovations in follow up posts. Please look for them on Gamasutra or on my company blog page at www.midnightstudios.net. (source:gamasutra)

 

