
Published: 2014-07-07 14:47:53


This article was translated and compiled by 游戏邦/gamerboom.com. Reproduction without retaining this copyright notice is not permitted; for reprint permission, please contact gamerboom.

Five Considerations for Creating an Optimal Online Gaming Infrastructure

by Adam Weissmuller

The online, cloud-based gaming market has seen massive growth over the last five years, with the proliferation of social sites, gaming genres, mobile devices and high-speed broadband connections fueling the prediction that the market will reach $25.3 billion this year, a compound annual growth rate of 13.9% since 2009.

While game publishers tend to focus on the entertainment value and “stickiness” of a game, even the most popular and addictive titles can suffer significant player loss if the backend infrastructure can’t deliver the performance needed to provide an optimal online gaming experience. Making matters even more complex, game publishers must implement a high-performing backend infrastructure at a price that makes business sense. If publishers aren’t careful, they can easily end up implementing a backend infrastructure that exceeds their allocated infrastructure budgets and hurts their bottom lines.

Keeping in mind the need to balance performance requirements and costs, both start-up and well-established enterprise game publishers should take into consideration these five infrastructure tips when building and deploying online games:

Don’t Sacrifice Performance for Increased Speed-to-Market

It’s been said in the online gaming market that speed equals revenue – meaning the faster you deliver your game to market, the faster you generate revenue and the faster that revenue can be reinvested in the development of features that can grow revenue yet again. In order to move through the test, dev and launch stages as quickly as possible, online game developers and publishers must go beyond implementing rapid deployment strategies and also take a close look at the most effective and efficient infrastructure options.

Today, publishers interested in deploying an infrastructure that lets them bring games to market as quickly as possible turn to on-demand public cloud computing services. Game publishers have traditionally used virtual public clouds, which involve “virtualizing” a server’s resources and sharing them across multiple companies and applications. While virtualized public clouds can be a cost-effective option for start-up publishers that want to quickly launch their game and easily scale capacity as demand requires, they can introduce significant performance issues associated with sharing these virtualized server resources.

For game publishers seeking an on-demand infrastructure that features better performance than virtualized public clouds typically offer, bare-metal public clouds are a viable option. Bare-metal clouds dedicate an entire server’s physical resources to the user, and are also readily available with online provisioning so new game infrastructure can be up and running with an API call or a few clicks of the mouse within a web interface.
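
A minimal sketch of that kind of API-driven provisioning is shown below in Python. The endpoint URL, token, and request fields are hypothetical placeholders rather than any real provider's API; the point is simply that a dedicated server can be requested programmatically instead of being purchased and racked by hand.

```python
# Hypothetical sketch: request a bare-metal (dedicated) server through a
# provider's REST API. Endpoint, token, and field names are placeholders.
import requests

API_URL = "https://api.example-cloud.com/v1/bare-metal/servers"  # hypothetical
API_TOKEN = "YOUR_API_TOKEN"

def provision_bare_metal(hostname: str, region: str, plan: str) -> dict:
    """Request a dedicated server and return the provider's response metadata."""
    payload = {
        "hostname": hostname,
        "region": region,     # pick a data center close to your players
        "plan": plan,         # CPU/RAM/disk bundle offered by the provider
        "billing": "hourly",  # on-demand billing keeps time-to-market short
    }
    resp = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()        # typically includes a server ID and build status

if __name__ == "__main__":
    server = provision_bare_metal("game-match-01", "eu-west", "8c-32gb-nvme")
    print("Provisioning started:", server)
```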

Achieve Scalability Without Overbuilding Infrastructure

Publishers often experience a large influx of players when launching their AAA online gaming title. Conversely, Daily Active Users (DAUs) can dramatically decline when the curiosity factor subsides and tourists leave. In the past, overbuilding infrastructure by as much as 25% was the “best practice” for initial launch spike preparations. Competition has since intensified, and game adoption rates have become harder to predict.

Public cloud infrastructure can meet fluctuating demand because of its ability to be easily turned on or off with minimal ramp-up time. For games that have short bursts of play, temporary match play or a limited number of concurrent users, purchasing a new server, waiting for it to arrive, configuring it and racking it in the data center is simply not an option.

For games with persistent worlds where thousands of players interact at the same time, scalability can be achieved through hybrid environments giving users the best of multiple infrastructure services. For example, a game’s leader board, scoring information and other customer data could be located in a colocation or a managed hosting environment, while match play occurs in a high performing bare-metal public cloud. Hybridization also saves on costs versus trying to host a large title or social game relying solely on one type of infrastructure.
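
As a rough illustration of that hybrid split, the following Python sketch maps each workload to the environment that might host it. The service names, environment labels and endpoints are invented for the example; a real deployment map would come from your own configuration management.

```python
# Illustrative hybrid deployment map: persistent, read-heavy data sits in a
# colocation/managed-hosting environment while latency-critical match play
# runs on bare-metal public cloud. Names and endpoints are placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class Placement:
    environment: str  # "colocation", "managed-hosting", or "bare-metal-cloud"
    endpoint: str     # where the service is reachable

DEPLOYMENT_MAP = {
    "leaderboard": Placement("managed-hosting",  "leaderboard.internal.example:6379"),
    "player_data": Placement("colocation",       "playerdb.internal.example:5432"),
    "match_play":  Placement("bare-metal-cloud", "match.edge.example:7777"),
}

def endpoint_for(service: str) -> str:
    """Resolve a service name to the endpoint of the environment hosting it."""
    return DEPLOYMENT_MAP[service].endpoint

if __name__ == "__main__":
    for name, placement in DEPLOYMENT_MAP.items():
        print(f"{name:12s} -> {placement.environment:16s} {placement.endpoint}")
```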

Diversify Infrastructure to Minimize Downtime

With the aim of minimizing gaming downtime, online game publishers are opting to build out their infrastructure with resiliency and high availability in mind.

Netflix and others have been vocal about their intention to design their applications and infrastructure deployments to withstand failure, and one of the tactics for building a highly resilient solution involves horizontal scaling. With this approach, multiple servers that are functionally the same are set up to run together using global and local load balancers to route traffic. In the event that one server fails, another automatically picks up the load to prevent downtime. Diversifying these loads across data centers and geographies provides more layers of protection against systemic, large-scale outages.
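
The load-balancing and failover behavior described above can be reduced to a toy example: several functionally identical servers sit behind a balancer that health-checks them and routes traffic only to the ones that respond. Production setups use dedicated global and local load balancers rather than application code, and the hostnames below are made up.

```python
# Toy illustration of horizontal scaling with failover: route players only to
# servers that pass a health check. Hostnames are made up; real deployments
# would rely on global/local load balancers (DNS, anycast, HAProxy, etc.).
import itertools
import socket

SERVERS = ["game-a.example:7777", "game-b.example:7777", "game-c.example:7777"]
_pool = itertools.cycle(SERVERS)  # simple round-robin over the pool

def is_healthy(address: str, timeout: float = 0.5) -> bool:
    """Treat a server as healthy if its game port accepts a TCP connection."""
    host, port = address.rsplit(":", 1)
    try:
        with socket.create_connection((host, int(port)), timeout=timeout):
            return True
    except OSError:
        return False

def pick_server(attempts: int = len(SERVERS)) -> str:
    """Return the next healthy server, skipping any that fail the check."""
    for _ in range(attempts):
        candidate = next(_pool)
        if is_healthy(candidate):
            return candidate
    raise RuntimeError("no healthy servers available")

if __name__ == "__main__":
    print("Routing player to:", pick_server())
```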

Games that are more “fault sensitive,” and that have lower latency requirements such as certain MMOG genres, are often deployed in managed hosting, private cloud or colocation environments. Savvy publishers of “fault sensitive” games design infrastructure with high availability in mind, and typically use higher-end equipment and custom configurations that support faster access to storage and more failover options because they were designed to support the specific application. High availability infrastructure and service level guarantees can be constructed for global IP routing and switching, back channel device networking, data center or rack-level power distribution, as well as the server and storage hardware itself.

Don’t Let Latency Destroy Your User Base

Lag is consistently cited as one of the biggest areas of concern for online game companies because of its potential ongoing negative impact on subscriber churn and in-game transaction conversion rates. Latency issues can generally be divided into server-side and network-side components, and understanding where your game falls within the latency-sensitivity spectrum is a key component of infrastructure planning. Once you establish these thresholds, applying the right technologies to maintain the user experience becomes much clearer.
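
One way to act on that advice is to keep an explicit latency budget per genre and check the measured network and server components against it. The Python sketch below does exactly that; the threshold numbers are invented for illustration and should be replaced with figures from your own telemetry.

```python
# Sketch of a per-genre latency budget split into network and server parts.
# Threshold values are illustrative only -- derive real ones from telemetry.
GENRE_BUDGET_MS = {
    "fps":        {"network": 50,  "server": 30},
    "mmog":       {"network": 100, "server": 80},
    "turn_based": {"network": 250, "server": 250},
}

def diagnose(genre: str, network_ms: float, server_ms: float) -> str:
    """Report which side of the stack, if any, is blowing the latency budget."""
    budget = GENRE_BUDGET_MS[genre]
    over = []
    if network_ms > budget["network"]:
        over.append(f"network ({network_ms:.0f}ms > {budget['network']}ms)")
    if server_ms > budget["server"]:
        over.append(f"server ({server_ms:.0f}ms > {budget['server']}ms)")
    return "within budget" if not over else "over budget: " + ", ".join(over)

if __name__ == "__main__":
    # e.g. numbers gathered from client telemetry and server timing logs
    print(diagnose("fps", network_ms=72, server_ms=18))   # network-side problem
    print(diagnose("mmog", network_ms=64, server_ms=95))  # server-side problem
```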

Some of the infrastructure solutions used to minimize server-side latency include the use of bare-metal physical hardware and customized servers with specialized components like high-I/O disks and dedicated flash storage. On the network side, tech ops teams are overriding border gateway protocol (BGP) routing decisions through multi-carrier optimization, edge caching for static file delivery and other acceleration techniques.
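
A very simplified picture of the multi-carrier idea: probe the same service as reached through different carriers or points of presence, and steer new sessions toward the fastest path. Real systems make this decision at the BGP or DNS layer rather than in application code, and the hostnames in this sketch are placeholders.

```python
# Simplified multi-carrier probe: time a TCP connect to the same service via
# different carrier/POP hostnames and pick the fastest. Hostnames are fake;
# production systems steer traffic at the BGP/DNS layer instead.
import socket
import time

CARRIER_ENDPOINTS = {
    "carrier_a": "pop-a.example.net",
    "carrier_b": "pop-b.example.net",
    "carrier_c": "pop-c.example.net",
}
PORT = 443

def connect_time_ms(host: str, port: int = PORT, timeout: float = 1.0) -> float:
    """Return TCP connect time in milliseconds, or infinity if the probe fails."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return float("inf")

def best_carrier() -> str:
    """Pick the carrier/POP with the lowest measured connect time."""
    timings = {name: connect_time_ms(host) for name, host in CARRIER_ENDPOINTS.items()}
    return min(timings, key=timings.get)

if __name__ == "__main__":
    print("Steering new sessions via:", best_carrier())
```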

Consider Always-On Infrastructure for Stable Workloads

After heavily investing in other non-infrastructure components, there may be little capital left to support large upfront infrastructure purchases. Smaller gaming studios with limited funds, in particular, must look to alternative methods rather than starting out with a surplus of infrastructure they may not use or be able to pay off. Similarly, established game publishers need to create baseline demand before committing costly resources. The wrong infrastructure choice can be lethal. More than a few gaming companies have over-invested in infrastructure only to face bankruptcy and restructuring when demand failed to meet expectations.

While game publishers can turn to on-demand cloud infrastructure platforms to minimize initial investments for new games and development projects, the economics of on-demand offerings typically break down when companies build out large-scale environments to support more stable workloads and must continue to purchase more “instances” to support the volume of gamers. A long-term hosting or colocation agreement can, over a game lifecycle, make sense if demand levels are relatively stable. When traffic and usage are consistent and predictable, entering into longer-term infrastructure leases and/or purchasing equipment can increase control and lower costs.
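
The break-even arithmetic behind that choice is worth running with your own numbers. The sketch below compares hypothetical on-demand and leased costs at different utilization levels; all prices are invented for illustration.

```python
# Back-of-the-envelope cost comparison: on-demand instances vs. a long-term
# lease for steady workloads. All prices are invented for illustration.
ON_DEMAND_PER_SERVER_HOUR = 0.90    # USD, hypothetical hourly on-demand rate
LEASED_PER_SERVER_MONTH   = 350.00  # USD, hypothetical monthly lease per server
HOURS_PER_MONTH           = 730

def monthly_on_demand_cost(servers: int, utilization: float) -> float:
    """Cost if capacity is only paid for while it is actually running."""
    return servers * ON_DEMAND_PER_SERVER_HOUR * HOURS_PER_MONTH * utilization

def monthly_lease_cost(servers: int) -> float:
    """Cost of keeping the same capacity on a long-term lease, always on."""
    return servers * LEASED_PER_SERVER_MONTH

if __name__ == "__main__":
    servers = 40
    for utilization in (0.25, 0.50, 0.75, 1.00):
        on_demand = monthly_on_demand_cost(servers, utilization)
        lease = monthly_lease_cost(servers)
        better = "long-term lease" if lease < on_demand else "on-demand"
        print(f"{utilization:4.0%} utilization: on-demand ${on_demand:,.0f} "
              f"vs lease ${lease:,.0f} -> {better}")
```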

The online game industry is poised to grow as much if not more over the next five years than it has since 2009, and an optimal gaming infrastructure is critical to helping game publishers maintain a competitive advantage. Publishers that deliver a flawless user experience, with infrastructure costs that make business sense, will be the ones that can ultimately attract and maintain a vibrant gamer community. (source: gamasutra)

 

