
Solving Data Transmission Problems in Mobile Multiplayer Games with Long Polling

Published: 2012-01-30 14:06:27

Author: Bernt Habermeier

Many excellent developers are used to building games for the desktop environment, and when they implement mobile apps they often overlook network reliability. When you find that your app's behavior suffers under poor network conditions, you can shrug and blame the carrier, or you can fix the problem yourself. In this article we'll walk through how to fix such problems and discuss how to make multiplayer games run well even where cellular network coverage is spotty.

I recently wrote the cellular-network-friendly networking code for Moblyng's Social Poker Live and found that some of the old methods are still worth borrowing. I have several years of experience building games over questionable network connections, going back to 28.8k and 56k modems.

Social Poker Live

Before we dive into the problems and how to solve them, I'd like to quickly review the game, so you can see that the solution I'm about to discuss has been used in a feature-rich, high-quality title.

Social Poker Live is a multiplayer poker game that can be played in real time over a network connection on iOS and Android mobile devices (phones and tablets), and it is resilient to poor network conditions.

For example, you can play the game on Caltrain going from San Francisco to Redwood City, and it will reconnect after the train passes through a tunnel.

Of course, the game also runs well on a desktop computer, but in this article we'll focus on the mobile experience.

Below are some screenshots of the user experience, from the loading screen on the far left to the winning screen on the far right. One feature is that the game discovers which of your Facebook poker friends are online and asks whether you'd like to join them.

Social Poker Live (from Gamasutra)

I recommend loading the game in Chrome, opening the developer tools, clicking the Network tab, and then going back to the game and sitting down at a table. You'll see the data passed back and forth between the client and the server, including the packet identifiers and acknowledgements we'll discuss below.

Developer tools (from Gamasutra)

The Problem

The common perception is that running real-time multiplayer games on mobile devices is difficult. Because this type of game needs second-by-second data synchronization across a group of players, it is more involved than a typical social game, but it is by no means impossible. Real-time multiplayer games tend to face two problems: scalability and cellular network connectivity. This article focuses on the second one, that is, getting reasonable connectivity over a 3G cellular network using HTML and JavaScript.

Updates

Traditional web technologies are decent at issuing requests to servers. When the client knows it needs some piece of information, it hits the server with an HTTP request, and the server responds as quickly as it can.

In a multiplayer game, the game client knows it wants game updates as quickly as possible, but it doesn't know when to ask for that information. That doesn't fit a model where the client has to initiate the request for data. When should the client ask for an update?

You could poll aggressively, hitting the server with a request once a second. Your operations manager would certainly not be happy, because that's inefficient for both the client and the server. And you still wouldn't get sub-second response times, especially over a cellular network.

WebSockets

I wish I could say I implemented the client/server communication with WebSockets.

WebSockets make a lot of sense because they provide exactly the functionality you need to implement a responsive multiplayer game: both the server and the client can read and write messages whenever they want.

More importantly, WebSockets are much more efficient, because they don't send unnecessary HTTP headers with every message. That's a huge benefit, especially for mobile applications that have to run over a cellular network. Still not convinced? Take a look at http://websocket.org/quantum.html.

Unfortunately, we are not using WebSockets, because Android does not support them.

Lookie, the Long Poll

What's neat about the long poll is that it turns the typical client-initiated request on its head. At its core is an understanding between client and server that breaks with the tradition of servicing a client request as quickly as possible.

Instead, the client expects the server to hold on to the request and keep the underlying TCP connection open. As soon as the server has information it wants to share with the client, it sends the data over and terminates the connection. This is reasonably efficient, because all the pain of setting up the HTTP request is already behind us.

As soon as the client receives the update, it sends the next request to the server, again expecting the server to hold on to it until there is data worth sending back.

I call this kind of long-poll communication "lookie" requests. The system would work just as well without such a cute name, but as long as it runs effectively, that's what matters.
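
To make the lookie flow concrete, here is a minimal client-side sketch of such a long-poll loop in JavaScript. The /lookie endpoint, the JSON packet format, and the handleMessage helper are assumptions made for illustration rather than the actual Social Poker Live code; the acknowledgement ID the loop sends along is explained further below.

```javascript
// Minimal sketch of a client-side "lookie" (long-poll) loop.
// Endpoint name, packet format, and handleMessage are illustrative assumptions.
var LOOKIE_URL = '/lookie';      // hypothetical long-poll endpoint
var LOOKIE_TIMEOUT_MS = 10000;   // give up after 10 seconds and poll again
var lastAckId = -1;              // -1 means "no packets received yet"

function startLookie() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', LOOKIE_URL + '?ack=' + lastAckId, true);
  xhr.timeout = LOOKIE_TIMEOUT_MS;

  xhr.onload = function () {
    if (xhr.status === 200 && xhr.responseText) {
      // Expected shape: [{ "seq": 0, "data": {...} }, { "seq": 1, "data": {...} }]
      var packets = JSON.parse(xhr.responseText);
      packets.sort(function (a, b) { return a.seq - b.seq; });   // apply in order
      packets.forEach(function (p) {
        lastAckId = Math.max(lastAckId, p.seq);
        handleMessage(p.data);   // hand the update off to core client code
      });
    }
    startLookie();   // immediately issue the next poll, acking what we received
  };

  // On a stall, re-issue the poll with the same ack id; the server still holds
  // any unacknowledged packets and will resend them.
  xhr.ontimeout = startLookie;
  xhr.onerror = function () { setTimeout(startLookie, 1000); };

  xhr.send();
}

function handleMessage(data) {
  // Core client logic would react to the update here.
  console.log('update', data);
}

startLookie();
```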

Actions

The long poll is not the only Ajax GET request the client makes. User-initiated actions need to be sent to the server right away, and because the client has no control over when it will issue the next long-poll request, it can't use that channel to communicate user-initiated actions to the server. So we need to implement another Ajax communication channel for asynchronous, one-off communications that originate on the client and go to the server. These are ordinary Ajax requests, and the client can expect a quick response from the server.

I call this kind of communication "action" requests.

Network Conditions

Cellular networks are notoriously spotty. I'm sure you've been in the situation where you were convinced the connection was fine, only to realize you'd been talking to yourself for a while. When that happens to me, I feel a bit sheepish. We can expect the client/server communication code on a mobile device to be affected in just the same way.

The Fix

Besides switching carriers when you run into frequent data drops, you can also solve the problem in code: under poor conditions, the code retries sending data that didn't arrive. You can catch errors such as timeouts when making an Ajax request, but what you can't know is whether the HTTP request died on its way to the server or on its way back to the client.

Specifically, here are the modes of communication failure and the problems they cause when you try to resend a request or data.

Table (from Gamasutra)

For action requests, if the client-to-server leg fails, resending the same action causes no problems; it's as if we had never sent the data. But if the server receives and processes the action request and the response from server to client fails, then we're in trouble when the client retries the action: the server will process the same action a second time, which will likely lead to unexpected consequences later. Our code needs to distinguish between these two cases.

For lookie requests, if we're not careful we may end up sending the client duplicate updates, or the client may miss updates if the server assumes the client has already received data it sent out. Beyond lost or duplicated data, we also need to make sure updates reach the client in the correct order; get this wrong and the data becomes a mess.

Steps

Above, we assumed the cellular connection is going to be spotty. From the client's perspective, that means an Ajax request may time out and need to be retried.

From both the client's and the server's perspective, it means assuming that data we try to send may never arrive, so we had better keep it around and resend it when needed.

The first thing we need the server to do is keep the data in memory and label each logical data message with a sequence number. Once we have sequence numbers, the client and server have an easy way to talk about which data each side has and which data needs to be resent.

The usual approach is for the server to send data tagged with sequence numbers to the client, and for the client to acknowledge those sequence numbers back to the server as proof of receipt.

Sequence numbers are meta-information and should only matter to the network protocol layer of your code. In the diagrams below, each packet carries its own sequence ID.

Message data (from Gamasutra)

The server has a message it wants to send to the client. In this example the message data contains information about a player. The server's network protocol adds a sequence ID to the message as meta-information. That sequence number is relevant only to the protocol layer; no other part of the server or client should care about it.

Message data (from Gamasutra)

The core server-side game logic can make high-level calls to update a client. Those methods end up calling a networking routine that affixes a sequence number to each logical message and pushes it onto an outgoing message buffer.
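
As a rough illustration of that layering, here is a minimal server-side sketch in plain JavaScript. The channel object, function names, and message shapes are assumptions for the sake of the example; this is not Moblyng's actual server code.

```javascript
// Minimal sketch: the networking layer tags each logical message with a
// sequence number and buffers it per client until it is acknowledged.
function createClientChannel() {
  return {
    nextSeq: 0,     // next sequence number to assign
    outgoing: []    // unacknowledged packets kept in memory
  };
}

// Core game logic calls high-level helpers (sendPlayerInfo, sendInventory, ...)
// which end up here; only this layer ever sees the sequence number.
function queueMessage(channel, messageData) {
  channel.outgoing.push({ seq: channel.nextSeq++, data: messageData });
}

// Example: two logical messages become packets 0 and 1, sitting in memory
// until the client's next long poll picks them up.
var channel = createClientChannel();
queueMessage(channel, { type: 'playerInfo', name: 'Alice', chips: 1500 });
queueMessage(channel, { type: 'tableState', pot: 300 });
```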

Next, we'll build up an example of the entire chain of client and server events.

Server-client (from Gamasutra)

Step 1. Suppose the core server logic wants to send various messages to a player, and suppose it makes two calls that place data onto the outgoing message queue. The networking code annotates the data with sequence numbers 0 and 1.

The data is now ready to be picked up by a long poll from the client.

Server-client (from Gamasutra)

Step 2. At some point the client starts its long-poll request and sends an ack ID of -1 along with it (meaning it has not received any data yet).

The server responds by sending everything it has for that client: the packets with sequence numbers 0 and 1.

Server-client (from Gamasutra)

Step 3. The client receives the packets with sequence numbers 0 and 1 and immediately continues its long poll, but this time it sends an ack ID of 1 (confirming that it has received packets 0 and 1).

When the client receives the messages, it can assemble them in order.

Server-client (from Gamasutra)

Step 4. The networking code on the server gets the next long-poll message from the client. This time it carries an ack ID of 1, meaning the client acknowledges having received everything up to and including packet 1.

So the server deletes packets 0 and 1 from memory, and the whole chain of events can continue.

Putting all of the steps above together, the whole process looks like the diagram below:

Server-client (from Gamasutra)

Because the server assigns a unique, increasing sequence number to every outgoing message, the client can acknowledge receipt of those messages back to the server. This acknowledgement lets the server delete old data, and if something goes wrong and the client never receives a message, it lets the server resend it. The diagram above shows a run where nothing went wrong.
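
Continuing the earlier server-side sketch, here is one hedged illustration of how a lookie request carrying an ack ID might be handled: drop everything the client has acknowledged, answer immediately if anything remains to send (or resend), and otherwise hold the request open. Again, the names and structure are assumptions, not the actual implementation.

```javascript
var LOOKIE_HOLD_MS = 10000;   // how long the server may sit on a held request

function handleLookie(channel, ackId, respond) {
  // Everything up to and including ackId has been received; drop it from memory.
  channel.outgoing = channel.outgoing.filter(function (p) {
    return p.seq > ackId;
  });

  // If anything remains (new data, or unacked data to resend), answer now.
  if (channel.outgoing.length > 0) {
    respond(channel.outgoing);
    return;
  }

  // Otherwise hold the request until data arrives or the hold period expires.
  channel.pendingRespond = respond;
  setTimeout(function () {
    if (channel.pendingRespond === respond) {
      channel.pendingRespond = null;
      respond([]);   // empty response; the client simply polls again
    }
  }, LOOKIE_HOLD_MS);
}

// Called by queueMessage when new data shows up while a request is being held.
function flushPending(channel) {
  if (channel.pendingRespond && channel.outgoing.length > 0) {
    var respond = channel.pendingRespond;
    channel.pendingRespond = null;
    respond(channel.outgoing);
  }
}
```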

What Could Go Wrong?

The whole point of this scheme is to be able to retry when something bad happens. Let's look at a few failure cases.

Server-client (from Gamasutra)

The figure above shows what happens when data coming from the server stalls out. This is most likely not just a single TCP/IP packet going astray, which is why I call it a "stall".

TCP/IP is sophisticated and can usually take care of itself, but anyone with enough experience has run into the situation where it can't: a site stops loading (or stops responding). How well your application copes with this depends on your code.

In Social Poker Live we currently use 10-second long polls. That means the client waits 10 seconds for a response from the server; if it gets no data, it abandons the request and sends the next poll.

That wait time may not be right; we are still fine-tuning it. The optimal value is not the same for every application. For yours, 15 seconds might be better, or maybe 5.

Choosing the best long-poll duration is hard, but here is what to weigh: as the long-poll duration increases, it takes longer for the client to re-request data that may have been lost, so the application will feel more sluggish when a TCP/IP stall like the one above occurs. As the duration decreases, your server has to handle more requests from clients that constantly open new connections asking for updates, and the client is more likely to re-request data that is already in transit.

So 10 seconds felt about right for poker. It is roughly how long a player seems willing to wait for the browser to recover before stabbing the refresh button in frustration. The right value may also depend on the quality of the connection.

Server-client (from Gamasutra)

The figure above shows the case where the long-poll request itself stalls: the server never receives the request carrying the ack ID. Normally, after the maximum long-poll duration the client re-issues the request with the same ack ID (a value of 1), and the server can then delete the corresponding messages. But although the client already has packets 0 and 1, the stall means the server has not yet seen the next long-poll request, so it must keep packets 0 and 1 in memory until that request, with its ack ID of 1, finally arrives and it is safe to delete them.

Client-to-Server Communication Issues

Above we looked at problems with the server-to-client flow of data. What about data flowing in the opposite direction? Because the Social Poker Live client typically has only one outstanding client-initiated request at a time, I did not need to implement a formal seq/ack scheme for the game's actions.

However, even though only one message is outstanding at a time, I still had to handle the possibility of an action being issued more than once. To avoid this, the client attaches a unique identifier to every action it sends. The server first checks that it has not seen that action before; if it has already responded to it, it simply returns the pre-cached result.
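
A minimal sketch of that idea, assuming a simple in-memory cache of results; the identifier scheme and helper names are made up for illustration.

```javascript
// Client side: tag every action with an identifier that is unique enough
// for this session, so a retried action can be recognized by the server.
var actionCounter = 0;
function makeActionId() {
  return Date.now() + '-' + (actionCounter++);
}

// Server side: remember the result of every action already processed.
var seenActions = {};   // actionId -> cached result

function handleAction(actionId, action, processAction) {
  if (seenActions.hasOwnProperty(actionId)) {
    // The client retried because it never saw our response; return the
    // pre-cached result instead of processing the same action twice.
    return seenActions[actionId];
  }
  var result = processAction(action);   // e.g. apply the bet or fold
  seenActions[actionId] = result;
  return result;
}
```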

This solution won't work for every game. You have to judge how chatty your client actions are; if you send a lot of actions to the server, you may also need a seq/ack solution for client-to-server data communication.

Conclusion

There may be better ways to deal with stalled TCP/IP connections, but if you want an application that is resilient to poor network conditions, simply trusting TCP/IP is not enough. Your TCP/IP connections will stall out on cellular networks, your app will freeze, and the user experience will suffer. If you want your multiplayer game to run well even where cellular coverage is poor, you have to deal with these problems.

I chose to use long polling, but that does not mean that switching to WebSockets or some other higher-level communications framework frees you from worrying about TCP/IP stalls. WebSockets still rely on TCP/IP; although many things would be cleaner and carry much less overhead, you can still get stalled out. (Translated by gamerboom.com. Reproduction without attribution is prohibited; for reprint permission please contact Gamerboom.)

Building Games that Run on Poor Mobile Connections

Bernt Habermeier

In this article we’ll walk you through how you can make your mobile web game resilient to poor network conditions. Many excellent developers are used to developing games for the desktop environment, but often don’t think about network reliability when they implement their apps. When you find that your app freezes up over a cellular network you can shrug and blame the carrier, or you can roll up your sleeves and fix the problem. We’ll teach you how to fix such problems and discuss the creation of multiplayer games that play great even in areas where the user’s cellular coverage is not great.

It turns out that some of the old methods from way back came in handy when I recently wrote the cellular-network-friendly networking code for Moblyng’s Social Poker Live. I have a few years experience working on games over questionable network connections, including 28.8k – 56k modems.

Try It Before You Buy It

Before we dive into what the issues are and how to solve them, I'd like to quickly review the game so you can see that what I'm talking about is a solution we used in a feature-rich, live, production-quality title.

Social Poker Live is a multiplayer poker game that can be played in real-time, over any internet connection on iOS and Android mobile devices (phones and tablets), and it’s resilient to spotty network conditions.

For example, you can play the game on Caltrain going from San Francisco to Redwood City, and will recover going through tunnels and the like.

Of course, the game also runs well on a desktop, but in this article we’ll focus on the mobile experience. Try out Moblyng’s Social Poker Live on your iOS or Android phone at http://poker.moblyng.com.

Here are some screenshots of the typical user experience, starting with the loading screen, all the way to the winning screen. One neat feature is that we’ll discover which of your Facebook poker pals are online, and ask you if you’d like to join their table. Here is what that looks like:

I’d like to encourage you to load the game in Chrome, bring up the developer tools, look at the Network tab, and go back to the game to sit at a table. You’ll see the data passed back and forth between the client and the server, including packet identifiers and acknowledgements that we’ll talk about in the rest of the article.

Problems? What Problems?

The common perception is that it's difficult to implement real-time multiplayer games on mobile devices. It's true that this type of game is more involved than implementing a typical social game, where the game doesn't require down-to-the-second data synchronization across a group of players, but it's not terribly difficult either. There tend to be two problems with multiplayer real-time games: scalability and networking over a cellular network. This article talks about the latter: how to get reasonable networking performance even over spotty 3G cellular networks using HTML and JavaScript.

Gimme Updates!

Traditional web technologies are decent with issuing requests to servers. The client knows when it wants some piece of information and hits the server with an HTTP request, and as a result the server scrambles to offer up the request as quickly as it can.

In multiplayer games the game client knows it wants game updates as quickly as possible, but it doesn't know when to ask for such information. This doesn't fit well with a model where the client has to initiate the request for data. When do you ask for an update?

You could just hit up the server once a second with an active poll request. Your operations manager could also decide to hit you once a second over your noggin because that's not efficient for either the client or the server. All hitting aside, you won't get sub-second response times, especially over a cellular network.

Enter the WebSocket

I wish I could say I implemented the client/server communication with WebSockets.

WebSockets just make sense because they provide the exact functionality that you want when implementing a responsive multiplayer game; both the server and the client can read/write information whenever they want.

More importantly, WebSockets are also much more efficient because they do not send all of the unnecessary HTTP headers with every message. This is a huge benefit, especially for mobile applications that are trying to run over a cellular network. Still not convinced? Take a look at http://websocket.org/quantum.html.

Unfortunately, I can’t say that we’re using WebSockets because Android doesn’t support Web Sockets. As I lament the lack of Web Sockets on Android, let’s move on…

Lookie, the Long Poll

What’s neat about the long poll is that it turns the notion of the typical client-initiated request on its head. There are many variations and tricks people play with long poll implementations, but at the core of the long poll is a simple understanding between the client and server to break with tradition of servicing a client request as quickly as possible.

Instead the client expects the server to hold on to the request and keep the underlying TCP connection open for as long as it wants to (within reason). The moment the server has information it would like to share with the client it sends the data over to the client and terminates the connection. This is reasonably efficient because all the pain of setting up the HTTP request is already behind us.

Upon data receipt the client immediately initiates the next request to the server — again expecting the server to sit on it until there is something of note to send back.

I affectionately call such long poll communication “lookie” requests. I’m sure your implementation will work without resorting to such a cute name, but your technical conversations with your coworkers simply won’t be as fun as mine. Example: Duude, how many lookies are we running on our prod server config? (warms my heart. Every time).

And… Action!

The long poll is not the only Ajax GET request the client makes. User initiated actions need to be sent to the server right away, and because the client isn’t in control over when it needs to issue the next long poll request, it can’t use that channel to communicate with the server about user initiated actions. Thus, we have to implement one more Ajax communication channel for asynchronous one-off communications originating on the client and going to the server. These requests are traditional Ajax requests, where the client can expect a fast turn-around from the server.

I call such communication “action” requests.

Hello? Can You Hear Me?

Cellular networks are notoriously spotty. I’m sure you’ve found yourself in the situation where you were sure you had a good connection only to realize that you’ve been talking to yourself for a while. When this happens to me, I always feel a bit sheepish. We can expect that our client/server communication code on mobile devices will be similarly affected — though I can’t comment on its emotional state.

The Fix

Besides changing your cellular carrier when you get frequent data drops, you'll want to write your code so that it will handle poor conditions and retry sending the data if it doesn't arrive. You can catch exceptions like timeouts and associated errors as part of making the Ajax request, but what you don't know is if your HTTP request died on the way to the server or if it died on its way back to the client.

Specifically, here are the modes of communication failure and their associated problems when trying to resend the request or data.

For action requests, if the client to server communication fails, nothing bad happens if we just retry sending the same action: It’s as if we had never sent the data. However, if the server gets the action request and processes it, but the response message from server to client fails — then we’re in a bit of a bind when the client retries the action request all over again. In this case the server will process the same action once more on the client retry, which is likely to result in some kind of grief later on. So our code will want to catch this.

For lookie requests, if we're not careful we might end up sending duplicate data updates to the client, or we might lose updates going to the client if the server is too optimistic about the client having received data it had sent out. Beyond the loss or duplication of data, we also want to make sure the updates are delivered to the client in the correct order, all things you might get wrong if you are not careful.

Can You Hear Me Now?

We’d do well to assume that the connection we are getting over the cellular network is going to be spotty. From the client side, that means being reasonably aggressive with timing out our Ajax requests, and retrying.

From both sides it means going with the assumption that data you’re trying to send to the other side is not going to get there, so you better hold on to it so you can resend it if you need to.

The first thing we'll want the server to do is keep the data in memory and label each logical data message with a sequence number. Once we have sequence numbers we have an easy way for the client and server to talk about what data they have and/or need to resend.

The general plan is for the server to send data with sequence numbers to the client, and then have the client acknowledge (subsequently called 'ack') these sequence numbers back to the server as a form of proof of receipt.

Sequence numbers are meta-information that should only be relevant to the network protocol layer in your code, and for the purposes of the following charts, a box with a number in it implies a data packet that has a specific sequence ID affixed to it.

The server has a message that it wants to send to the client. In this example the message data contains information about a player. The server networking protocol adds a sequence ID to the message as meta-information. This sequence number is relevant to the protocol layer only. The rest of the server or client code should not care about it.

Server-side core logic in the game can make calls that pertain to updating a client from a high-level perspective (ex: sendPlayerInfo, sendInventory, and updatePos). The methods end up calling a networking routine that affixes a message sequence number to each logical message and subsequently pushes all this to an outgoing message buffer array.

Below we build up an example of the entire client/server chain of events.

Step 1. Let’s say that the core server logic wants to send various messages to a player. As an example, let’s assume it makes 2 calls that put data onto an outgoing message queue, and the networking code annotates the data with sequence numbers: 0 and 1.

The data is ready for pickup by a long-poll from the client.

Step 2. The client at some point starts its long-poll request, and sends out an ack id of -1 along with the request (meaning it never got any data yet).

The server responds by sending everything it’s got for that client: Packets with sequence number of 0 and 1.

Step 3. The client receives packets with sequence number of 0 and 1, and immediately continues its long poll, but this time sends an ack id of 1 (confirming that it got packet 0 and 1).

When the client gets the information it can queue up client-side actions that can react to the messages in the core client code (ie: data is passed on out from networking code for processing in core client code).

Step 4. The networking code on the server gets the next long-poll message from the client. This time it carries an ack id of 1, which means the client acknowledges receiving everything up to and including packet 1.

Thus, the server deletes packet info 0 and 1 from memory for this client, and the whole chain of events can continue.

Putting it all together, the whole sequence of events looks like this.

Because the server assigns a unique and increasing sequence number to every outgoing message, the client can acknowledge the receipt of this information back to the server. This acknowledgement allows the server to delete the old data, or in some cases, it allows the server to resend the old data because something bad happened and the client never got it. Above is a small example of a data run where nothing went wrong.

What Could Possibly Go Wrong?

The whole point of all this is to be able to retry when bad things happen. Let’s take a look at a few failure cases.

Above I show what happens when data that is supposed to be coming from the server stalls out. I’m calling this a “stall”, because it’s most likely not just a single TCP/IP packet that went astray.

TCP/IP is sophisticated and can take care of itself most of the time, but everyone has experienced what it feels like when it can't: A site stops loading (or stops being responsive). How resilient your application is to stalls like this depends on how much your code cares (and retries).

In Social Poker Live, at the moment, we do 10 second long-polls. Meaning the client waits for a response from the server for 10 seconds, and should it not get any data, it’ll abandon the request, and send the next poll.

That might not be the right frequency; we're still fine-tuning it. Even if it were optimal for poker, it wouldn't necessarily be for your application. Maybe 5 seconds might be better. Maybe 15 seconds.

It’s hard to know what the best long-poll time should be, but here are some considerations: As you increase the duration of the long poll, it’ll take more time for the client to re-request data that might have been lost. This larger average latency means the application will feel more sluggish when there was a TCP/IP stall. However, as you decrease the duration of the long poll, your server has to deal with more spam from the client that constantly makes new connections to ask about updates, and also it’s more likely that the client will re-request data that is already in transit from the server (not lost, just not received yet).

So 10 seconds felt about right for Poker. It’s probably just about the time a person would be willing to stare at their browser waiting for a site to recover from a stall before hitting the refresh button in frustration. I’d be happy to hear what you think about this value. Maybe it should be dynamically set based on the quality of the connection?

Above is a case where the long-poll request stalls out. The server never gets the request along with the ack-id. This means that after the max long-poll duration, the client will re-issue the request with the same ack-id (value of 1), and only then can the server delete the information up to and including packet 1. Although the client had packets 0 and 1, the server never got the next long-poll request due to a stall, and so had to keep packets 0 and 1 in memory until it finally got the next long-poll request, which carried with it an ack id of 1, and it was safe to delete packets 0 and 1.

What About Client to Server Communication Issues?

Above we covered problems when there is a stall with the server to client flow of data, but what about data flowing in the opposite direction? Because the Social Poker Live client will typically only have one outstanding client-initiated request at a time, I didn’t need to implement a formal seq / ack solution for actions.

However, even though there should only ever be one message outstanding at a time, I still had to take care of the possibility of issuing a single action more than once. To avoid this possibility, the client sends a unique identifier with each action. The server makes sure that it has not yet seen such an action before, and if it has, it simply returns a pre-cached result of the already serviced call.

This solution won’t work for all games. You have to judge for yourself how chatty the client actions are, and if you’re sending a lot of actions to the server, you may well have to also resort to a seq/ack solution for client-to-server data communication.
Conclusion

There may be other (and better) ways you can deal with stalled TCP/IP connections than what I’ve done, but if you want a resilient application the one thing you can’t do is simply trust that anything relying on TCP/IP will do just fine. Your TCP/IP will stall out on cellular networks, your app will freeze, and your users will be bummed. If you want to make multiplayer games that play great even in areas where your cellular coverage isn’t, you have to deal with this somehow.

Although I chose to use long-polling, don’t assume that if you switch to WebSockets, or some other higher level communications framework, that you don’t have to worry about TCP/IP stalls. WebSockets still relies on TCP/IP and although many things would be cleaner and would have much less overhead, you can still get stalled out.

I haven’t looked at all the cool frameworks or libraries that are out there, so I’d like to hear what you’ve found compelling. One library that looks promising, especially if you want to use WebSockets (but can’t because it’s not implemented in all the places we care about), is Socket.io. That looks really slick, and it does look like there would be ways of catching TCP/IP stalls. (Source: Gamasutra)

