
Published 2017-03-21 09:59:04. Original author: Alexander King; translated by ciel chen.


This article was translated and compiled by Gamerboom. Please credit the source when reposting, or contact Gamerboom via WeChat: zhengjintiao. The author’s original English text follows.

Design Lessons from ALT.CTRL’s Emotional Fugitive Detector

by Alexander King on 03/17/17 10:47:00 am

The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.

Earlier this month, we showed an alternative controller game that I designed with Sam Von Ehren and Noca Wu at GDC’s ALT.CTRL exhibition. ALT.CTRL is a showcase of games with unique controllers. Throughout the exhibition, attendees were curious about the design and development process behind Emotional Fugitive Detector. I wanted to share some info on our process, as a sort of mini postmortem, in case it’s useful to any future designers of games using novel input methods. So here are the steps to designing a dystopic face-scanning game!

About the Finished Game

Emotional Fugitive Detector is a two-player cooperative game where players work together to outwit a malevolent face-tracking robot. One player is being scanned, and tries to get their partner to pick the emotion they’re making using their face. If they’re too expressive though, the face-tracking API will detect them, and if they’re too subtle, their partner might pick the wrong emotion. It’s been called “unfriendly tech”, “surprisingly affecting”, and, uh… “the tech itself was a little janky… but the concept is so good”! It’s a game about emotional nuance, where a human face is both the controller and screen. But it didn’t necessarily start off that way…

Step 1: Have A Weird Idea & Be Inspired

Sometime early last year, we had read about some open source face-tracking libraries somewhere or another. And, as game designers often do, we thought it might make for a pretty cool game. We originally wanted to make a face fighting game, where players would make different faces at each other to attack or block. But it never progressed past the preliminary idea phase until we attended last year’s ALT.CTRL show. Seeing so many amazing games that created unique in-person experiences inspired us to move beyond the idea phase and actually try to make something.

Step 2: See If Anyone Else Had That Idea Already

So we knew we wanted to make a game using face-tracking. The first thing we did, though, is something I often do when exploring a new mechanic: see if someone else has done it before! I think this is a neglected part of the process, because games as an art form have such a short memory of their own history. But if you take the time to do some research, you’re not just saving yourself the potential embarrassment of rehashing what you thought was a new idea, you’re also able to learn from the successes or failures of your predecessors. (Also, I just love games history.)

In our case we didn’t turn up much. While body tracking had been explored in many Kinect games, we found very few games using facial input (to wiser people, the dearth of examples might have been a red flag). The only ones we could find were various tech demos, or simple games where the face was just a substitution for a button. Games like Eye Jumper or Face Glider have players using their face to make inputs, but in a very direct manner to steer or jump.

Seeing those other games helped clarify to us that we wanted to use the affordances of both face-tracking and the human face as integral parts of the design. Using facial movement as an input is something you could explore better in VR, for instance. So we wanted to detect expressions, and have that be the primary method of play. Our brains are wired to read faces, but it’s not a skill we’re often asked to use in games.

Step 3: Feasibility Test Your Tech

Now it was time to start putting our ideas into practice. Initially we were using OpenCV, by far the best documented open source library for face-tracking. Using a variety of techniques, it makes it easy to detect facial points in photos or videos, similar to what you see in the face filters on Snapchat. It’s a great library, and has an easy-to-implement plugin for Unity as well. However, it’s primarily a facial detection method; it provides you with the points on a face and nothing more. While this is all you need if you’re superimposing cat ears onto someone (a noble goal), we wanted to detect expression changes like smiling or frowning. We tried building our own methods to determine these. While they sort of worked… it turns out facial gesture recognition is a non-trivial problem! But thankfully other people had trodden this path before us. We ended up finding a JavaScript library called clmtrackr, which extends a methodology similar to OpenCV’s but had already been trained against a library of faces, so it could output confidence scores for detecting four emotions (sad, happy, angry, and surprised).
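To make that feasibility step concrete, here’s a minimal sketch of the kind of first test you could run, using OpenCV’s Python bindings (purely illustrative, and not our actual pipeline, which ran through the Unity plugin; a Haar cascade finds whole-face rectangles rather than the landmark points described above):

```python
# Feasibility sketch: confirm a webcam feed yields stable face detections
# before building gameplay on top of it. An illustrative stand-in, not the
# project's real code.
import cv2

# Haar cascade bundled with the opencv-python package
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:  # draw a box around each detected face
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("feasibility test", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```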

I want to emphasize that we are game designers, not researchers, computer scientists, visual recognition experts, or anything like that. Anyone with genuine interest in this area would likely be horrified at our shortcuts and hacks. Rather, we were consciously repurposing a tool to turn it into a game experience.

So it took some doing, but we had our initial technology up and running within a month or two. We could read four emotions from human faces and use those as inputs into a game system. While initially we weren’t sure the technology would even be feasible, it hadn’t taken long to get something working relatively well. Though ‘relatively’ is the operative word there.
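To illustrate what ‘emotions as inputs’ means mechanically, here’s a hypothetical Python sketch (the threshold and names are invented for illustration; clmtrackr’s example emotion classifier emits a per-emotion score roughly between 0 and 1):

```python
# Hypothetical mapping from one frame of emotion scores to a single game
# input. The confidence floor and the "no input" fallback are illustrative
# choices, not the project's actual tuning.
EMOTIONS = ("sad", "happy", "angry", "surprised")
CONFIDENCE_FLOOR = 0.4  # below this, treat the face as neutral / no input

def emotion_input(scores: dict) -> str | None:
    """Return the dominant emotion as a game input, or None if too weak."""
    best = max(EMOTIONS, key=lambda e: scores.get(e, 0.0))
    return best if scores.get(best, 0.0) >= CONFIDENCE_FLOOR else None

# One frame of classifier output:
frame_scores = {"sad": 0.10, "happy": 0.72, "angry": 0.05, "surprised": 0.20}
print(emotion_input(frame_scores))  # -> "happy"
```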

Step 4: Playtest Forever

Sam, Noca and I met at the NYU Game Center, where the three of us are finishing our MFAs. A core principle in the Game Center’s approach to game design is playtesting. Playtesting all the time. Interactive systems are almost impossible to judge a priori, so something that seems great in your head can fall apart when someone who’s unfamiliar with it plays. The program hosts a weekly playtest night called Playtest Thursday where students, faculty and local developers test games and get feedback from the public (said public being predominantly undergrads there for the free pizza).

We went almost every week for several months, testing different gameplay mechanics. This was invaluable for learning the affordances of the technology. It was very persnickety. While it worked great in ideal conditions (ie, when we tested it ourselves), testing with real people revealed several limitations. It was far from 100% accurate, extremely sensitive to how the subject was lit, and if someone moved their head even slightly then it was all over.
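Noise like that is usually tamed by smoothing over time. Purely as an illustration (the window size is invented, and I’m not claiming this is what our build did), averaging each emotion’s score over a short window keeps a single misread frame from flipping the game state:

```python
# Illustrative smoothing: average per-emotion scores over the last N frames
# so momentary tracking glitches don't register as expression changes.
from collections import deque

class SmoothedEmotions:
    def __init__(self, window=15):  # ~half a second at 30 fps
        self.history = deque(maxlen=window)

    def update(self, frame_scores):
        """Add one frame of scores and return the windowed averages."""
        self.history.append(frame_scores)
        n = len(self.history)
        return {k: sum(f.get(k, 0.0) for f in self.history) / n
                for k in frame_scores}
```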

As our initial prototypes grew from feasibility tests into game prototypes, we also struggled to make a game that was in any way fun. These early prototypes centered on being read by the computer. It would tell you what it was looking for, and the first player to be successfully detected would win. We kept trying to use the design of the game to compensate for or hide the limitations of the technology. We’d test narrative framing to explain why the “AI” was so capricious, or use turn-based mechanics to hide the slow recognition speed. It’s very frustrating testing ideas and finding they don’t work! But we kept at it, iterating constantly.

Step 5: Keep Going Till You Find a Great Idea

The watershed moment for us was realizing the detection margin of error could be an asset to the design, rather than a liability. Matt Parker, a game designer and artist with experience in physical installation games, told us about a game he had worked on where players tried to avoid being detected by a Microsoft Kinect. Players had to contort their bodies into weird non-human shapes to win. The genius of that was immediately obvious: if players were trying to avoid being detected by our emotion scanning, rather than attempting to conform to the faulty algorithm, we could build a game around that. Turning a bug into a feature is a great way to stumble on great game design.
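Expressed as code, the inverted rule is tiny. In this hypothetical sketch (the threshold and outcome labels are invented for illustration), the scanned player is caught if any smoothed emotion score crosses the robot’s detection threshold, and the team wins only if the partner names the secret emotion first:

```python
# Hypothetical round logic for "avoid the scanner" play. The threshold is
# an invented value; the real build's tuning isn't documented here.
DETECTION_THRESHOLD = 0.8

def judge_round(secret, partner_guess, smoothed_scores):
    if max(smoothed_scores.values()) >= DETECTION_THRESHOLD:
        return "caught"    # the robot read the emotion: players lose
    if partner_guess == secret:
        return "escaped"   # subtle enough, and the partner still read it
    return "misread"       # too subtle: the partner guessed wrong
```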

The rest fell into place very quickly. A charades-like format, with one player attempting to communicate with another, worked very well in testing. This, coupled with a hidden information mechanic, provided an excellent framework for the game.

A well-designed system is critical to a good game experience. Occasionally players at GDC would remark on how interesting the underlying technology is, and would speculate about how fun it would be even without the surrounding game. I can tell you empirically that this is totally false!

Step 6: Bottom-up Design

The finished game seems like it was designed in a top-down manner: A dystopic future, robots versus emotions, a scanning aperture and so on. But in fact, all of the physical design was driven purely by the needs of the game design. Why would you try to convey emotions subtly? Because they’re illegal! How can we get players to stop moving their heads and screwing up the tracking? By having them stick their faces into a hole to constrain their movement! How can we ensure consistent lighting? By enclosing the camera in a box!

Every part of the physical and narrative design is serving the game, in a way that feels organic. That’s actually the part of the project I’m most proud of! Although the physical design of the box improved dramatically over different builds, the core essentials were present when it was just a discarded cardboard box!

Step 7: Polish & Keep Playtesting Forever

Everything subsequently was just polish. We put in audio, improved the box design, and experimented with different timing in the gameplay. We even recorded voiceover instructions with a proper voice actor (at a very friendly rate on account of being married to me). While this process lasted a long time, it was primarily incremental improvements to the core ideas. The game was essentially done after only a few months, and the remaining time was just improving implementation and ensuring a good player experience.

We also never stopped playtesting during this time, whether at Playtest Thursday or by taking it to local events like BQEs and Betas at the Brooklyn Brewery. Constant feedback from real people is critical, at every step of a game’s development. Even showing it at GDC, none of us think of the game as ‘done’, and there are many improvements we identified showing it there.

So I hope you find that useful! This is an overview, but I wanted to give an idea of how we went from a vague idea about face-tracking to a complete game experience. Designing for physical spaces using unproven technology is incredibly difficult, but also very rewarding. Some players might only play your game once in their life, but the experience they have can be unique and interesting, and something impossible to replicate with conventional controllers. So why not start designing your own! (source: gamasutra.com)

 

