
Rethinking Game Course Design: Game Programming (Part 1)

Published: 2013-10-10 17:12:57

By Paul Gestwicki

I spent the better part of the last two weeks revising my three courses for the coming semester. They are the same courses as last time, although some of the themes have changed. After a difficult start, I am now quite pleased with the results. In this post, I will describe the revision of my game programming course, an elective for Computer Science undergraduate and graduate students. The actual change to the course is small, but it represents a significant amount of research and learning on my part. (Click here to read Part 2 of this article.)

I have been organizing this course around substantial projects and teamwork. For example, last semester my students implemented The Underground Railroad in the Ohio River Valley. I have also used the course to experiment with various grading methods. I wanted the grading to be as authentic to the work as possible: students are graded only on their participation and commitment, not on quizzes or exams. For example, instead of a midterm exam, I held face-to-face evaluations with each team member.

(image: game programming, from kickstarter)

These methods work well, but on reflection I identified two potential problems. First, they break down when a student refuses to participate or to keep commitments; in particular, they produce little that could serve as evidence if a grade is appealed. Realistically, I sometimes get a bad apple, so I want a grading system that lets me give the grade I believe has been earned. Note that while I admit to having given grades that were higher than I thought were deserved, the failure of the assessment may be twofold: some students may need more concrete evidence of their own progress in order to improve or maintain their performance, especially if they lack intrinsic motivation.

The other potential problem stems from my wanting students to engage in reflective practice, not just hands-on practice. I wonder whether some of my high-scoring team members truly mastered the course or simply never thought deeply about what they had learned. My model for encouraging reflection is based on industrial practice, agile retrospectives in particular. It is called periodic retrospective assessment: a team reflects intermittently on its successes and failures during the semester, and then, at the end of the semester, reflects on what it has learned. It is an appealing approach to assessment and seems to work in many situations, although it affords little individual feedback.

At this summer's GLS conference, I attended a talk by Daniel Hickey on game-inspired assessment techniques. His model, called participatory assessment, includes one aspect that encourages evaluating the act of reflection itself rather than the artifacts produced. During his talk, he made a bold claim that resonated with me: writing is the 21st century skill. After several years of collaboration with Brian McNely, I have come to understand "writing" in a deeper and more nuanced way, especially as it applies to game development.

Putting these ideas together, I decided to keep the fundamental structure of the game programming course: students will join one or more teams and, following the principles of agile development, create original games over multiple iterations. We will continue to use periodic retrospective assessment to improve team practice and strengthen collaboration. I have now also added individual written reflections, to be completed at the end of each iteration. I want these reflections to be productive, so I have brought in another element that has intrigued me: essential questions.

I first encountered the idea of essential questions on Grant Wiggins' blog. The primary purpose of essential questions is to frame a learning experience. They have no trite or simple answers and are not themselves learning outcomes, but they inform how learning outcomes are identified and assessed. With a bit of crowdsourcing, I came up with the following essential questions for my game programming course:

1. How does the nature of game design affect the practices of game programming?

2. How does game software manage assets and resources effectively?

3. How do you coordinate interpersonal and intrapersonal activity within a game development team?

While reading about participatory assessment, I came across Karen Jeffrey's HASTAC blog post. The "Really Big Ideas" she proposes seem very similar to essential questions, so I also drew on her work when deciding how the reflections will be evaluated. I will ask that each reflection include the following:

1. A characterization, with supporting evidence, of one or more of the course's essential questions.

2. The consequences of that characterization for individual and/or collective practice.

3. Potential critiques of that characterization.

Deciding how to guide students' attention is one of the most difficult parts of course design, and I recognize that by introducing these essays I am cutting into the time students can spend on development tasks. However, the essays will open up conversation and guidance around reflective practice. They respect the authenticity of student work: done well, they should lead students to a deeper understanding of the course. The reasoning matches the case I make for team retrospective meetings as part of agile practice: by reflecting on what we are doing, we can learn how to do it better. I have been encouraging my students to write reflectively, especially since starting my own blog, and these reflective essays should benefit them.

I would be very glad to receive feedback on the course design, particularly on the essential questions, since they will be central to the students' learning experience.

Next time, I will describe the revisions to my game design and advanced programming courses. (This article was translated by 游戏邦/gamerboom.com. Reproduction without retaining this copyright notice is prohibited; to reprint, please contact 游戏邦.)

Revising Courses, Part I: Game Programming

by Paul Gestwicki

I spent the lion’s share of the last two weeks revising my three courses for the Fall semester. They are the same courses as last time, although some of the themes have changed. After a trepidatious beginning, I am now quite pleased with the results. In today’s post, I will describe the revision to my game programming course, an elective for Computer Science undergraduate and graduate students. The actual change to the course may appear small, but it represents a significant amount of research and learning on my part.

I have been structuring this course as a project-intensive, team-oriented experience. For example, last Fall the students implemented The Underground Railroad in the Ohio River Valley. I have also used this course to experiment with various methods of grading. I wanted the grading to be as authentic to the work as possible: students are evaluated on their participation and commitment only, not on quizzes or exams. For example, instead of midterm exams, I held formal face-to-face evaluations with each team member, modeled after industrial practice.

These methods work well, but reflecting on these experiences, I identified two potential problems. First, these methods fail in the case that a student refuses to participate or keep commitments: in particular, these methods produce little that could be considered evidence in the case of an appeal. Realistically, sometimes I get a bad apple, and so I want a grading system that allows me to give the grade I feel is earned. Note that while I admit to having given grades that are higher than I thought were earned, the assessment failure may be twofold: some students may require more concrete evidence of their own progress in order to improve or maintain performance, especially if such students lack intrinsic motivation.

The other potential problem stems from my wanting the students to engage in reflective practice, not just authentic practice. I wonder if some of my high-achieving team members have gotten through these production-oriented courses without having deeply considered what they learned. My model for encouraging reflective practice is based on industrial practice—agile retrospectives in particular—and is documented in my 2013 SIGCSE paper. This model, called periodic retrospective assessment, requires a team to reflect on its successes and failures intermittently during the semester, and at the end of the semester, to reflect on what it has learned. This sociocultural approach to assessment is appealing, and again, it seems to work in many cases, although it affords scant individual feedback.

While at this summer’s GLS conference, I attended a talk about game-inspired assessment techniques given by Daniel Hickey. His model is called participatory assessment, and a particular aspect of it—which you can read about on his blog—is that it encourages evaluating reflections rather than artifacts. During his talk, he made a bold claim that resonated with me: writing is the 21st century skill. After having worked with Brian McNely for the last few years, I have come to understand “writing” in a more deep and nuanced way. (See, for example, our SIGDOC paper that takes an activity theoretic approach to understanding the writing practices involved in an agile game development team.)

Putting these pieces together, I decided to keep the fundamental structure of my Fall game programming course: students will work in one or more teams, organized around principles of agile software development, to create original games in multiple iterations. We will continue to use periodic retrospective assessment in order to improve our team practice and consider what we learned as a community. Now, I have also added individual writing assignments, to be completed at the end of each iteration. I want these reflections to be guided toward fruitful ends, and so I have brought in another pedagogic element that has intrigued me for the last several months: essential questions.

I first encountered essential questions (EQs) on Grant Wiggins’ blog, and I blogged about this experience in the context of my advanced programming course. The primary purpose of EQs is to frame a learning experience. EQs have no trite or simple answers, and they are not learning outcomes, but they inform the identification and assessment of learning outcomes. With a bit of crowdsourcing, I came up with the following essential questions for my game programming course:

How does the nature of game design impact the practices of game programming?

How does game software manage assets and resources effectively?

How do you coordinate interpersonal and intrapersonal activity within a game development team?
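To make the second question a little more concrete: one direction a team might explore is centralizing asset loading behind a small cache, so that textures, sounds, and level data are loaded once and then shared rather than re-read on every use. Below is a minimal sketch of that idea, assuming a Java codebase; the AssetCache name and its loader are illustrative, not anything prescribed by the course.

// Minimal sketch: load each asset once and hand out the shared instance.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public final class AssetCache<T> {
    // Maps an asset path to the loaded object so nothing is read from disk twice.
    private final Map<String, T> loaded = new ConcurrentHashMap<>();
    // Loader supplied by the game, e.g. a texture or sound decoder.
    private final Function<String, T> loader;

    public AssetCache(Function<String, T> loader) {
        this.loader = loader;
    }

    // Returns the shared asset, loading it lazily on first request.
    public T get(String path) {
        return loaded.computeIfAbsent(path, loader);
    }

    // Releases every cached asset, typically between levels or scenes.
    public void clear() {
        loaded.clear();
    }
}

A reflection could then weigh this lazy-loading design against eager preloading or reference-counted unloading, which is exactly the kind of trade-off the question is meant to surface.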

In reading about participatory assessment and the badges-for-learning movement, I came across Karen Jeffrey’s HASTAC blog post. What she called “Really Big Ideas” seem isomorphic to EQs, and so I adapted her ideas in defining a rubric for evaluating reflections. I will be looking for reflections that provide the following:

A characterization, with supporting evidence, of one or more essential questions of the course.

The consequences of this characterization on individual and/or collective practice.

The potential critiques of this characterization.

Deciding how to guide student attention is one of the most challenging parts of course design, and I recognize that by introducing these essays, I am reducing the number of hours I can expect students to spend on the development tasks at hand. However, these essays will afford conversation and intervention regarding reflective practice. They respect the authenticity of student work since, if done right, they should yield increased understanding and productivity from the students. This reasoning is similar to that given by proponents of team retrospective meetings as part of an agile practice: by reflecting on what we are doing, we can learn how to do it better. I have been encouraging my students to write reflectively, especially since starting my own blog; these reflective essays codify the practice and reward student participation.

The official course description for Fall’s game programming course can be found at http://www.cs.bsu.edu/homepages/pvg/courses/cs315Fa13. I am happy to receive feedback on the course design, particularly the articulation of the essential questions, since they will be central to the students’ learning experience.

Next time, I will write about the redesign of my advanced programming and game design courses, both of which involve turning to badges to incentivize and reward student activity. (source: paulgestwicki)

