
Making 2D Games With Unity

Published: 2013-05-24 11:03:17

by Josh Sutphin

Unity is well known as an easy-to-use, cross-platform 3D engine and toolset, but that doesn't mean you're forced to make an FPS or third-person action-adventure game. I've been creating 2D sprite-based games in Unity for two years now (games like Conquistador and Fail-Deadly), and in this article I'm going to show you the techniques I used to achieve the classic 2D look.

Who This Article Is For

I'm going to present a brief overview of the techniques I've used to create a classic 2D "pixel art" look in Unity. This article is not a beginners' tutorial: I'm assuming you already know how to use Unity in a 3D context and are just looking for some pointers on how to make it work for 2D pixel art.

Sprite Setup

The first thing to understand is that even though you're making something that looks 2D, it's still technically a 3D scene. Each sprite in the scene is a single textured quad, positioned in 3D space just like a regular model.

perspective(from gamasutra)

You'll need to create and import a quad to use as your mesh. I made mine in Modo, my modeling package of choice. It's just a simple one-sided quad, 1 unit to a side, with its face normal pointing down negative Z. I also applied a planar UV projection to normalize the UVs across the face.

modo-quad(from gamasutra)

Why is it important for the quad to face down negative Z? Because you want to set up your game camera facing down positive Z in Unity, so that world XY corresponds to screen XY, and that means the quad needs to face the opposite direction so that it faces the camera and can thus be seen.

coordinate-mapping(from gamasutra)

Incidentally, you may be wondering if you can just use Unity's built-in Plane primitive instead of modeling your own quad. I don't recommend this, because the Plane primitive actually consists of a 10×10 grid of quads, meaning each sprite would render 100 times the geometry you actually need!

quad-vs-plane(from gamasutra)

In Unity, you'll import your quad and then set up a prefab consisting of a MeshFilter and a MeshRenderer, so that the mesh can be seen. You can make prefabs for different game objects (enemies, pickups, effects, and so on) just like you would in 3D, making sure that they all use this quad model.

prefab-setup(from gamasutra)
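
If you'd rather not round-trip through a modeling package, you can also build the unit quad in code. Here's a minimal sketch; "SpriteQuad" is an illustrative name, not part of the article's project, and the vertex order is chosen to match the UV assignment code shown later in this article:

using UnityEngine;

// Hypothetical alternative to importing a quad from a modeling package:
// build the 1-unit sprite quad at runtime.
[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class SpriteQuad : MonoBehaviour
{
    void Awake()
    {
        Mesh mesh = new Mesh();

        // 1 unit to a side, centered on the origin, lying in the XY plane
        mesh.vertices = new Vector3[] {
            new Vector3(-0.5f, -0.5f, 0.0f),  // 0: bottom-left
            new Vector3( 0.5f, -0.5f, 0.0f),  // 1: bottom-right
            new Vector3(-0.5f,  0.5f, 0.0f),  // 2: top-left
            new Vector3( 0.5f,  0.5f, 0.0f)   // 3: top-right
        };

        // full-texture UVs to start with; an atlas script can overwrite these
        mesh.uv = new Vector2[] {
            new Vector2(0.0f, 0.0f),
            new Vector2(1.0f, 0.0f),
            new Vector2(0.0f, 1.0f),
            new Vector2(1.0f, 1.0f)
        };

        // wound so the face normal points down negative Z, toward a camera
        // that looks down positive Z
        mesh.triangles = new int[] { 0, 2, 1, 1, 2, 3 };
        mesh.RecalculateNormals();

        GetComponent<MeshFilter>().mesh = mesh;
    }
}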

Texture Atlasing

To create different sprites, you'll need different textures. The simplest approach is to assign each sprite prefab a different material containing the image of the sprite you want, but this carries a nasty hidden performance cost. Every unique texture in the scene triggers a GPU context switch at runtime; the more unique textures you have, the more context switches happen every frame, and the worse your frame rate gets.

You can solve this problem by creating a sprite atlas: a single texture with all of your sprites laid out in a grid:

sprite-atlas(from gamasutra)

Each sprite prefab has the same material assigned (more on the material assignment in a minute). You can write a simple script to handle the atlas lookup: just expose four numbers (min X, min Y, width, height) and then programmatically set the sprite's UVs to match that rectangle. Here's the UV assignment code I used (note that you have to flip the V coordinate when translating from texture space to UV space, otherwise your sprite will be upside-down):

Vector2[] uvs       = new Vector2[m_mesh.uv.Length];
Texture texture     = m_meshRenderer.sharedMaterial.mainTexture;

// top-left corner of the sprite rect, converted to UV space
// (V is flipped: texture Y grows downward, UV V grows upward)
Vector2 pixelMin    = new Vector2(
    (float)m_currentStrand.frames[m_animFrame].x /
    (float)texture.width,
    1.0f - ((float)m_currentStrand.frames[m_animFrame].y /
    (float)texture.height));

// sprite rect size as a fraction of the atlas; the V extent is negative
// because it runs downward from pixelMin
Vector2 pixelDims   = new Vector2(
    (float)m_currentStrand.frames[m_animFrame].width /
    (float)texture.width,
    -((float)m_currentStrand.frames[m_animFrame].height /
    (float)texture.height));

// main mesh
{
    Vector2 min = pixelMin + m_textureOffset;
    uvs[0] = min + new Vector2(pixelDims.x * 0.0f, pixelDims.y * 1.0f);
    uvs[1] = min + new Vector2(pixelDims.x * 1.0f, pixelDims.y * 1.0f);
    uvs[2] = min + new Vector2(pixelDims.x * 0.0f, pixelDims.y * 0.0f);
    uvs[3] = min + new Vector2(pixelDims.x * 1.0f, pixelDims.y * 0.0f);
    m_mesh.uv = uvs;
}

The principle behind this is quite simple. UV space represents a percentage of each dimension of the texture:

uv-diagram(from gamasutra)

Calculating the actual UV values for a particular sprite rectangle by hand is tedious. It's much easier to express the sprite rectangle in pixels, especially since Photoshop's Info panel shows you the cursor's current pixel coordinates and the pixel size of the current selection:

info-panel(from gamasutra)

So the code simply divides the pixel coordinates by the overall texture dimensions to get a percentage along each axis, and voila: valid UV coordinates! (Remember the gotcha, though: the V coordinate has to be flipped.)

My script actually does more than just assign a static set of UVs: it also functions as a simple animation manager. Since you can set UVs programmatically, it's easy to define a sequence of UV rectangles, one per frame of an animation, and then swap the UVs at the appropriate rate to animate the sprite. My script is simple and requires manually entering pixel coordinates for each frame of each animation strand, which is admittedly tedious, but since I don't have a ton of animation data it's been acceptable so far. It would be straightforward (though beyond the scope of this article) to extend the editor to improve the process, for example by visually selecting rectangles directly on the texture in the editor UI.
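
To make the idea concrete, here's a minimal sketch of such a frame-swapping animation manager. The names (SpriteFrame, SpriteAnimator, ApplyFrameUVs) are hypothetical, not the author's actual script:

using UnityEngine;

[System.Serializable]
public class SpriteFrame
{
    public int x, y, width, height;  // pixel rect within the atlas
}

public class SpriteAnimator : MonoBehaviour
{
    public SpriteFrame[] m_frames;            // entered by hand, per strand
    public float m_framesPerSecond = 10.0f;

    private float m_timer;
    private int m_animFrame;

    void Update()
    {
        m_timer += Time.deltaTime;
        if (m_timer >= 1.0f / m_framesPerSecond)
        {
            m_timer -= 1.0f / m_framesPerSecond;
            m_animFrame = (m_animFrame + 1) % m_frames.Length;
            ApplyFrameUVs(m_frames[m_animFrame]);
        }
    }

    void ApplyFrameUVs(SpriteFrame frame)
    {
        // ...the UV assignment code from the listing above goes here...
    }
}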

Sprite Shader

Your sprite still needs a material to reference the texture atlas, and for that you need a shader. The most obvious choice is the default Transparent Diffuse, but even this simple shader does more than you need (such as supporting per-pixel lighting, which you're probably not using in a traditional 2D sprite-based art style). Unlit Transparent Cutout is simpler, but we can get simpler still. I wrote a custom sprite shader that's about as bare-bones as I could get it:

// Custom sprite shader - no lighting, on/off alpha
Shader "Sprite" {
    Properties {
        _MainTex ("Base (RGB) Trans (A)", 2D) = "white" {}
    }
    SubShader {
        Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"}
        //    LOD 100
        ZWrite Off
        Blend SrcAlpha OneMinusSrcAlpha
        Lighting Off
        Pass {
            SetTexture [_MainTex] { combine texture }
        }
    }
}

(I suspect this can be cheaper still, but my knowledge of ShaderLab is limited at best.)
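
To hook the shader up, assign it to the single shared atlas material. If you prefer to build that material in code, here's a sketch; atlasTexture is an assumed, already-loaded reference to your atlas:

// Shader.Find looks the shader up by the name declared in the .shader
// file ("Sprite" above). atlasTexture is assumed to exist.
Material spriteMaterial = new Material(Shader.Find("Sprite"));
spriteMaterial.mainTexture = atlasTexture;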

Texture Filtering

If you're going for the "pixel art" look, it's absolutely critical that you set your sprite textures to use Point filtering mode rather than the default Bilinear. Point filtering preserves hard edges in the source texture, keeping your sprites nice and clean:

filter-modes(from gamasutra)

You'll also want to disable Mip Map Generation (mip maps make faraway textures look better, but that only applies to a 3D perspective view) and check your texture compression settings. If you're building for iOS, the default compression setting is some flavor of PVRTC, which will ruin pixel art. The most accurate setting, but also the most memory-intensive, is RGBA32. Since most pixel art uses a limited palette, you can typically get away with RGBA16 with no visible degradation and cut the texture's memory footprint in half. If your sprite doesn't need an alpha channel (perhaps the texture atlases a bunch of background tiles?), set RGB16 to save additional memory by discarding the alpha component.
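
Since these are all texture import settings, you can automate them with a small editor script. A sketch, assuming your sprite textures live under a folder whose path contains "Sprites" (note that the texture-format import API has shifted across Unity versions; this matches the API of this article's era):

using UnityEditor;
using UnityEngine;

// Applies pixel-art-friendly import settings to any texture whose
// asset path contains "Sprites" (an assumed folder convention).
public class PixelArtTexturePostprocessor : AssetPostprocessor
{
    void OnPreprocessTexture()
    {
        if (!assetPath.Contains("Sprites"))
            return;

        TextureImporter importer = (TextureImporter)assetImporter;
        importer.filterMode    = FilterMode.Point;   // no bilinear smearing
        importer.mipmapEnabled = false;              // mip maps are for 3D views
        importer.textureFormat = TextureImporterFormat.RGBA16;  // or RGBA32 / RGB16
    }
}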

Camera Setup

For a typical 2D style, you're going to want an orthographic camera. With an orthographic projection, objects do not get smaller as they recede into the distance. This lets you use the Z (depth) axis as a layering mechanism, controlling which sprites draw on top of which while ensuring everything still lines up nicely.

ortho-properties(from gamasutra)

Place your camera at the world origin (0, 0, 0) and orient it to face down positive Z. Take note of the world axis display in the viewport: when you're facing down positive Z, world X corresponds to screen X (increasing to the right) and world Y corresponds to screen Y (increasing from bottom to top). This makes it very easy to a) think of your game in traditional XY coordinates, and b) translate between world space, screen space, and GUI space (more on that in a minute).
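
In code, that setup amounts to something like the following sketch (you can just as easily configure it in the Inspector):

// Orthographic camera at the origin; Unity's default rotation already
// faces down positive Z.
Camera cam = Camera.main;
cam.orthographic = true;
cam.transform.position = Vector3.zero;
cam.transform.rotation = Quaternion.identity;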

Orthographic Size

If you're going for the "pixel art" look, the camera's orthographic size is of critical importance; this is the trickiest part of nailing 2D in Unity.

The orthographic size expresses how many world units are contained in the top half of the camera projection. For example, if you set an orthographic size of 5, the vertical extent of the viewport will contain exactly 10 units of world space. (The horizontal extent depends on the display aspect ratio.)

Recall that your sprite quad is 1 unit to a side. That means the orthographic size tells you how many sprites you can stack vertically in the viewport (divided by 2).

To render the pixel-art look cleanly, you need to ensure that each pixel of the sprite's source texture maps 1:1 to the viewport display. You don't want source pixels being skipped or doubled up, or your sprites will look distorted and "dirty". The trick to ensuring this 1:1 ratio is to set an orthographic size that matches your vertical screen resolution divided by the pixel height of a sprite.

Let's say you're running at 960×640, and you're using 64×64 sprites. Dividing the vertical screen resolution (640) by the pixel height of a sprite (64) yields 10, the number of 64×64 sprites that can be stacked vertically in 640 pixels. Remember that the orthographic size is a half-height, so your target orthographic size in this case is 5 (one half of 10). It should look like this:

ortho-size-clean(from gamasutra)

If you set your orthographic size to half or double that target, you may still get usable results, because the sprite's vertical size will still divide evenly into the viewport's vertical size. But if you set the orthographic size incorrectly, some pixels will be skipped or doubled, and it will look very bad indeed:

ortho-size-dirty(from gamasutra)

Variable Resolution

You don't need to be confined to a single fixed resolution to render clean pixel art. The simplest way to handle variable resolutions is to attach a custom script to your camera that sets the orthographic size according to the current vertical resolution and a known (fixed) sprite size:

// set the camera to the correct orthographic size
// (so scene pixels are 1:1)
s_baseOrthographicSize = Screen.height / 64.0f / 2.0f;
Camera.main.orthographicSize = s_baseOrthographicSize;

While that's a simple fix, it has a drawback: as the screen resolution decreases, you'll see less and less of the world, and sprites will take up more and more of the screen. That's the consequence of keeping a 1:1 ratio between source and screen pixels: a 64×64 sprite takes up more apparent space at 640×480 than it does at 1920×1200. Whether this is a problem depends on the needs of your specific game.

If you want your sprites to remain the same apparent size regardless of screen resolution, simply set the orthographic size to a fixed value and leave it there no matter how the screen resolution changes. The drawback is that your sprites will no longer have a 1:1 source-to-screen pixel ratio. You can mitigate the ill effects by only allowing resolutions that are exactly half or exactly double your target resolution.

GUI Considerations

If you're using Unity's immediate-mode GUI, there's a simple trick you can use to automatically rescale the GUI to fit the current screen resolution, even if you've hard-coded all your GUI coordinates. Simply put the following at the top of your OnGUI call:

void OnGUI()
{
    // scale the GUI to the current resolution
    // (coordinates below are authored against a 1024x768 layout)
    float horizRatio = Screen.width / 1024.0f;
    float vertRatio  = Screen.height / 768.0f;
    GUI.matrix = Matrix4x4.TRS(
        Vector3.zero,
        Quaternion.identity,
        new Vector3(horizRatio, vertRatio, 1.0f)
    );

    // ... your hard-coded GUI drawing code follows ...
}

You may occasionally need to translate between world-space and screen-space coordinates. The built-in Camera.WorldToScreenPoint and Camera.ScreenToWorldPoint functions work perfectly well with an orthographic camera, but there's a gotcha: their notion of screen space and the GUI system's notion of screen space use inverted Y axes.

When you use Camera.WorldToScreenPoint, you get back a point with X increasing to the right and Y increasing from bottom to top, with (0, 0) at the lower-left of the screen. The GUI system expects coordinates with X increasing to the right and Y increasing from top to bottom, with (0, 0) at the upper-left of the screen. So if you're translating between world space and GUI space, you'll need to invert the Y coordinate:

y = Screen.height - y;
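
Wrapped up as a helper, the conversion might look like this sketch (WorldToGUIPoint is a hypothetical name, not a Unity API):

using UnityEngine;

public static class GuiSpaceUtil
{
    // Convert a world-space position to GUI-space coordinates by
    // flipping the Y axis of the screen-space result.
    public static Vector2 WorldToGUIPoint(Camera cam, Vector3 worldPos)
    {
        Vector3 screen = cam.WorldToScreenPoint(worldPos);       // Y grows upward
        return new Vector2(screen.x, Screen.height - screen.y);  // GUI Y grows downward
    }
}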

Physics in 2D

You can constrain Unity's physics sim to run in 2D... sort of. Create a physics object and attach a ConfigurableJoint component to it, then set the "ZMotion", "Angular XMotion", and "Angular YMotion" properties to "Locked". This prevents the physics object from moving along the Z (depth) axis and constrains its rotation to take place only around that same axis (so it can't pitch or twist "into" the screen). It's no Box2D, but it gets the job done.

Note that you'll need to set up this kind of ConfigurableJoint on every physics object in your scene. Unfortunately, there's no way to globally constrain the entire physics sim to two dimensions; it must be done on a per-object basis.
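
As a sketch, that per-object setup could be wrapped in a small component like this (Constrain2D is an illustrative name; the joint properties are Unity's ConfigurableJoint API):

using UnityEngine;

// Locks a physics object to the XY plane, as described above.
[RequireComponent(typeof(Rigidbody))]
public class Constrain2D : MonoBehaviour
{
    void Awake()
    {
        // A ConfigurableJoint with no connected body attaches to the world.
        ConfigurableJoint joint = gameObject.AddComponent<ConfigurableJoint>();

        joint.zMotion        = ConfigurableJointMotion.Locked;  // no depth movement
        joint.angularXMotion = ConfigurableJointMotion.Locked;  // no pitching into the screen
        joint.angularYMotion = ConfigurableJointMotion.Locked;  // no twisting into the screen
    }
}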

Particle Systems

You generally don't need to do anything special to use particle systems in 2D. Depending on the desired effect, you may want to ensure the Z velocities are always zero, for example if you want a more-or-less even spread of particles in the camera plane for an explosion. Because you're using an orthographic camera, any Z motion in the particles won't be obvious. (If you see particles moving strangely, this is the first thing to check.)

If you also want your particles to have a clean "pixel art" look just like your sprites, simply assign a material using the Sprite shader (discussed earlier) in the ParticleRenderer component. (Unfortunately, I have yet to devise a way to atlas sprites in particle systems.) (source: gamasutra)

This article was compiled by Gamerboom (gamerboom.com). Reproduction without the copyright notice is prohibited; for reprint permission, contact Gamerboom.


