Work with 3D graphics in your DirectX game

[This article is for Windows 8.x and Windows Phone 8.x developers writing Windows Runtime apps. If you're developing for Windows 10, see the latest documentation.]

Learn about the development of 3D graphics and effects, and how to incorporate them into your game.

A brief overview of 3D graphics

3D graphics refers to the specific techniques used to create and display game visuals that rely on a defined 3D coordinate space, even when the gameplay is confined to a single plane. Specifically, 3D graphics decouple the screen coordinate space (2D) from the world coordinate space (3D), and you implement the visual components of your game in this defined world space.

In contrast, 2D graphics deal exclusively with the x- and y-axes; any notion of depth is handled as part of the rasterization process, where the priority of one sprite or bitmap over another in the draw list is based on rules that have nothing to do with physical depth.

There are many techniques that you can use to create 3D visuals, but the most common approach is the one used by the majority of games and graphics hardware today: geometric objects and surfaces that are textured and shaded using a specific processing pipeline, one that is invoked by your game's rendering loop.

This process has eight stages (a minimal render loop that drives them is sketched after the list):

  1. Loading and transforming the geometry data
  2. Applying per-vertex shader operations
  3. Setting up the geometry
  4. Rasterizing the scene (flattening the 3D scene to a 2D projection)
  5. Culling the hidden surfaces
  6. Fragment shading (texturing, lighting) and other per-pixel operations
  7. Frame buffer post-processing (filters and full-frame shader effects)
  8. Outputting to the display
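
To make the loop concrete, here is a minimal Direct3D 11 sketch of a per-frame render function that drives these stages. It assumes the device context, render-target and depth-stencil views, and index count were created during initialization; the function and parameter names are placeholders for illustration.

    #include <d3d11.h>

    // A minimal per-frame render function. Assumes the context, views,
    // and index count were created during initialization.
    void RenderFrame(
        ID3D11DeviceContext* context,
        ID3D11RenderTargetView* renderTargetView,
        ID3D11DepthStencilView* depthStencilView,
        UINT indexCount)
    {
        // Bind the back buffer and depth buffer as the pipeline's output.
        context->OMSetRenderTargets(1, &renderTargetView, depthStencilView);

        // Clear the previous frame's contents.
        const float clearColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
        context->ClearRenderTargetView(renderTargetView, clearColor);
        context->ClearDepthStencilView(depthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0);

        // Stages 1 through 7 run inside this call: the bound vertex and
        // pixel shaders, rasterizer state, and textures process the geometry.
        context->DrawIndexed(indexCount, 0, 0);

        // Stage 8, presenting the finished frame, is a swap chain call
        // shown at the end of this overview.
    }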

The objects in your 3D world coordinate system are collections of points (called vertices) that describe a geometric object. We call these collections geometry or polygons, and we call the data implementations of these collections meshes. A key process of 3D graphics development is working with these meshes: loading them, managing them, and transforming them.
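
As a sketch of what mesh data looks like in code, the following shows a hypothetical per-vertex layout and a Direct3D 11 call that uploads a mesh's vertices to the GPU. The Vertex structure and function name are illustrative, not part of the DirectX API.

    #include <d3d11.h>
    #include <DirectXMath.h>

    // An illustrative per-vertex layout: position, normal, and texture
    // coordinates. Real games choose layouts to match their shaders.
    struct Vertex
    {
        DirectX::XMFLOAT3 position;
        DirectX::XMFLOAT3 normal;
        DirectX::XMFLOAT2 texcoord;
    };

    // Creates an immutable vertex buffer from an in-memory mesh.
    HRESULT CreateMeshVertexBuffer(
        ID3D11Device* device,
        const Vertex* vertices,
        UINT vertexCount,
        ID3D11Buffer** outBuffer)
    {
        D3D11_BUFFER_DESC desc = {};
        desc.ByteWidth = sizeof(Vertex) * vertexCount;
        desc.Usage = D3D11_USAGE_IMMUTABLE;     // loaded once, read by the GPU
        desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;

        D3D11_SUBRESOURCE_DATA initData = {};
        initData.pSysMem = vertices;

        return device->CreateBuffer(&desc, &initData, outBuffer);
    }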

After you load the meshes for the objects that define your scene into memory, you transform them into the coordinates of your world. To manipulate these meshes, you create vertex shaders: micro-programs, written in a shader language such as HLSL, that are loaded into the graphics processing unit (GPU) to perform per-vertex operations and effects such as simple lighting, color values for gradient interpolation, and deformation and tessellation. This stage is often referred to as T&L, for transformation and lighting. You also compute the coordinates for the application of texture fragments (bitmaps that contain pixel-level detail to be applied to the surfaces defined by the vertices) at this stage.
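
For example, here is one way to build the combined world-view-projection matrix that a vertex shader typically applies to each vertex, using the DirectXMath library. The object rotation, camera position, and field of view are placeholder values.

    #include <DirectXMath.h>
    using namespace DirectX;

    // Builds the combined world-view-projection matrix that a vertex
    // shader typically applies to each vertex.
    XMMATRIX BuildWorldViewProjection(float aspectRatio)
    {
        // World: place the object in world space (here, a simple rotation).
        XMMATRIX world = XMMatrixRotationY(XM_PIDIV4);

        // View: a camera at (0, 1, -5) looking at the origin.
        XMMATRIX view = XMMatrixLookAtLH(
            XMVectorSet(0.0f, 1.0f, -5.0f, 1.0f),  // eye position
            XMVectorSet(0.0f, 0.0f, 0.0f, 1.0f),   // focus point
            XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f));  // up direction

        // Projection: a perspective frustum with a 70-degree field of view.
        XMMATRIX projection = XMMatrixPerspectiveFovLH(
            XMConvertToRadians(70.0f), aspectRatio, 0.1f, 100.0f);

        // The result is usually transposed and copied into a constant
        // buffer, where the vertex shader multiplies each position by it.
        return world * view * projection;
    }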

After that, you set up the geometry for the scene and determine what is inside the camera's view and what is outside. The volume that contains everything the camera for your scene can see, shaped like a rectangular pyramid with its top cut off, is called the frustum. Everything outside the frustum is clipped and will not be rasterized in the next step. Geometry shaders, when available, can also execute at this stage to assist in managing the scene's overall level of detail.
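
A minimal sketch of a visibility test, using the BoundingFrustum and BoundingSphere types from DirectXMath's collision header; it assumes each object carries a bounding sphere, with the coordinate-space caveat noted in the comments.

    #include <DirectXMath.h>
    #include <DirectXCollision.h>
    using namespace DirectX;

    // A simple visibility test: cull objects whose bounding spheres fall
    // entirely outside the camera's frustum. Note that CreateFromMatrix
    // produces a frustum in view space, so objectBounds must also be
    // expressed in view space for the comparison to be meaningful.
    bool IsVisible(FXMMATRIX projection, const BoundingSphere& objectBounds)
    {
        BoundingFrustum frustum;
        BoundingFrustum::CreateFromMatrix(frustum, projection);

        // DISJOINT means no overlap at all, so the object can be skipped.
        return frustum.Contains(objectBounds) != DISJOINT;
    }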

Rasterization is the process of flattening a 3D scene into a 2D projection of that scene, that is, eliminating the z-axis from rendering. Because the display is a 2D surface -- an effective x-y plane -- you must rasterize the scene to be able to display it. This also translates the world coordinates into 2D screen coordinates, and our graphics are now defined in terms of pixels rather than vertices.
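
To illustrate the math the rasterizer performs, DirectXMath's XMVector3Project helper maps a world-space point through the projection transform, the perspective divide, and a viewport transform into pixel coordinates. The 1280x720 viewport and the function name here are example values, not fixed API requirements.

    #include <DirectXMath.h>
    using namespace DirectX;

    // Maps a world-space point through the projection, the perspective
    // divide, and a viewport transform, yielding 2D pixel coordinates.
    XMFLOAT2 WorldToScreen(
        FXMVECTOR worldPoint, FXMMATRIX projection, CXMMATRIX view)
    {
        XMVECTOR screen = XMVector3Project(
            worldPoint,
            0.0f, 0.0f,          // viewport top-left corner
            1280.0f, 720.0f,     // viewport width and height
            0.0f, 1.0f,          // depth range
            projection, view,
            XMMatrixIdentity()); // world transform already applied

        XMFLOAT2 pixel;
        XMStoreFloat2(&pixel, screen);
        return pixel;
    }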

Because we can't draw surfaces that aren't visible in the 2D projection, like the back side of an object, we remove those surfaces. Now, instead of polygonal surfaces, we are working with fragments, or 2D pixel areas that represent those surfaces.
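
In Direct3D 11, back-face removal is controlled through the rasterizer state. A minimal sketch, assuming clockwise winding marks front-facing triangles:

    #include <d3d11.h>

    // Configures the rasterizer to discard back-facing triangles, so
    // surfaces pointing away from the camera are never shaded.
    HRESULT CreateBackfaceCullingState(
        ID3D11Device* device, ID3D11RasterizerState** outState)
    {
        D3D11_RASTERIZER_DESC desc = {};
        desc.FillMode = D3D11_FILL_SOLID;
        desc.CullMode = D3D11_CULL_BACK;    // remove triangles facing away
        desc.FrontCounterClockwise = FALSE; // clockwise winding is front-facing
        desc.DepthClipEnable = TRUE;

        return device->CreateRasterizerState(&desc, outState);
    }
    // Bind it with context->RSSetState(state) before drawing.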

Although you could theoretically create a mesh so complex that every detail is represented as geometry, this is expensive, both in terms of developing the mesh and in terms of performance. Texturing allows you to apply bitmaps to the surface fragments and to express visual detail with less of a performance penalty. Multitexturing allows you to express further detail and style by layering textures on top of each other, such as dirt maps or baked lighting effects. You can also use fragment (pixel) shaders, micro-programs that run per-pixel, to apply calculations dynamically that extend the illusion of detail, simulate complex lighting effects, or add stylistic visual properties to your 3D objects.
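
As one small example of the texturing machinery, here is a sketch that creates a typical sampler state for detail textures; the function name is illustrative, and the draw-time binding calls are shown in comments.

    #include <d3d11.h>

    // Creates a typical sampler for detail textures: trilinear filtering
    // with wrap addressing, so the texture tiles across large surfaces.
    HRESULT CreateDetailSampler(
        ID3D11Device* device, ID3D11SamplerState** outSampler)
    {
        D3D11_SAMPLER_DESC desc = {};
        desc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
        desc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
        desc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
        desc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
        desc.ComparisonFunc = D3D11_COMPARISON_NEVER;
        desc.MaxLOD = D3D11_FLOAT32_MAX;

        return device->CreateSamplerState(&desc, outSampler);
    }

    // At draw time, the pixel (fragment) shader samples the bound texture:
    //   context->PSSetShaderResources(0, 1, &textureView);
    //   context->PSSetSamplers(0, 1, &sampler);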

After you load your meshes, transform them, rasterize them, and apply detail textures and/or fragment shader effects, you can apply post-processing effects to the frame buffer to convey further detail or stylistic elements, including motion blur, filters, and full-frame lighting effects. Often, this involves rendering the frame buffer to a single texture and applying shader operations to that texture. A great deal of the unique look of your game can come from your investment in post-processing.
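
Here is a sketch of the render-to-texture pattern that underlies most post-processing, assuming the offscreen texture, its views, and the post-processing shaders were created at initialization (all names here are placeholders):

    #include <d3d11.h>

    // Two-pass rendering: draw the scene into an offscreen texture, then
    // draw a full-screen quad that reads that texture through a
    // post-processing pixel shader.
    void RenderWithPostProcess(
        ID3D11DeviceContext* context,
        ID3D11RenderTargetView* sceneTargetView,    // offscreen texture as output
        ID3D11ShaderResourceView* sceneTextureView, // same texture as input
        ID3D11RenderTargetView* backBufferView,
        UINT sceneIndexCount)
    {
        // Pass 1: render the 3D scene into the offscreen texture.
        context->OMSetRenderTargets(1, &sceneTargetView, nullptr);
        context->DrawIndexed(sceneIndexCount, 0, 0);

        // Pass 2: switch output to the back buffer (so the scene texture
        // is no longer a render target), bind it as the shader's input,
        // and draw a full-screen quad of four vertices.
        context->OMSetRenderTargets(1, &backBufferView, nullptr);
        context->PSSetShaderResources(0, 1, &sceneTextureView);
        context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
        context->Draw(4, 0);
    }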

Finally, you'll take the frame, as drawn in your back buffer or swap chain, and send it to the front for display. This appears on the monitor as a single frame of your game, and the process starts with the next cycle of the rendering loop.
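
In Direct3D 11, this final step is a single call on the swap chain; a minimal example, assuming swapChain is the IDXGISwapChain you created with your device:

    // A sync interval of 1 waits for the display's vertical blank,
    // capping the frame rate at the refresh rate; 0 presents immediately.
    swapChain->Present(1, 0);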

Design considerations

3D development can be a complex process. Good planning can go a long way toward simplifying it. Here are a few things to consider if you are new to 3D graphics development.

  • Start small. Focus on small scenes and simple meshes. You can make a great game with basic primitives. While graphics can be tremendously rewarding to work on, don't lose sight of your game. A small, tightly-scoped game with limited graphics is more fun to play than an ambitious, glitchy game with a sprawling scope and every graphics trick in the book.
  • Invest in good tools. Creating and organizing the complex meshes and textures that define the models for your game can quickly become overwhelming. Planning and creating complex levels and environments often demands intermediate tools that couple the layout of the environments to gameplay mechanics, such as events and AI behaviors, which are otherwise extremely tedious to bake directly into your code.
  • Use the power of shaders. You can do a lot with shaders and the High Level Shading Language (HLSL) that they execute as microcode. DirectX 11 makes the effective use of shaders easier than ever, but they can be a drain on performance if not used properly. Knowing when and how to load them is more important than the individual efficiency of the algorithms they implement. They are the most important feature of modern graphics programming.
  • Art is more than just the work of artists; it's also the skill of the developer in using shaders to realize a visual style. Aiming for photorealism won't necessarily give you the biggest impact, or the best performance.
  • Technology can also be a big draw when implemented in the service of your game play. One exciting new feature that you can add to your game with Direct3D is stereoscopic 3D graphics. Of course, your game must be developed with stereoscopic 3D effects in mind: this means creating a device and swap chain that support stereoscopic 3D, setting up left-eye and right-eye render target views, using stereoscopic 3D projection transforms, and querying and handling stereo 3D status and events. Phew! (For stereoscopic 3D to work, your target hardware platform needs to support Direct3D feature level 10_0 or above, a stereoscopic 3D-capable display configuration, and WDDM 1.2 drivers.)
  • Know your scene. The biggest performance gains can be picked up by knowing where detail is necessary and where it isn't, and where the camera will and won't be at any given time. Knowing your scene means developing around what the camera sees.
  • Be aware of your frame rate. A smooth, consistent 30 or 60 frames drawn and displayed every second makes for a much more pleasant experience for the player. Remember that not all graphics hardware is equal, and while your game may run at a smooth frame rate on your development computer, it may not run so well on a netbook. Find a good compromise between effects and performance.
  • Not everyone in your audience is on the cutting edge. If you're making a game with broader appeal, be sure to support lower Direct3D feature levels, such as 9_1, as well as feature level 11_0 for the cutting edge. (A minimal device-creation sketch that falls back across feature levels follows this list.)
  • Don't be afraid to experiment! Shader code is self-contained, and with a little practice, you can tweak effects and manage geometry without impacting the whole game system.
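
As mentioned in the feature-level bullet above, you can ask Direct3D 11 for the best feature level the hardware offers and fall back gracefully to older hardware. A minimal sketch (the function name is illustrative):

    #include <d3d11.h>

    // Requests the best available feature level but falls back as far as
    // 9_1, so the same code path runs on older hardware.
    HRESULT CreateDeviceWithFallback(
        ID3D11Device** outDevice,
        ID3D11DeviceContext** outContext,
        D3D_FEATURE_LEVEL* outLevel)
    {
        const D3D_FEATURE_LEVEL levels[] =
        {
            D3D_FEATURE_LEVEL_11_0,
            D3D_FEATURE_LEVEL_10_1,
            D3D_FEATURE_LEVEL_10_0,
            D3D_FEATURE_LEVEL_9_3,
            D3D_FEATURE_LEVEL_9_2,
            D3D_FEATURE_LEVEL_9_1,
        };

        return D3D11CreateDevice(
            nullptr,                    // default adapter
            D3D_DRIVER_TYPE_HARDWARE,
            nullptr,                    // no software rasterizer
            0,                          // no creation flags
            levels, ARRAYSIZE(levels),
            D3D11_SDK_VERSION,
            outDevice,
            outLevel,                   // receives the level actually granted
            outContext);
    }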

Getting started

Still confused about vertices and shaders? Here's more info to help you get started.

In this section

Create 3D graphics with DirectX: We show how to use DirectX programming to implement the fundamental concepts of 3D graphics.


Reference