Another widely used framework for rendering games is DirectX. It’s a collection of APIs for media applications, but it’s limited to Microsoft Windows systems, which includes Xbox. The goal of this project was to familiarize myself with the rendering pipeline of Direct3D and to apply some GPU rendering techniques I hadn’t used before: reflections, refractions, a skybox, and normal mapping.
I started off by following this tutorial, which explains in great detail how everything works. Just like with the OpenGL project, the first thing to do was to create a textured cube. Following the tutorial, this was no problem at all. It was interesting to see the differences and similarities between OpenGL and Direct3D. I found HLSL shaders more difficult to learn to write, because there’s so much you need to do before anything works, but once you’ve got something working, the strictness allows you to catch errors early on.
Time for something I hadn’t created before: a skybox. The concept is quite simple: a skybox is literally a box that is positioned on the camera with a fixed rotation and is rendered behind everything else, so anything in the scene will appear in front of it, no matter how small the box gets. Direct3D has good support for loading skybox (cube-map) textures, and in the shader you only have to give it the direction in which you’re looking to get the pixel back, with interpolation and other filtering already applied for you. The result:
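To illustrate what that direction-based lookup does under the hood, here is a rough Python sketch of how a view direction resolves to a cube-map face and a 2D texture coordinate. This is only an illustration of the general technique, not Direct3D’s actual sampler code; the face order and sign conventions follow the common +X/−X/+Y/−Y/+Z/−Z layout, and the function name is mine.

```python
def cubemap_face(direction):
    """Pick the cube-map face a view direction falls on.

    Returns (face, u, v) with u and v in [0, 1]. Face indices follow
    the common order: 0:+X, 1:-X, 2:+Y, 3:-Y, 4:+Z, 5:-Z.
    """
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:          # dominant X axis
        face, sc, tc, ma = (0, -z, -y, ax) if x > 0 else (1, z, -y, ax)
    elif ay >= ax and ay >= az:        # dominant Y axis
        face, sc, tc, ma = (2, x, z, ay) if y > 0 else (3, x, -z, ay)
    else:                              # dominant Z axis
        face, sc, tc, ma = (4, x, -y, az) if z > 0 else (5, -x, -y, az)
    # Project onto the face and remap from [-1, 1] to [0, 1].
    u = 0.5 * (sc / ma + 1.0)
    v = 0.5 * (tc / ma + 1.0)
    return face, u, v
```

The GPU then filters between neighbouring texels (and faces, at the edges) for you, which is exactly the part you get for free in the shader.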
The skybox texture was taken at Fisherman’s Bastion in Budapest. I just picked this one because it was free and looked good.
The next challenge was to have transparent or glossy objects refract or reflect the skybox. Again, this was relatively easy. You compute a few values per vertex (the camera-to-vertex vector, the reflection vector, the refraction vector, and the weight between them), which get interpolated and used in the pixel shader to find what colour the pixel should use. For example, here is a sphere with a reflection index of 1 (100%), making it reflect the skybox completely:
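The vector math itself is standard: HLSL even has `reflect()` and `refract()` intrinsics that do this on the GPU. A minimal Python sketch of what those two computations amount to (function names and conventions are mine; vectors are assumed to be normalized):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(incident, normal):
    # R = I - 2 * dot(N, I) * N, the mirror of I about the surface normal.
    d = dot(incident, normal)
    return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))

def refract(incident, normal, eta):
    # Snell's law with eta = n1 / n2 (e.g. ~1/1.5 going from air into glass).
    # Returns None on total internal reflection.
    cos_i = -dot(normal, incident)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta * i + (eta * cos_i - cos_t) * n
                 for i, n in zip(incident, normal))
```

The resulting vector is exactly what you feed back into the cube-map lookup to fetch the reflected or refracted skybox colour.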
And for refractions, you set the refraction index as desired and the reflection index to zero:
These can be combined to have an object that is both reflective and refractive:
Glad that’s working! But there’s something we’re missing. When you look at a real glass ball, you’ll notice that near the edges it reflects like a mirror, while at the center it hardly reflects at all. You can even see this with a window: put your eye really close to it and look along it at a grazing angle, and you’ll see a gradient between where it’s transparent and where it’s reflective. There’s a formula that gives you the ratio between how much something reflects and refracts, called the Fresnel equations. These are rather expensive to compute for every vertex every frame, but luckily there’s a much simpler approximation that is unnoticeably close to the real results, called Schlick’s approximation.
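Schlick’s approximation is just one power and one lerp, which is why it’s so cheap per pixel. A small Python sketch (the function name and default indices are my own choices; the defaults model light going from air into glass):

```python
def schlick(cos_theta, n1=1.0, n2=1.5):
    """Schlick's approximation of the Fresnel reflectance.

    cos_theta: cosine of the angle between the view direction and the
    surface normal (1.0 = looking straight at the surface, 0.0 = grazing).
    n1, n2: refractive indices of the two media.
    """
    # Reflectance at normal incidence.
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    # Reflectance rises towards 1.0 at grazing angles.
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5
```

For air-to-glass this gives about 4% reflection when looking straight on and 100% at a grazing angle, matching the window observation above.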
Adding Schlick’s approximation to the pixel shader made the results look much more realistic, especially for completely transparent objects.
A downside of my implementation is that it overrides the weight between reflecting and refracting, meaning it only works well on objects that are already very transparent. This can probably be fixed by interpolating the new ratio between the original weight and 1, but I still have to figure out a proper way to fix this.
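One way that interpolation idea could look, as a hypothetical sketch (names are mine): instead of replacing the material’s reflection weight with the Fresnel term, use the Fresnel term only to raise reflectivity from the material’s base weight towards 1.

```python
def lerp(a, b, t):
    return a + (b - a) * t

def reflection_weight(base_weight, fresnel):
    # Fresnel only *raises* reflectivity above the material's own
    # base weight instead of overriding it outright.
    return lerp(base_weight, 1.0, fresnel)
```

A fully transparent object (base weight 0) then behaves exactly as before, while an already-mirror-like object (base weight 1) stays fully reflective at all angles.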
Next on the list: normal mapping. It’s a way to make surfaces appear bumpy while they are actually flat. A normal map is basically a second texture that is applied to the same surface, only this texture is used to change the normal of the surface rather than to apply a colour. The newly calculated normal is then used for lighting, which gives the texture a sense of depth.
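Per pixel, that boils down to two steps: unpack the texel from colour range back into vector range, then rotate it from tangent space into the surface’s space using the tangent, bitangent, and normal (the TBN basis). A minimal Python sketch of just that math, with helper names of my own:

```python
import math

def unpack_normal(rgb):
    # Map texel channels from the [0, 1] colour range to the
    # [-1, 1] range a normal vector needs.
    return tuple(2.0 * c - 1.0 for c in rgb)

def perturb_normal(tangent, bitangent, normal, texel):
    # Rotate the tangent-space normal into the surface's space using
    # the TBN basis, then renormalize.
    tx, ty, tz = unpack_normal(texel)
    n = tuple(tx * t + ty * b + tz * nn
              for t, b, nn in zip(tangent, bitangent, normal))
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)
```

A “flat” texel like the typical light-blue (0.5, 0.5, 1.0) unpacks to (0, 0, 1) and leaves the surface normal unchanged, which is a handy sanity check when debugging.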
Getting it to work was a bit tricky, especially when debugging mistakes in the calculations. The only way to check correctness is to look at the result and judge whether it seems off, and something that looks right from one angle can still look wrong from another. It can also look off because the texture coordinates of the object itself are mirrored. A simple way to check that the texture is at least displayed correctly is to add some text to it: the text may end up rotated, but it should never be mirrored.
Once the texture coordinates were figured out and the normal map matched the texture, it was time to combine them. The result is just awesome!
I made the lights move to see how it looks in real time. There are two lights; a cyan and a white coloured one, and they spin just above the cube.
This project has turned out to be very educational. It’s great to have a better understanding of DirectX’s Direct3D. Personally I prefer the way buffers are handled over how OpenGL does it, and HLSL shaders are also somewhat nicer to work with than OpenGL’s GLSL shaders. The learning curve is a bit steeper, but the strictness is something I can appreciate. I may be a bit biased about this, because Visual Studio and IntelliSense were able to ease the process, while for the HLSL shaders I didn’t use a smart editor.