3D rendering (colorful box and ground plane)

  • itzvnodm
  • Oct 7, 2014
  • 4 min read
Purpose:

The assignment was to

  • Change from 2D to 3D, adding a floor object along with the cube.

  • Add a camera and controls, along with a depth buffer.

Program output:

Moving the cube: (Arrow keys)

Right, positive x-axis

Left, negative x-axis

Front, positive z-axis

Back, negative z-axis

Moving the camera: (WASD)

W, positive y-axis

A, negative x-axis

S, negative y-axis

D, positive x-axis

Until now we were rendering a rectangle in screen space (or bounds). Now that we have a world with multiple objects moving in 3D space, we take advantage of several coordinate systems to handle the rendering of all the objects, displaying only those that logically fit in the screen space the camera is looking at, with depth perception. To do this we need some representation of the coordinate systems (transformation matrices, or spaces) in which an object can exist. The camera is a special object that looks at the world, decides which objects fit in screen space, and has its own movement.

Model Space: Every object has its own model space, where it is centered at the origin (0.0f, 0.0f, 0.0f) and facing forward. This position is called the local position, and it is usually set to (0.0f, 0.0f, 0.0f), unless for artistic reasons we need a pivot point that is not the center of the object. In such cases the object rotates around this pivot whenever it is rotated.

transformation1.jpg

World Space: When an object is placed in the game world (in other words, world space), its position can be expressed in the world coordinate system. Every object placed in the world has a world location relative to the world origin, as shown below.

Graphics3D_LocalSpace.png
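As a sketch of the idea (using hypothetical minimal types, not the engine's actual math library), placing an object moves its model-space vertices into world space via the object's world location:

```cpp
#include <cassert>

// Hypothetical minimal vector type for illustration.
struct Vec3 { float x, y, z; };

// For a pure translation (no rotation or scale), a model-space position
// becomes a world-space position by adding the object's world location.
Vec3 ModelToWorld(const Vec3& modelPos, const Vec3& worldLocation) {
    return { modelPos.x + worldLocation.x,
             modelPos.y + worldLocation.y,
             modelPos.z + worldLocation.z };
}
```

In the full pipeline this is a 4x4 world matrix, so rotation and scale compose with translation, but the translation-only case shows the relationship between the two spaces.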

Camera Space: The camera can be considered the eye through which we see our rendered objects. Every object in the world can be transformed into camera space.

Graphics3D_CameraSpace.png

The camera accepts the following parameters when creating the view frustum. The view frustum is the volume in the world within which every object distinctly visible to the camera gets rendered on screen:

  • Position or Eye

  • Look-at, or the direction/point to capture

  • Up vector

  • Field of view

  • Aspect or (Width / Height)

  • Near plane and far plane.

Graphics3D_CameraPerspective.png
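A sketch of how the last three parameters combine into a perspective projection matrix, following the common left-handed D3D layout (the types here are illustrative stand-ins, not the engine's exact code):

```cpp
#include <cassert>
#include <cmath>

struct Mat4 { float m[4][4] = {}; };  // row-major, row-vector convention

// Builds a left-handed perspective projection from field of view (radians),
// aspect ratio (width / height), and the near and far planes.
Mat4 PerspectiveFovLH(float fovY, float aspect, float zNear, float zFar) {
    const float yScale = 1.0f / std::tan(fovY * 0.5f);  // cot(fov / 2)
    const float xScale = yScale / aspect;
    Mat4 p;
    p.m[0][0] = xScale;
    p.m[1][1] = yScale;
    p.m[2][2] = zFar / (zFar - zNear);          // maps [near, far] into [0, 1]
    p.m[2][3] = 1.0f;                           // copies view-space z into w
    p.m[3][2] = -zNear * zFar / (zFar - zNear);
    return p;
}
```

Note that near and far are not interchangeable in m[2][2] and m[3][2]; swapping them (see "Problems faced" below) breaks the depth mapping.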

Screen space: When the human eye views a scene, objects in the distance appear smaller than objects close by - this is known as perspective. Rendering 3D objects on a 2D screen essentially means capturing the Z (depth) value of each vertex and squishing the object onto the screen relative to its depth. As seen below, with the screen placed below the object, the 3D cube is squished onto the 2D plane.

pic010.png

An even better example is shown below:

img00006.jpg
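The squishing is the perspective divide: after projection, x and y are divided by the depth term, so the same offset shrinks the farther it is from the camera. A minimal sketch (hypothetical types):

```cpp
#include <cassert>

struct Vec2 { float x, y; };

// After the projection matrix runs, w holds the view-space depth; dividing
// by it pulls distant points toward the screen center, making them smaller.
Vec2 PerspectiveDivide(float x, float y, float w) {
    return { x / w, y / w };
}
```

A point 2 units off-axis lands at 1.0 when it is 2 units deep, but only 0.25 when it is 8 units deep, which is exactly why distant objects appear smaller.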

New Vertex Shader: The vertex shader now has even more constants compared to before. The 4x4 matrices are the ones that represent the spaces, or coordinate systems, in the game. Per-view constants are related to the camera, whereas the per-instance constants are related to each object.

Vertex.png

Pix output:

This Pix output shows the draw call of the 3D cube. The debugger tab on the right shows stepping through the vertex shader and how the per-instance variable is multiplied with the per-view constants to get the final screen position.

pix5.PNG
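What the shader step-through boils down to can be sketched like this (Mat4 and Vec4 are illustrative stand-ins for the engine's math types, with the per-view view and projection matrices precombined for brevity):

```cpp
#include <cassert>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };

// Row-vector convention as in D3D: result = v * M.
Vec4 Transform(const Vec4& v, const Mat4& M) {
    const float in[4] = { v.x, v.y, v.z, v.w };
    float out[4] = {};
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            out[col] += in[row] * M.m[row][col];
    return { out[0], out[1], out[2], out[3] };
}

// Combines two transforms, e.g. the per-instance world matrix with the
// per-view matrix.
Mat4 Multiply(const Mat4& A, const Mat4& B) {
    Mat4 R = {};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                R.m[i][j] += A.m[i][k] * B.m[k][j];
    return R;
}

// Final screen position, per vertex:
// screenPos = modelPos * world * viewProjection
Vec4 ToScreen(const Vec4& modelPos, const Mat4& world, const Mat4& viewProj) {
    return Transform(modelPos, Multiply(world, viewProj));
}
```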

Extra:

  • I made an effort to create more robust material constants handling, which now supports setting and getting floats, float arrays, vector4s, and D3DMatrix values (MaterialConstantData).

  • Each Material object has three std::vector<>s of material constants (MaterialConstantData): per-view constants, per-material constants, and per-instance constants.

material1.PNG
Material.PNG

How I went about designing the above:

I wanted to create a struct that held the values of shader material constants. Since the constants could be of various types (including float, float array, vector4, and Matrix4x4), I initially created an enum with the type name; based on that, the struct's constructor received a void pointer and type-cast it to do a new and a memcpy. For delete to work, I then had to create a destructor that deleted the void pointer using a case-by-case typecast and delete.

To improve on that, I created a template class, with which the new, memcpy, and delete could be managed much more easily, with far less code. But then I wondered: can I copy an object of a class using memcpy? I found out that it is not recommended and can cause undefined behavior for objects with non-trivial constructors or destructors. To fix that, I had the constructor receive an array of templated default values of type T and a count, and I looped through every array value, using its copy constructor to copy it into the void pointer. For this to work in the Material class, I just had to create a vector of these MaterialConstant<T>s. But then I realized that each constant could be of a different type, and I can't create a vector of a template with no type defined, or with multiple types.

To get around this, I created a non-templated base class (essentially an interface class with all the function declarations of the templated derived class). With this base class in place, I created a std::vector of BaseClass* and let polymorphism do its work, calling the appropriate functions of the templated derived class. Storing BaseClass pointers in the std::vector took care of object slicing as well. One more thing: the destructor of the BaseClass has to be virtual, so that deleting through a base pointer calls the derived destructor.
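A sketch of the design described above, with illustrative names (not the engine's actual classes):

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Non-templated interface so constants of different types can share one
// std::vector. The virtual destructor ensures that deleting through a base
// pointer runs the derived destructor.
class MaterialConstantBase {
public:
    virtual ~MaterialConstantBase() = default;
    virtual const std::string& Name() const = 0;
};

template <typename T>
class MaterialConstant : public MaterialConstantBase {
public:
    // Copy-construct each default value (no memcpy), so non-trivially
    // copyable types behave correctly.
    MaterialConstant(std::string name, const T* defaults, size_t count)
        : m_name(std::move(name)), m_values(defaults, defaults + count) {}

    const std::string& Name() const override { return m_name; }
    const T& Get(size_t i) const { return m_values[i]; }
    void Set(size_t i, const T& v) { m_values[i] = v; }

private:
    std::string m_name;
    std::vector<T> m_values;
};

// A material then keeps vectors of base pointers, e.g. per-view,
// per-material, and per-instance lists; pointers avoid object slicing.
using ConstantList = std::vector<std::unique_ptr<MaterialConstantBase>>;
```

Here std::vector replaces the hand-rolled void-pointer storage and std::unique_ptr replaces the manual delete, but the shape of the solution (interface base, templated derived, vector of base pointers) is the one described above.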

Finally, I don't know if it was overkill as an engineering task, but it was fun to learn about design.

Problems faced:

I had my near and far planes flipped around while initializing the camera and had a hard night finding the bug. Interested in knowing how that affects the output?

Time Spent:

Extra: 5hr

Handling material constants, setting and getting: 1hr

Creating camera and adding controller: 3hr

Making the output 3D and adding depth buffer: 1hr

Write-up: 2hr

Total: 12hr
