Ray Tracing Meshes
In a nutshell: we will run through how to write a ray tracer that can render triangle mesh geometry with the help of tinyobjloader.
Ray Tracing Triangle Meshes
Learning ray tracing nowadays is really intuitive, as there is a wealth of (free!) information out there. Online tutorials, blogs and books all do a fantastic job of introducing the basics of ray tracing (such as Peter Shirley's 'Ray Tracing In One Weekend'). There are also really in-depth deep dives into ray tracing (such as PBRT). However, in my experience I struggled to find a good middle ground, especially as someone with a non-technical background. Last year, I found myself in a situation where I felt comfortable with the basics, but the climb towards reading material such as PBRT seemed too far out of my reach. I personally found that breaking things down into smaller project-based goals made that climb much more achievable.
One particular area I found tough when progressing to more advanced ray tracing was rendering more complex geometry such as meshes. As a CG artist, I had a bunch of personal projects where I had modelled, textured and shaded assets from scratch. How cool would it be to now render them in my own ray tracer? Unfortunately, I struggled to find many resources online that explained this in a plain (artist-friendly!) and step-by-step way, so that is the goal of this blog post: to help anyone who has a non-technical background but is comfortable with basic ray tracing and wants to progress further.
For completeness, I will cover how to write the full ray tracer, but feel free to skip down to Step 6 if you are only interested in the TriangleMesh functionality.
Assumptions
- We are familiar with setting up projects in an IDE with C++ (I will be using Visual Studio Code on macOS, but any OS should be fine)
- We are familiar with the basics of a ray tracer (vectors, matrices, camera projection, intersection routines, image writing)
- We are familiar with setting up CMake in our chosen IDE
Step 1 - Setup a Visual Studio Code project
We will start with a standard C++ and CMake project where main.cpp is just a simple "hello world\n". For convenience I will use GLM for our vector/matrix math, but you can use your own, or a different library such as Embree or Eigen. I've made a math.hpp file to include the GLM headers needed. To make it a bit more generic, I typedef glm::vec3 and glm::mat4 as Vector3f and Matrix4x4f, but again this is optional. So at this stage, we should have a CMakeLists.txt, main.cpp and math.hpp/cpp that look like:
math.hpp
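Mine looks roughly like this (a minimal sketch; the extra Vector2f alias is my own addition, which we will use later for UVs):

```cpp
// math.hpp -- GLM includes plus some friendlier type aliases.
#pragma once

#include <glm/glm.hpp>

typedef glm::vec2 Vector2f;
typedef glm::vec3 Vector3f;
typedef glm::mat4 Matrix4x4f;
```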
main.cpp
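And the placeholder main.cpp, just enough to confirm the project builds and runs:

```cpp
// main.cpp -- placeholder until we start tracing rays.
#include <cstdio>

int main() {
    printf("hello world\n");
    return 0;
}
```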
Step 2 - Simple Ray-Sphere intersection
Just to make sure our image writing and ray tracing have some basic functionality, let's go ahead and make a simple ray-sphere intersection routine. Assuming we are familiar with a simple ray-sphere tracer, this should all be straightforward. Let's make a Ray class (which we can add to the math.hpp file).
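A bare-bones version might look like this (a sketch; the operator() for evaluating a point along the ray is just a convenience I like):

```cpp
// A ray is an origin plus a (normalised) direction.
class Ray {
public:
    Ray(const Vector3f& origin, const Vector3f& direction)
        : origin(origin), direction(direction) {}

    // Point along the ray at parameter t: origin + t * direction.
    Vector3f operator()(float t) const { return origin + t * direction; }

    Vector3f origin;
    Vector3f direction;
};
```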
We will then make a struct to set the render options.
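Something like the following (the exact fields and default values here are just my choices):

```cpp
// Render settings gathered in one place.
struct RenderOptions {
    int width = 640;
    int height = 480;
    float fov = 90.0f;   // vertical field of view in degrees
    Vector3f background = Vector3f(0.2f, 0.2f, 0.2f);
};
```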
And then a function for ray-sphere intersection:
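Here is a sketch using the classic analytic approach of solving the quadratic (your version may differ):

```cpp
#include <cmath>

// Classic analytic ray-sphere test: solve |o + t*d - c|^2 = r^2 for t.
// Returns true and writes the nearest positive hit distance into t.
bool intersect_sphere(const Ray& ray, const Vector3f& center, float radius, float& t) {
    Vector3f oc = ray.origin - center;
    float a = glm::dot(ray.direction, ray.direction);
    float b = 2.0f * glm::dot(oc, ray.direction);
    float c = glm::dot(oc, oc) - radius * radius;
    float discriminant = b * b - 4.0f * a * c;
    if (discriminant < 0.0f)
        return false;
    float sqrt_d = std::sqrt(discriminant);
    // Prefer the nearer root, but fall back to the farther one if we start inside.
    float t0 = (-b - sqrt_d) / (2.0f * a);
    float t1 = (-b + sqrt_d) / (2.0f * a);
    t = (t0 > 0.0f) ? t0 : t1;
    return t > 0.0f;
}
```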
And a cast_ray function. If we hit a sphere, we will just draw the normal for that pixel.
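A minimal version, assuming a single hard-coded test sphere (the centre and radius values here are arbitrary):

```cpp
// Fire one ray into the scene; if we hit the sphere, visualise the normal.
Vector3f cast_ray(const Ray& ray, const RenderOptions& options) {
    const Vector3f center(0.0f, 0.0f, -4.0f);   // hard-coded test sphere
    const float radius = 1.0f;

    float t;
    if (intersect_sphere(ray, center, radius, t)) {
        Vector3f normal = glm::normalize(ray(t) - center);
        // Remap the normal from [-1, 1] to [0, 1] so it is displayable as a colour.
        return 0.5f * (normal + Vector3f(1.0f));
    }
    return options.background;
}
```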
Then finally in the main function, make the intersection routine
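Putting it together in main: loop over every pixel, generate a camera ray and write the result out (a sketch; the plain PPM output and file name are arbitrary choices, and this builds on the Ray, RenderOptions and cast_ray pieces above):

```cpp
// main.cpp -- assumes the Ray class, RenderOptions and cast_ray from above.
#include <cmath>
#include <fstream>
#include "math.hpp"

int main() {
    RenderOptions options;

    std::ofstream file("render.ppm");
    file << "P3\n" << options.width << " " << options.height << "\n255\n";

    float aspect = options.width / static_cast<float>(options.height);
    float scale = std::tan(glm::radians(options.fov * 0.5f));

    for (int y = 0; y < options.height; ++y) {
        for (int x = 0; x < options.width; ++x) {
            // Map the pixel centre into [-1, 1] screen space, accounting for FOV and aspect.
            float px = (2.0f * (x + 0.5f) / options.width - 1.0f) * scale * aspect;
            float py = (1.0f - 2.0f * (y + 0.5f) / options.height) * scale;

            Ray ray(Vector3f(0.0f), glm::normalize(Vector3f(px, py, -1.0f)));
            Vector3f colour = cast_ray(ray, options);

            file << static_cast<int>(255.99f * colour.r) << " "
                 << static_cast<int>(255.99f * colour.g) << " "
                 << static_cast<int>(255.99f * colour.b) << "\n";
        }
    }
    return 0;
}
```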
Which should give us this image:
Step 3 - Generic Shape class
Okay, this is working well as a starting point! We've got a simple project that can be built using CMake and successfully writes out ray traced images. Now we can make a more generic Shape class with a bare-bones intersection function. Pretty standard OOP practice here. From there, let's make a Sphere class. I also make a Scene class, which basically abstracts a list of shapes (so we can have multiple objects easily). If you'd rather structure your ray tracer in a data-oriented manner, that's fine too. Firstly, we should update our math.hpp file with some more functionality for the Ray class to store t_min and t_max values. We will also add a struct for storing ray-shape interaction attributes such as the intersection point, normals, etc. These can be used for shading calculations. A deg2rad function is also useful for the camera class later on!
math.hpp
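The updated file might look like this (a sketch; making t_max mutable is a trick borrowed from PBRT so that a const ray can still shrink its search interval as closer hits are found):

```cpp
// math.hpp -- updated for Step 3.
#pragma once

#include <glm/glm.hpp>
#include <limits>

typedef glm::vec2 Vector2f;
typedef glm::vec3 Vector3f;
typedef glm::mat4 Matrix4x4f;

inline float deg2rad(float degrees) { return degrees * 3.14159265358979f / 180.0f; }

class Ray {
public:
    Ray(const Vector3f& origin, const Vector3f& direction)
        : origin(origin), direction(direction),
          t_min(0.001f), t_max(std::numeric_limits<float>::max()) {}

    Vector3f operator()(float t) const { return origin + t * direction; }

    Vector3f origin;
    Vector3f direction;
    float t_min;
    mutable float t_max;   // shrinks as we find closer hits
};

// Attributes at an intersection, handed back for shading calculations.
struct Interaction {
    Vector3f point;    // world-space hit position
    Vector3f normal;   // normal at the hit
    Vector2f uv;       // texture coordinates at the hit
    float t = 0.0f;    // ray parameter of the hit
};
```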
Generic shape class looks like:
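Roughly like this (shape.hpp, sketched; the exact intersect signature is my own choice):

```cpp
// shape.hpp -- abstract base class every renderable primitive inherits from.
#pragma once

#include "math.hpp"

class Shape {
public:
    virtual ~Shape() = default;

    // Returns true if the ray hits within [t_min, t_max], filling in the
    // interaction attributes used later for shading.
    virtual bool intersect(const Ray& ray, Interaction& interaction) const = 0;
};
```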
And the Sphere and Scene classes like:
sphere.hpp
sphere.cpp
scene.hpp
scene.cpp
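For brevity, here is a condensed header-only sketch of both (the real project splits these across the .hpp/.cpp files listed above; the sphere test assumes normalised ray directions so the quadratic's a term is 1):

```cpp
#pragma once

#include <cmath>
#include <memory>
#include <vector>
#include "shape.hpp"

class Sphere : public Shape {
public:
    Sphere(const Vector3f& center, float radius) : center(center), radius(radius) {}

    bool intersect(const Ray& ray, Interaction& interaction) const override {
        Vector3f oc = ray.origin - center;
        float b = 2.0f * glm::dot(oc, ray.direction);
        float c = glm::dot(oc, oc) - radius * radius;
        float discriminant = b * b - 4.0f * c;   // a == 1 for normalised directions
        if (discriminant < 0.0f) return false;
        float sqrt_d = std::sqrt(discriminant);
        // Take the nearest root inside [t_min, t_max].
        float t = (-b - sqrt_d) * 0.5f;
        if (t < ray.t_min || t > ray.t_max) {
            t = (-b + sqrt_d) * 0.5f;
            if (t < ray.t_min || t > ray.t_max) return false;
        }
        interaction.t = t;
        interaction.point = ray(t);
        interaction.normal = (interaction.point - center) / radius;
        return true;
    }

private:
    Vector3f center;
    float radius;
};

// The Scene just owns a list of shapes and finds the closest hit among them.
class Scene {
public:
    void add(std::shared_ptr<Shape> shape) { shapes.push_back(std::move(shape)); }

    bool intersect(const Ray& ray, Interaction& interaction) const {
        bool hit = false;
        for (const auto& shape : shapes) {
            Interaction current;
            if (shape->intersect(ray, current)) {
                ray.t_max = current.t;   // shrink the interval: only closer hits count
                interaction = current;
                hit = true;
            }
        }
        return hit;
    }

private:
    std::vector<std::shared_ptr<Shape>> shapes;
};
```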
Visually, our ray tracer will produce identical results. This step is more just to restructure our code in a way that makes implementing triangle meshes more straightforward. To recap, we should have math.hpp/cpp, shape.hpp/cpp, scene.hpp/cpp, sphere.hpp/cpp and finally our main.cpp files!
Step 4 - Adding a camera class
This part is optional. However, under the assumption that one might use this tutorial as a launch pad for implementing additional features (BVHs, materials, lights, etc.), I will quickly run through adding a basic camera class. Since triangle meshes can come in many sizes, it is useful to have a more flexible camera system. My preference is one that uses a camera-to-world matrix, so we can easily copy transformation matrices from Maya. This is a pretty common workflow for me: I will figure out the layout and composition of a scene in Maya, do some preview tests and then export the data for my hobby renderer to read. But you can use a LookAt camera set-up, or skip this step entirely if you prefer. For the camera class, the only function we need for now is one that generates a ray direction based on FOV, resolution and image aspect ratio.
camera.hpp
camera.cpp
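A sketch of what that might look like, condensed into one header (the generate_ray name and constructor arguments are my own choices):

```cpp
// camera.hpp -- a camera driven by a camera-to-world matrix,
// so transforms can be copied straight out of Maya.
#pragma once

#include <cmath>
#include "math.hpp"

class Camera {
public:
    Camera(const Matrix4x4f& camera_to_world, float fov_degrees, int width, int height)
        : camera_to_world(camera_to_world),
          scale(std::tan(deg2rad(fov_degrees * 0.5f))),
          width(width), height(height),
          aspect(width / static_cast<float>(height)) {}

    // Build a world-space ray through the centre of pixel (x, y).
    Ray generate_ray(int x, int y) const {
        float px = (2.0f * (x + 0.5f) / width - 1.0f) * scale * aspect;
        float py = (1.0f - 2.0f * (y + 0.5f) / height) * scale;
        // Transform the origin (a point, w = 1) and direction (a vector, w = 0)
        // from camera space into world space.
        Vector3f origin = Vector3f(camera_to_world * glm::vec4(0.0f, 0.0f, 0.0f, 1.0f));
        Vector3f direction = glm::normalize(
            Vector3f(camera_to_world * glm::vec4(px, py, -1.0f, 0.0f)));
        return Ray(origin, direction);
    }

private:
    Matrix4x4f camera_to_world;
    float scale;
    int width, height;
    float aspect;
};
```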
Again, if you don’t feel like implementing this, you can skip this! However, I think it makes positioning the camera much easier.
Step 5 - Rendering a triangle!
Before we can render whole triangle meshes, we will need to be able to render at least one triangle successfully! To do this, I'll create a struct for a Vertex, which basically means we can store more information than just a point in space. A vertex will hold a position value, a geometric normal and UV coordinates.
Here’s the vertex struct, no surprises here!
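Something like this:

```cpp
// A vertex carries more than a position: the attributes we want to interpolate.
struct Vertex {
    Vector3f position;   // P
    Vector3f normal;     // geometric normal N
    Vector2f uv;         // texture coordinates
};
```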
And for the Triangle class, we will just have an additional function for getting barycentric coordinates to interpolate vertex attributes. I'm assuming we are all familiar with ray-triangle intersection and how to set up a triangle shape. If not, feel free to jump to the end, where I link the source code.
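For reference, here is a sketch using the well-known Möller-Trumbore test; unlike the separate-function approach described above, I fold the barycentric computation into the intersection itself, since Möller-Trumbore produces (u, v) as a by-product anyway:

```cpp
// triangle.hpp -- a sketch; assumes the Vertex struct above is visible here.
#pragma once

#include <cmath>
#include "shape.hpp"

class Triangle : public Shape {
public:
    Triangle(const Vertex& v0, const Vertex& v1, const Vertex& v2)
        : v0(v0), v1(v1), v2(v2) {}

    bool intersect(const Ray& ray, Interaction& interaction) const override {
        Vector3f e1 = v1.position - v0.position;
        Vector3f e2 = v2.position - v0.position;
        Vector3f p = glm::cross(ray.direction, e2);
        float det = glm::dot(e1, p);
        if (std::fabs(det) < 1e-8f) return false;   // ray parallel to triangle
        float inv_det = 1.0f / det;

        Vector3f s = ray.origin - v0.position;
        float u = glm::dot(s, p) * inv_det;
        if (u < 0.0f || u > 1.0f) return false;

        Vector3f q = glm::cross(s, e1);
        float v = glm::dot(ray.direction, q) * inv_det;
        if (v < 0.0f || u + v > 1.0f) return false;

        float t = glm::dot(e2, q) * inv_det;
        if (t < ray.t_min || t > ray.t_max) return false;

        // Interpolate vertex attributes with the barycentric weights (1-u-v, u, v).
        float w = 1.0f - u - v;
        interaction.t = t;
        interaction.point = ray(t);
        interaction.normal = glm::normalize(w * v0.normal + u * v1.normal + v * v2.normal);
        interaction.uv = w * v0.uv + u * v1.uv + v * v2.uv;
        return true;
    }

private:
    Vertex v0, v1, v2;
};
```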
Okay, so now we have implemented a Triangle class that inherits from Shape. It has its own intersection routine and a function to obtain barycentric coordinates. So, if we make a test triangle and try to render it, we can hopefully end up with something like this:
Great stuff! We can render a single triangle, or more if we want to. We are now ready to start loading in triangle meshes.
Step 6 - TriangleMesh Rendering
Hopefully everything up until now has been pretty straightforward and familiar. If not, I would really recommend taking a look at Scratchapixel or Ray Tracing In One Weekend. Now for the main part! We will use the OBJ file format. Whilst certainly not the industry-standard geometry file type, it is nevertheless pretty easy to export from Maya, Modo and ZBrush, which makes it well suited to small toy ray tracers. To read OBJ data, we will use Syoyo Fujita's wonderful tinyobjloader, which you can find here. Loading the data into our program will be quite straightforward: we essentially need to wrangle the vertices from the OBJ file. Each vertex can have a number of attributes associated with it. The main ones I use are vertex P values, vertex normals and UV coordinates. We can grab these and add them to a list holding all the vertices for the OBJ. Luckily, tinyobjloader makes this really straightforward:
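A sketch of the loading code, built on tinyobjloader's LoadObj API (the load_obj function name and returning a flat std::vector<Vertex> are my own choices):

```cpp
#include <string>
#include <vector>

#define TINYOBJLOADER_IMPLEMENTATION   // define this in exactly one .cpp file
#include "tiny_obj_loader.h"

// Load an OBJ and flatten it into a triangle-ordered list of our Vertex structs.
// LoadObj triangulates faces by default, so indices come in groups of three.
std::vector<Vertex> load_obj(const std::string& filename) {
    tinyobj::attrib_t attrib;
    std::vector<tinyobj::shape_t> shapes;
    std::vector<tinyobj::material_t> materials;
    std::string warn, err;

    std::vector<Vertex> vertices;
    if (!tinyobj::LoadObj(&attrib, &shapes, &materials, &warn, &err, filename.c_str()))
        return vertices;   // in a real project, report warn/err here

    for (const auto& shape : shapes) {
        for (const auto& index : shape.mesh.indices) {
            Vertex vertex;
            // Attributes are flat float arrays: 3 floats per position/normal, 2 per UV.
            vertex.position = Vector3f(attrib.vertices[3 * index.vertex_index + 0],
                                       attrib.vertices[3 * index.vertex_index + 1],
                                       attrib.vertices[3 * index.vertex_index + 2]);
            if (index.normal_index >= 0)   // -1 means the OBJ has no normals
                vertex.normal = Vector3f(attrib.normals[3 * index.normal_index + 0],
                                         attrib.normals[3 * index.normal_index + 1],
                                         attrib.normals[3 * index.normal_index + 2]);
            if (index.texcoord_index >= 0)   // -1 means no UVs
                vertex.uv = Vector2f(attrib.texcoords[2 * index.texcoord_index + 0],
                                     attrib.texcoords[2 * index.texcoord_index + 1]);
            vertices.push_back(vertex);
        }
    }
    return vertices;
}
```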
So essentially we loop through the whole OBJ file, scooping up the positions of all the vertices, the normal values and the UV coordinates, and add them to a container (I'm using a std::vector for ease of use). However, at this point we have a list of unconnected vertices, so we are not quite done yet. We need a list of triangles for the ray tracer to test intersections against. Luckily this is pretty simple: we can loop through the vertex list and create a triangle from every 3 vertices, like so:
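For instance (assuming the vertices vector returned by the loader above):

```cpp
#include <memory>
#include <vector>

// Group every three consecutive vertices into one triangle
// (tinyobjloader triangulates faces for us by default).
std::vector<std::shared_ptr<Triangle>> triangles;
for (size_t i = 0; i + 2 < vertices.size(); i += 3) {
    triangles.push_back(
        std::make_shared<Triangle>(vertices[i], vertices[i + 1], vertices[i + 2]));
}
```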
Then for the intersection routine of a TriangleMesh, we just loop through all the triangles testing for intersections, and whichever triangle intersects closest to the camera is the one we use for shading! Obviously this is really slow: we are essentially checking every single pixel for intersection with every single triangle. That's pretty slow already when we are firing just one visibility ray per pixel; imagine if we had a path tracer running at 128 spp! This would be disastrously slow. Luckily we can leverage tree-based acceleration structures in the form of Bounding Volume Hierarchies (BVHs), which are used in production renderers to accelerate ray tracing. Acceleration structures are probably worth their own blog post.
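For reference, a sketch of that brute-force routine as a TriangleMesh shape (the class layout is my own; the mutable t_max from earlier is what lets a const ray shrink its search interval as hits come in):

```cpp
// trianglemesh.hpp -- brute-force intersection over every triangle in the mesh.
#pragma once

#include <memory>
#include <vector>
#include "triangle.hpp"

class TriangleMesh : public Shape {
public:
    explicit TriangleMesh(std::vector<std::shared_ptr<Triangle>> triangles)
        : triangles(std::move(triangles)) {}

    bool intersect(const Ray& ray, Interaction& interaction) const override {
        bool hit = false;
        for (const auto& triangle : triangles) {
            Interaction current;
            if (triangle->intersect(ray, current)) {
                ray.t_max = current.t;   // only accept closer hits from here on
                interaction = current;
                hit = true;
            }
        }
        return hit;
    }

private:
    std::vector<std::shared_ptr<Triangle>> triangles;
};
```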
This should now work! Here are some test renders I made:
So that's how we can create a simple OBJ ray tracer! There are a bunch of ways we can extend this. For example, currently the only "shading" we have is filling in pixels with the background colour or some arbitrary value such as the barycentric coordinates, N·V or UVs. This could be extended by adding lights. We could also implement materials such as Lambert, specular and glass BxDFs. Since we already know how to load UVs, we could even start adding texture maps. For performance, we could look into simple multi-threading or acceleration structures. And now that we can render triangle meshes, one could extend the code to support subdivision surfaces, or maybe curve primitives. For now though, I hope this has been useful in explaining how to ray trace custom OBJ files! Thanks for reading.
Source code is here.