Rendering an entire MV with a homemade renderer!

So, I am a computer graphics guy. I wrote my own renderer from scratch and made it support the Embree ray tracing library. It is semi-fast and has proven able to render complex scenes. Hmm… maybe I can render a short MV in 1080p with it? Maybe I can!

What my renderer can’t do.

Because I’m using my homebrew renderer, there are some limitations I have to work with. These limitations influenced some decisions made later on, including:

  1. No animation support.
  2. Only reads the PBRT scene format.
  3. No tone mapping built in.
  4. No distributed computing support.
  5. Triangles only.

With that out of the way, let’s begin.

Preparing the scene.

I came across MMD motion files for the song ぶれないアイで a while ago. They seemed to be a perfect fit for what I need. I fired up Blender and tried to load the models and the motion files into it using mmd_tools.

Then I met my first problem: mmd_tools did not seem willing to import camera motion. Without it, the entire video would look boring. Nothing worked after a lot of trial and error, so I looked into mmd_tools’ code.

NOPE, I said to myself. The code is pretty much a mess with all the binary IO and parsing. So I went back to fiddling with Blender. Fortunately, I suddenly managed to import the camera motion, without ever knowing why it worked.

While I was having fun watching the MMD motion play back in Blender, I found the physics was screwed up. So I cranked up the physics simulation accuracy and baked it. Didn’t work. Cranked it up again. Didn’t work. I tried and tried. It finally worked at almost 2x Blender’s default setting.

Screenshot_20180107_214711.png
Yeah…
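I did all of this through Blender’s UI, but the same tweak can be scripted. Here is a minimal sketch for Blender 2.79, assuming mmd_tools set the model up with rigid body physics (the exact values are my guesses; “almost 2x the default” is the part that matters):

import bpy

rbw = bpy.context.scene.rigidbody_world
# Blender 2.79 defaults to 60 steps/sec and 10 solver iterations;
# roughly doubling both was what finally kept the simulation stable.
rbw.steps_per_second = 120   # assumed value, ~2x the default
rbw.solver_iterations = 20   # assumed value, ~2x the default

# Re-bake every physics cache so exported frames pick up the new settings.
bpy.ops.ptcache.bake_all(bake=True)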

Good! Now I have all the frames I want to render. I wrote a short script to dump each frame as an OBJ file (since my renderer can’t handle animations).

import bpy

scene = bpy.context.scene

# Step through the animation and export every frame as its own OBJ file.
for frame in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(frame)
    scene.update()  # re-evaluate modifiers and physics for this frame
    the_file = scene.name + "_" + str(scene.frame_current) + ".obj"
    bpy.ops.export_scene.obj(filepath="/run/media/marty/MYHDD/burena ai de/" + the_file)

I also dumped the camera position and orientation in basically the same way and generated a list of camera data for each frame. I have lost the script I used; nevertheless, here is the list.

Screenshot_20180107_220856.png

Each line stores where the camera is, where it is looking, and which way is “up” for the camera. The first line stores data for frame 1, the second line is for frame 2, etc…
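Since the original script is lost, here is a rough reconstruction of what it probably looked like (a sketch for Blender 2.79; the output path and the exact column order are assumptions):

import bpy
from mathutils import Vector

scene = bpy.context.scene
cam = scene.camera

with open("/tmp/camera_list.txt", "w") as f:  # placeholder path
    for frame in range(scene.frame_start, scene.frame_end + 1):
        scene.frame_set(frame)
        eye = cam.matrix_world.to_translation()
        rot = cam.matrix_world.to_quaternion()
        # A Blender camera looks down its local -Z axis; local +Y is its "up".
        target = eye + rot * Vector((0.0, 0.0, -1.0))
        up = rot * Vector((0.0, 1.0, 0.0))
        f.write(" ".join("%f" % v for v in list(eye) + list(target) + list(up)) + "\n")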

Since I only exported the main character’s mesh, I needed to set up the lighting for my scene. I grabbed a royalty-free environment map from the internet and placed some light sources into a “base” scene file, which serves as the stage itself. The character model can then be loaded into it to render the different frames.
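The base scene looked roughly like this (a sketch in PBRT v3 syntax; the file names, resolution, and light values here are placeholders, and my renderer’s dialect may differ slightly):

# The camera transform is swapped in per frame from the dumped camera list.
LookAt 0 1 5   0 1 0   0 1 0
Camera "perspective" "float fov" [45]
Film "image" "integer xresolution" [1920] "integer yresolution" [1080]

WorldBegin
  # Royalty-free environment map acting as the stage.
  LightSource "infinite" "string mapname" ["stage_env.exr"]
  # An extra fill light (position and intensity are placeholders).
  LightSource "point" "point from" [0 5 2] "rgb I" [10 10 10]
  # The converted per-frame character mesh gets included here.
  Include "frame_0001.pbrt"
WorldEnd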

A dirty yet effective distributed rendering system.

It is impossible to render the entire film on my desktop PC; it would simply take forever. I need a distributed rendering system to speed things up.

What I need is simple. A server assigns which frame a client needs to render and sends the necessary files to the client, and the clients render the frames. When they finish rendering, they send the rendered frame back to the server. Fortunately, because the only things changing from frame to frame are the main character’s mesh and the camera’s position, I can store the textures and other scene geometry on the client side and only send the OBJ files when needed.

But how would the server handle task scheduling? Simple! Let’s suppose I store rendered frames in ./img and the image files are named 0001.hdr, 0002.hdr, etc… The server scans the storage directory at start-up, finds the missing files, and adds all of them to a queue waiting for rendering. Then, when a client asks for a task, it sends out whatever is at the front of the queue.

Screenshot_20180107_224403.png
Server side control flow
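In code, the scheduling boils down to something like this (a sketch of the idea only; the real server also has to ship the per-frame files and receive results):

import os

IMG_DIR = "./img"
TOTAL_FRAMES = 7000  # the MV is roughly 7000 frames long

# At start-up, queue every frame that has no rendered file on disk yet.
pending = [f for f in range(1, TOTAL_FRAMES + 1)
           if not os.path.exists(os.path.join(IMG_DIR, "%04d.hdr" % f))]

def next_task():
    # Hand the front of the queue to whichever client asks first.
    return pending.pop(0) if pending else None  # None means "all frames done"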

And the client renders the frame, sends it back, and asks for more tasks.

screenshot_20180107_224828.png
Client side logic

This design is really simple and has some nice properties. If for whatever reason a client stops responding or crashes, I can simply restart the client and it will instantly get a new frame to work on. And the lost frames can be re-assigned by restarting the server.

And an added bonus: since I’m only sending files and text between the server and the clients, I ended up running the entire system over HTTP!
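So a client’s whole life is a loop like the following (a sketch; the server address, URL layout, and renderer invocation are placeholders rather than the actual protocol):

import subprocess
import urllib.request

SERVER = "http://192.168.1.10:8000"  # placeholder address

while True:
    # Ask the server for the next frame number to render.
    frame = int(urllib.request.urlopen(SERVER + "/task").read())
    if frame < 0:
        break  # the server has run out of work
    out = "%04d.hdr" % frame
    # "my_renderer" stands in for the actual renderer binary and flags.
    subprocess.run(["my_renderer", "frame_%04d.pbrt" % frame, "-o", out], check=True)
    # Send the finished frame back over HTTP, then loop for more work.
    with open(out, "rb") as f:
        urllib.request.urlopen(urllib.request.Request(SERVER + "/" + out, data=f.read()))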

Note that I’m sending OBJ files to my clients, but they can only load PBRT files. So I wrote a program called model2pbrt that converts model files into PBRT scene files (source available here).
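The core of that conversion is just rewriting OBJ vertices and faces as a PBRT triangle mesh. A stripped-down sketch of the idea (the real model2pbrt handles more, e.g. materials and texture coordinates; this toy version assumes a triangulated OBJ):

def obj_to_pbrt(obj_path, pbrt_path):
    points, indices = [], []
    with open(obj_path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":       # vertex position
                points += parts[1:4]
            elif parts[0] == "f":     # triangular face (limitation 5: triangles only)
                # OBJ indices are 1-based and may look like "3/1/2"; keep the vertex part.
                indices += [str(int(p.split("/")[0]) - 1) for p in parts[1:4]]
    with open(pbrt_path, "w") as f:
        f.write('Shape "trianglemesh"\n')
        f.write('    "integer indices" [%s]\n' % " ".join(indices))
        f.write('    "point P" [%s]\n' % " ".join(points))

obj_to_pbrt("frame_0001.obj", "frame_0001.pbrt")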

Rendering the film and post-processing

Now the exciting and boring part: launching every computer I have to render the film.

I installed my rendering client software on every computer that I have access to (my PC, my laptop, etc…) and launched all of them!

Watching the rendering server is like watching a handmade cake being baked. I was super excited to see what was going on and what the current status was. Also, who in the world can resist multiple open terminals with all the numbers changing on them? So I ended up staring at the terminals for a very long time, and finally minimized them after I felt the counter-productiveness creeping up my leg.

Screenshot from 2018-02-06 16-41-16
Rendering a patch of my MV using 2 computers

It is a waiting game now. I can’t do much besides not doing something stupid like turning off my PC (it runs both the server and an instance of the client). Fortunately, that didn’t happen. Ultimately, it took around 8 days to finish all the computing. (Special thanks to dic1911 for lending me his old laptop to offload some tasks.)

After all the results were submitted, I started processing them. First of all, since I saved all rendered images as HDR files, I needed to tone map them into an LDR format. I fired up LuminanceHDR to do that. This was not a smooth process: LuminanceHDR is capable of batch processing, but it is not designed to handle thousands of images at a time. (I couldn’t get multithreading going, and it can only do ~3000 files at a time.)
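In hindsight, a small script could have sidestepped both problems. Here is a sketch using OpenCV’s Reinhard operator and a process pool (the operator and its parameters are my stand-ins, not whatever LuminanceHDR actually applied):

import glob
from multiprocessing import Pool

import cv2

def tonemap(path):
    # Radiance .hdr files load as float32 images with these flags.
    hdr = cv2.imread(path, cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH)
    ldr = cv2.createTonemapReinhard(gamma=2.2).process(hdr)
    # Map [0,1] floats to 8-bit and write a BMP next to the HDR file.
    cv2.imwrite(path.replace(".hdr", ".bmp"), (ldr * 255).clip(0, 255).astype("uint8"))

if __name__ == "__main__":
    with Pool() as pool:  # one worker per core: the parallelism LuminanceHDR lacked
        pool.map(tonemap, sorted(glob.glob("./img/*.hdr")))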

I got 7000 BMP files as the result of processing all the files. Then I needed to chain them together to make a video. Fortunately, FFmpeg is to the rescue. This command collects all of my images into one video:

ffmpeg -r 30 -f image2 -s 1920x1080 -i %04d.bmp -i buranaaide.wav -vcodec libx264 -crf 18 video.mp4
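(Here -r 30 sets the frame rate to 30 fps, -crf 18 is a near-transparent quality level for libx264, and the WAV file supplies the audio track.)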

Voilà, I’m done! I got an MV rendered with my homebrew renderer!

You can watch it on YouTube!

The imperfections

I discovered that I have some bugs when looking through the HDR files. The edges of transparent objects seem too bright, like so:

Screenshot from 2018-02-07 00-36-37.png

Also, sometimes the image flickers a little in the final video. I suppose that is caused by the tone mapping algorithm: an operator that adapts to each frame’s brightness independently can shift the exposure slightly from frame to frame.

Please leave a comment or contact me if you have any questions!
