Camera-Based Culling in Houdini
I’ll be doing a quick overview of how I set up our camera-based culling system for our film. We faced a number of obstacles on this project, some of them internal and many of them external. One significant hiccup late in production was that we had access to significantly less render power than we were initially promised, so we had to come up with solutions that would allow us to render scenes entirely on individual, moderate-power lab computers. This ranged from generous use of render layers to the implementation of a rudimentary LOD system on the building generator to significant reduction of detail on certain distant assets not incorporated in the LOD system. As helpful as these techniques were, they weren’t enough to counteract the lockups Redshift experienced during initial scene setup. After setting up this culling system, we saw enormous performance increases, and not just at render time. For example, one function of the system culled city blocks before any procedural generation could happen, dramatically improving viewport performance and reducing calculation times.
First, I’ll go over the technical details of how this was accomplished, then I’ll talk about how we integrated this system very quickly into our pre-existing workflow. To begin, the code behind it is quite simple. After a specific camera was designated as the center of culling, various parameters could be pulled from it, such as its Euler rotations, its aperture, and its focal length, and from these its field of view could be calculated (more useful formulas can be found in Houdini’s documentation: https://www.sidefx.com/docs/houdini/ref/cameralenses). Due to our specific needs and time constraints, I only worried about the horizontal field of view, and as our camera never rolled significantly along its z-axis, I only needed to account for y-axis rotations. As such, my next step was to convert the initial camera values to radians and compensate for field of view on the left and right camera bounds. I also added a user-editable parameter to give extra range to the culling system for specific objects. This was important because some objects, like city blocks, were culled from their centers, so the culling system needed a little extra field of view to pick up the blocks on the edges of the viewing field. Furthermore, sometimes the camera had its pitch adjusted, and a couple of extra degrees of sight were necessary to compensate.
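To make the math above concrete, here’s a minimal Python sketch of the bounds calculation. It uses the lens formula from the Houdini documentation linked above (fov = 2·atan((aperture/2)/focal)); the function names and the `extra_deg` parameter are illustrative, not the actual names from our setup.

```python
import math

def horizontal_fov(aperture_mm, focal_mm):
    """Horizontal field of view in radians, per Houdini's lens formula."""
    return 2.0 * math.atan((aperture_mm / 2.0) / focal_mm)

def yaw_bounds(cam_ry_deg, aperture_mm, focal_mm, extra_deg=0.0):
    """Left/right yaw angles (radians) of the viewing frustum around the
    camera's y rotation. extra_deg widens the frustum slightly, to catch
    objects (like city blocks) that are culled from their centers."""
    ry = math.radians(cam_ry_deg)
    half = horizontal_fov(aperture_mm, focal_mm) / 2.0 + math.radians(extra_deg)
    return ry - half, ry + half
```

As a sanity check, Houdini’s default camera (41.4214 mm aperture, 50 mm focal length) works out to a 45° horizontal field of view.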
After compiling the vectors representing the Euler rotations of the center of the camera’s field of view, as well as its left and right bounds, I converted those Euler rotations to direction vectors with a little math, since Houdini’s built-in Clip node defines its clipping plane using a direction vector. I then wrote these vectors out to detail attributes that were referenced by a couple of Clip nodes. All in all, a pretty basic system, but it fit our needs well enough. However, the problem at this point became: how do we integrate this new culling system across all of our different shots, which had already been set up and were nearly ready to be rendered?
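The Euler-to-direction conversion is just a y-axis rotation applied to the camera’s rest direction. Here’s a Python sketch of that step, assuming Houdini’s convention that a camera with zero rotation looks down −Z; the attribute names in the returned dictionary are illustrative stand-ins for the detail attributes the Clip nodes referenced.

```python
import math

def yaw_to_dir(yaw_rad):
    """Unit direction vector for a pure y-axis (yaw) rotation.
    A Houdini camera with zero rotation looks down -Z, so yaw 0
    maps to (0, 0, -1)."""
    return (-math.sin(yaw_rad), 0.0, -math.cos(yaw_rad))

def culling_dirs(cam_ry_deg, half_fov_deg):
    """Direction vectors for the frustum center and its left/right
    bounds, of the kind written to detail attributes for the Clip
    nodes (attribute names here are hypothetical)."""
    ry = math.radians(cam_ry_deg)
    half = math.radians(half_fov_deg)
    return {
        "view_center": yaw_to_dir(ry),
        "view_left":   yaw_to_dir(ry + half),
        "view_right":  yaw_to_dir(ry - half),
    }
```

Because the vectors come straight out of sin/cos of the yaw angle, they are already normalized, which is what a clipping-plane direction expects.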
Luckily, when I set up all of our scene files, I made liberal use of Houdini’s digital asset system, meaning I could make changes to any given subsystem and they would propagate to any scene using that system. If you’re not familiar, digital assets essentially allow you to save specific node trees out to disk, separate from their scene files. Thus, one can edit just that asset file without making changes to any given scene, much like Maya’s referencing system. Because of this, the answer became pretty obvious. I packaged the culling system into its own digital asset and embedded it within all of our different environmental HDAs. Then I promoted the extra-FOV parameter to the top level of the environmental assets that might need it, for convenient tweaking on a scene-to-scene basis. All this worked great, but there was one more issue to grapple with: how would we get each digital asset to recognize the appropriate camera consistently when camera naming was not consistent from shot to shot? The solution I ultimately came up with was to create one more digital asset whose sole purpose was to have a single field filled out on it. No nodes within, just a string field that a user could drag their camera into. I chose this method because our Houdini artists needed to reinstall HDAs at the beginning of every work session due to factors outside of our control. Thus, anyone working in our scene files would inevitably install this camera selector asset at the beginning of their work session, so the only directive I had to give other artists was to place down a new camera selector node in any scene they worked in, if it didn’t already exist, and plug the camera in. Relative referencing and consistent naming behaviors took care of the rest.