Partio / PartioHoudini builds

After hours of going borderline crazy, I’ve just managed to build the latest version of Partio for Windows (Command Line and Houdini). This was way harder than it should have been, thanks to all kinds of weird peculiarities in Visual Studio 2012 and 2013. Jeeez.

This build is based on the current redpawfx partio repository (as of 28/05/15).

It’s not perfect. The Houdini binary does not output any messages regardless of the verbosity level, due to cout/cerr/endl not linking. I’m sure it’s something obvious to someone – for now I can live without it 😉

The Houdini plugin is compiled with VS2012.

The Command Line tools are compiled with VS2013.

There’s a partio.lib for both VS2012 and VS2013.

If you can use it, enjoy, no guarantees or warranty. It may set your machine on fire.
(for example, I had to replace dynamically sized arrays with std::vectors here and there, because Visual Studio does not support variable-length arrays – who knows what that will cause… seems to work for now)

Anyway, here’s the download:

partio binary may2015 (rar)

Berlin Geo Data (or, why can nothing ever be easy?)

Following my previous adventures with geo data, today I had to deal with some from Berlin. First of all, it is fantastic that pretty much everything you want is available for free. So that’s a plus.

Not so cool: finding the right pieces of data is a bit of a hassle.

First step seemed straightforward: find the right coordinates. I used this page to do it. I was expecting Gauß-Krüger coordinates to be used, and they looked similar enough to this to be right. Not so. They are using UTM WGS84, so make sure you’re looking them up correctly.

The actual data is here: Stadtentwicklung Berlin Geoinformation

What I wanted could be found under “ATKIS® DGM (2m-Rasterweite)” (which gives you XYZ based elevation data) and “Digitale farbige Orthophotos 2014 (DOP20RGB)”, which gives you orthophotos of the Google Maps style variety. Yay! But: they are in the ECW format, ye olde trusty “ERDAS Compressed Wavelets”. Right. Totally. Which nothing can read. IrfanView tried and failed on 90% of them.

So, next issue: how to convert ECW to something usable? gdal_translate is what you need. In theory. It’s a command line tool. Deal with it. Get the binary package at gisinternals.

You can then find it under bin\gdal\apps\gdal_translate.exe. Copy it back to the bin folder, since otherwise it can’t find the DLLs (unless you want to add the folder to your path).

Here’s how to use it.
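The gist of it, wrapped in Python for convenience (a sketch only – the paths are hypothetical, adjust them to wherever you unpacked the gisinternals package; -of GTiff selects GeoTIFF output):

```python
import os
import subprocess

# hypothetical paths -- adjust to your setup
gdal_translate = r"C:\gdal\bin\gdal_translate.exe"
src = r"C:\berlin\dop20rgb.ecw"
dst = r"C:\berlin\dop20rgb.tif"

# gdal_translate <options> <source> <destination>
cmd = [gdal_translate, "-of", "GTiff", src, dst]
if os.path.exists(gdal_translate):
    subprocess.check_call(cmd)
```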

All good? Nice. In my case however, gdal_translate would not read the bloody ecw files, no matter what.

Turns out there is a much simpler solution: XnView! There’s a plugin for it that reads and converts the files. Hi-Fives all around! Get it here!

Feed it all into Houdini via the XYZ Python node and an Attribute From Map node and you get yourself a point cloud with the aerial picture assigned to the color values.

Then find out that, alas, the resolution still isn’t nearly enough for what you planned to do with it.

The end. (;_;)

WordPress 4.2 update did a number on my site

Well then… this was fun. Updated to 4.2 and everything fell apart. The wheels came off. Fire in the attic. So, currently the theme is semi-fucked up and Gust isn’t working either… That’s all. Update: mostly working again now.

Update: Arūnas Liuiza has released Gust 0.4.1 which fixes the issues with WP 4.2.

Unreal: Invert middle mouse navigation to move like Maya / C4D / …

Nothing throws me off more when working with multiple applications than different shortcuts and mouse controls. It’s just super annoying to constantly have to readjust and actively think about how your hands are moving, instead of concentrating on the actual task at hand.

Having just started looking into the Unreal 4 engine, this was the first thing that immediately put me off a bit – the middle mouse navigation (Pan) acts inversely to what you’re used to from other 3D apps. Remember the first days when Apple changed the default touchscreen scroll direction? Yeah. Like that. Only in more dimensions.

Luckily – and I admit, this barely even qualifies as a “quicktip” – there is an option in the preferences that lets you invert the inverted, and since minus times minus makes plus, we’re golden.

DEM data coordinate system conversion

When dealing with DEM data, you can run into a whole bunch of coordinate systems other than the common longitude/latitude. For example, DEM data in Germany is often provided based on the Gauß-Krüger and UTM systems instead. Never heard of them? Well, they’re nice insofar as they give you a rectangular grid. They’re not nice in that converting between them and longitude/latitude is really not straightforward… luckily there are a number of conversion tools available online:

Orchids LL to various using GMap

Delattinia LL to GK using GMap

There are a number more (just google ‘lon lat utm convert’ or something), but it is important to know which reference system you are working in (ETRS89 vs WGS84 for example) and also which UTM grid you are in. And so on. Which is why I really like the two links above, because you just get your position from the map directly and can be pretty sure you’re not too far off then…
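At least figuring out which 6-degree UTM zone your longitude falls into is plain arithmetic (a quick sketch; the irregular zones around Norway and Svalbard are ignored here):

```python
def utm_zone(lon):
    # standard 6-degree zones, numbered 1 to 60 starting at 180 degrees west
    return int((lon + 180) // 6) + 1

print(utm_zone(13.4))  # Berlin: zone 33
```

Knowing the zone up front saves you from pasting coordinates into the wrong converter grid.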

Houdini: Dealing with Digital Elevation Model (DEM) data

This article is going to be a work in progress, since the subject is something I’m currently working on in a project and things change daily.

DEM data is basically a bunch of numbers giving you the elevation of a specific spot on earth. That spot is usually encoded in longitude and latitude or some other horizontal and vertical value. So, yes, basically XYZ data. Which makes it pretty easy to work with.

I’ve worked with two file formats so far, but there are a number more out there (though the principle stays the same, some come with extra header or meta data that needs to be evaluated before the data becomes useful).

SRTM data (.hgt)

SRTM stands for the Shuttle Radar Topography Mission by NASA, which kindly offers its data to the world. There are multiple versions in different resolutions and states of data quality, but they all use the hgt file format. It’s just a list of values that stand for height measurements, and you get the location of those points from the filename (lon/lat). The files are 1201×1201 measurements each, so you end up with a one degree longitude and latitude grid.

Btw., the amount of data can add up quite a bit – the full set of points for Germany from the SRTM data came to roughly 125 million points.
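A quick back-of-the-envelope check on those numbers – one 3-arcsecond tile alone holds:

```python
dim = 1201                      # samples per side in one SRTM3 tile
samples = dim * dim             # height values per tile
size_mb = samples * 2 / 1e6     # 16-bit values -> megabytes on disk
print(samples, round(size_mb, 1))  # 1442401 2.9
```

So the ~125 million points for Germany correspond to a bit under a hundred raw tiles.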

I’m using numpy to read the data and write them into an array. You should too. Numpy is great.

Here’s the basic idea. Put it into a Python node and be amazed.

import numpy as np
import os
import math

node = hou.pwd()
geo = node.geometry()
hgt_file = "c:/data/N00E000.hgt"
skip_amount = 10

lon = 10
lat = 47

basedir, fname = os.path.split(hgt_file)

lon_EW = 'E' if lon >= 0 else 'W'
lat_NS = 'N' if lat >= 0 else 'S'

# this builds the correct filename based on the lon and lat variables
base_fname = lat_NS + str(abs(int(math.floor(lat)))).zfill(2) + lon_EW + str(abs(int(math.floor(lon)))).zfill(3)
fname = base_fname + ".hgt"
hgt_file = basedir + os.sep + fname

if os.path.exists(hgt_file):
    siz = os.path.getsize(hgt_file)
    # could hardcode dim to 1201, but future 1" measurement files will have higher dimensions
    dim = int(math.sqrt(siz/2))
    assert dim*dim*2 == siz, 'Invalid file size'

    # >i2 means 16bit integer, big-endian (which is how the hgt data is stored)
    elev = np.fromfile(hgt_file, np.dtype('>i2'), dim*dim).reshape((dim, dim))

    # define iterator outside so we can access index inside loop
    it = np.nditer(elev, flags=['f_index'])
    geo.addAttrib(hou.attribType.Global, "hgt", base_fname, create_local_variable=True)
    geo.setGlobalAttribValue("hgt", base_fname)

    # create a nice dialogue to let us interrupt the operation if it takes too long
    with hou.InterruptableOperation(
        "Creating points from HGT file", open_interrupt_dialog=True) as operation:

        while not it.finished:
            # skip_amount to reduce the data
            # -32768 is the measurement in the hgt files if data is missing or faulty
            if it.index % skip_amount == 0 and it[0] > -32768:
                point = geo.createPoint()
                col = it.index % dim
                row = it.index // dim
                height = float(it[0])
                point.setPosition((row, height, col))
            percent = float(it.index) / float(dim * dim)
            operation.updateProgress(percent)
            it.iternext()

XYZ ASCII data (.xyz)

XYZ is a simple format: one data entry per line, three space-separated values for x, y and z.

Keeping things simpler, without the fancy automated name generation etc., here’s how to read the data:

import numpy as np
import os

node = hou.pwd()
geo = node.geometry()

xyz_file = hou.parm('/obj/geo1/python1/xyz_file').evalAsString()
x_scale, y_scale, z_scale = hou.parmTuple('/obj/geo1/python1/xyz_scale').evalAsFloats()

xyz = None
if os.path.exists(xyz_file):
    xyz_array = np.genfromtxt(xyz_file, dtype=[('x', np.float64), ('y', np.float64), ('z', np.float64)], delimiter=' ')

    for xyz in np.nditer(xyz_array):
        point = geo.createPoint()
        point.setPosition((xyz['x'] * x_scale, xyz['y'] * y_scale, xyz['z'] * z_scale))

Creating a surface

Houdini offers a lot of ways to surface point clouds. For example, how about creating a VDB from the points and then converting it to a Polygon Soup? That’s one way. Not super fast though.

But, remember that all the points are in an even grid. What else is in an even grid? A Grid object. So, here’s the, I think, fastest way to surface your map:

  • create a Grid, sized to the bounding box of the point cloud, and make sure the grid and point cloud sit at roughly the same level
  • write all the P.z values of the points to an attribute (e.g. “height”)
  • use Attribute Transfer to transfer “height” from the point cloud to the Grid, make sure to use a narrow kernel radius
  • use Attribute Wrangle to set the Grid’s @P.z to its height
  • if your point cloud has irregular edges, you could use Blast to remove everything from the Grid where height is e.g. 0.0

This method has another great advantage: playing with the Grid resolution lets you scale between high- and low-poly versions of your map.
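Conceptually, the Attribute Transfer step in the recipe above does something like this nearest-neighbour lookup (a toy sketch in plain Python – Houdini’s actual implementation is spatially accelerated and kernel-weighted):

```python
import math

# a tiny stand-in point cloud: ((x, z) position, height)
cloud = [((0.0, 0.0), 10.0), ((1.0, 0.0), 20.0),
         ((0.0, 1.0), 30.0), ((1.0, 1.0), 40.0)]

def nearest_height(gx, gz):
    # the grid point takes the height of the closest cloud point
    return min(cloud, key=lambda c: math.hypot(c[0][0] - gx, c[0][1] - gz))[1]

print(nearest_height(0.1, 0.1))  # 10.0
```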

Houdini: remember to lock your door – and your nodes

Something that’s often forgotten (and then sometimes remembered when waiting for the same, unchanged node to cook for the gazillionth time) is the fact that you can lock nodes in Houdini. You can think of the lock as taking a snapshot of your node-graph up to the point of that node and then working off of that snapshot.


Cons:

  • the node is locked, so no upstream changes come through
  • the HIP file grows bigger, because the node’s contents are cached into the scene file

Pros:

  • the contents are cached in the file – which size-wise can be a problem, but it also means no reliance on external files (File nodes can be locked too, which caches the contents of the actual file read in)
  • if memory is no issue, the scene loads and cooks potentially a lot faster, since none of the previous nodes need to be re-cooked

Cinema 4D and After Effects: OpenEXR Multi-Layer workflow

What follows is a simple step by step guide to set up your project for OpenEXR output in Cinema 4D in a way that makes sure the linear workflow is kept intact from the render to the compositing in After Effects (or other similar apps).

Cinema 4D

Project setup

Open your Project Settings either through the menu Edit > Project Settings… or by pressing Ctrl-D (on OSX it’s usually the same shortcut, only with Cmd instead of Ctrl). Here, make sure that Linear Workflow is checked and that the Input Color Profile is set to sRGB. This should be the default setting in Cinema 4D these days, but it’s better to check and be sure.

Background (skip if you don’t care): Linear workflow refers to the absence of gamma correction from your color workflow. Right. So what is gamma correction about? Our eyes read brightness in a non-linear fashion: a doubling of light does not mean a doubling of perceived brightness. What appears as 50% grey to us is actually just 18% grey. Gamma correction takes the fact that we perceive brightness on a curve into account and encodes colors accordingly, so we have more shades available where we actually notice the difference, and fewer where we wouldn’t notice anyway.

This is great from a visual perspective (quite literally so), but not so much from a computer’s or a camera sensor’s. Neither really cares about our eyes’ lack of linear reception. And having to constantly add and subtract gamma curves can cause all kinds of color shifts, even more so when there are slight variations in the methods by which these are applied or, even worse, different gamma values. So what is done instead is to internally treat all color as linear, without gamma: 50% grey is 50% grey, everywhere. And thanks to 32 bit value ranges we have plenty of room for pretty much every color value, unlike in the days of 8 bit, where we had 256 grey values and that was that.

So, to get back to the project settings, what we set up here is this: we tell Cinema 4D to internally calculate everything in a linear fashion (Linear Workflow checked on), but to also expect textures and other inputs to use the sRGB profile (which is the linear color multiplied by a gamma curve of 2.2), or whatever other profile is embedded inside the image file. This means you can work in Photoshop etc. as usual, save your sRGB based images (which is what you mostly work in on a computer monitor) and hand them over to Cinema 4D without thinking about color profiles. Beware though: if you use HDRIs for lighting, those usually come as 32bit HDR or EXR files that are linear and not sRGB!
So for those you have to make sure their profile is set to linear (which Cinema 4D usually does automatically).
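That 18%-reads-as-50% relationship is easy to check with a simple power curve (a sketch – the real sRGB transfer function is piecewise with a short linear toe; the plain 2.2 exponent is just the common approximation):

```python
def to_gamma(linear, gamma=2.2):
    # encode a linear intensity with a simple gamma curve
    return linear ** (1.0 / gamma)

# 18% linear grey lands near middle grey once gamma-encoded
print(round(to_gamma(0.18), 2))  # ~0.46
```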


JPG, PNG… textures are usually encoded with sRGB. Since this is the default setting, nothing should need to be done. In 99% of all cases the fact that the file is using sRGB is embedded in the metadata, and the default “Color Profile” setting of “Embedded” will recognize this.

Same goes for EXR, which are always linear.

If in doubt check the preview image. The wrong color profile will almost always look very significantly off.

Little tip: right click on the preview image and click on “Open Window…”. It’ll open the preview in a separate, scalable window that’ll let you render larger previews.

Render Settings

There are a few things to look out for in the render settings.

Obviously Save and Multi-Pass should be checked, with all needed passes added to the Multi-Pass. Since we want everything in a single multi-layer OpenEXR file, make sure to also add an RGBA Image pass to your multi-pass and to disable the Save under Regular Image. Otherwise you get an additional file with another Beauty pass, which just clutters up the folder and isn’t needed.

Under Multi-Pass Image, make sure you change the Format to OpenEXR. It’s always 32bit. Check the Multi-Layer File box, otherwise you get one separate EXR file per pass. Layer Name as Suffix does not have any effect on multi-layered files, but leave it checked, just in case. Straight Alpha should be left on if you need to composite the render on top of something else; it’ll give you a clean alpha channel that is not tinted by the background, unlike a premultiplied alpha. Use premultiplied (Straight Alpha unchecked) only if you know exactly what the background color will be and you rendered the image on that very same background.

Important: if, like in the screenshot, the Regular Image file format was set to something other than OpenEXR, the Image Color Profile was probably defaulting to sRGB. Even though it is not obvious, the Regular Image’s Image Color Profile will affect the Multi-Pass Image too! Thus, it needs to be set to Linear Color Space manually! Otherwise you end up with sRGB embedded into your EXR and have to manually counteract that again in After Effects, causing all kinds of confusion…


If all went well, we should now end up with a single-file EXR that contains all the passes, is compressed and uses linear gamma. This is perfect for compositing for several reasons. As explained, linear workflow prevents color shifts between applications or even inside a single application. A single file per frame also prevents chaos, where otherwise for every frame there would be 20 files in a folder (making it 20,000 in a 1000 frame sequence). And the compositing app only needs to cache and access a single file instead of many, which can make a big difference over a network.

After Effects

Project setup

Just like in Cinema 4D, your first action should be to set up the project correctly. Go to File > Project Settings… and under Color Settings change the Working Space to sRGB IEC61966-2.1 and check the Linearize Working Space box. Now we have basically the same setup that we have in Cinema 4D. Ideally you could now also work in 32 bits per channel depth, but this is very memory intensive and may slow things down quite a bit. It shouldn’t matter for now (except for some Layer Effects that behave differently at different depths! Trial and error…).

Interpret Footage

Import the EXR sequences as normal. Then right click and Interpret Footage. For some reason AE never guesses the Alpha right on a straight-alpha EXR, so change that to Straight – Unmatted. (In the screenshot the Frame Rate is greyed out because it was a single still image; usually you’d have to set the correct frame rate here.)

More important though are the Color Management settings. Assign the sRGB IEC61966-2.1 profile and change the Interpret As Linear Light to On (not just for 32 bit).

If you add non-linear files (files that use sRGB encoding for example) to the project, set everything up as above but change the Interpret As Linear Light to Off (which AE should do by default).

Accessing the Layers in the EXR

Accessing the layers in a multi-layer EXR is done through the “EXtractoR” effect, which is basically a stripped down version of the ProEXR plugin that comes with AE. You can find it under Effect > 3D Channel > EXtractoR. Add the EXR to your comp and add the EXtractoR effect to the layer like so:

Click not where it tells you to click (I know…) but below, where it lists the Red/Green/Blue channels. A new dialog pops up.

Under Layers you should see all your multi-pass renders. In the case of RGB passes like Reflection, Diffuse etc. the dialog will fill out the Red, Green, Blue and Alpha channel automatically with the correct layer. Object Buffers however only have a single color channel and this trips up the dialog. No worries though, just set Red, Green, Blue to the Object Buffer’s Y channel (Y for luminance). Leave the alpha on (copy), since the Object Buffers don’t have an alpha channel (they are full frame greyscale where black = alpha).

Duplicate the layer as many times as required to get all your passes. Now you have every pass on a layer, but AE only has to read a single file for all those.


This should be all – things should now look correct from render to output! But don’t take my word for it.

Here’s how the image looked in the Cinema 4D picture viewer:

And here’s the same in AE:

And here’s a screenshot from an MP4 generated from that same comp:

Looks the same to me. Success :)

Further reading:

Cinema 4D: Incremental Save using Python

Here’s a little script for you to use to implement an incremental document save that takes letters following the version number into account, if your naming convention calls for it. For example, here we use a convention with this sort of naming:

  • filev001AB.c4d
  • filev002YZ.c4d

Why? Because we like to sort by version number, and this way the files still sort correctly in the explorer/finder despite artist initials.
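Stripped of everything Cinema 4D specific, the version bump boils down to something like this (a standalone sketch; bump_version is a hypothetical helper, and the regex assumes the digits-then-initials convention from above):

```python
import re

def bump_version(filename):
    # find the last digit run before optional trailing letters and the extension
    m = re.match(r"^(.*?)(\d+)([A-Za-z]*)(\.[^.]+)$", filename)
    if not m:
        return filename  # no version number found, leave it alone
    head, digits, initials, ext = m.groups()
    return head + str(int(digits) + 1).zfill(len(digits)) + initials + ext

print(bump_version("filev001AB.c4d"))  # filev002AB.c4d
```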

Anyway, here’s the script:

import c4d
import os
from c4d import gui

# Saves the current document with an incremented version number

def IncreaseVersion(fullPath):
    fn, ext = os.path.splitext(fullPath)
    fnreverse = fn[::-1]  # reverse string

    versionString = ""
    beginVersionString = False
    for letter in fnreverse:
        if letter.isdigit():
            beginVersionString = True
            versionString += letter
        elif beginVersionString:
            break  # first non-digit after the digits ends the version string

    if not versionString:
        return fullPath  # no version number found, leave the name untouched

    versionString = versionString[::-1]  # reverse back into reading order
    versionNumber = int(versionString) + 1
    versionStringNew = str(versionNumber).zfill(len(versionString))

    fnreverse = fnreverse.replace(versionString[::-1], versionStringNew[::-1], 1)
    fullPathNew = fnreverse[::-1] + ext

    print("Increased version {0} to {1}".format(versionString, versionStringNew))
    print("Changed path {0} to {1}".format(fullPath, fullPathNew))
    return fullPathNew

def main():
    currentPath = doc.GetDocumentPath()
    currentName = doc.GetDocumentName()
    fullPathNew = IncreaseVersion(currentPath + os.sep + currentName)
    newPath, newName = os.path.split(fullPathNew)

    if c4d.documents.SaveDocument(doc, fullPathNew, saveflags=c4d.SAVEDOCUMENTFLAGS_0, format=c4d.FORMAT_C4DEXPORT):
        print("Saved as {0}".format(fullPathNew))
    else:
        print("Error whilst saving to " + newPath)
        gui.MessageDialog('Error whilst saving to ' + newPath)

if __name__ == '__main__':
    main()

And, if you like, an icon too:

TIF file

Quicktip Octane: linear response camera settings

So, here’s a quick tip for Octane (Cinema 4D, but I assume it’s the same in other plugins as well). By default, when you add an Octane camera, Agfacolor_Futura_100CD is selected as the Response type under the Camera Imager tab. Even if “Enable Camera Imager” isn’t turned on, this response type is being used and causes your colors to be tinted in a certain way (presumably emulating the film stock of that response setting). While often looking nice, it will change the way your textures are rendered. You may have noticed that texture colors sometimes look off – this is probably why. To fix it, simply change the Response to Linear and the Gamma to 2.2. This shows you the colors 1:1 as they are in the texture file.