Killing Caustics Cleverly

Alliteration aside, this blew my mind.


The two images above were both rendered in nearly 18 minutes on a 12-core i7 CPU. All materials and settings were exactly the same… sort of.

I’ve been playing with image stacking lately, mainly as a tool for rendering images and animations progressively. While rendering some glass the other day, I realized that the only reason it renders so slowly is that the noise and fireflies don’t change all that much – more and more samples just get added and eventually averaged out. So what if we change the noise pattern and render fewer samples a couple of times…
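The averaging idea can be sketched with plain NumPy – the noisy "renders" here are just synthetic arrays standing in for actual Cycles output:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((64, 64), 0.5)  # the 'true' converged image

# Simulate eight renders of the same frame, each with a different seed,
# i.e. the same image plus an independent noise pattern.
renders = [clean + rng.normal(0.0, 0.1, clean.shape) for _ in range(8)]

# Averaging the stack cancels the noise out: the error
# shrinks by roughly a factor of sqrt(N).
stacked = np.mean(renders, axis=0)

err_single = np.std(renders[0] - clean)
err_stacked = np.std(stacked - clean)
```

With eight seeds, the remaining noise is roughly 1/√8 ≈ 35% of a single render’s.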

Continue Reading…

Commonly Ignored Feature #4: UVs are Colours

Not just UVs, but any texture coordinate.



If you connect the UVs of an (unwrapped) plane to a shader’s colour, this is what you’ll see. Basically, texture coordinates are simply an ‘image’ of sorts, where the red value corresponds to the X-axis and the green value corresponds to the Y-axis (and for 3D coordinates like Generated or Object, blue is the Z-axis).
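You can mimic that ‘UV image’ with a few lines of NumPy – red ramps up with U (left to right), green with V (bottom to top); the resolution here is just illustrative:

```python
import numpy as np

# Build the UV gradient of an unwrapped plane as an RGB image:
# red = U (X axis), green = V (Y axis), blue unused.
h, w = 8, 8
u, v = np.meshgrid(np.linspace(0, 1, w), np.linspace(0, 1, h))
uv_image = np.dstack([u, v, np.zeros_like(u)])
```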

Why do we care?

It allows us to manipulate these coordinates as if they were colours (which I suppose they are):



Here I’ve simply added a Musgrave texture to the UVs, and plugged that into the Vector input of the image texture:



Don’t forget that, except in the case of shaders, a socket’s colour is only a visual indication of the data it gives or takes, not a strict law – vectors and colours are essentially the same thing (three values), so we can mix them up and use them together.

But those bricks look ugly, you say? Well, yeah. A more practical example is to use the Random output of the Object Info node (yeah, again) to give each object (all sharing the same UVs) a different offset – if you have a billion identical objects, an easy way to break up the repetition is to shift each one’s texture coordinates randomly.
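A minimal sketch of that offset trick in plain Python – the per-object random value stands in for the Object Info node’s Random output, and the function name is mine:

```python
import random

def offset_uvs(uvs, object_random):
    # Shift every UV by a per-object random amount, wrapping at 1.0,
    # so identical objects sample different parts of the texture.
    rng = random.Random(object_random)
    dx, dy = rng.random(), rng.random()
    return [((u + dx) % 1.0, (v + dy) % 1.0) for u, v in uvs]

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
a = offset_uvs(square, object_random=1)
b = offset_uvs(square, object_random=2)
```

Two objects with different random values end up with different, but internally consistent, UV offsets.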



That’s all folks!

I’m trying to post these ‘Commonly Ignored Features’ twice a week, so let me know if you think of something I could share!

Commonly Ignored Features #3: Object Index Node


Now before you ignore me, I’m not talking about compositing here. I’m talking about the Object Index output of the Object Info material node:


The cool thing about this is that if you give a whole bunch of objects a single material, then you can still vary the material for each object by using the Object or Material index:


In the image at the top, I used a little bit of Python to give each selected object a random index inside a certain range:

import bpy
import random

for obj in bpy.context.selected_objects:
    obj.pass_index = random.randint(440, 640)

The range (between 440 and 640) is actually a range of visible light wavelengths, in nanometres. So plugging that into the Wavelength node gives me a random colour for every object:


But we could do that with the Random output of the same node, albeit with less control, so here’s an even better example:

When you have a bunch of objects using the same material and you only want to change one of those objects, for example scale the texture up or down, you can use that object’s index as a mask and mix two things together with it. In the case below, making the object that has a pass index of 5 use tighter textures:

Since we currently lack an “Equal To” mode for the Math node, we need to use the product of the Less Than and Greater Than modes. The object index is an integer, so using 5.1 and 4.9 for those nodes respectively, we can be sure that we’re only getting the index of 5.0.
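The Less Than × Greater Than trick behaves like this – a hypothetical function mirroring the two math nodes:

```python
def equals_index(index, target=5.0, eps=0.1):
    # Math node 'Greater Than' (target - eps) multiplied by
    # 'Less Than' (target + eps): the product is 1.0 only when
    # the index falls inside the narrow window around the target.
    greater = 1.0 if index > target - eps else 0.0
    less = 1.0 if index < target + eps else 0.0
    return greater * less
```

Because indices are integers, the ±0.1 window can only ever contain the one value you’re after.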

Optimization Tip:

The Mapping node’s Scale ability is really just multiplying the UVs (which, remember, are just an ‘image’ where red and green represent the X and Y coordinates – plug the UVs into a shader’s colour if you don’t believe me). So instead of using two Mapping nodes in the example above, use a Colour Mix node set to Multiply with the same scaling value for R, G and B, and use the Object Index for its Fac.
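In code, the whole tip boils down to this – the function and parameter names are mine, not Blender’s:

```python
def scale_uv_for_object(uv, index_mask, normal_scale=1.0, tight_scale=4.0):
    # The Mapping node's Scale is just component-wise multiplication,
    # so mixing two scale factors by the index mask (0.0 or 1.0)
    # replaces a second Mapping node entirely.
    scale = normal_scale * (1.0 - index_mask) + tight_scale * index_mask
    return tuple(c * scale for c in uv)
```

Objects whose mask is 0.0 keep their normal texture scale; the masked object gets the tighter one.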

Baking for Cycles

I stumbled upon a thread on BlenderArtists the other day – a guy called Simon Flores wrote a script that lets you bake stuff in Cycles, or any other render engine for that matter.


I wouldn’t quite call it baking, but in simple cases it could be quite useful. It goes through every face in the whole scene and places the camera facing it, renders it, and at the end of all that it joins all the pieces together. Genius right? Sort of.

It’s a great start, but there are some serious limitations:

  • The object you’re baking must be clear of any obstructions, otherwise the camera cannot see it. That means any non-manifold meshes will have artifacts.
  • It’s really, really slow, rendering a whole image for every single face. It took several minutes to bake a couple of cubes.
  • The meshes must be triangulated first, since it can’t work with quads.
  • All meshes must be UV mapped and have the same image assigned to them in BI before baking (with no overlapping UVs of course)
  • Did I mention how slow it is? I started baking the Matball used on the wiki, with just 20 samples it was going at about 0.001% per second. That’d take 28 hours.
  • Since it places the camera on each face, any view-dependent shaders (like glossy, glass and anything with fresnel) will come out really weird.

If you can ignore or manually fix those limitations, then I’m sure you could do some pretty powerful stuff with it, perhaps creating a GI lightmap for a game. For now, I’ll wait until someone takes Simon’s code and gives it a UI and addresses some of those limitations. And speeds it up. A lot.


Commonly Ignored Features #2: Contrasty HDR Lighting

Update 2016: This method is pretty much just wrong. See this post instead.

Around the time Cycles came out in alpha, I was playing around with HDR lighting in Blender Internal. I found BI annoyingly slow, and I could never get that magic one-click lighting that all the non-Blender websites had promised HDR could do.

You might remember some fool using this in some tutorial…

So when Cycles showed up, I figured the day was saved and all my problems would magically disappear.

Well, most of them did, but HDR lighting still looked flat and boring, lacking shadows from the sun even in the simplest of outdoor images.


However, a couple of years (has it really been that long?) later, I can now tell you where that magic one-click-lighting button is!


Well, it’s not technically a button… but a single node connection:


All you need to do is connect the image to the Strength input of the Background shader and it’ll use those awesome bright pixels of the sun as the strength, meaning the sun is actually brighter than all the other white things! Hence the awesome shadows and realistic lighting.

The Multiply node is there to control the strength of the light. I increased it to 2.0 since it was a little dark, though it could probably go even higher since the shadows are still a bit dark here.


Don’t forget that we’re working with nodes here people! Anything’s possible!


On the left is the plain setup with the HDR plugged only into the colour.
The middle is the node setup above, with the image plugged into the strength too (brightened with a Multiply math node).
And the right one has some extra adjustments:


Notice that the colour hasn’t been altered at all.
The Multiply math node just brightens it up, but the orange Mix node mixes that strength value with white, effectively decreasing its contrast and making the shadows less harsh (thanks to Sir Reyn for the great idea!)

Now the blue mix node is where it gets a little more technical: The Light Path node gives us access to the different sorts of light rays. It’s common to see people use the Is Camera Ray to make something look a certain way to the camera, but behave differently for the rest of the scene – for example making a glass object look like glass to the camera, but the rest of the scene perceives it as plain old transparency, thus eliminating caustics.
Here we’re adding the Is Camera Ray and Is Glossy Ray values (making sure to Clamp so the result stays between 0 and 1, preserving energy conservation) and using that as the Fac of a mix between the image-driven strength and a constant strength (in this case 5.0, since that’s what’s bright enough for this HDR). So to the camera and in reflections (the glossy ray), the environment looks exactly as it did before we started messing with the strength, but it still lights the scene nicely.
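Numerically, the blue Mix node does something like this – the function is just an illustration, with the values from the setup above:

```python
def background_strength(pixel_value, is_camera_ray, is_glossy_ray,
                        multiply=2.0, flat_strength=5.0):
    # Clamp(is_camera + is_glossy) keeps the Fac between 0 and 1.
    # Camera and glossy rays see the constant flat strength, so the
    # visible environment looks untouched; every other ray gets the
    # image-driven strength, so the bright sun pixels still cast shadows.
    fac = min(is_camera_ray + is_glossy_ray, 1.0)
    image_strength = pixel_value * multiply
    return image_strength * (1.0 - fac) + flat_strength * fac
```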

Hope that isn’t too confusing for ya :)

Also remember to enable Multiple Importance Sample and choose Non-Color Data for the environment texture.


PS: For those of you who have been hiding in a cave the last few years, Reyn’s blog is a great one to follow! A true artist he is :)

Progressive Animation Render Addon and Image Stacking

I remember when Cycles first came out, people loved the progressive rendering, where it shows you the whole image and gets clearer and clearer the longer you wait. But one of the first things people asked was “Can you render an animation progressively too?”. The answer was no. Until now that is.

I’ve created an addon that allows you to do exactly that.


(Download it!)

It’ll render each frame of the animation to its own folder (like path/to/render/frame_15/image.jpg), and then repeat the whole animation render again using a different seed, which means that there’ll be a different noise pattern. Then you can hit the Merge Seeds button and it’ll gather all the different images for each frame, average the pixel values and make a nice clean animation.

Continue Reading…