Distributed ray-tracing versus financial Monte Carlos

As I've been doing all these distributed ray-tracing extensions, I've been thinking about how it compares to Monte Carlo pricing in finance. In theory, they take the same approach (paths with randomised variations) to rather different integration problems, in order to deal with the same limitation: the curse of dimensionality.

In finance, if you've got a path-dependent exotic option where a PDE approach doesn't work, you've not got much choice but to simulate price changes across each of the relevant dates. If there are a lot of dates, this is suddenly a high-dimensional problem, and MC is the way to go. In practice, for quite a lot of cases, a few dimensions matter much more than the rest, and you can get quick convergence using quasi-MC and Brownian bridges. The only problem is making sure you're not screwing it up and getting the price wrong, which is... quite a big concern.
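To make that concrete, here's a minimal sketch of the kind of path-dependent pricing I mean: an arithmetic-average Asian call under geometric Brownian motion. The product, the model and all the parameters are purely illustrative, and the point is just that one random draw per fixing date means the integral's dimension is the number of dates.

```python
import numpy as np

def asian_call_mc(s0, strike, rate, vol, fixing_times, n_paths, seed=0):
    """MC price of an arithmetic-average Asian call under GBM.

    One normal draw per path per fixing date: the dimensionality of the
    integral is the number of dates."""
    rng = np.random.default_rng(seed)
    dts = np.diff(np.concatenate(([0.0], fixing_times)))
    z = rng.standard_normal((n_paths, len(fixing_times)))
    log_increments = (rate - 0.5 * vol**2) * dts + vol * np.sqrt(dts) * z
    spots = s0 * np.exp(np.cumsum(log_increments, axis=1))
    payoff = np.maximum(spots.mean(axis=1) - strike, 0.0)
    return np.exp(-rate * fixing_times[-1]) * payoff.mean()

# Monthly fixings over a year: a 12-dimensional integration problem.
print(asian_call_mc(100.0, 100.0, 0.02, 0.2,
                    np.linspace(1 / 12, 1.0, 12), 100_000))
```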

Anyway, assuming you are doing standard MC, what you really want out of your MC pricing is stability. If you make small bumps to the inputs, or move forward a day, or tweak a parameter, you don't want the price to change unpredictably. MC pricing noise is a fact of life, as MC convergence is slow. However, if the noise is consistent, you have a hope of getting stable greeks out, and of being able to explain what happened with your profit-and-loss from day to day. What you don't want is noise that jumps around.
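One standard way to get that consistency (the post doesn't spell it out, so treat this as my own illustration) is common random numbers: reuse the same draws on both legs of a bump so that most of the MC error cancels in the finite difference. A toy sketch, with made-up parameters and a plain European call for simplicity:

```python
import numpy as np

def euro_call_mc(s0, strike, rate, vol, expiry, n_paths, seed):
    """Plain MC price of a European call under GBM; parameters illustrative."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    s_t = s0 * np.exp((rate - 0.5 * vol**2) * expiry + vol * np.sqrt(expiry) * z)
    return np.exp(-rate * expiry) * np.maximum(s_t - strike, 0.0).mean()

bump = 0.01
args = (100.0, 0.02, 0.2, 1.0, 100_000)  # strike, rate, vol, expiry, paths

# Same seed on both legs: the noise is consistent and the delta is stable.
delta_crn = (euro_call_mc(100.0 + bump, *args, seed=1)
             - euro_call_mc(100.0 - bump, *args, seed=1)) / (2 * bump)

# Independent seeds on each leg: the MC noise dwarfs the effect of the bump.
delta_noisy = (euro_call_mc(100.0 + bump, *args, seed=1)
               - euro_call_mc(100.0 - bump, *args, seed=2)) / (2 * bump)

print(delta_crn, delta_noisy)
```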

In ray-tracing, I think the curse of dimensionality is rather less prevalent (maybe integration dimensions for anti-aliasing, depth-of-field, motion blur, soft shadows, etc. - less than ten in total). Just like with quasi-MC, we can take some of the dimensions and handle them with more traditional integration techniques (e.g. anti-aliasing by rendering at a finer resolution, or by using frequency-limited/anti-aliased shaders - the RenderMan shader books are quite good on this). However, the MC/distributed ray-tracing approach simplifies things by putting them all under the same umbrella.
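A schematic of what "under the same umbrella" means in practice: each primary sample just carries one random coordinate (or pair) per effect, and everything is handled by the same sample-and-average machinery. The names and structure here are entirely illustrative, not any particular renderer's API.

```python
import random

def make_sample(px, py):
    """Draw the random coordinates for one camera ray at pixel (px, py)."""
    return {
        "pixel": (px + random.random(), py + random.random()),  # anti-aliasing
        "lens":  (random.random(), random.random()),            # depth of field
        "time":  random.random(),                                # motion blur
        "light": (random.random(), random.random()),            # soft shadows
    }

# Fewer than ten dimensions in total, all integrated the same way.
print(make_sample(3, 7))
```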

What an MC approach does, though, is give us noise. The results are grainy. People are used to grainy images: they look like photographs, or dithered images. The noise means that, over wider areas of the image, there is convergence to the right value, while locally you've got approximately right values whose inaccuracy is hidden by the pixel-to-pixel noise. If the algorithm were adjusted to, for example, repeatedly use the same points on an area light source for soft shadow calculations, the shadows would be steppy and ugly. We are deliberately not looking for spatial stability in the MC behaviour, and that's a remarkably different approach.
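Here's a toy illustration of the steppy-versus-noisy point. The "light" is the unit interval, and at receiver position x a blocker hides the fraction x of it - a synthetic penumbra, nothing to do with any real scene. Fixed light samples quantise the shadow into visible bands; fresh per-pixel jitter gives grainy but unbiased values.

```python
import random

def shadow_estimate(x, light_points):
    """Fraction of light samples visible from receiver position x."""
    visible = [u for u in light_points if u > x]
    return len(visible) / len(light_points)

n = 8
fixed = [(i + 0.5) / n for i in range(n)]  # same points for every pixel

for x in [i / 20 for i in range(21)]:
    steppy = shadow_estimate(x, fixed)                      # quantised bands
    jitter = [(i + random.random()) / n for i in range(n)]  # fresh per pixel
    noisy = shadow_estimate(x, jitter)                      # grainy, unbiased
    print(f"x={x:.2f}  fixed={steppy:.3f}  jittered={noisy:.3f}")
```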

On the other hand, moving into an area I know less about, I'm not sure what you want to do with animation. Presumably inter-frame noise is something you're happy with: in a static scene the pixels could mildly flicker, rather like the noise overlay that gets added to static images in YouTube videos. On the other hand, if you want the image to remain stable, you end up in a complex situation where you want the image to retain temporal coherence while being spatially noisy. I guess that could be a bit tricky to implement... but I suspect that no-one cares about that?
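One plausible way to get that temporal coherence - my speculation, not something the post proposes - is to derive each pixel's random sequence from its coordinates only, not from the frame number, so a static scene reproduces exactly the same grain every frame.

```python
import numpy as np

def pixel_rng(px, py, frame=None):
    """Hash the pixel (and optionally the frame) into an RNG seed."""
    key = (px, py) if frame is None else (px, py, frame)
    return np.random.default_rng(abs(hash(key)) % (2**32))

# Frame left out of the seed: identical samples every frame, so the noise
# pattern is frozen in place (spatially noisy, temporally stable).
print(pixel_rng(10, 20).random(3))
print(pixel_rng(10, 20).random(3))

# Fold the frame number in instead and the pixel flickers frame to frame.
print(pixel_rng(10, 20, frame=0).random(3))
print(pixel_rng(10, 20, frame=1).random(3))
```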

Anyway, it surprised me to realise that such different approaches to noise could be taken in two different domains associated with Monte Carlo techniques.

Posted 2014-12-30.