Colouring a Rainbow
A long time ago I had a professor who was into photographing atmospheric optical phenomena like sun dogs. This inspired me to try to work out exactly how these rainbow-like oddities occur. Any physics class will give you the basics: light reflects off water droplets and spreads out into a spectrum which we see as a rainbow.
But how does it reflect off the water droplets? Why does it split into a spectrum?
During my investigation I wrote some code to model a simple rainbow, and I thought I’d document the pieces of this journey as well as a surprising discovery.
The plan
If we want to model a rainbow we need to think about how light moves. Light (for the most part) travels in piecewise straight lines, changing directions when it encounters boundaries between different media like water and air.
The image we see is formed from light rays that hit our eyeballs so one way to render this on a computer might be to “cast” rays out from the camera (eyeball) and see what they hit - tracing the path of light backwards. If the ray for a particular pixel hits a red object, colour the pixel red and so on. This is a technique known as raycasting and is very common in computer imaging.
However, rainbows are fundamentally optical phenomena: they involve reflection and refraction, and there is no guarantee that any ray cast out from a camera reaches a source of light. Thus we make a slight simplification. Instead of casting rays from the camera outwards and hoping we can work out what colour of light from the sun has been scattered in that direction, we could cast rays out from the sun and see where they end up. Usually this kind of raytracing is too computationally expensive because very few of the rays that are emitted by any source will end up in the camera, but here we can make it work.
To be more exact, we know a rainbow is formed when a lot of spherical droplets of water (such as a cloud) reflect light from the sun backwards at a 42° angle. We can model just one of those droplets and see how it scatters the sunlight. Hopefully, we’d see a spectrum at about 42° which when taken in aggregate would show us the rainbow.
BSDFs
The key ingredient of this rendering is the BSDF, the Bidirectional Scattering Distribution Function. This is a formula that takes an incoming ray to a media boundary and returns the probability that the ray exits in a given direction. In particular we will want to sample this probability distribution so we can write a sampling function that just returns an outgoing ray direction.
The simplest of these is a mirror, which just reflects the ray. There is no random element to this, even though it is sampling a distribution function, because the distribution is a Dirac delta.
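In symbols, if we write reflect(v, n) for the reflection formula implemented below, the mirror’s distribution is $p(\mathbf{v}_e \mid \mathbf{v}_i) = \delta(\mathbf{v}_e - \mathrm{reflect}(\mathbf{v}_i, \mathbf{n}))$: all of the probability mass sits on a single outgoing direction, so sampling it always gives the same answer.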
impl Material {
    pub fn interaction_sample(
        self,
        incident: Normal,
        normal: Normal,
        lambda: f32,
    ) -> Option<Normal> {
        match self {
            Material::Mirror => Some(Normal::new_normalize(
                (*incident) - 2.0 * incident.dot(&normal) * (*normal),
            )),
            ...
        }
    }
}
This is just the equation $\mathbf{v}_e = \mathbf{v}_i - 2 (\mathbf{v}_i \cdot \mathbf{n}) \mathbf{n}$, which gives the exitant vector $\mathbf{v}_e$ from a surface with normal $\mathbf{n}$ if the incident vector is $\mathbf{v}_i$.
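As a quick sanity check: a ray heading down and to the right, $\mathbf{v}_i = (1, -1)$, hitting a horizontal mirror with normal $\mathbf{n} = (0, 1)$ has $\mathbf{v}_i \cdot \mathbf{n} = -1$, so $\mathbf{v}_e = (1, -1) - 2(-1)(0, 1) = (1, 1)$: the ray bounces back upwards while continuing to the right, as expected.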
We can give this a go by shooting a bunch of light beams from the left of the screen towards the right but sticking a circular mirror in the way. As expected, the beams bounce off the mirror and fan out.
I’ve rendered the spectrum at the bottom of this image. These are the wavelengths of light from 400nm to 700nm. This is roughly the range that human eyes can perceive: visible light. Longer wavelengths correspond to lower frequency by $\lambda = c / \nu$ so on the left (400nm) we have violet and on the right (700nm) we have red.
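For instance, the red end at 700nm corresponds to a frequency of roughly $3 \times 10^8 / (700 \times 10^{-9}) \approx 4.3 \times 10^{14}$ Hz, while the violet end at 400nm sits around $7.5 \times 10^{14}$ Hz.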
Translating wavelengths into RGB values is a whole other story. There are a few algorithms out there, most of which are just linear interpolations of one sort or another. I’m not actually worried about the accuracy of this step. Doing realistic rendering would be an interesting project, and involves lots of discussion of the (s)RGB colour space which is outside the scope of this little experiment.
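For a flavour of the linear-interpolation approach, here is a minimal sketch; the anchor wavelengths and colours below are rough picks of my own for illustration, not the exact mapping used for these renders.
// A rough wavelength (in micrometres) to RGB mapping by linearly
// interpolating between a few hand-picked anchor colours. Illustrative only.
fn wavelength_to_rgb(lambda: f32) -> (f32, f32, f32) {
    const ANCHORS: [(f32, (f32, f32, f32)); 5] = [
        (0.400, (0.5, 0.0, 1.0)), // violet
        (0.475, (0.0, 0.0, 1.0)), // blue
        (0.550, (0.0, 1.0, 0.0)), // green
        (0.625, (1.0, 1.0, 0.0)), // yellow
        (0.700, (1.0, 0.0, 0.0)), // red
    ];
    for pair in ANCHORS.windows(2) {
        let (l0, (r0, g0, b0)) = pair[0];
        let (l1, (r1, g1, b1)) = pair[1];
        if lambda >= l0 && lambda <= l1 {
            let t = (lambda - l0) / (l1 - l0);
            return (r0 + t * (r1 - r0), g0 + t * (g1 - g0), b0 + t * (b1 - b0));
        }
    }
    (0.0, 0.0, 0.0) // outside the visible range
}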
However, this project definitely needs to keep track of the wavelengths of light if we want to see spectra. In fact the “white” light shining in the above render is mathematically white in that it’s a combination of different pure colours:
fn get_light_ray(scene: &Scene) -> Ray {
    match scene.light {
        LightSource::Bar(start, end, dir) => Ray {
            pos: start + rand::random::<f32>() * (end - start),
            dir,
            alive: true,
            lambda: 0.750 - rand::random::<f32>().powf(0.4) * 0.400,
        },
        ...
    }
}
The sample function for the wavelength lambda was mostly chosen based on aesthetic preference. A more realistic “sun” light is a lot more orange.
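(Since rand::random::<f32>().powf(0.4) is biased towards 1, this choice of distribution puts more of its weight on the short-wavelength, violet end of the range.)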
Refraction
Now that we have some basic tracing code up and running, let’s think about the effects that lead to a rainbow. I’ll present the code and then we can discuss the physics behind it.
Here is the code that computes the interaction between the surface of an object and a light ray. Recall the only information we require out of this is the direction of the ray after interacting with the boundary: the wavelength of the light is unchanged.
impl Material {
    pub fn interaction_sample(
        self,
        incident: Normal,
        normal: Normal,
        lambda: f32,
    ) -> Option<Normal> {
        match self {
            ...
            Material::Dielectric(B1, B2, B3, C1, C2, C3) => {
                let l2 = lambda * lambda;
                // https://en.wikipedia.org/wiki/Sellmeier_equation
                let material_r = f32::sqrt(
                    1.0 + B1 * l2 / (l2 - C1) + B2 * l2 / (l2 - C2) + B3 * l2 / (l2 - C3),
                );
                let (r, n) = if normal.dot(&incident) > 0.0 {
                    (material_r, -normal)
                } else {
                    (1. / material_r, normal)
                };
                let cos_theta_1 = -n.dot(&incident);
                let reflect = Normal::new_normalize(*incident + 2.0 * cos_theta_1 * *n);
                let sin_theta_2_squared = r * r * (1.0 - cos_theta_1 * cos_theta_1);
                if sin_theta_2_squared > 1.0 {
                    // Total internal reflection
                    Some(reflect)
                } else {
                    let refract = Normal::new_normalize(
                        r * *incident + (r * cos_theta_1 - f32::sqrt(1.0 - sin_theta_2_squared)) * *n,
                    );
                    let cos_theta_t = f32::sqrt(1.0 - sin_theta_2_squared);
                    let rs = (r * cos_theta_1 - cos_theta_t) / (r * cos_theta_1 + cos_theta_t);
                    let rp = (r * cos_theta_t - cos_theta_1) / (r * cos_theta_t + cos_theta_1);
                    let fresnel = (rs * rs + rp * rp) * 0.5;
                    if rand::random::<f32>() < fresnel {
                        Some(reflect)
                    } else {
                        Some(refract)
                    }
                }
            }
            ...
        }
    }
}
That’s a lot of code.
Let’s start with a simplified version that is probably more familiar. If the ray arrives through a medium with refractive index $r_1$ and passes into a medium with index $r_2$, then Snell’s Law says
$$ \frac{\sin \theta_2}{\sin\theta_1} = \frac{r_1}{r_2} = r $$
where $\theta_1$ and $\theta_2$ are the angles of the incoming and outgoing rays.
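For example, a beam passing from air into water ($r_1 = 1$, $r_2 \approx 1.33$) at $\theta_1 = 45°$ is bent towards the normal: $\sin\theta_2 = \sin 45° / 1.33 \approx 0.53$, so $\theta_2 \approx 32°$.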
Clearly all that matters is the ratio of the two refractive indices, which I’ve called r. If we know the index of the object in material_r and make the assumption that all boundaries are between “stuff” and vacuum (with a refractive index of 1), we can do a quick switcheroo to simplify life: flip the normal so it always opposes the incident ray and pick the appropriate ratio, so the rest of the code can pretend we are always entering the medium.
let (r, n) = if normal.dot(&incident) > 0.0 {
    (material_r, -normal)
} else {
    (1. / material_r, normal)
};
Next we can calculate the sine of the angle for the outgoing beam. More specifically its square: $$\sin^2\theta_2 = r^2 \sin^2\theta_1 = r^2(1-\cos^2\theta_1).$$ Now a classic gotcha in Snell’s law is what happens if there is no $\theta_2$ that satisfies the equation for given $\theta_1$ and ratio of refractive indices. In this case, according to the model, total internal reflection occurs and the beam is reflected, not refracted. This is easy to check in our formulation: we just need $ r^2(1-\cos^2\theta_1) \gt 1 $ for it to be totally reflected.
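For a beam trying to leave water for air we have $r \approx 1.33$, so this kicks in whenever $\sin\theta_1 > 1/1.33 \approx 0.75$, i.e. beyond a critical angle of about 49°; inside the droplet, rays that hit the surface more obliquely than this cannot escape and are reflected back in.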
Once we know that refraction can occur we have one more physical phenomenon we will want to model which is that described by the Fresnel equations. When light (or any wave) reaches a change of medium, some of the wave propagates forward, but some is also reflected back.
The exact ratio of power that is reflected and refracted, known as the reflectivity, depends on the polarisation of the light but we can take the average of $s$-polarised light and $p$-polarised light as a good approximation1.
The equations for the reflectivities are $$R_s = \left (\frac{r_1\cos\theta_1 - r_2 \cos\theta_2}{r_1\cos\theta_1 + r_2 \cos\theta_2}\right)^2$$ and $$R_p = \left (\frac{r_1\cos\theta_2 - r_2 \cos\theta_1}{r_1\cos\theta_2 + r_2 \cos\theta_1}\right)^2.$$ If we take the average of these we get the fraction of rays that should be reflected back. Sampling a uniform random number generator between $0$ and $1$ will let us make the decision on a ray-by-ray basis2, thus sampling the BSDF.
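As a sanity check, at normal incidence ($\theta_1 = \theta_2 = 0$) both expressions collapse to $\left(\frac{r_1 - r_2}{r_1 + r_2}\right)^2$, which for an air–water boundary is $\left(\frac{1 - 1.33}{1 + 1.33}\right)^2 \approx 0.02$: only about 2% of the light is reflected straight back.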
After all this, we are almost ready to render a rainbow. The only question is why red light behaves any differently to blue light. That is, where have we made any decision based on the wavelength of the light?
The answer, satisfyingly, is in a single place: the refractive index of the material. This number differs depending on the wavelength of the light beam. The model we will use for this is known as the Sellmeier equation. It’s an empirical model, which is to say it has enough degrees of freedom that we can measure the actual values and fit the coefficients so that it matches. $$r^2(\lambda) = 1 + \sum_i \frac{B_i\lambda^2}{\lambda^2 - C_i}$$ There are webpages that tabulate values of $B_i$ and $C_i$ for various materials in various ranges of $\lambda$.
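To get a feel for the size of the effect, here is a small stand-alone sketch that evaluates the Sellmeier equation using the water coefficients quoted further down (lambda in micrometres, matching the rest of the code):
// Evaluate r^2(lambda) = 1 + sum_i B_i lambda^2 / (lambda^2 - C_i) for three terms.
fn sellmeier(lambda: f32, b: [f32; 3], c: [f32; 3]) -> f32 {
    let l2 = lambda * lambda;
    let r2 = 1.0
        + b[0] * l2 / (l2 - c[0])
        + b[1] * l2 / (l2 - c[1])
        + b[2] * l2 / (l2 - c[2]);
    r2.sqrt()
}

fn main() {
    // Water coefficients (Kedenburg), as used for the WATER material below.
    let b = [0.75831, 0.08495, 0.0];
    let c = [0.01007, 8.91377, 0.0];
    println!("r(400nm) = {:.4}", sellmeier(0.400, b, c)); // roughly 1.344
    println!("r(700nm) = {:.4}", sellmeier(0.700, b, c)); // roughly 1.330
}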
This then is the last piece of the puzzle: shorter wavelengths “perceive” different refractive indices and so refract differently to longer wavelengths.
We can test out this theory by rendering a simple prism in the style of Pink Floyd’s famous album cover. The original is stylised but I found I got better looking results if the prism was made of ice rather than glass:
// https://refractiveindex.info/?shelf=main&book=H2O&page=Kofman-150K
pub const ICE: Material = Material::Dielectric(0.496, 0.190, 0.0, 0.071, 0.134, 0.0);
I’ve also employed a trick here where I only render the light beams that pass through the material and exit it on the right hand side. There are other rays that bounce off the leading edge or ricochet around inside the prism but they make the story more confusing and anyway, the original art only has the classic prism effect.
Rainbow
Well, now let’s see an actual rainbow! We need the coefficients for water
// https://refractiveindex.info/?shelf=main&book=H2O&page=Kedenburg
pub const WATER: Material = Material::Dielectric(0.75831, 0.08495, 0.0, 0.01007, 8.91377, 0.0);
and a droplet (sphere) of water to reflect and refract in.
It’s worth taking a moment to discuss what we expect to see. We’d hope to see something similar to the Pink Floyd cover: the droplet splits the incoming white light into its spectrum.
But remember that everything here is inverted to what we perceive. Our eyes receive incoming rays of light so if we see red light at 42° and violet at 40° then there were different droplets that refracted red and violet light at 180°-42° and 180°-40° respectively. In a way we’ve moved from one receiver (your eye) and many indistinguishable emitters (the droplets) to one emitter (a single droplet) and many indistinguishable receivers (the pixels on the screen).
I’ll restrict the rendering to those rays that interact three times with the droplet since a little experimentation shows that these are the rays that form the rainbow. We also know that rainbows occur around 42° so let’s place a “camera” around there. I’ll render the light that hits the camera along the bottom of the image where I put the whole spectrum before.
And there we have it: a beautiful spectrum, or as it’s commonly known, a rainbow3!
It’s interesting to see the “curved” light inside the droplet. In fact these are all straight lines tracing out a particular locus. This is known as a caustic and there is a fun write-up in Chalkdust magazine here.
Double Rainbow
While I was rendering the entire set of light beams (not just those that made up the rainbow), I was pleasantly surprised to see a second spectrum being emitted from the other side of the droplet. It was the double rainbow!
I was a little surprised to see this so plainly, though I’m not sure why. I think I’d unconsciously thought that double rainbows occurred due to some other optical phenomenon.
Learning something about rainbows
Finally, here is the picture with all the light rays rendered. This is the render I first looked at before I rendered the above images. I’ve zoomed out a bit so that the water droplet behaves a little more like a point. Even so you can see some caustics between the rainbow and its double.
A few observations we can make from this. Firstly, the vast majority of light is forward-scattered and leaves the droplet heading to the right. However, the lensing effect is quite strong and you can see the light rays get quite concentrated. This can have disastrous consequences as was displayed by the 1974 nature documentary Beautiful People4.
We can see the rainbow and the double rainbow plainly there but there is a lot of other light that is backscattered. In particular, there is white light scattered “inside” the rainbow and “outside” the double rainbow, but none in-between.
When I first saw this I thought either that there was an error in my Fresnel equations or that this was some invisible part of a rainbow. However, if we want to test the predictive power of this model we might make the following hypothesis:
Hypothesis: When you look at a rainbow, it is brighter on the inside than the outside. If it is a double rainbow, it will be darker between the two bows.
Let’s take a look at a photo of a rainbow to check this (courtesy of Lauri Kosonen under CC BY-SA 2.5).
And there it is. I love this picture of a rainbow: it really looks like a portal to a different, brighter world. And now, thanks to a little code, we know why.
-
Besides, this is in 2D. What direction is the light meant to be polarised in? ↩︎
-
If you squint really hard (or read the wiki page carefully) you’ll notice that the total internal reflection we checked for explicitly actually falls out of the Fresnel equations: our variable fresnel would be equal to 1 and the if statement would always pass. I kept it in this slightly more wordy format for pedagogical reasons. ↩︎
-
The vertical lines where it changes colour are just from the simple sampling method I’m using: reading RGB values off the canvas. ↩︎
-
This documentary is deeply problematic for many reasons, and the scene where a droplet of water ignites a weaver colony was faked. ↩︎