I was curious why a large aperture on a lens reduces depth of field. To investigate this I set my camera up with a macro extension tube and a subject with lots of depth.
The subject is a potassium sodium tartrate (Rochelle salt) crystal mounted in a brass holder, with a copper contact wire for detecting the crystal’s piezoelectric properties. From the perspective of the camera, this setup looks like this:
This image shows a very shallow depth of field, as is typical of macro photos taken very close to the subject. The focal plane is about 10 degrees off perpendicular to the surface of the coins, and it intersects the subject about where the copper wire wraps around the crystal.
To understand why some parts of the image are blurry when the aperture is large, it’s helpful to visualize the paths the light takes through the lens. I used this simulator to make a simple diagram:
From any given point on the subject on the left, light passes through every point on the lens and is focused onto the camera’s sensor. If you imagine a tiny bug with an equally tiny camera walking around on the big lens and taking his own pictures of the subject, you would notice that, depending on where he was standing, each of his photos would be slightly different: sometimes from a little higher or lower, sometimes from one side or the other. We can simulate the bug camera photos by taking a picture through a pinhole placed in front of the lens.
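The thin-lens model makes this focusing story quantitative. Here is a minimal sketch of it: a point at object distance o images at distance i, where 1/f = 1/o + 1/i, and a point off the focal plane spreads into a blur circle whose diameter grows with the aperture. All the numbers here are illustrative, not measurements from my actual setup.

```python
def image_distance(f_mm, object_mm):
    """Image distance from the thin-lens equation 1/f = 1/o + 1/i."""
    return 1.0 / (1.0 / f_mm - 1.0 / object_mm)

def blur_circle_mm(f_mm, aperture_mm, focus_mm, subject_mm):
    """Diameter of the blur circle on the sensor for an out-of-focus point.

    The sensor sits where the focused plane images; a point at a different
    depth focuses in front of or behind the sensor, and the cone of light
    from the aperture intersects the sensor in a circle.
    """
    sensor = image_distance(f_mm, focus_mm)    # where the sensor sits
    image = image_distance(f_mm, subject_mm)   # where this point focuses
    return aperture_mm * abs(sensor - image) / image

# Hypothetical 50 mm lens focused at 200 mm, for a point 20 mm behind
# the focal plane: a wide f/2 aperture (25 mm) vs a small f/16 (3.125 mm).
print(blur_circle_mm(50, 25.0, 200, 220))   # large blur circle
print(blur_circle_mm(50, 3.125, 200, 220))  # 8x smaller blur circle
```

The blur-circle diameter scales linearly with the aperture diameter, which is exactly why stopping down sharpens everything off the focal plane.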
Because the tiny bug camera has a really tiny aperture, all his photos will have very large depth of field; they’ll be sharp all over. My bug simulator has a fairly large pinhole (you can see it near the top of the image; it’s about 2 mm wide), so I won’t get as much depth of field, but you can definitely see that much more of the depth of the image is in focus. Compare to the image above and note how, in these, both the rubber band near the back of the image and the reeding on the edge of the coin are sharp. Here the bug is walking from one side of the lens to the other:
It’s hard to tell in the still images, but the perspective is different in each shot; the angles all change a bit as the bug walks across the lens. This is easier to see if you use a bigger hole, which gives a full-frame image instead of a circular one, though the increase in depth of field is then harder to see. Here is another example with a slot to let in more light. Left side from the top of the lens, right side from the bottom.
Since the tiny bug camera can only collect a tiny bit of light with each photo, all those slightly different photos will be quite dark. If we stack them all up to increase the brightness, we get the image we would get from the regular-sized camera. The result is bright, but only the parts that were the same in every individual image will still look sharp in the combined image. The parts that differed slightly get mixed together, making them appear blurry.
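The stacking idea can be demonstrated with a toy one-dimensional simulation. Each viewpoint across the lens sees an off-focus point shifted by an amount proportional to both the viewpoint’s offset from the lens center and the point’s distance from the focal plane; a point on the focal plane lands in the same place in every view. The geometry and numbers below are made up for illustration, not taken from my setup.

```python
import numpy as np

width = 200
# Two point sources, given as (sensor position in pixels, depth error):
# one on the focal plane (depth error 0) and one behind it (0.5).
points = [(60, 0.0), (140, 0.5)]

viewpoints = np.linspace(-20, 20, 41)  # "bug" positions across the lens
stack = np.zeros(width)
for v in viewpoints:
    frame = np.zeros(width)
    for pos, depth_error in points:
        # Parallax shift: zero for in-focus points, grows with depth error.
        shifted = int(round(pos + v * depth_error))
        if 0 <= shifted < width:
            frame[shifted] += 1.0
    stack += frame
stack /= len(viewpoints)

# The in-focus point stays a single bright pixel in the stacked image;
# the out-of-focus one is smeared across a band (its blur circle).
print("in-focus peak:", stack[60])
print("out-of-focus band width:", np.count_nonzero(stack[100:]))
```

Every viewpoint agrees about the in-focus point, so stacking reinforces it; the viewpoints disagree about the off-focus point, so stacking smears it into a dim band. That band is the blur you see in a wide-aperture photo.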
That’s exactly what the big lens is doing: stacking together thousands of slightly different perspectives of the scene, all captured at the same time. We could get deeper depth of field by using a smaller aperture, but that makes the image darker. To compensate we can increase the light on the subject, leave the shutter open longer, or use a more sensitive sensor.
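That trade-off can be put in numbers. Light gathered scales with the square of the aperture diameter, so each full stop (a factor of √2 in f-number) halves the light, and the compensation has to make up the difference. A small sketch of the arithmetic, with illustrative values:

```python
import math

def stops_lost(n_wide, n_narrow):
    """Stops of light lost going from f/n_wide to f/n_narrow.

    The f-number N is focal length over aperture diameter, and light
    gathered scales with 1 / N**2, so each stop doubles N**2.
    """
    return 2 * math.log2(n_narrow / n_wide)

def compensated_shutter(shutter_s, n_wide, n_narrow):
    """Shutter time at f/n_narrow matching the exposure at f/n_wide."""
    return shutter_s * 2 ** stops_lost(n_wide, n_narrow)

# Stopping down from f/2.8 to f/8 costs about 3 stops, so a 1/100 s
# exposure needs roughly 8x the shutter time (or 8x the light, or a
# sensor about 8x as sensitive).
print(stops_lost(2.8, 8))                  # ~3 stops
print(compensated_shutter(0.01, 2.8, 8))   # ~0.08 s
```

The same factor applies whichever compensation you choose: more light on the subject, a longer exposure, or a higher sensor sensitivity.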
To really see this effect, it’s best to see it in a video, so check this out: