If you don a pair of special glasses that make everything appear to be upside down, after a few days your brain will learn how to compensate.
I'm constantly amazed by the ways in which our eyes and brains gather and process visual information. What makes this really fun (at least for me) is that I'm always learning something new.
For example, for quite some time I've known about a famous experiment that was first performed in 1896 by George Malcolm Stratton (1865-1957). While having a particularly inventive day, this little rascal created some special glasses which made everything appear to be "upside down". Amazingly, after a few days of disorientation, his brain began to automatically correct for the weird signals coming in and caused objects to appear to be the "right way up" again. Similarly, when George removed his special glasses a few days later, things initially appeared to be upside down because his brain was locked into the new way of doing things. Once again, within a short period of time his brain adapted and things returned to normal.
Pretty incredible, huh? The point is that I was also aware that, because of the way the lens in each of our eyes works, the images we see are inverted by the time they strike the retina at the back of the eye, so our brains start off having to process "upside-down" data. To see what I'm talking about, consider the illustration below. This depicts an eye looking at someone in the distance; as we see, the image is inverted on its way to the retina at the back of the eye (I know this isn't a great picture, but I'm a bit pushed for time at the moment, so I just threw it together).
So, here's the thing. When I first heard about George's experiment – including the bit about images being inverted in the vertical plane as depicted above – I vaguely recall pondering whether images would also be inverted in the horizontal plane. However, I couldn't quite wrap my mind around how this would work with two eyes.
Well, I recently discovered how this actually works, and it's way more convoluted than I had ever expected (of course you probably know about this already, but it was new to me). Consider the following illustration showing a "bird's-eye view" looking down on the top of someone's head.
The term hemifield refers to one of two halves of a sensory field. In the case of vision, if one was to draw a line straight out from one's nose into the distance, the fields of view to the left and right of that line are referred to as the left hemifield and right hemifield, respectively.
For both eyes, information from the left hemifield is projected onto the right-hand side of the retina, while information from the right hemifield is projected onto the left-hand side of the retina.
Now this part is a bit convoluted, so sit up and pay attention. In the case of the left eye, the right-hand side of the retina is referred to as the left nasal retina because it's close to the nose, while the left-hand side of the retina is called the left temporal retina because it's close to the left temple. By comparison, in the case of the right eye, the right-hand side of the retina is called the right temporal retina, while the left-hand side of the retina is called the right nasal retina. (In this context, the term "temple" refers to the flattened region on either side of the forehead in human beings.)
OK, this is where things start to get interesting. For reasons that are beyond the scope of this blog (which is another way of saying: "I don't know"), the left half of the brain controls the right-hand side of the body, while the right half of the brain controls the left-hand side of the body. In turn, this means that the left half of the brain is only interested in information from the right hemifield, while the right half of the brain is only interested in information from the left hemifield.
The problem is that there is a massive amount of overlap with regard to the images seen by both eyes. The way this is sorted out is that the bunches of fibers forming the optic nerves from both eyes first pass through an area called the optic chiasm, where they are sorted to separate the data from the left and right hemifields.
Following the divide at the optic chiasm, the resulting bunches of fibers are referred to as the optic tract. The optic tract wraps itself around the midbrain to an area called the lateral geniculate nucleus (LGN). After this point, the nerve fibers are known as the optic radiations, and it is these signals that are ultimately presented to the primary visual cortex at the back of the brain.
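For those of us who think better in code than in anatomy diagrams, the routing rules described above can be captured in a little toy model. To be clear, this is purely my own illustrative sketch (the function names and string labels are my inventions, not anything from neuroscience software); it just encodes the two rules we've seen: each hemifield lands on the opposite side of each retina, and at the optic chiasm the nasal fibers cross over while the temporal fibers stay on their own side.

```python
def retinal_half(eye, hemifield):
    """The left hemifield projects onto the right-hand side of each
    retina, and vice versa; the side nearest the nose is 'nasal' and
    the side nearest the temple is 'temporal'."""
    side = "right" if hemifield == "left" else "left"
    if eye == "left":
        return "nasal" if side == "right" else "temporal"
    else:  # right eye
        return "temporal" if side == "right" else "nasal"

def hemisphere(eye, hemifield):
    """At the optic chiasm, nasal fibers cross to the opposite side of
    the brain, while temporal fibers stay on the same side as the eye."""
    if retinal_half(eye, hemifield) == "nasal":
        return "right" if eye == "left" else "left"
    return eye  # temporal fibers stay on their own side

for eye in ("left", "right"):
    for hf in ("left", "right"):
        print(f"{eye} eye, {hf} hemifield -> "
              f"{retinal_half(eye, hf)} retina -> "
              f"{hemisphere(eye, hf)} hemisphere")
```

Running this confirms the punchline: the left hemifield from both eyes ends up in the right half of the brain, and the right hemifield from both eyes ends up in the left half, which is exactly the sorting job the optic chiasm performs.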
Good grief Charlie Brown! (Now I can't stop thinking about how three-eyed aliens from outer space deal with this sort of thing.) As soon as I learned about this, of course, I immediately added it to my ever-evolving paper on Color Vision. I tell you, the way that little scamp is growing, it will be a book before we know it!
Questions? Comments? Feel free to email me – Clive "Max" Maxfield – at firstname.lastname@example.org. And, of course, if you haven't already done so, don't forget to Sign Up for our weekly Programmable Logic DesignLine Newsletter.