# Magic Mirror on the Wall, How Do You Even Work at All?

**Bernard Murphy, PhD, Chief Technology Officer, Atrenta Inc.**

12/1/2014 05:45 PM EST


The more one looks into it, the more one realizes that aspects of mirrors that initially appear to be intuitive are, in fact, extremely hard to explain.

EE Times editor Max Maxfield recently offered this challenge in a comment on a previous post: "I still cannot wrap my brain around how mirrors work -- from simple things like why is the angle of incidence equal to the angle of reflection, all the way up to how the photons 'bounce' off the atoms forming the mirror without being scattered to the four winds, as it were."

He's not looking for an easy answer using basic optics or even Maxwell's equations. His question is based on Richard P. Feynman's 1990 book *QED: The Strange Theory of Light and Matter* (where QED stands for Quantum Electrodynamics). I thought that I would knock out a quick response with a few examples, but this has turned into one of the harder questions I have attempted. Getting to a reasonable answer has made me reset my own understanding.

In fairness to anyone who hasn't read the book, here is a highly condensed summary of how Feynman explains reflection. The idea is to sum components of reflection over all conceivable paths. We want to prove that the angle of incidence is equal to the angle of reflection (AOI=AOR), but we can't start with that assumption. Instead, we have to consider all paths. Feynman does this considering the experiment below -- looking at the various possible paths from the source reflecting off each part of the mirror and ending at the detector.

We sum contributions at the detector by considering each contribution as an amplitude with an associated phase (shown by the arrows below the mirror). We assume the only difference in phase between the paths is due to the lengths of the paths (more on this later), which results in phase shifts between contributions at the detector. The phase shift changes slowly around the center line (at which point AOI=AOR), where the path length varies slowly. The path length (and therefore the phase) changes faster as we move away from the center. When we add these contributions together, they add constructively near the center but increasingly cancel through phase mismatch as we move away from that center. As a result, we obtain a peak around AOI=AOR and very little intensity as we move away from the peak on either side.
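The summation above is easy to check numerically. Here is a minimal sketch of the path sum, with a point source and detector above a flat mirror divided into thin strips; each strip contributes a unit amplitude whose phase is set by the path length. The geometry, wavelength, and strip counts are illustrative assumptions, not values from Feynman's example.

```python
import cmath
import math

# Illustrative geometry (all numbers are assumptions): source and detector
# 2 cm above a flat mirror, placed symmetrically about x = 0, so the
# specular point (AOI = AOR) is at x = 0 on the mirror.
WAVELENGTH = 500e-9           # green light, metres
K = 2 * math.pi / WAVELENGTH  # wavenumber

SOURCE = (-0.05, 0.02)   # (x, y) in metres
DETECTOR = (0.05, 0.02)

def amplitude(x_strip):
    """Unit amplitude for the path source -> mirror strip at x_strip -> detector,
    with phase proportional to the total path length."""
    sx, sy = SOURCE
    dx, dy = DETECTOR
    length = math.hypot(x_strip - sx, sy) + math.hypot(dx - x_strip, dy)
    return cmath.exp(1j * K * length)

def intensity(center, half_width=0.002, strips=40001):
    """Sum strip amplitudes over a small patch of mirror around `center`
    and return the resulting intensity at the detector."""
    total = 0j
    for i in range(strips):
        x = center - half_width + 2 * half_width * i / (strips - 1)
        total += amplitude(x)
    return abs(total) ** 2

# Near the specular point the path length varies slowly, so contributions
# add constructively; away from it the phase spins rapidly and they cancel.
print(intensity(0.0) > 100 * intensity(0.03))  # True
```

Nothing here singles out the specular point in advance; the peak at AOI=AOR falls out of summing every path with its phase, exactly as the argument above describes.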

All of this is understandable, but what does it have to do with QED? In researching this blog, I first thought Feynman was using creative license to keep his explanation simple. Then I decided he was bending the truth just a bit. Finally, I realized his explanation -- apart from minor details -- is completely accurate and is the most intuitive explanation of QED I can imagine. Thus, the best I can hope for is to add some color to that explanation.

Let's start by saying that we believe photons are real, because we can reduce light intensity until we see single flashes at the detector, and the flashes always have the same intensity for a given frequency of light. So light is quantized, but whatever behavior we invent for this new model, it must still correspond at a macro scale with everything we expect about light behaving as a wave. We also need to double-check what has to be new and what is really just unexpected classical behavior.

An apparent problem emerges in imagining the experiment being performed using a laser as illustrated below.

The light isn't going all over the place, so what gives? In fact, this experiment is a little deceptive. If we look at the mirror from behind the laser, we can see a light spot, which means that light is reflected back toward the laser. This means that, even at the macro level and even for a laser, light is scattered in all directions at reflection. On this point, Feynman's explanation is completely classical, though not the way we normally think about light. Scattering in this way also corresponds with Huygens' principle (1678) that a light wavefront advances by treating each point on the wavefront as a new wavefront, which expands in all directions.

Given this, summing up the paths accounting for phase is also completely classical. That's what you do with waves. There are just two problems. The first is how all this applies to photon "particles"; the second concerns the assumption about phase differences. On the first point, my reading shows two lines of thinking. The most heavily represented is what I'll call the "mystery and imagination" track. Quantum behavior is weird, and we can't really understand what is happening, but the math works. In the meantime, we wrestle with how to imagine a photon particle behaving like a wave. I think most of us are secretly attracted to this track, because it gives us exotic behaviors as fuel for philosophizing about exotic possible causes. Perhaps photons are extended wave packets and behave as waves. Perhaps the universe splits into multiple universes at each event such as reflection, and so on.

I'll call the other track "mostly classical." As far as I can tell, it is represented solely by Feynman and a Geneva University theoretician who applied the Feynman path approach to detailed calculations of reflection, refraction, diffraction, and other phenomena in 2005. This paper is quite technical but still worth a read. Though all other authors acknowledge Feynman's genius, it seems that few if any actually use his methods in QED calculations, because they are typically more complex to apply than Schrödinger-based approaches.

The Geneva paper seems to be the first time anyone has documented detailed consequences of the Feynman model. (Feynman didn't record his own calculations for this example.) But before I go there, let's look quickly at the other problem: that phase assumption. Path length is certainly a factor in phase at detection, but what about the phase when a photon starts on one of these paths? If the source is a laser, you can assume phases are equal at creation, but this is not the case for a regular light source. If you assume that amplitude summing (interference) at the detector is between different photons travelling on different paths, path differences still affect the result, but lack of correlation between source phases will lead to random and time-varying (noisy) interference at the detector, which is not what we see.

Back to what the Geneva paper has to say:

- Feynman's explanation is more fundamental and more powerful than the Schrödinger approach. Schrödinger can be derived from Feynman, but not *vice versa*, because Feynman represents correlation between space-time events (between paths), but Schrödinger cannot.
- In detailed calculations using the Feynman method applied to photons, all classical behaviors of light as waves emerge as expected. Reflection, in particular, follows Feynman's example.
- Photons propagate over macroscopic distances in a completely classical manner. (Heisenberg applies to the creation and detection of a photon and to scattering events, but not to simple propagation.) But we must consider all possible paths in the analysis.
- Paths add in the same way that waves add. We add amplitudes with phases to obtain interference at the point of detection.
- This brings us to the creation phase issue and the only conceptually difficult requirement of the Feynman method. Different photons have random relative phases at creation, but any given photon is trivially in phase with itself when created. Therefore, to obtain the results we see, interference cannot be between different photons travelling different paths. Each photon individually must travel along all possible paths and (indelicately) interfere only with itself at detection. This is the only way we can avoid that noisy interference between uncorrelated photons, and it is why experiments testing one photon at a time give the same results as for multiple photons at the same time.
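The creation-phase point lends itself to a small numeric sketch. Consider two paths to a detector whose lengths differ by a fixed phase. If one photon travels both paths, its single (random) creation phase is shared by both amplitudes and cancels in the intensity; if two independent photons each travel one path, their uncorrelated creation phases make the detected intensity fluctuate shot to shot. The setup and numbers below are illustrative assumptions.

```python
import cmath
import math
import random

random.seed(1)

DELTA = 2.0  # assumed phase difference (radians) from the path-length difference

def self_interference():
    """One photon on both paths: a single random creation phase phi is
    shared by both amplitudes, so it cancels in the intensity."""
    phi = random.uniform(0, 2 * math.pi)
    amp = cmath.exp(1j * phi) + cmath.exp(1j * (phi + DELTA))
    return abs(amp) ** 2

def cross_photon_interference():
    """Two independent photons, one per path: uncorrelated creation
    phases, so the detected intensity varies from shot to shot."""
    phi1 = random.uniform(0, 2 * math.pi)
    phi2 = random.uniform(0, 2 * math.pi)
    amp = cmath.exp(1j * phi1) + cmath.exp(1j * (phi2 + DELTA))
    return abs(amp) ** 2

stable = [self_interference() for _ in range(1000)]
noisy = [cross_photon_interference() for _ in range(1000)]

# Self-interference is the same every shot (2 + 2*cos(DELTA), up to rounding);
# interference between uncorrelated photons is random and time-varying.
print(max(stable) - min(stable) < 1e-9)  # True
print(max(noisy) - min(noisy) > 1.0)     # True
```

The stable case is what experiments show, even one photon at a time, which is the Geneva paper's argument that each photon interferes only with itself.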

I should caution that the 2005 paper is an interpretation, and that it makes predictions of new behaviors that have not yet been tested. But absent counterexamples, I find this interpretation very appealing. It is in complete agreement with Feynman's explanation, and it conserves all classical and intuitive understanding of light behavior based on photons, with just one exception. That exception is a doozy: A photon must travel simultaneously along all possible paths to the point it is detected and resolve itself through self-interference at detection. If we suspend disbelief on this one point, everything else is completely intuitive.

So what do we make of this one difficult point? Travelling simultaneously along all possible paths is definitely neither classical nor intuitive. Perhaps we see a particle travelling along all paths as a projection from a simpler path in something more fundamental than space-time. There are hints of this in a recent article, "A jewel at the heart of quantum physics," which suggests that the space-time so familiar to us may not be the most basic representation of reality. Whether we will appreciate this as an improvement in intuitive understanding is up for debate.
