# What if Gravitational Constant G Isn't?

We take this fundamental constant for granted, but determining its value with precision is surprisingly tricky -- and what if that value isn't truly constant?

Engineers and scientists live in a world defined by many metrology standards and constants. We start with time, mass, and length, and then expand to electric current, temperature, and many others. There are also fundamental physical constants such as the speed of light or Avogadro's number.

While all these constants are important, some are far removed from our daily lives. But one is not: the gravitational constant G. Ever since Isaac Newton formulated the law of gravitational attraction F = G (mass1 × mass2)/r², inspired by that apple falling from a tree, the value of G has been of great interest. Given how pervasive and accessible gravity is, it should be pretty easy to measure G accurately, right?
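As a quick illustration of why laboratory measurement is so hard, here's a minimal sketch of Newton's law in code, using the CODATA recommended value of G (itself only known to a handful of significant figures, which is the point of this article):

```python
# Newton's law of gravitation: F = G * m1 * m2 / r**2
G = 6.674e-11  # m^3 kg^-1 s^-2, CODATA recommended value, truncated

def gravitational_force(m1, m2, r):
    """Attractive force in newtons between point masses m1, m2 (kg)
    separated by r metres."""
    return G * m1 * m2 / r**2

# Two 1 kg masses 1 m apart attract with only ~6.7e-11 N --
# a hint of why precise lab measurements of G are so difficult.
print(gravitational_force(1.0, 1.0, 1.0))
```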

Well, yes and no. It turns out that gravity is easy to measure, but hard to measure with precision. A fascinating article in the latest issue of Physics Today, "The search for Newton's constant," discusses the history of measuring G. It looks at the various experimental setups used over several hundred years (torsion balance, pendulum, beam balance, and others) and the spread in each one's results. Some sophisticated tests by serious researchers produce results with low claimed uncertainty, yet those results differ significantly from other tests that also claim low uncertainty.

While researchers have certainly improved the accuracy and precision of their results, the article explains why G is still so hard to measure. It's not only an interesting, well-written article; it's also a sobering and thought-provoking one, because you likely assumed that G's value is pretty much nailed down solid, end of story.

Yet, as most engineers and scientists know, getting consistent, accurate results in any test-and-measurement challenge to better than three or four significant figures is rarely easy. Every added significant figure means ever-more-subtle sources of error must be uncovered, understood, calibrated out, or compensated for in the fixture and equipment.

If you're lucky, the test can be structured so some of these errors actually drop out, or self-cancel, much as the value of mass *m* cancels out in some basic physics experiments and even carnival rides, such as the "rotor ride" or Gravitron (Figure 1), where participants "stick" to the wall because the wall's normal force supplies the centripetal force and friction holds them up. The mass of the person doesn't matter; only the size of the rotor, the speed of rotation, and the coefficient of friction between their clothes and the wall do (Figure 2). (If you can't explain why the person sticks, and why their weight is not a factor, consult a basic physics book.)
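The mass cancellation is easy to see in the algebra: the wall's normal force N = m·ω²·r supplies the centripetal force, and friction μN must support the weight m·g, so the rider sticks when μ·ω²·r ≥ g, with m gone from the condition. A short sketch (the 3 m radius and 0.5 friction coefficient are illustrative assumptions, not measured values):

```python
import math

# Gravitron condition: friction mu * N must hold up the weight m * g,
# where the normal force N = m * w**2 * r is the centripetal force.
# Mass m cancels, leaving mu * w**2 * r >= g.
def min_angular_speed(radius_m, mu):
    """Slowest rotation rate (rad/s) that keeps a rider pinned to the
    wall -- note the rider's mass does not appear."""
    g = 9.81  # m/s^2
    return math.sqrt(g / (mu * radius_m))

# Assumed example: 3 m rotor, clothing-on-wall friction of 0.5.
w = min_angular_speed(3.0, 0.5)
print(w, w * 60 / (2 * math.pi))  # ~2.56 rad/s, about 24 rpm
```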

Or maybe there's another explanation for the elusiveness of a precise, accurate value of G, one that keeps physicists and metrologists worrying: perhaps the "squared" exponent in the denominator of Newton's law is not exactly 2.0 out to as many places as you care to pick. Or maybe G itself is not a true constant, but actually changes slightly over time and place. Stranger things have happened; just ask those physicists who believed in the absoluteness of time and distance, but had to change their beliefs to accommodate time dilation and even E = mc² as Einstein's 1905 paper on special relativity became accepted principle, and then the curvature of space and time with general relativity a decade later.

Have you ever had a constant or fixed assumption in engineering or science that you had to abandon, or at least become flexible about? Have you ever stopped and wondered what "gravity" is, as well? What are your thoughts on gravitational waves and gravitational frame-dragging, which Gravity Probe B is exploring? (See "Spinning spheres test relativity's subtlety" and "The Gravity Probe B Bailout.")

Author

Bob Snyder 8/11/2014 10:01:52 PM

How many significant figures are currently possible in state-of-the-art scientific research?

Annual global mean sea level rise is currently estimated to be 2.28 mm/yr. The newest and most precise satellites with this capability, Jason-1 and Jason-2, orbit at a mean altitude of 1336 km (1.336 billion mm). Detecting a 1 mm change in sea level would therefore require a measurement uncertainty of less than one part per billion.
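That one-part-per-billion figure checks out with simple arithmetic; a quick sketch of the ratio:

```python
# A 1 mm change in sea surface height, as a fraction of the
# 1336 km measurement distance from orbit.
altitude_mm = 1336e3 * 1000   # 1336 km expressed in mm = 1.336e9
change_mm = 1.0
fraction = change_mm / altitude_mm
print(fraction)  # ~7.5e-10, i.e. under one part per billion
```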

Many factors can affect a satellite's position: Mountain ranges have more mass, and therefore more gravity, than prairies. The moon and sun have strong gravitational attraction. The solar wind is variable and turbulent. When the satellites' orbits begin to decay, booster rockets are fired to restore their altitude. All of this has to be modelled and corrected for.

A RADAR altimeter is used to measure sea surface height relative to the satellite. Two RADAR frequencies are used so that the effects of atmospheric moisture can be accounted for. Higher ocean waves result in earlier arrival of the initial RADAR reflections. A correction can be made by looking at all reflected energy, not just the earliest, but this correction depends on assumptions about the shape of sea surface waves.

The satellites complete one orbital cycle every 10 days, and they are separated by 5 days, so the sea surface height is measured only once every 5 days at each location. According to the Nyquist sampling theorem, that means any sea surface waves having a period less than 10 days will undergo temporal aliasing because the sampling rate is too low to capture the true waveform.
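That folding of fast signals down to spurious slow ones can be sketched in a few lines; the 5-day sampling interval comes from the comment above, while the 2-day example wave is an illustrative assumption, not mission data:

```python
# Temporal aliasing: a signal sampled below its Nyquist rate folds
# down to an apparent frequency inside [0, f_sample / 2].
def aliased_frequency(f_signal, f_sample):
    """Apparent frequency after sampling (same units as the inputs)."""
    f = f_signal % f_sample
    return min(f, f_sample - f)

f_sample = 1 / 5.0   # one sample every 5 days (combined Jason-1/2)
f_wave = 1 / 2.0     # hypothetical 2-day ocean oscillation
print(aliased_frequency(f_wave, f_sample))  # ~0.1 cycles/day: masquerades as a 10-day wave
```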

NASA goes to great lengths to make the satellite altimetry measurements as precise as possible. For example, the GRACE satellite mission maps the earth's gravity field, and this data can then be used to improve the real-world models used by the Jason missions.

My question is: Can I really believe the claims of one part per billion accuracy in the global mean sea level data? My gut is saying 'no', but I was wondering if someone with experience in this area could shed any light.

Author

cookiejar 8/11/2014 3:08:34 PM

As you can guess, there is no end to the uncertainties of this sensor, from the optics of the Dewar and thermal leakage to the changing absorption properties of the "black" body as it is exposed to radiation.

Meteorologists from around the world gather each year at the time of the summer solstice on a mountaintop, pick a clear day, and after a countdown take readings from their "reference" instruments. These instruments are then used as transfer standards, based on the assumption that the sun's output is constant.

While there is a lot of data showing the sun's output variations in the short term, there is no sensor stable enough to read the sun's long term variability.

As we all know, the sun provides the energy feeding our weather. But the sun's varying output is not a variable in any climate model. Being unmeasurable, it is assumed to be constant. Yet most scientists attribute past climate changes, from ice ages to tropical conditions in the Antarctic, to varying solar output.

No doubt, the more we know, the more we realize we don't know.

Author

drdemjanenko 8/11/2014 1:05:12 PM

F = G × (1 + v/c) × (mass1 × mass2)/r², where v is the relative velocity between masses 1 and 2

with the caveat that the relative velocity's magnitude can never exceed c, so the factor (1 + v/c) is bounded between 0 and 2. The gravitational event horizon is thus for mass that has been receding ever since the Big Bang into our forever non-visible universe, and for mass that accumulates onto black holes spinning at nearly c.
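For concreteness, here is a minimal sketch of this commenter's speculative velocity-dependent form; this is the comment's conjecture, not established physics, and the function name is my own:

```python
# Commenter's conjecture: F = G * (1 + v/c) * m1 * m2 / r**2,
# where v is the signed radial velocity (negative = receding).
# The factor (1 + v/c) runs from 0 (receding at c) to 2 (approaching at c).
G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s

def modified_force(m1, m2, r, v):
    assert -c <= v <= c, "relative velocity bounded in magnitude by c"
    return G * (1 + v / c) * m1 * m2 / r**2

# At v = -c the force vanishes -- the 'gravitational event horizon'
# the comment describes for mass receding at light speed.
print(modified_force(1.0, 1.0, 1.0, -c))  # 0.0
```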