I've changed my mind. It's abundantly clear that pro science writers read each other's stuff all the time. Have a look at that SciTech Daily article and party on with your own bad self.
So I didn't read that article specifically, but I have been following this story. Nevertheless, here's my take:
In the 1920s, Hubble famously discovered the expansion of the universe by looking at Cepheid variables in other galaxies. Cepheid variables are standard candles, which are what astronomers use to measure cosmic distances. Because a star that looks bright might be a close, dim star or a distant, luminous star, we need a way to independently determine how bright a star should be in order to figure out how far away it is. Cepheids serve as standard candles because their brightness pulsates periodically, and there is a tight correlation between the pulsation period and the star's average luminosity. The longer the period, the brighter a Cepheid should be, which means that if you find a dim, long-period Cepheid, you know it must be very far away.
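As a toy illustration of how that works in practice: feed a pulsation period into a period-luminosity relation to get the star's true brightness, then invert the distance modulus. The coefficients below are rough, illustrative values, not the calibration from any particular paper.

```python
import math

def cepheid_absolute_magnitude(period_days):
    """Rough period-luminosity (Leavitt) law; coefficients are illustrative."""
    return -2.43 * (math.log10(period_days) - 1.0) - 4.05

def distance_parsecs(apparent_mag, absolute_mag):
    """Invert the distance modulus: m - M = 5 * log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# A 30-day Cepheid observed at apparent magnitude 15 comes out
# around 110,000 parsecs away in this toy calibration:
M = cepheid_absolute_magnitude(30.0)
d = distance_parsecs(15.0, M)
```

Same period but dimmer apparent magnitude means a bigger distance, which is the whole trick.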
The tricky part is that in order to calibrate "very far away," you have to know the actual distances to some nearby Cepheids by some other method. This is the cosmic distance ladder, by which we climb one distance-measuring rung to reach the next. For the closest interstellar objects, we measure distance by "parallax," which in practice is a vague term covering any method that uses a combination of time and geometry. The distances to the Cepheids Hubble used were actually determined via "statistical parallax," which is a little complicated and not central to this story. (See this post if you want to get a better idea, although I'm not super happy about how that one turned out.)
When you hear the term parallax, what probably springs to mind is what astronomers refer to as "stellar parallax," which is observing the apparent shift of a foreground star against background stars as the Earth moves around its orbit. Measure the position of a star. Wait half a year. Measure again. The greater the difference between the star's two apparent positions, the closer it is. This paper details a very precise set of stellar parallax observations made using the Hubble telescope. I'll talk about why these measurements are so good in a bit, but we're not quite done with Hubble the dude.
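The geometry boils down to one line: with the parallax angle in arcseconds, the distance in parsecs is just its reciprocal (that's the definition of a parsec). A quick sketch:

```python
def parallax_distance_pc(parallax_arcsec):
    """d [pc] = 1 / p [arcsec] -- this is the definition of the parsec."""
    return 1.0 / parallax_arcsec

# A star with a 0.5 milliarcsecond parallax sits about 2,000 parsecs away,
# roughly the Milky Way Cepheid regime these measurements care about:
d = parallax_distance_pc(0.5e-3)
```

Halve the parallax and the distance doubles, which is why tiny angle errors blow up fast at large distances.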
Hubble's discovery required accurate distance measurements and accurate spectroscopy. The faster a star is receding from you, the redder its spectrum will be due to the Doppler shift (called redshift in astronomy). What Hubble found was a roughly linear relationship between distance and recession speed: a galaxy twice as far away as another will be receding at twice the speed. Combine this with some fancy math from general relativity and you can conclude that the universe is expanding and must have been smaller in the past. The expansion rate is now referred to as Hubble's constant. However, due to some systematic errors present at the time (for example, there were Cepheids that behaved differently from the rest, but no one knew it then), Hubble's estimate was nearly an order of magnitude too high.
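The linear relation is simple enough to write down directly. The value of H0 below is a round 70 km/s/Mpc purely for illustration:

```python
def recession_velocity(distance_mpc, h0=70.0):
    """Hubble's law: v = H0 * d, with H0 in km/s per megaparsec."""
    return h0 * distance_mpc

# A galaxy twice as far away recedes twice as fast:
v_near = recession_velocity(100.0)  # 7000 km/s
v_far = recession_velocity(200.0)   # 14000 km/s
```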
In the decades that followed, astronomers were able to get a much more accurate value for Hubble's constant and were also able to extend it out across the entire cosmos. They achieved this by finding more standard candles, the most important of which are Type Ia supernovae. These work as standard candles out to much greater distances than Cepheids because they are extremely luminous, and they have a fairly well understood peak luminosity based on underlying physics. Using these supernovae, astronomers were able to show that Hubble's constant is in fact pretty constant over long stretches of time and space. Cool.
So there are two reasons why the most recent Hubble observations are able to pin down a value for Hubble's constant with even less uncertainty. The first has to do with consistency. They measured the parallax of Milky Way Cepheid variables using the same Hubble camera that's been used to measure the brightness of extragalactic Cepheids. This means they can be very confident that discrepancies aren't just due to using different instruments.
Second, they're also using a relatively new technique for taking pictures with Hubble called spatial scanning photometry. Rather than just staring at a star and collecting its light over a period of time, they get Hubble to scan diagonally across it, leaving a star trail on the CCD, and then add up all the light from the trail. The advantage of this method is that you can collect a lot of light from a single source without saturating your pixels, and you're not relying on one group of pixels to calculate the brightness of the star. You can average out the brightness across this diagonal pixel slash in a way that reduces the chance of error due to (essentially) imperfect calibration.
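Here's a toy model of why spreading the light over many pixels helps: give each pixel a small random calibration error, and watch the scatter of the flux estimate shrink as the trail gets longer. All the numbers (error size, pixel counts) are made up for illustration.

```python
import random

def trail_flux_estimate(flux_per_pixel, n_pixels, calib_sigma=0.02, rng=None):
    """Average per-pixel flux, with each pixel mis-calibrated by a few percent."""
    rng = rng or random.Random()
    readings = [flux_per_pixel * (1.0 + rng.gauss(0.0, calib_sigma))
                for _ in range(n_pixels)]
    return sum(readings) / n_pixels

def estimate_scatter(n_pixels, trials=500):
    """Standard deviation of the flux estimate over many simulated stars."""
    estimates = [trail_flux_estimate(1000.0, n_pixels, rng=random.Random(s))
                 for s in range(trials)]
    mean = sum(estimates) / trials
    return (sum((e - mean) ** 2 for e in estimates) / trials) ** 0.5

# Scanning (light spread over many pixels) beats staring (few pixels):
scatter_stare = estimate_scatter(n_pixels=9)
scatter_scan = estimate_scatter(n_pixels=900)
```

The per-pixel calibration errors average out roughly as one over the square root of the number of pixels in the trail.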
So the team got very precise measurements of the brightness and parallax of Milky Way Cepheid variables, which let them recalibrate the cosmic distance ladder all the way out to Type Ia supernovae and come up with an even better measurement of Hubble's constant. Great. The reason this story is making headlines, however, is that it widens and solidifies the growing gap between this method of determining Hubble's constant and another method.
Let's flash back to Hubble the dude for a moment. He discovers the expansion of the universe, and theorists run with this idea and postulate a big bang. A big bang should leave behind observational evidence in the form of the cosmic microwave background, which formed when the universe cooled down enough so that electrons could calmly orbit protons and photons could stream outward without fear of hitting those electrons. Some of the static on your TV that nobody sees anymore because we've all gone digital is a result of CMB photons reuniting with matter for the first time in like 13.7 billion years, having cooled down to 2.7 kelvins.
But with very good satellites and other radio/microwave telescopes, we can detect much more than static in the CMB. There are tiny temperature fluctuations, some of them on large scales, others on small scales. You can plot all these variations as a power spectrum, which measures how strong your fluctuations are at particular sizes. The exact shape of this spectrum depends on a variety of factors, but cosmologists can model what it should look like using relatively simple physics.
One of the primary parameters influencing the CMB power spectrum is the ratio of matter to energy when the CMB formed. Before the CMB, matter and photons bounced around in a big sloshy mix that caused reverberations throughout the cosmos. Once the CMB formed, they separated and stopped influencing each other. The result is that the CMB power spectrum encodes the matter and energy waves that were most prominent at that last moment of scattering, so the ratio of matter to energy tells you what kind of waves you should get.
The big bang says the universe started out with more energy (from photons and neutrinos) than regular matter. However, as the universe expands, energy dilutes more quickly than matter (due to redshift), which means that at some point, matter becomes more dominant than energy. The ratio of matter to energy that you get from the CMB tells you when this happens, which tells you how quickly the universe is expanding, which gets you another estimate of Hubble's constant. (The difficult part is that many factors go into the CMB power spectrum, so this really gives you a range of acceptable values for the Hubble constant as those other parameters slide around.)
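The dilution argument is just two power laws: matter density falls as a^-3 (volume), while radiation falls as a^-4 (volume plus redshift), so equality happens at a scale factor equal to their present-day density ratio. The numbers below are illustrative, not real cosmological parameters:

```python
RHO_M0 = 1.0     # matter density today (arbitrary units)
RHO_R0 = 3.0e-4  # radiation density today (illustrative ratio, not measured)

def matter_density(a):
    """Matter dilutes with volume: rho_m ~ a^-3."""
    return RHO_M0 * a ** -3

def radiation_density(a):
    """Radiation dilutes with volume AND redshifts: rho_r ~ a^-4."""
    return RHO_R0 * a ** -4

# Setting rho_m(a) = rho_r(a) gives the equality scale factor:
a_eq = RHO_R0 / RHO_M0
```

Before a_eq radiation dominates; after it, matter does, and the CMB remembers when the handoff happened.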
And the problem is that as more accurate maps of the CMB have been drawn (from WMAP and Planck), the value of Hubble's constant they're getting and the value coming from Type Ia supernovae have stopped overlapping. The CMB gets you 67 km/s/Mpc, this new paper's recalibration of Cepheid variables gets you 73 km/s/Mpc, and the uncertainties have shrunk enough that you can't just hope they're really the same value. So there's something important that cosmologists are missing. Thanks to efforts like this most recent paper, measurement error is probably not the answer. Maybe new physics? Maybe assumptions underlying one or both methods are wrong? No one is really sure yet. It's a pickle.
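To put "can't just hope they're the same value" in numbers: the standard move is to divide the gap by the combined uncertainty. The error bars below are stand-ins of roughly the right scale, not quotes from either paper:

```python
def tension_in_sigma(value_a, sigma_a, value_b, sigma_b):
    """How many combined standard deviations separate two measurements."""
    return abs(value_a - value_b) / (sigma_a ** 2 + sigma_b ** 2) ** 0.5

# With illustrative error bars of 0.7 and 1.7 km/s/Mpc, the 67 vs 73
# disagreement comes out to a bit over 3 sigma:
t = tension_in_sigma(67.0, 0.7, 73.0, 1.7)
```

At 1-sigma you shrug; past 3-sigma you start writing papers about new physics.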
Wow that was way too long. I'll see if I can put together a shorter version later.