A week or so ago, a troll left a link at my blog (Thanks, David) to a supposed-to-be-alarming blog post about a new climate study of ocean heat content. According to the study, a revised method of tweaking ocean heat reconstructions has manufactured new warming so that the top 700 meters of the oceans are warming faster than predicted by climate models. In other words, the “missing heat” is missing no more.
The new paper is Cheng et al. (2015) Global Upper Ocean Heat Content Estimation: Recent Progress and the Remaining Challenges. (Not paywalled. A pre-print edition is available.) John Abraham, alarmist extraordinaire from SkepticalScience and The Guardian’s blog ClimateConsensus, was a coauthor. See Abraham’s post The oceans are warming faster than climate models predicted. Can anyone guess the goal of their study from the title of Abraham’s post?
While the stories about the paper focused on the newly manufactured warming, the paper itself was somewhat critical of (1) the large uncertainties in the reconstructions, (2) the lack of consensus in infilling (mapping) methods used in the reconstructions and (3) climate model simulations of ocean warming. The Cheng et al. abstract reads:
Ocean heat content (OHC) change contributes substantially to global sea level rise, so it is a vital task for the climate research community to estimate historical OHC. While there are large uncertainties regarding its value, in this study, the authors discuss recent progress to reduce the errors in OHC estimates, including corrections to the systematic biases in expendable bathythermograph (XBT) data, filling gaps in the data, and choosing a proper climatology. These improvements lead to a better reconstruction of historical upper (0–700 m) OHC change, which is presented in this study as the Institute of Atmospheric Physics (IAP) version of historical upper OHC assessment. Challenges still remain; for example, there is still no general consensus on mapping methods. Furthermore, we show that Coupled Model Intercomparison Project, Phase 5 (CMIP5) simulations have limited ability in capturing the interannual and decadal variability of historical upper OHC changes during the past 45 years.
Bottom line: To manufacture the new warming, Cheng et al. adjusted, tweaked, modified (tortured) subsurface ocean temperature reconstructions to the depths of 700 meters starting in 1970.
My Figure 1 compares the “unadjusted” data versus the much-adjusted ocean heat content reconstruction from the NODC. It is not the data presented in Cheng et al. (I used the UKMO EN3 reconstruction for the NODC “unadjusted” data. It used to be available through the KNMI Climate Explorer.) I’m providing Figure 1 to give you an idea of how horribly the data had already been mistreated to prepare the base NODC reconstruction.
If you read Cheng et al., you'll find they bounce back and forth between two metrics: ocean heat content and average subsurface temperature, both to depths of 700 meters. That is, in the text, Cheng et al. present trends in ocean heat content for the period of 1970 to 2005, but in their Figure 4, my Figure 2, they show trends for subsurface ocean temperatures. (Their Figure 4 made the rounds on the warmist blogs and in the mainstream media.) It appears climate scientists have realized the public relates better to temperature than to joules. But the trends listed on the graph are so minute, shown in ten-thousandths of a degree C per year, that they're likely losing some of their audience with all of those zeroes.
Presenting the subsurface ocean reconstructions using those two metrics is not unusual. Subsurface ocean temperature reconstructions and ocean heat content reconstructions mimic one another because subsurface ocean temperatures are the primary component of ocean heat content. You just have to keep track of which metric they’re discussing and illustrating.
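The two metrics are linked through the heat capacity of seawater, which is why the curves mimic one another. A rough back-of-the-envelope conversion illustrates the relationship; the ocean area, density, and specific heat below are my own assumed round numbers, not values from Cheng et al.:

```python
# Rough conversion between layer-mean temperature trend and heat content trend.
# All physical constants below are assumed round numbers for illustration only.
ocean_area = 3.6e14     # m^2, approximate global ocean surface area
depth = 700.0           # m, the 0-700 m layer discussed in Cheng et al.
density = 1025.0        # kg/m^3, typical seawater density
specific_heat = 3990.0  # J/(kg K), typical seawater specific heat

mass = ocean_area * depth * density   # kg of water in the 0-700 m layer
heat_capacity = mass * specific_heat  # joules per kelvin of layer-mean warming

# A trend of 0.0061 deg C/year (Cheng et al.'s "Observation" rate) implies:
joules_per_year = heat_capacity * 0.0061
print(f"{joules_per_year:.1e} J/year")  # prints 6.3e+21 J/year
```

So a few ten-thousandths of a degree per year in the upper ocean corresponds to heat uptake on the order of 10^21 to 10^22 joules per year, which is why the same reconstruction can be presented in either unit.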
Take a closer look at the results of the revised Cheng et al. reconstruction (red curve) in the top cell (Cell a) of their Figure 4 (my Figure 2) and the curve of the data using the “NODC-mapping” method of infilling (blue curve), which is not the NODC data. We can see Cheng et al. employed the cool-the-early-data method to increase the warming rate for the period of 1970 to 2005. [sarc on] They’re probably saving the warm-the-more-recent-data method for the next paper, which will then show the oceans warming even faster so the modelers can crank up climate sensitivities. [sarc off]
After seeing the trends listed on their Figure 4 for the “NODC-mapping” method, I decided to check to see what the vertical mean temperature reconstruction directly from the NODC website shows for the world oceans, to 700 meters, for the period of 1970 to 2005 (data here.) See my Figure 3.
Isn’t that amazing? Using the “NODC-mapping” method, Cheng et al. show a warming rate for the global oceans of +0.0045 deg C/year for the period of 1970-2005, but the reconstruction for the same depths of 0-700 meters directly from the NODC website shows a warming rate of only +0.0033 deg C/year. Now consider that the outcome of Cheng et al.’s new method of infilling the oodles and oodles of missing data in the depths of the oceans shows the global oceans warming at a rate of +0.0061 deg C/year. In other words, for the period of 1970 to 2005, Cheng et al. have almost doubled the warming rate of the basic NODC reconstruction for the depths of 0-700 meters.
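The size of those step-ups is easy to quantify. Nothing new here; these are just the three rates quoted above, compared as ratios:

```python
# Warming rates (deg C/year, 0-700 m, 1970-2005) as quoted above
nodc_website = 0.0033    # reconstruction taken directly from the NODC website
cheng_nodc_map = 0.0045  # Cheng et al. using the "NODC-mapping" infilling method
cheng_new = 0.0061       # Cheng et al.'s new infilling method ("Observation")

# How much warming each layer of processing adds over the basic NODC rate
print(f"NODC-mapping vs NODC website: {cheng_nodc_map / nodc_website:.2f}x")  # 1.36x
print(f"New Cheng et al. method vs NODC website: {cheng_new / nodc_website:.2f}x")  # 1.85x
```

That 1.85x ratio is the “almost doubled” warming rate.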
Now, I guess you’re wondering about the differences in warming rates between the Cheng et al. “NODC-mapping” method and the reconstruction at the NODC website itself. Under the heading of “2 Data”, Cheng et al. write:
Assessment of OHC change relies on in-situ temperature observations. In this study, ocean subsurface temperature profiles for 1970–2014 are from the Institute of Atmospheric Physics (IAP) Global Ocean Temperature (IGOT) dataset (Cheng and Zhu, 2014b), which is a quality-controlled and bias-corrected dataset. The in-situ temperature profiles of the IGOT dataset are sourced from the World Ocean Database 2013 (WOD13) (Boyer et al., 2013).
In other words, it appears that for the Cheng et al. results, (1) the data start out as the observations-based data from the NODC’s World Ocean Database; then (2) the data are mistreated to create the IGOT reconstruction; and, not satisfied with those results, (3) Cheng et al. tortured the IGOT reconstruction even more for this study and presented it in two ways, one of which was the “NODC-mapping” method.
Did you notice the other remarkable coincidence? In their Figure 4 (my Figure 2) Cheng et al. show a climate model-simulated warming rate of +0.0053 deg C/year…for the multi-model mean of the climate models stored in the CMIP5 archive. That’s the archive used by the IPCC for their 5th Assessment Report published in 2013. The (good) “Observation” reconstruction presented by Cheng et al. has a trend of +0.0061 deg C/year, while the already-tweaked and tweaked-again (bad) “NODC-mapping” reconstruction shows a trend of +0.0045 deg C/year. The average of the “good” and “bad” reconstructions is +0.0053 deg C/year, exactly the same as the models. [sarc on] Kind of, sort of, looks like the revisions to the data were planned to surround the models. Sheesh! [sarc off]
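You can verify that coincidence with trivial arithmetic, using only the trends quoted from their Figure 4:

```python
# Trends from Cheng et al. Figure 4 (deg C/year, 0-700 m, 1970-2005)
observation = 0.0061   # Cheng et al.'s new "Observation" reconstruction
nodc_mapping = 0.0045  # their "NODC-mapping" reconstruction
cmip5_mean = 0.0053    # CMIP5 multi-model mean

# Midpoint of the "good" and "bad" reconstructions (rounded to 4 decimals
# to sidestep floating-point noise)
midpoint = round((observation + nodc_mapping) / 2, 4)
print(midpoint, midpoint == cmip5_mean)  # prints 0.0053 True
```

The two revised reconstructions straddle the model mean exactly.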
CLOSING – NO MATTER HOW THEY TRY TO LEGITIMIZE OCEAN HEAT CONTENT DATA, IT’S STILL IN THE REALM OF MAKE-BELIEVE BEFORE THE ARGO ERA…AND QUESTIONABLE DURING IT
For years, climate scientists have been concerned about the “missing heat”, which was the difference between modeled and observed ocean warming to depth. The actual value of the missing heat has always been hard to find because the modeled ocean heat content and depth-averaged temperature of the oceans are not available in an easy-to-use format…from the KNMI Climate Explorer, for example. Luckily, for the depths of 0-700 meters, Cheng et al. listed a warming rate for the global oceans of +0.0053 deg C/year for the multi-model mean of the CMIP5 climate models, while the reconstruction directly from the NODC website shows a warming rate of only +0.0033 deg C/year. The missing heat isn’t quite half of what the models predicted, but it’s still a big chunk…almost 40%. That missing heat, of course, suggested that the climate models were way too sensitive to carbon dioxide.
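The “almost 40%” figure comes straight from those two rates; here's the arithmetic:

```python
# Warming rates (deg C/year, 0-700 m, 1970-2005) as quoted above
cmip5_mean = 0.0053    # CMIP5 multi-model mean (from Cheng et al. Figure 4)
nodc_website = 0.0033  # reconstruction directly from the NODC website

# Fraction of the model-predicted warming that is "missing" from observations
missing_fraction = (cmip5_mean - nodc_website) / cmip5_mean
print(f"{missing_fraction:.0%}")  # prints 38%
```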
But things have changed rapidly in the past few years. Climate scientists have not only “found” the missing heat by tweaking their reconstruction methods, they’ve manufactured more heat than the models show by torturing the reconstructions even more.
Unfortunately for the climate science community, no matter how they mistreat the source data, their reconstructions are still make-believe. Why? There’s very little source data, especially in the Southern Hemisphere. See Figure 4, which is an annotated version of Figure 13 from Abraham et al. (2013) Review of Ocean Temperature Observations: Implications for Ocean Heat Content Estimates and Climate Change. The IPCC used an edited version of it in Chapter 3 (Observations: Ocean) of their 5th Assessment Report. See the IPCC’s Figure 3.A.2. We discussed the IPCC’s version in the post AMAZING: The IPCC May Have Provided Realistic Presentations of Ocean Heat Content Source Data.
Is it any wonder why Cheng et al. didn’t bother trying to reconstruct the temperature observations below 700 meters?
For more information about the numerous problems with ocean heat content reconstructions, see the post Is Ocean Heat Content Data All It’s Stacked up to Be?
Once again, the climate science community has shown that, when the models perform poorly, rather than question the science behind the models, they are more than happy to manufacture warming by adjusting the data to meet or exceed the warming rate of the models.