What Does Gavin Schmidt's 'Warmest' Year Tell Us About Climate Sensitivity to CO2?

Guest post by Jim Steele,

Director emeritus Sierra Nevada Field Campus, San Francisco State University and author of Landscapes & Cycles: An Environmentalist’s Journey to Climate Skepticism

A friend of mine who works for the EPA emailed me a link to NASA’s Earth Observatory page pitching 2014 as the warmest year on record, and asked whether I “dismiss their findings.” The following is an edited version of my reply, suggesting that the Global Average Chimera tells us precious little about the climate’s sensitivity to CO2, and that the uncertainty is far greater than the error bars illustrated in Anthony Watts’ post 2014: The Most Dishonest Year on Record.

I simply asked my friend to consider all the factors involved in constructing the global average temperature trend, and then to decide for himself the scientific value of the graph and whether there was any political motivation.

1. Consider that the greatest warm anomalies are over the Arctic Ocean because more heat is ventilating through thinner ice. Before the Arctic Oscillation removed thick insulating sea ice, air temperatures were declining. Read Kahl, J., et al. (1993) Absence of evidence for greenhouse warming over the Arctic Ocean in the past 40 years. Nature 361, 335–337.


After subfreezing winds removed thick ice, then air temperatures rose. Read Rigor, I.G., J.M. Wallace, and R.L. Colony (2002), Response of Sea Ice to the Arctic Oscillation, J. Climate, v. 15, no. 18, pp. 2648 – 2668. They concluded, “it can be inferred that at least part of the warming that has been observed is due to the heat released during the increased production of new ice, and the increased flux of heat to the atmosphere through the larger area of thin ice.”

CO2 advocates suggest CO2 leads to “Arctic Amplification,” arguing that dark open oceans absorb more heat. But the latest estimates show the upper 700 meters of the Arctic Ocean are cooling (see illustration below), which again supports the notion that ventilating heat raised air temperatures. Read Wunsch, C., and P. Heimbach (2014) Bidecadal Thermal Changes in the Abyssal Ocean, J. Phys. Oceanogr., http://dx.doi.org/10.1175/JPO-D-13-096.1.

So how much of the global warming trend is due to heat ventilating from a cooling Arctic Ocean???


2. Consider that NOAA’s graph is based on homogenized data. Researchers analyzing homogenization methods reported “results cast some doubts in the use of homogenization procedures and tend to indicate that the global temperature increase during the last century is between 0.4°C and 0.7°C, where these two values are the estimates derived from raw and adjusted data, respectively.”

Read Steirou, E., and Koutsoyiannis, D. (2012) Investigation of methods for hydroclimatic data homogenization. Geophysical Research Abstracts, vol. 14, EGU2012-956-1.

So how much of the recent warming trend is due to the virtual reality of homogenized data???

3. Consider the results from Menne, M., et al. (2009) The U.S. Historical Climatology Network Monthly Temperature Data, Version 2, Bulletin of the American Meteorological Society, in which they argued their temperature adjustments provided a better understanding of the underlying climate trend. Notice that the “adjusted” anomalies in their graph below remove or minimize observed cooling trends. More importantly, ask why Menne (2009) reports a cooling trend for the eastern USA from 1895 to 2007, while NASA’s graph (below Menne’s) shows a slight warming trend for all of the USA from 1950–2014. Does that discrepancy indicate more homogenization, or that they cherry-picked a cooler period to start their warming trend?



4. Consider that much of the warming in North America illustrated by Menne 2009 (above) happened in the montane regions of the American West. Now consider Oyler, J., et al. (2015) Artificial amplification of warming trends across the mountains of the western United States, in which they conclude, “Here we critically evaluate this network’s temperature observations and show that extreme warming observed at higher elevations is the result of systematic artifacts and not climatic conditions. With artifacts removed, the network’s 1991–2012 minimum temperature trend decreases from +1.16°C/decade to +0.106°C/decade.”


So how much of the recent warming trend is due to these systematic artifacts???


5. Consider that NOAA’s graph is based on adjusted data, that NOAA now re-homogenizes temperature data every month, and that the resulting climate trends change from month to month and year to year. As an example, below is a graph I created from the US Historical Climate Network’s Cuyamaca weather station in southern California, a station that never altered its location or instrumentation. In 2010 the raw-data temperature trend did not differ much from the homogenized trend (Maximum Adj.).


Just a few years later the homogenized century trend had grown: the 2011 trend (in black) increased by more than 2°F in the 2015 trend (in red). I have archived several other similar examples of USHCN homogenization causing rapid “virtual warming.” Then ask yourself which trend is more real: the more cyclical changes observed in non-homogenized data, or the rising trend created by a homogenized virtual reality?
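For readers who want to reproduce this kind of comparison, here is a minimal sketch of fitting the century trend to a raw versus an adjusted series. The series below are synthetic stand-ins, not Cuyamaca’s actual record; a real comparison would load the USHCN raw and homogenized files for the station.

```python
# Sketch: compare least-squares century trends of a raw vs. an
# "adjusted" temperature series. Synthetic data for illustration.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2015)

# Synthetic "raw" record: a gentle 60-year cycle plus noise, no net trend.
raw = 0.5 * np.sin(2 * np.pi * (years - 1900) / 60) + rng.normal(0, 0.3, years.size)

# Synthetic "adjusted" record: the same data with a step adjustment
# that cools the early decades, which steepens the fitted trend.
adjusted = raw.copy()
adjusted[years < 1950] -= 0.8

def trend_per_century(t, y):
    """Least-squares slope expressed per 100 years."""
    return np.polyfit(t, y, 1)[0] * 100

print(f"raw trend:      {trend_per_century(years, raw):+.2f} deg/century")
print(f"adjusted trend: {trend_per_century(years, adjusted):+.2f} deg/century")
```

Cooling the past by a fixed step is enough to manufacture roughly a degree per century of extra trend in this toy case, with no change at all in the recent data.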


6. Consider that climate change along western North America was fully explained by the Pacific Decadal Oscillation and the associated cycles of ventilation and absorption of heat. Read: Johnstone and Mantua (2014) Atmospheric controls on northeast Pacific temperature variability and change, 1900–2012. Such research suggests non-homogenized data may better represent climate reality.

Knowing that the upper 10 feet of the oceans contain more heat than the entire atmosphere, ask yourself whether decadal warming trends are simply artifacts of the redistribution of ocean heat.

7. Consider that temperature data are increasingly collected at airports. A 2010 paper by Imhoff, “Remote sensing of the urban heat island effect across biomes in the continental USA”, published in Remote Sensing of Environment 114 (2010) 504–513, concluded that “We find that ecological context significantly influences the amplitude of summer daytime urban–rural temperature differences, and the largest (8 °C average) is observed for cities built in biomes dominated by temperate broadleaf and mixed forest. For all cities combined, Impervious Surface Area is the primary driver for increase in temperature explaining 70% of the total variance. On a yearly average, urban areas are substantially warmer than the non-urban fringe by 2.9 °C, except for urban areas in biomes with arid and semiarid climates.”

So how much of this recent warming trend can be attributed to increases in Impervious Surface Area in and around weather stations in rural, suburban and urban settings?

8. Consider that direct satellite observations show lost vegetation has a warming effect: transitions from forest to shrubland, or from grassland to urban area, raise skin surface temperatures by 10 to 30°F. Satellite data reveal the canopies of the world’s forests average about 86°F, while in the shade beneath the canopy temperatures are much lower. Grassland temperatures are much higher, ranging from 95 to 122°F, and the average temperatures of barren ground and deserts can reach 140°F. Read Mildrexler, D., et al. (2011) A global comparison between station air temperatures and MODIS land surface temperatures reveals the cooling role of forests. J. Geophys. Res., 116, G03025, doi:10.1029/2010JG001486.

Ask yourself, “How much of the warming trend is due to population effects that remove vegetation?” How much is due to citizens of poorer nations removing trees and shrubs for fuel for cooking and heating or slash and burn agriculture?

9. Consider that neither of the satellite data sets suggests 2014 was the warmest ever recorded.


10. Consider that many tree ring data sets show recent warming does not exceed that of the 1940s, as exemplified by Scandinavian tree ring data (from Esper, J., et al. (2012) Variability and extremes of northern Scandinavian summer temperatures over the past two millennia. Global and Planetary Change 88–89, 1–9).


Consider that international tree ring experts have concluded, “No current tree ring based reconstruction of extratropical Northern Hemisphere temperatures that extends into the 1990s captures the full range of late 20th century warming observed in the instrumental record.” Read Wilson, R., et al. (2007) Matter of divergence: tracking recent warming at hemispheric scales using tree-ring data. Journal of Geophysical Research–Atmospheres, 112, D17103, doi:10.1029/2006JD008318.

In summary, after acknowledging the many other factors contributing to local temperature change, and after recognizing that data homogenization has lowered the peak warming of the 1930s through the 1950s in many original data sets by as much as 2 to 3°F (a peak also observed in many proxy data sets less tainted by urbanization effects), ask yourself: does NOAA’s graph and record 2014 temperature really tell us anything about climate sensitivity or heat accumulation from rising CO2? Or does it tell us more about climate politics and data manipulation?


January 25, 2015 5:17 pm

“No” and “yes”.

January 25, 2015 5:24 pm
Reply to  Barry
January 25, 2015 5:29 pm

Graphed series ends in 2005. 10 years of flat-line/decline missing…

Reply to  jmichna
January 25, 2015 5:40 pm

And that data set has a long history of errors and adjustments of its own:

Reply to  jmichna
January 25, 2015 6:13 pm

Most climate stuff in Wikipedia is written or edited by W. Connolley.
It is NOT accepted by any scientific institution as a reliable reference.

Reply to  jmichna
January 25, 2015 9:01 pm

OK, here is what Roy Spencer had to say:

They all stem from the fact that there is not a single satellite which has been operating continuously, in a stable orbit, measuring a constant layer of the atmosphere, at the same local time every day, with no instrumental calibration drifts.
Instead, what we have is multiple satellites (we use 14 of them for the UAH processing) with relatively short lifetimes (2 to 16+ years), most of which have decaying orbits which causes the local time of measurement to slowly change over the years, slightly different layers sampled by the earlier (pre-1998) MSU instruments compared to the later (post-1998) AMSU instruments, and some evidence of small calibration drifts in a few of the instruments.
An additional complication is that subsequent satellites are launched into alternating sun-synchronous orbit times, nominally 1:30 a.m. and p.m., then 7:30 a.m. and p.m., then back to 1:30 a.m. and p.m., etc. Furthermore, as the instruments scan across the Earth, the altitude in the atmosphere that is sampled changes as the Earth incidence angle of view changes.
All of these effects must be accounted for, and there is no demonstrably “best” method to handle any of them. For example, RSS uses a climate model to correct for the changing time of day the observations are made (the so-called diurnal drift problem), while we use an empirical approach. This correction is particularly difficult because it varies with geographic location, time of year, terrain altitude, etc. RSS does not use exactly the same satellites as we do, nor do they use the same formula for computing a lower tropospheric (“LT”) layer temperature from the different view angles of AMSU channel 5.
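The diurnal drift problem Spencer describes can be illustrated with a toy calculation; every number below is invented for illustration, not taken from any real satellite record.

```python
# Toy illustration of diurnal drift: a station with a fixed daily
# cycle and NO long-term trend, observed by a satellite whose local
# observation time drifts from 1:30 pm to 3:30 pm over a decade,
# appears to cool purely because of the sampling.
import numpy as np

days = np.arange(3650)                        # ten years of daily passes
obs_hour = 13.5 + days * (2.0 / days.size)    # 1:30 pm drifting to 3:30 pm

def temp_at(hour):
    # Pure diurnal cycle peaking near 2 pm; no climate trend at all.
    return 20 + 8 * np.cos(2 * np.pi * (hour - 14) / 24)

sampled = temp_at(obs_hour)
decades = days / 3652.5
spurious_trend = np.polyfit(decades, sampled, 1)[0]
print(f"spurious trend: {spurious_trend:+.2f} deg per decade")
```

The fitted trend comes out around half a degree per decade of cooling even though the underlying station has no trend at all, which is why this drift must be modeled out (RSS with a climate model, UAH empirically, as Spencer says above).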

Reply to  jmichna
January 25, 2015 9:18 pm

@Nick, I agree with Spencer’s post and I think it illustrates the man’s integrity and his acknowledgement of the uncertainty in satellite data. If only IPCC spokespeople or Gavin Schmidt acknowledged the tremendous uncertainty in the “global average temperature” chimera! His acknowledgement of that uncertainty makes me trust his graphs more than Schmidt’s.

Reply to  jmichna
January 26, 2015 6:44 am

Gavin Schmidt, on the other hand, displays a lack of integrity, publicly and before the media.

Reply to  jmichna
January 27, 2015 1:13 pm

@ Jim Steele…your comment is right on target.

Reply to  Barry
January 25, 2015 6:22 pm

The nice thing about the satellite temperature projects is that one (UAH) is run by a skeptic of CO2 controlling the climate and the other (RSS) is run by a scientist who feels CO2 does control the climate. By the way, 2014 comes in 3rd and 6th in those two records.

Christopher Hanley
Reply to  Barry
January 25, 2015 9:35 pm

Irrelevant. It’s the pre-satellite data that’s the issue, for instance the relationship of the current global temperature to the early 1940s.

Reply to  Barry
January 26, 2015 1:54 am

Barry and Nick Stokes – whatever happened to 1998? One good cherry-pick deserves another.

Reply to  Jimbo
January 26, 2015 3:01 am

Well, what happened to 2010 or 2014 in your second plot? Or even 2005 at the surface? And which surface measure is that?
Troposphere measures give a big response to El Nino.

Bubba Cow
January 25, 2015 5:30 pm

We have got to ride the global warming issue. Even if the theory of global warming and CO2 is wrong, we will be doing the right thing in terms of economic and environmental policy.

Timothy Wirth, U.S Undersecretary of State for global issues, Clinton-Gore administration
Thanks for the data, though, particularly illustrations of the adjusted and unadjusted.

January 25, 2015 5:35 pm

Somebody decided to cloud the skies with an airplane or two or three over Toronto today for most of the afternoon. Ah, whatever, maybe I’m seeing things. Why would anybody do such a thing? lol

January 25, 2015 5:45 pm

#1 – Yes, arctic oceans cool (release heat) when ice thins, because of increasing air temperatures. But then if warming were not occurring on a global scale, how do the other oceans (not ice covered) warm?

Reply to  Barry
January 25, 2015 6:16 pm

For all you ever wanted to know about oceans and ocean heat distribution, go to bobtisdale.wordpress.com .

Robert B
Reply to  Barry
January 25, 2015 6:36 pm

Now if the chart were in average °C anomaly, as is used for surface temperatures, the chart wouldn’t have “broken.” “Ocean warming broke chart”? “DHead can’t draw a graph” would have been more appropriate.

george e. smith
Reply to  Barry
January 25, 2015 8:15 pm

Cooling rates in the polar regions are as much as a factor of 12 lower than the cooling rates in the hottest deserts. So the polar regions are not a significant factor in global cooling.
In addition ocean and atmospheric circulations constantly move heat energy from the tropics to the poles, which is why the polar regions warm faster than the tropics do.
The southern oceans (Pacific, Atlantic, and Indian) can’t reach much further south than the coast of Antarctica, which is basically the Antarctic circle. But the northern oceans can penetrate all the way to the north pole.
So the southern ocean [areas] which are more water than land can’t cool as fast as the northern ocean [areas] which are more land than water.
That’s why the southern oceans have been warming, but the northern ones haven’t. The Northern hemisphere is where all the hot tropical deserts are, so it can cool radiatively more than the southern hemisphere can.

Reply to  Barry
January 25, 2015 8:50 pm


#1 – Yes, arctic oceans cool (release heat) when ice thins, because of increasing air temperatures.

Would you explain the heat transfer involved here?

Reply to  Barry
January 26, 2015 12:57 am

Via insolation.
1. Cloud data show decrease in cloud cover since mid-eighties
2. Increased atmospheric CO2 makes no contribution to SST because water is opaque to far IR.

Reply to  Barry
January 26, 2015 3:05 am

An object can only lose heat to cooler surroundings. So Arctic waters cool because the air is colder, not because it is warmer; warmer air would warm Arctic waters.

Phil Cartier
Reply to  Barry
January 26, 2015 11:47 am

mashable: “We also found out that ocean temperatures reached record levels in 2014, with portions of every major ocean basin attaining new highs.” The authors apparently didn’t notice that the heat collected in certain areas due to ocean currents (weather), the same areas where the majority of Argo floats circulate. One Argo float, on average, represents about 60,000 cubic miles of ocean, sampled through a roughly 2-foot-square cross-section 1–2,000 meters tall. In the southern oceans they are spread out about 10 times as far apart. There are so few floats relative to the size of the oceans that proper error bars on the graph would probably fill the whole thing!
The other point: the “deep” oceans have possibly warmed about 0.02°C. If and when that water does get pushed to the surface, how does it transfer heat to its surroundings? The answer is that it doesn’t. At best the surface temps warm the same 0.02°C that year, and the cold upwelling feeds millions of tons of aquatic life and possibly starts another La Niña cycle.
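The sampling-density figure can be checked with a back-of-envelope calculation. The ocean area, profiling depth, and float count below are round assumed values, not official Argo or NOAA figures:

```python
# Back-of-envelope check of the "one float per ~60,000 cubic miles"
# claim, using round assumed inputs.
ocean_area_sq_mi = 139e6            # ~total ocean surface area
sampled_depth_mi = 2000 / 1609.34   # Argo profiles the top ~2000 m
n_floats = 3500                     # rough global float count circa 2014

per_float = ocean_area_sq_mi * sampled_depth_mi / n_floats
print(f"~{per_float:,.0f} cubic miles of ocean per float")
```

These inputs give roughly 50,000 cubic miles per float, the same order of magnitude as the ~60,000 quoted above; the exact figure depends on the float count and depth assumed.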

Reply to  Barry
January 27, 2015 1:36 pm

Keep reading at mashable and you will remain in the dark.

January 25, 2015 5:51 pm

#2 – Here’s an interesting discussion of the non-peer-reviewed conference presentation (cited as if it were peer-reviewed):

Reply to  Barry
January 25, 2015 6:15 pm

Barry, your first comment is from a global warming advocacy site.
Your second comment is from Wikipedia.
Your third comment is from a MSM news article.
And your fourth comment is basically an ad hominem against Mr. Watts.
You’re going to have to do better than that if you want to convince people. Data please, and a reasoned analysis would be nice, too.

Reply to  SMC
January 25, 2015 6:30 pm

I’m not making an ad hominem, just pointing folks to a website. If anything, Mr. Steele at least should have credited Mr. Watts for pointing this out several years ago.

Reply to  SMC
January 25, 2015 6:54 pm

“I’m not making an ad hominem, just pointing folks to a website.”
with such content as:
“As a goal-oriented guy, Anthony Watts”(…)
Really, no ad hom.

Reply to  SMC
January 26, 2015 1:01 am

Well Barry, would you care to defend your other sources? Like the mainstream media. Or the wikipedia?

Jeff Alberts
January 25, 2015 6:00 pm

A friend of mine who works for the EPA emailed me a link to NASA’s Earth Observatory page pitching 2014 as the warmest year on record, and asked if “I dismiss their findings.”

The only answer needed was “There is no global temperature”. Anyone who thinks there is needs to explain why averaging intensive properties returns anything meaningful.

Reply to  Jeff Alberts
January 25, 2015 6:34 pm

I wonder if “climate scientists” could even agree on what would be needed to obtain a reasonable global average surface temperature?
What we have now arguably only gives us a number of local/regional average surface temperatures.

george e. smith
Reply to  JohnWho
January 25, 2015 8:34 pm

Well the differences in regional temperature (at any instant) are what drives heat flow from one region to another.
But local temperatures are never recorded all at the same time, so you never get a valid temperature map of the world at any instant of time, which is what would give you a function you could average.
And of course you need samples that conform to the Nyquist sampling theorem; you never have that, either spatially or temporally, and the shortfall (relative to the signal bandwidth) is large enough that even the zero-frequency spectral component (the average) is corrupted by aliasing noise.
So nyet on getting a real global average temperature.
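To make the aliasing point concrete, here is a minimal sketch (illustrative numbers only) of how sampling a daily cycle once per day pushes the cycle straight into the computed mean:

```python
# Minimal demonstration that under-sampling corrupts even the mean:
# a daily cycle sampled once per day (far below its Nyquist rate)
# aliases straight into the zero-frequency component.
import numpy as np

def signal(t):
    # Mean of 10 plus a daily cycle (t in days); no trend.
    return 10 + 5 * np.cos(2 * np.pi * t)

t_fine = np.linspace(0, 30, 30 * 24 * 60)   # a month at minute resolution
true_mean = signal(t_fine).mean()

samples = signal(np.arange(30) + 9 / 24)    # one reading per day, at 9 am
aliased_mean = samples.mean()

print(round(true_mean, 2), round(aliased_mean, 2))
```

The once-a-day record reports a mean near 6.5 instead of the true 10: the diurnal cycle has aliased into the average, which is exactly the corrupted zero-frequency component described above.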

Reply to  Jeff Alberts
January 25, 2015 11:15 pm

Or refer to the standard method of deriving a global surface average temperature at the International Standards Organisation: http://www.iso.org/iso/home.html
Good luck with that!

Walt D.
Reply to  Jeff Alberts
January 26, 2015 5:06 am

You make a very good point. What type of average you use depends on what you want it for. What is the average voltage on a household AC circuit? Zero, so we use the root mean square instead. What do you do if you want to falsify cost-of-living numbers? Use the geometric average. And what type of average would you use to calculate the black body radiation from a surface with a large variation in temperature?
It is not that there is no global temperature. Rather, the land and sea measurements do not actually measure it. See the recent post by Tim Ball about data inadequacy. Before 1956 there were no temperature measurements at the South Pole, yet we hear talk about global records going back to 1880, 30 years before anybody had even reached the South Pole.
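These different averages are easy to demonstrate; the voltages and temperatures below are illustrative round numbers:

```python
# Which average you use depends on the job.
import numpy as np

# 1. Household AC: the arithmetic mean of a sine-wave voltage is ~0,
#    so the useful figure is the root mean square (~120 V for 170 V peak).
t = np.linspace(0, 1, 100000, endpoint=False)
v = 170 * np.sin(2 * np.pi * 60 * t)
mean_v = v.mean()
rms_v = np.sqrt((v ** 2).mean())

# 2. Radiation: emission scales as T^4 (Stefan-Boltzmann), so the
#    plain mean of a hot and a cold region understates the effective
#    radiating temperature.
T = np.array([233.0, 313.0])        # -40 C and +40 C regions, in kelvin
plain_mean = T.mean()
radiative_mean = np.mean(T ** 4) ** 0.25

print(round(mean_v, 3), round(rms_v, 1))
print(round(plain_mean, 1), round(radiative_mean, 1))
```

The RMS comes out near 120 V while the plain mean is essentially zero, and the radiative mean runs several kelvin above the plain mean: averaging station temperatures is the wrong operation if the quantity of interest is emitted radiation.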

January 25, 2015 6:05 pm

#3 – HCN v.3 has been available since 2011. Why not cite that data set?

Reply to  Barry
January 25, 2015 7:16 pm

I used Menne 2009 for the visual. Do you have a similar map for HCN v3?
But see my point #5. My guess is they have homogenized in an increasing warming trend, as there are several other examples similar to Cuyamaca where homogenized trends increase yearly, as discussed for Death Valley http://wattsupwiththat.com/2015/01/07/peter-miesler-helps-expose-ushcn-homogenization-insanity-and-antarctic-illusions/.
It seems the most rapid climate warming trend is caused by homogenization.

Reply to  Barry
January 25, 2015 11:16 pm

Because ISO doesn’t?

Bill Illis
Reply to  Barry
January 26, 2015 6:25 am

Here are the adjustments to Cuyamaca in GHCN v3.2.2 (why does there need to be a new version every few months?).
No station moves, no instrument changes, but the NCDC takes Cuyamaca’s modest warming of +0.5°C and increases it to +2.3°C of warming.
The more you look, the more examples like this there are. We need forensic auditors to go in and fix this mess.

Reply to  Bill Illis
January 26, 2015 8:43 am

Why don’t you be the auditor who fixes this mess? The better the fixes, the better the homogenization.
How do you know there have been no changes at that location?
How do you know there has been no changes in that location?

January 25, 2015 6:09 pm

Yes, the trends are likely overestimated. But, it’s moot. Even the greatest trend from the most dubiously processed data is still nowhere near what the models said it would be.

January 25, 2015 6:20 pm

“Or does it tell us more about climate politics and data manipulation?”
Yeah, I’m going with this one.
Although if Gavin had said, “Warmest year on data manipulated record” none of us would have anything to be skeptical of.

Reply to  JohnWho
January 25, 2015 11:20 pm

Except Gavin would have told you that “warmest” actually meant something completely different and would have provided you with information that was completely irrelevant to the question you had asked.

January 25, 2015 6:24 pm

#4 – “This network” is vague. NOAA has a few different networks. The HCN has about 1200 stations, and does not include the SNOTEL stations reviewed by Oyler et al. There is also a network of about 10,000 stations that includes the 700 SNOTEL stations. So I would guess that about (700/10000) x 1 C of the minimum temperature trend could be attributed to systematic bias (over a period of 10 years, mid-90s to mid-2000s). An important conclusion of the study was omitted, though: High-elevation minimum temperatures are still warming about as much as minimum temperatures at lower elevations.

John West
January 25, 2015 6:36 pm

“asked if “I dismiss their findings.””
Of course not. I do however put their findings in context. Why, do you suppose, “scientists” would portray inconclusive evidence of a warming trend as absolute proof of dangerous anthropogenically dominated warming?

January 25, 2015 7:01 pm

Barry, you really need to study past high temperatures during the Holocene, Medieval and Roman periods. Just because the temps are rising by tenths of a degree does not mean it’s due to us. For example, Central England temperatures at three widely distributed gauges increased by 2°C from the mid-17th century over 40-plus years, nothing to do with CO2. A rising trend is not proof of AGW. It has all happened before, and currently temps have flatlined for over 18 years.

January 25, 2015 7:04 pm

NOAA reported a record high by 0.04 degrees on the assumption that they knew the temperature to similar precision over the Pacific in 1890, the Sahara in 1903, the Atlantic in 1920, and a million other assumptions.
Not raw, precise data, but modeled assumptions about a complex and chaotic climate over more than a century.

Reply to  Chip
January 25, 2015 11:21 pm

Never to forget that for half a century Antarctic temperatures were being “measured” in the Falkland Islands.

Mike M.
January 25, 2015 7:35 pm

Mr. Steele asks several times: “So how much of the recent warming trend is due to …”. In each case, I think the answer must be “perhaps some, but not much”. I say this because the ocean covers some 70% of the Earth’s surface. So if you cut the warming over land down to match the warming of the sea surface (land warming is about 70% larger, according to the official figures), you only reduce the global trend by about 14%. So if the sea surface temperatures are OK, then the global trend is not that different from the official values.
I am not saying that the land temperatures are OK; it does seem there are problems. I am just saying that it is sea surface temperatures that really matter.
So is there some good reason I should not trust the sea surface temperatures?
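Mike M.’s weighting argument can be checked with round numbers; the 70/30 land–ocean split and the 1.7x land/ocean warming ratio are the assumptions here:

```python
# Checking the land/ocean weighting: ocean ~70% of the surface,
# land warming "about 70% larger" than the sea surface. What happens
# to the global trend if land warming is cut down to match the ocean?
ocean_frac, land_frac = 0.7, 0.3
sea_trend = 1.0                      # arbitrary units
land_trend = 1.7 * sea_trend         # "about 70% larger"

global_trend = ocean_frac * sea_trend + land_frac * land_trend
global_if_land_matched_sea = sea_trend    # both components equal the sea

reduction = 1 - global_if_land_matched_sea / global_trend
print(f"global trend falls by {reduction:.0%}")
```

With these inputs the reduction comes out near 17%, the same ballpark as the ~14% quoted in the comment; the exact figure depends on the warming ratio assumed.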

Reply to  Mike M.
January 25, 2015 8:31 pm

No. Trust sea surface temperatures, but dig deeper than the average to wonder why the Indian and south Atlantic are causing all the average warming while all the other oceans are flat or cooling. You are right that it is ocean temperatures that really matter. The average ocean warming virtually guarantees that there must be some atmospheric warming going on, yet warm air rises and the satellites would detect the land surface warming if it were real.

Mike M.
Reply to  gymnosperm
January 26, 2015 8:45 am

Gymnosperm writes “satellites would detect the land surface warming if it were real”. But satellites do detect the warming, although they seem to give a slightly lower amount of warming than terrestrial measurements.
The warming of the oceans means there must be a radiative imbalance at the top of atmosphere. I agree that changes in the temperature distribution of the oceans is very important to understand since that can cause atmospheric temperatures to not track with the ocean temperatures.

Reply to  Mike M.
January 25, 2015 8:46 pm

Mike M., you ask the wrong question. It is not a matter of trusting sea surface temperatures but of putting those changes into a proper perspective. The question I asked is what the temperature tells us about climate sensitivity to CO2. A slowdown in the winds can cause sea surface temperatures to rise. The redistribution of heat concentrated within a warm pool can cause detectable warming elsewhere without any additional accumulation of heat, and without any noticeable cooling within the region of stored heat. So your question is a good one in essence, but better put as “what does a change in sea surface temperatures tell us about ocean sensitivity to CO2,” because often the change is just weather, not climate.
The upper 300 meters are a better indicator of climate trends, and according to Xue 2010 (A Comparative Analysis of Upper-Ocean Heat Content Variability from an Ensemble of Operational Ocean Reanalyses), there has been no warming, or a slight cooling, since 2003 in the upper 300 meters.

Reply to  jim Steele
January 25, 2015 11:25 pm

The issue is an imbalance between incoming energy from the sun and outgoing energy from the Earth. The correct metric for this is enthalpy. Temperature is not a measure of enthalpy.

Mike M.
Reply to  jim Steele
January 26, 2015 8:53 am

Jim Steele writes “sea surface temperatures but putting those changes into a proper perspective”. That is what I am trying to do; I think you are the one with the mistaken perspective.
“A slow down in the winds can cause sea surface temperatures to rise.” No, they can’t. Sea surface temperature is normally measured at a depth of 3 meters and is representative of the mixed layer, which has an average depth at least an order of magnitude greater. As you point out yourself, that volume of water has a heat capacity that is large compared to the heat capacity of the atmosphere.
Because of the large thermal inertia of the oceans, ocean temperatures pertain more to climate than weather. It is the response of the ocean to CO2 that ultimately matters.

Mike M.
Reply to  jim Steele
January 26, 2015 8:57 am

The Pompous Git is correct that “The issue is an imbalance between incoming energy from the sun and outgoing energy from the Earth. The correct metric for this is enthalpy. Temperature is not a measure of enthalpy.”
But one can not measure enthalpy, one can only measure changes in enthalpy. One normally measures changes in enthalpy by measuring changes in temperature. So the statement “Temperature is not a measure of enthalpy” is misleading.

Reply to  jim Steele
January 26, 2015 10:24 am

Mike wrote “Because of the large thermal inertia of the oceans, ocean temperatures pertain more to climate than weather. It is the response of the ocean to CO2 that ultimately matters.”
Mike, you are engaging in a bit of bait and switch! Or revealing your own lack of perspective.
Ocean surface temperatures are a different ball game than ocean heat content. Indeed, the oceans are the real critical piece when analyzing accumulated heat. And to understand whether heat that is ventilating from the oceans was stored this year, last decade, or during the Medieval Warm Period requires a whole lot more observations and an improved understanding than what we currently have. Read Deep Oceans Are Cooling Amidst A Sea of Modeling Uncertainty: New Research on Ocean Heat Content http://landscapesandcycles.net/cooling-deep-oceans.html
On the other hand, surface temperatures are greatly influenced by weather, wind speeds, El Niños and upwelling. You say the response to CO2 is what ultimately matters? Really? Have you ever considered the sun and clouds? El Niños and La Niñas? The tropical oceans receive more heat from the sun than temperatures suggest, because much of that heat is distributed poleward, determining extra-tropical climate. The IPCC’s Scientific Basis admits that added CO2 in the tropics has little significance due to deep convection and the extreme moisture content. (That’s why the focus is on the Arctic.) Slightly higher periods of solar irradiance are correlated with periods when the tropical Pacific is in a more La Niña-like state. During La Niñas there are fewer clouds in the tropical eastern Pacific and greater insolation. Fewer clouds can allow an increase of as much as 200 W/m2.
Mike, I suggest you broaden your perspective.

Reply to  jim Steele
January 26, 2015 10:30 am

@ Mike M.
So does a cubic metre of water at 0°C that is all liquid have the same heat content as a cubic metre of water at 0°C that is mostly solid? Does a cubic metre of air at 0% humidity have the same heat content as a cubic metre of air at 100% humidity? Does a cubic metre of stationary air have the same energy content as a cubic metre of air moving at 100 km/hr? Since energy content is dependent on volume, where is the dependency of temperature on volume? Does adding a litre of water at 50°C to a second litre of water at 50°C give us two litres of water at 100°C? Not in the physics I was taught back in the late 1960s.

Mike M.
Reply to  jim Steele
January 26, 2015 11:14 am

The Pompous Git asks “So does a cubic metre of water at 0°C that is all liquid have the same heat content as a cubic metre of water at 0°C that is mostly solid? ” etc.
Of course not. And how do you demonstrate that fact? Hint: It involves a thermometer.
To say that “Temperature is not a measure of enthalpy” is technically true, as I stated earlier. But to say that as an isolated statement is misleading, since it would seem to ignore the fundamental relationship between enthalpy and temperature.

Reply to  jim Steele
January 26, 2015 11:58 am

Mike M:
CO2 has no effect on SST because water is opaque to far IR. See the absorption spectrum of water, with particular attention to the attenuation at 15 microns, the wavelength emitted by CO2. The data show that this wavelength is absorbed within 3 microns of the surface. All IR emitted by CO2 that is incident on water is converted into latent heat within a few seconds.

Reply to  jim Steele
January 26, 2015 12:55 pm

@ Mike M.
Explain to me then why I should accept temperature change as a proxy for change in energy content.

DD More
Reply to  Mike M.
January 26, 2015 11:51 am

Mike asks “So is there some good reason I should not trust the sea surface temperatures?”
In a recent posting involving sea temperatures I found adjustments galore. I went looking for accuracy, and just what they are able to measure, after Bob’s last post, and had a real awakening. It seems that measuring ‘sea surface’ temperature has problems at every stage: the original bucket and thermometer (no depth control), ship intakes (well below the surface), buoys (which rock in the waves, with depth resolution of a metre), then IR satellites (cannot see through clouds) and microwave satellites (see through clouds, but not rain and surface mist). And did I mention that one of the satellites was doing reasonably well until its altitude had to be boosted, after which there were problems with pitch, yaw and just what the height was? The number of adjustments needed to correct all this is staggering. They include (but are not limited to): wind speed, rain, cloud amount/percent and cloud water vapor, daytime diurnal warming, high latitudes, aerosols, SSTs under 10 °C, columnar water vapor, a slight warm bias at higher latitudes, the seasonal cycle of wind direction for SST retrieval, fast-moving storms and fronts, wind direction error and instrument degradation.
Still their abstract reads –
Errors were identified in both the MW and IR SST data sets: (1) at low atmospheric water vapor a post hoc correction added to AMSR-E was incorrectly applied and (2) there is significant cloud contamination of nighttime MODIS retrievals at SST < 10 °C. A correction is suggested for AMSR-E SSTs that will remove the vapor dependency. For MODIS, once the cloud-contaminated data were excluded, errors were reduced but not eliminated. Biases were found to be −0.05 °C and −0.13 °C and standard deviations to be 0.48 °C and 0.58 °C for AMSR-E and MODIS, respectively. Using a three-way error analysis, individual standard deviations were determined to be 0.20 °C (in situ), 0.28 °C (AMSR-E), and 0.38 °C (MODIS).

Now put that in relation to historical data, where they use the 1872 to 1876 voyage of HMS Challenger: a 69,000-nautical-mile track crossing the Atlantic, Indian and Pacific oceans, with 300 ocean-temperature profiles. What a base to compare to.
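The “three-way error analysis” in that abstract is the standard triple-collocation technique: with three independent measurements of the same underlying field, the pairwise covariances are enough to solve for each system’s individual error variance. A minimal sketch on synthetic data (the noise levels are chosen to mimic the paper’s 0.20/0.28/0.38 °C figures; everything else is invented for illustration):

```python
# Triple collocation: recover each measurement system's error std dev
# from pairwise covariances of three independent observations of the
# same truth (think in situ, AMSR-E, MODIS).
import random

random.seed(0)
n = 100_000
truth = [random.gauss(20.0, 1.0) for _ in range(n)]   # synthetic SST field
true_sd = (0.20, 0.28, 0.38)                          # assumed error levels
obs = [[t + random.gauss(0.0, s) for t in truth] for s in true_sd]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

x1, x2, x3 = obs
# Error variance of system i = Var(xi) minus the product of its two
# cross-covariances divided by the covariance of the other pair.
e1 = (cov(x1, x1) - cov(x1, x2) * cov(x1, x3) / cov(x2, x3)) ** 0.5
e2 = (cov(x2, x2) - cov(x1, x2) * cov(x2, x3) / cov(x1, x3)) ** 0.5
e3 = (cov(x3, x3) - cov(x1, x3) * cov(x2, x3) / cov(x1, x2)) ** 0.5
print(f"recovered error std devs: {e1:.2f}, {e2:.2f}, {e3:.2f}")
```

Run on real collocated retrievals, the same algebra yields per-instrument standard deviations without any one instrument having to be treated as truth.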

Mike M.
Reply to  DD More
January 26, 2015 1:43 pm

Thanks, DD More, looks interesting.

January 25, 2015 7:56 pm

It’s not warming
there was no Little Ice Age
the climate doesn’t change

Reply to  Steven Mosher
January 25, 2015 8:57 pm

Who is arguing there is no warming, or that climate doesn’t change? That is a meaningless comment. What does the impending blizzard in New England tell us about sensitivity to CO2? What does the record high in Death Valley in 1913 tell us about sensitivity to CO2? I see no one answering the question.
There is no doubt climate has warmed since the LIA. But again, what does that tell us about sensitivity to CO2 versus other climate dynamics? Johnstone and Mantua 2014 explained regional warming via the Pacific Decadal Oscillation, and every paper I have read suggests that models with rising CO2 have no skill simulating the PDO. Furthermore, the changes in SST in that paper agree with most of California’s raw data but conflict with homogenized data. What does that tell us?

Reply to  jim Steele
January 25, 2015 11:18 pm

Thank you Jim Steele, I’ve been reading here for some time, and I find your logic unassailable in general. I am not sure what’s up with Mosher, but I suppose that’s the way he likes it. Don’t let him distract you 😉

Mike M.
Reply to  jim Steele
January 26, 2015 9:04 am

Jim Steele writes: “What does the impending blizzard in New England tell us about sensitivity to CO2? What does the record high in Death Valley in 1913 tell us about sensitivity to CO2? I see no one answering the question.”
OK, I’ll answer those questions: Nothing. Those are weather events.
So far as I know, Steele is correct that the models have no skill at simulating PDO (or AMO, for that matter). That tells us that even if the models have climate sensitivity right (unlikely IMO), they won’t give reliable results over short time periods, such as the 27 year period used by IPCC to test the models or the nearly 20 years of the pause.

Reply to  Steven Mosher
January 25, 2015 11:31 pm

Having fun yet?

Reply to  The Pompous Git
January 25, 2015 11:31 pm

Directed at Mosh…

January 25, 2015 8:12 pm

“So how much of the recent warming trend is due to these systematic artifacts???”
AFAIK, none. SNOTEL is a USDA system. Do you know of any climate index that uses SNOTEL data?

george e. smith
January 25, 2015 8:43 pm

Well, the ideal gas law applies to ideal gases. We don’t have any ideal gases, and especially we don’t have any in our atmosphere, so the ideal gas law wouldn’t do much good. And the ideal gas law contains a volume; the atmosphere doesn’t have any specific volume. So just what are you going to calculate with the ideal gas law?
The ideal gas law is a law relating to thermal energy (heat, the noun). The “greenhouse effect” is a consequence of electromagnetic radiation energy.

Reply to  george e. smith
January 25, 2015 11:18 pm

And it’s a trivial matter to consult MODTRAN: http://www.modtran5.com/

January 25, 2015 8:48 pm

11. Much of the current warm anomaly is due to the artificial rise in Siberian temps. after the fall of the USSR, which had rationed fuel supplies preferentially to districts that reported the lowest temps.

Reply to  rogerknights
January 25, 2015 9:12 pm

And despite northern Eurasia having one of the lowest densities of weather stations, it has one of the highest anomalies, which calls into question homogenization’s evil twin, “infilling.” Is it a coincidence that the eastern USA has the greatest density of weather stations and the least warming, even cooling? Here is a map of GHCN station density. The warmest regions typically have the least coverage.
[Link missing? .mod]

Reply to  jim Steele
January 25, 2015 9:27 pm

Here is the link of GHCN stations

Reply to  jim Steele
January 25, 2015 11:30 pm

Infilling is what got Australian meteorologist Warwick Hughes started. He asked Phil Jones how the centre of the “hotspot” was hotter than the surrounding stations used to infill the temperature in the hotspot where there were no stations.

Reply to  jim Steele
January 26, 2015 1:17 am

Fundamentally the problem is this: these groups have built a *model of temperature history*. It doesn’t matter if countless examples of the corruption of pristine station data are shown to them; any one of these examples should be sufficient to debunk the model. Yet these refutations are always hand-waved away, treated as anomalies of little impact on the record overall. Why? Because their model shows a great deal of internal consistency. Of course it does. That’s what you get when you build a model that is designed to be *consistent*. What other result could you expect? This is where, I suspect, Mosher does the greatest job of fooling himself into thinking he has done something smart.
Now of course, the model is not presented as a model of temperature history, but as THE temperature history. A particular station, due to the unique morphology of the region and its microclimate, might show a cooling trend because of, say, a change in circulation patterns in that region for 20 years. For the model, that natural phenomenon becomes an aberration to be smoothed out of existence. It doesn’t matter if that is what really happened over that time period in that region, because weather and short-term climate can be messy and can operate in cyclical patterns. If that’s not what the model assumes, all that information is lost. That is why the historical record is changing so dramatically. (The notion that a temperature measurement today alters a record from a decade ago is so utterly absurd it defies belief. Scientists working in other fields, and engineers such as myself, are left speechless. The geniuses who have designed this monstrosity can’t defend it either, and don’t attempt to; they just say, “that’s the way it’s done.” These people shouldn’t be taken seriously, not until they clean up their methods.)
Their models are not just ‘fixing’ discontinuities due to measurement error, but erasing what happened and replacing it with what the model says *should have* happened.
In the field of psychology, where I was trained, messy data is a fact of life. And the best way to spot fraudulent data was to be presented with rather consistent measurements. Because everybody knew that was not what real world data ever looked like. You didn’t try to “fix” the data set, you used it or you didn’t use it. As soon as you tried to “fix” the data in that way (and I mean by this, substantially altering) you no longer had the original data but had inserted all your assumptions – the very thing you were seeking – into the data itself. Little wonder they found what they were expecting to find.

Reply to  jim Steele
January 26, 2015 6:14 am

Just add stations and increase coverage. This is the result:
The exact opposite of what you think.
Or try this:
with this result
(both unadjusted)

Reply to  jim Steele
January 26, 2015 7:10 am

Will Nitschke:
Mosher, in a comment here some months ago, reported that “new data” could change past data. He ended his comment with “read ’em and weep.”
I consider Mosher as a hopeless case.

January 25, 2015 11:48 pm

The Git learnt his basic climatology from TR Oke’s Boundary Layer Climates (among other texts). Oke has not been updated to reflect CAGW. Nor, to the best of The Git’s knowledge, is there a tertiary-level text comparable to Oke. Until a warmist directs me to such, The Git will continue to accept The Received View, i.e. climatology as it is taught at the tertiary level.

Mike the Morlock
Reply to  The Pompous Git
January 26, 2015 12:42 am

Hi, thanks for your reply on the other thread. Now, ISO-9000, heh heh… I was an internal auditor for many years where I work. If you don’t comply you can’t export to Europe. If an area failed an internal audit, the area supervisor would be dinged on his review. Some of these people had been my supervisors in the past. Guess the rest.
I would show up dressed in a black leather trench coat, black leather finger gloves and a real-life black Soviet ushanka, complete with the CCCP emblem. It was best on new hires: I would greet them in Russian and then say, “I trust everything is in order?” My “normal” partner would ask, “Is something wrong?” It was a BLAST.
still laughing

Mike the Morlock
Reply to  Mike the Morlock
January 26, 2015 12:43 am

oops for The Pompous Git

Reply to  Mike the Morlock
January 26, 2015 2:02 am


January 26, 2015 1:34 am

“Or does it tell us more about climate politics and data manipulation?” ~ from the essay
I think your response to your friend over at the EPA was very good. There is so much more you could have gone into, but one needs to keep it short and to the point to have any chance of changing a mind (if that is really possible at this point).
The politics of the AGW delusion is interesting. So many of the main players are getting well paid in many ways to keep finding that mankind’s use of energy (our industrial society in other words) is the main cause of the “coming runaway heat wave”. We can no longer expect any data set that can be tampered with to remain untampered. That is just the way it is.
As more and more CO2 is added to the atmosphere and temperatures do not rise in sync, perhaps someday in the far future people will see that the “climate sensitivity” to CO2 is zero (or so close to zero that we cannot honestly measure it). Perhaps when the IPCC releases its 100th assessment we will finally see that the Jim Hansen theory of CO2 driving climate is utter cow droppings. (The current players will have to die off, as they will never admit that they were deluded.)
“Consider that NOAA’s graph is based on homogenized data …” I would ask you to consider that the whole delusion is based on homogenized, infilled, and false data. Mencken once wrote that “for every complex problem there is an answer that is clear, simple, and wrong,” and I believe the alarmists have found it!

The other Casper
Reply to  markstoval
January 26, 2015 6:16 am

“perhaps someday in the far future people will see that the “climate sensitivity” to CO2 is zero. (or so close to zero that we can not honestly measure it)”
Or clearly above zero, but moderate enough that we’ll be able to adapt to it, just as humans have been adapting to climate changes for thousands of years already.
And that adaptation will be a whole lot easier if we continue to create wealth now.

Reply to  The other Casper
January 26, 2015 7:37 am

CS is indeterminable

January 26, 2015 2:06 am

The first question I would ask is, “Is it possible to measure the temperature of the earth with any accuracy?” The temperature changes every few feet in the atmosphere. When the wind blows, the temperature changes. The oceans are miles deep. How many temperature data points would be necessary to accurately take the temperature of a dynamic planet?
Point 2: we are talking about hundredths of a degree. What is the plus-or-minus room for instrument error?
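As a back-of-envelope answer to the first question, the usual sigma-over-root-N rule gives the number of independent readings needed to pin a simple mean down to a given standard error. A sketch under stated assumptions (the 2 °C scatter is an assumed figure, and real stations are spatially correlated, so the effective N is far smaller than the raw station count):

```python
# Independent readings needed so the standard error of a simple mean,
# sigma / sqrt(N), falls below a target precision.
import math

sigma = 2.0  # degC, assumed scatter of individual anomaly readings
for target in (0.1, 0.05, 0.01):
    n = math.ceil((sigma / target) ** 2)
    print(f"SE <= {target:.2f} degC needs N >= {n} independent readings")
```

Hundredths of a degree from a 2 °C scatter would need tens of thousands of genuinely independent readings, which is exactly why the instrument-error and correlation questions matter.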

Mindert Eiting
January 26, 2015 2:08 am

There are several procedures for obtaining a time series of global average temperatures, and several databases. Run your program on your database and print the time series. If you have the source code, ask a programmer to substitute, for every procedure that computes a mean, a procedure that computes a median. Test the program, rerun it on the database, and print the result. Let me know whether you are as surprised as I was when I did this. BTW, medians are less sensitive to outliers than means.
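For readers without the index source code, the spirit of the experiment is easy to mimic on synthetic data. A minimal sketch, where the 2% contamination rate and the +5 °C bias are arbitrary illustrative choices, not properties of any real network:

```python
# Mean vs median of simulated station anomalies: a few contaminated
# stations drag the mean but barely move the median.
import random
import statistics

random.seed(42)
stations = [random.gauss(0.3, 0.5) for _ in range(1000)]  # true anomaly 0.3 degC
for i in range(0, 1000, 50):   # contaminate 2% of stations with a +5 degC bias
    stations[i] += 5.0

mean_t = statistics.mean(stations)
median_t = statistics.median(stations)
print(f"mean:   {mean_t:+.3f} degC")
print(f"median: {median_t:+.3f} degC")
# The 20 biased stations shift the mean by ~0.1 degC (20 * 5 / 1000),
# while the median stays close to the true 0.3 degC.
```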

January 26, 2015 2:25 am

I have often wondered at the wisdom of averaging the temperature of the tropical and subtropical zones, which receive 73% of the total solar insolation and where clouds reduce temperature, with that of the polar regions, which receive only 6% of the solar insolation and where clouds increase temperature.
On what planet would that make sense? A pancake planet that doesn’t flip? Warmers probably read too much Paul Bunyan growing up.

January 26, 2015 5:17 am

Many wrongs are no guarantee of being right. This post is an example.
#1. Polar amplification happens during global warming, including when causes other than increased CO2 dominate.
The temperature of the Arctic tracks the temperature of the Northern Hemisphere, but with more swing: more cooling from 1940 to 1970, and more warming than the global figure after that. Polar amplification works both up and down.

January 26, 2015 5:37 am

#2 A presentation (not an article) with a very dubious claim:
“In 2/3 of the stations examined the homogenization procedure increased positive temperature trends, decreased negative trends or changed negative trends to positive. Global temperature increase (from the examined series): raw data 0.42 °C, adjusted data 0.76 °C. The expected proportion would be 1/2.”
They think they actually know the right proportion. They cannot know that a priori; it is an empirical issue. And as they have taken a sample with a large portion of stations from the USA, we know there are, for example, TOBS issues that must be accounted for at those stations.
And since they have taken a subsample that is not representative, their conclusion of 0.42 vs 0.76 is just wrong.
But does Steele really agree with these authors?
“Homogenization is necessary to remove errors introduced in climatic time series”
The aim must be to use the best homogenization methods.

January 26, 2015 5:43 am

#4 Already covered.
That network is not used by the temperature indexes except BEST (as Zeke Hausfather said in another thread here).
And the irony is, of course, that BEST removes that inhomogeneity. Another irony is that they found the bias by comparing their network to the measurements that are used by the other indexes; SNOTEL had the higher trend.
Steele did not know that.

Walt D.
January 26, 2015 5:44 am

The global warming community were anticipating a large El Niño year. That way they could piggy-back on the rise in temperature and claim that part of it was due to increased levels of CO2, which would have allowed them to inflate the climate sensitivity. So when it did not materialize, they were stuck.
Next time there is a large El Nino year, the global warming alarmists are going to attribute all the rise in temperature to CO2 and claim the CO2 caused the El Nino, so their models were right all along.

January 26, 2015 6:05 am

The point being that if there are regional patterns of temperature change that are more influenced by regional conditions, then AGW and global change must be viewed differently?
Another issue is that the results from Johnstone & Mantua depend on the choice of SLP data. The certainty of their conclusion is thus overstated.

January 26, 2015 6:08 am

#9 RSS and UAH do not have 2014 as the warmest year.
Of course not; 2014 was not a Niño year.
HadCRUT is in, by the way: a tie with 2010. That means Cowtan & Way will probably have 2014 as the second warmest. Go for that.

January 26, 2015 6:14 am

#5 – No details given here, and an anecdote at best, but I suspect it’s due to time of observation shifts. US HCN adjustments, and comparisons with other data sets, are well documented. See links here, for example: http://rankexploits.com/musings/2012/a-surprising-validation-of-ushcn-adjustments/

Reply to  Barry
January 26, 2015 10:03 am

LOL Barry, you are the king of replies that are simple anecdotes from your favorite blogs.
All the data in my graphs were downloaded from the USHCN website, and I can show many others with similarly rapidly changing trends over the past 4 years, as I did for Death Valley: http://wattsupwiththat.com/2015/01/07/peter-miesler-helps-expose-ushcn-homogenization-insanity-and-antarctic-illusions/
I have downloaded many many USHCN data files over the years and I suspect many others have as well. Perhaps a congressional hearing could examine all our files and compare it with the USHCN’s ever rising warming trends, and then have the USHCN explain to the public about the rapid change in the land of “virtual climate”. Why do you cling so desperately to those fabricated trends Barry?
Indeed, the only reported change at Cuyamaca was a TOBS change in the early 60s. At worst that change would shift a few monthly peaks, but not the annual trend. I would love to hear your explanation of how a TOBS adjustment from the 1960s, which has been in place for over a decade, warrants the change in homogenized trends between 2011 and 2015.
Barry, you just spam the thread with whatever junk you can link to, hoping something will stick, all the while never answering the question: “What does the ‘warmest’ year tell us about climate sensitivity to CO2?”
Based on your multiple misdirections, I think you know the answer is precious little.

Zeke Hausfather
Reply to  Barry
January 26, 2015 6:08 pm

Interestingly enough, Berkeley’s homogenization process reduces the trend at Cuyamaca: http://berkeleyearth.lbl.gov/stations/28256
Regarding the need for homogenization (and the approach taken), I’d direct readers to this post:
If you for some reason decide that the raw data is more reliable than the homogenized or UHI-corrected datasets, you get nearly the same result globally for land. Also, if you really care about global temperature records, oceans are generally the dominant component, so all this discussion of land station homogenization is somewhat off the mark.

Reply to  Zeke Hausfather
January 26, 2015 7:21 pm

There is a definite need for quality control of the raw data, but USHCN homogenization goes beyond that. It assumes unexpected trends must be altered, rather than expanding our understanding of local climate variability. Homogenization depends on an expected trend in order for an algorithm to detect an “undocumented change”; therein lies the problem, allowing many biases to creep in. Stations without a location change or instrumentation change should serve as constraints on the homogenization process, but instead the highest-quality stations get altered to fit trends affected by landscape and population effects.
Serial homogenization that increases the warming trend year by year, by lowering earlier temperatures, is a testimony to how easily a trend can be altered for no valid reason. Menne and Vose acknowledge the adjustments aren’t concerned with the actual temperatures, so warming peaks in the 30s, 40s, and 50s are lowered by several degrees, and a trend that would originally have correlated with changes in sea surface temperature, due to natural ocean oscillations, is metamorphosed more and more each year into a trend that looks like rising CO2. That arouses suspicion.

January 26, 2015 6:27 am

# 8 – The values given, and study cited, are annual maximum temperatures. What about minimum temperatures? Wouldn’t savannas cool a lot more than forests at night? Would these effects average out? Seems like a strange oversight for a landscape ecologist to make.

Reply to  Barry
January 26, 2015 9:27 am

Barry, actually you raise an extremely good point: what about minimums?
I argue, as have several climate scientists, that the maximum is a better indicator of accumulated heat and climate change (with the caveat that spurious extremes during drought conditions will raise the maximum by lowering heat capacity, as well as via a temporary increase in insolation).
Let me repeat the crazy statistical inference of the week:
“We can measure a higher daily average temperature while accumulating less heat due to the biases created by minimum temperatures”
Here is a thought question regarding heat accumulation and the inappropriate metric created by averaging the max and min. You have two pots with equal volumes of water. Pot A rests at 10 °C and Pot B rests at 30 °C. Both are heated by unknown amounts, but at the end of the day both pots measure 50 °C.
1. Which pot accumulated the most heat?
2. Which experienced the highest average temperature?
Answer below, but don’t peek until you have thought it through for yourself. Hint: use Q = mcΔT. Q is heat, and because both pots have the same mass and specific heat, the comparison simplifies to Q ∝ ΔT.
The answers:
1. Pot A: ΔT = 40; Pot B: ΔT = 20. Thus, if maximum temperatures remain the same, the pot with the higher minimum accumulated less heat.
2. Average temp: Pot A (10+50)/2 = 30; Pot B (30+50)/2 = 40.
Karl wrote showing that as populations increased, so did minimum temperatures. During the rapid rise in recorded temperatures, minimums outpaced maximums by 2 to 3 times. By averaging in the minimums, the global average metric further distorts our understanding of climate sensitivity to CO2 by aliasing landscape and population effects.
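The two-pot exercise above can be checked in a few lines; the specific heat of water and the equal 1 kg masses are the only assumed constants:

```python
# Heat accumulated vs min/max average temperature for the two pots:
# equal masses of water, different starting points, same final 50 degC.
C_WATER = 4186.0  # J/(kg K), specific heat of liquid water
MASS = 1.0        # kg each; equal, so it cancels out of the comparison

def heat_in(t_start, t_end):
    """Q = m c deltaT, the heat absorbed warming from t_start to t_end."""
    return MASS * C_WATER * (t_end - t_start)

q_a = heat_in(10.0, 50.0)    # Pot A: 10 -> 50 degC
q_b = heat_in(30.0, 50.0)    # Pot B: 30 -> 50 degC
avg_a = (10.0 + 50.0) / 2    # min/max average, as used in the thread
avg_b = (30.0 + 50.0) / 2

print(f"Pot A: Q = {q_a / 1e3:.1f} kJ, min/max average = {avg_a} degC")
print(f"Pot B: Q = {q_b / 1e3:.1f} kJ, min/max average = {avg_b} degC")
# Pot A absorbed twice the heat yet shows the lower average temperature.
```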

Reply to  jim Steele
January 26, 2015 10:18 am

Jim, thanks for the long-winded explanation of basic math. Since we all know temperature fluctuates daily and seasonally, the real question is what are the average annual temperatures of savannas vs. forests, and how much have they changed? You have not addressed that at all, but rather provide a bunch of misdirections.

Reply to  jim Steele
January 26, 2015 11:31 am

You would profit by considering more carefully what Steele puts forth for your benefit. Don’t forget, there is a 62% chance that 2014 was not a record year, according to GISS (Schmidt).

Reply to  jim Steele
January 26, 2015 12:23 pm

You too do a little bait and switch. You are so transparent.
You ask about the minimum, then switch back to the average. The real question is what caused the change in maximums and minimums. Only then will the “how much” question add to our understanding of climate sensitivity to CO2 vs. landscape changes and natural cycles.
Regarding your averaging-out question, I can see you have never systematically examined microclimates. Using a Raytek MiniTemp infrared thermometer, I measured a variety of surfaces in a variety of settings during the summer. At midday, pavement would be 10 to 20 degrees hotter than gravel and sandy areas, depending on shade. Areas dominated by grass were ~10 degrees cooler than the gravel areas, and shrub and forested areas were another 10 degrees cooler. When I repeated those measurements just before dawn, the relative temperature differences held, but with a slight contraction in range: instead of 10 degrees, the differences were 5 to 8 degrees.

Ulric Lyons
January 26, 2015 6:28 am

“CO2 advocates suggest CO2 leads to “Arctic Amplification” arguing dark open oceans absorb more heat.”
I am suspicious of that because of the particularly strong rebounds in sea ice extent immediately following summers with much reduced ice:

Ulric Lyons
Reply to  Ulric Lyons
January 26, 2015 6:44 am

In fact the consensus of IPCC models is that increased GHG forcing of the climate will increase positive NAO/AO. That will reduce warm ocean transport into the Arctic, and strengthen the polar vortex. Such that Arctic Amplification is negative and not positive, and that a decline in forcing of the climate is needed to account for the accelerated forcing of the AMO and Arctic since 1995.

Ulric Lyons
Reply to  Ulric Lyons
January 26, 2015 7:06 am

For example the decline in solar plasma pressure/density since the mid 1990’s:

Ulric Lyons
January 26, 2015 6:46 am

typo: accelerated *warming* of the AMO and Arctic since 1995.

January 26, 2015 8:22 am

Thanks, Dr. Steele. Yes, this data has been tortured and made to confess.
I rather view 2014 as:

Reply to  rooter
January 26, 2015 2:04 pm

You love to just throw away data (compression) and draw cherry-picked trend lines. Which way is UAH heading now?

David Socrates
Reply to  rooter
January 26, 2015 2:12 pm

Anybody can “play” your game Bart…
Monckton has a problem?

Reply to  rooter
January 26, 2015 2:19 pm
Reply to  rooter
January 26, 2015 5:12 pm

Wow. Whoosh!

January 26, 2015 9:41 am

Regarding sea surface temperatures: the Johnstone and Mantua paper had this graph showing sea surface temperatures (SST, red), air temperatures and pressure for the northeast Pacific. What is striking is how closely the raw data from so many USHCN California weather stations match the SST, and how badly the homogenized data correlate with it.

Reply to  jim Steele
January 26, 2015 3:14 pm

Striking how the raw data have a close match with SST.
One TINY problem for Steele here: what kind of data did they use?
“SATs around the NE Pacific margin were investigated with monthly station data from the US Historical Climate Network, version 2 (USHCNv2) (49) and the Global Historical Climate Network, version 3 (GHCNv3) (50), using adjusted versions of both datasets.”
Steele managed to show the exact opposite of what he claimed to have shown. It is the adjusted/homogenized data that have a close match with SST.

Reply to  rooter
January 26, 2015 8:38 pm

Rooter, that’s an interesting point and I appreciate your sincere attempts to ensure the most honest discussion. But you seem a tad bent on denigrating me, which caused you to overlook a TINY problem that I have been discussing.
The key issue is which adjusted set they used. The paper was received in 2013, so they likely used adjustments from 2012 or perhaps earlier. For topics I researched for my book I downloaded data in 2010, but because the research was far from simple, my final analysis was not published until 2013. As I illustrated with Cuyamaca, the homogenization process keeps changing the trend month to month, year to year. If they used earlier adjustments, as I had from 2010, then the Cuyamaca trend aligns well with the raw data and their sea surface temperature trends (see the graphs from point #5). If they used the 2011 adjustment, then the earlier peak begins to drop, but only slightly. The 2015 adjustments, which likely came after the paper was published, bear no resemblance to the trend in sea surface temperatures.
So I thank you, rooter, for illustrating why this constant serial homogenization causes nothing but confusion. I’d bet we see a paper disagreeing with Johnstone and Mantua, arguing there is no correlation with adjusted temperatures, based on the more recently adjusted data set. I forgive you your harsh words, because this homogenization insanity is not readily comprehensible to anyone.

Reply to  rooter
January 26, 2015 11:41 pm

Steele: You did not discuss different versions of homogenized data. Your claim was that the non-homogenized data had a close match with SST and the homogenized data did not. That was very wrong, and suggests that you did not check the facts before making your claim.
They explicitly state what versions of the adjusted data they used: USHCNv2 and GHCNv3.
If you want to discuss and improve homogenized data, that is excellent. There are guaranteed to be better and worse ways of doing homogenization, and biases can be introduced by it. One example might be stations in the Arctic that have been adjusted down because their warming exceeds that of stations further south. For the east coast of America the situation is different, because of the smaller distances between stations and their greater number.

Reply to  rooter
January 27, 2015 8:20 am

Rooter says, “Steele: You did not discuss different versions of homogenized data. Your claim was that the non-homogenized data had a close match with SST and homogenized data did not. That was very wrong and suggests that you did not check facts before you made your claim.”
You are hoist by your own petard. For a person who has perused each of my points looking for nits to pick, go back and read point 5 one more time, and read my reply above.

Reply to  rooter
January 27, 2015 2:14 pm

“Hoist by your own petard.” Like showing a plot with a close match between SST and SAT and saying that this shows the wrongness of homogenized temperature series, without knowing the match was between SST and a homogenized series.

January 26, 2015 10:29 am

#7 – Numerous studies have found urban heat island effects to have minimal impact on regional temperature trends. While heat island effects can be substantial, they affect very small areas. Also, in some cases (such as in arid regions), urban environments are actually cooler than surrounding rural areas because of green spaces and irrigation.

Reply to  Barry
January 26, 2015 11:12 am

Like Jones et al 1990? How stupid do you think we are?

Reply to  Barry
January 26, 2015 12:49 pm

That’s true, Barry, but far more studies have found very significant warming due to landscape changes and urbanization over broad areas. Although a desert town that adds a water fountain, or an arid area that is irrigated, will have a cooling effect, overall the overwhelming effect of urbanization and landscape change has been warming.
The point is well made by Dr. Eugenia Kalnay, University of Maryland: “influences on climate are the emission of greenhouse gases and changes in land use, such as urbanization and agriculture. But it has been difficult to separate these two influences because both tend to increase the daily mean surface temperature.”
Has there been a rise in global population that could create a rise in temperature via increased land use?
Dr. Xuchao Yang, China Meteorological Administration, wrote: “The contribution of urbanization and other land uses to overall regional warming is determined to be 24.22%.”
Dr. Young Kwon Lim, Florida State University, wrote: “Warming over barren areas is larger than most other land types. Urban areas show a large warming second only to barren areas.”

Reply to  jim Steele
January 26, 2015 2:52 pm

24.22%? That’s some amazing precision. All of the papers I have read (including Parker 2006, Jones et al. 2008) contradict these quotes, and NASA and NOAA correct for heat island effects before reporting trends. Besides, a lot of global warming is happening in the Arctic, where last I checked there were not a lot of large urban areas.

Zeke Hausfather
Reply to  jim Steele
January 26, 2015 6:12 pm

Regarding UHI in the U.S. data:
While there is a sizable signal in the raw data (particularly minimum temperatures), it’s mostly eliminated in the homogenized data even if only rural stations are used for breakpoint detection.

Reply to  jim Steele
January 26, 2015 7:03 pm

Zeke says, “it’s mostly eliminated in the homogenized data even if only rural stations are used for breakpoint detection.”
I am not sure how you can claim it is mostly eliminated. Most of the papers I have read use a very static categorization of rural vs urban. But a growing rural area increases impervious surfaces, removes vegetation, alters winds and adds waste heat.
For example, in 1967 Columbia, Maryland was a newly established, planned community designed to end racial and social segregation. Climate researchers following the city’s development found that over a period of just three years, a heat island of up to 8.1°F appeared as the land filled with 10,000 residents. Although Columbia would be classified as a rural town, that small population raised temperatures five times greater than a century’s worth of global warming.
Furthermore microclimate issues remain whether or not the station is in a rural or urban area. Simply moving a station closer to a building can raise temperatures by a degree or more.

Reply to  jim Steele
January 27, 2015 2:00 am

Steele says:
“Although Columbia would be classified as a rural town, that small population raised temperatures five times greater than a century’s worth of global warming.
Furthermore microclimate issues remain whether or not the station is in a rural or urban area. Simply moving a station closer to a building can raise temperatures by a degree or more.”
In my view, these are arguments for homogenization. Also for stations classified as rural. Remove non-climatic bias. Which includes microsite issues and UHI.

Reply to  Barry
January 26, 2015 6:53 pm

Barry says, “Besides, a lot of global warming is happening in the Arctic, where last I checked there were not a lot of large urban areas.”
Well your snarky comments confirm you are just sniping and trying to spam the thread with nonsense criticisms. I listed many points that affect the global average chimera, and covered the Arctic warming due to ventilating heat.

Reply to  jim Steele
January 27, 2015 1:35 am

Which would appear to be confirmed by Wunsch and Heimbach’s recent paper showing that the upper Arctic Ocean is cooling.

January 26, 2015 2:57 pm

#6 – I agree with this one, for the most part. Of course decadal warming trends have a lot to do with redistribution of heat with the oceans. But the issue is century-scale warming trends, in both the oceans and lower atmosphere, as we’ve observed. So this is yet another red herring by Mr. Steele.

Reply to  Barry
January 26, 2015 4:10 pm

Barry, I must say I appreciate your devotion to my thread, but I wish you would indulge in a little more substance. Are you also calling Johnstone and Mantua’s analysis a red herring? Just what is your scientific background? I am starting to get the impression you are a professional internet sniper.
If you combine the synergy between increased solar activity and the ocean oscillations, we see that there is a century-scale trend with similar peaks in the 1940s and 2000, and that cyclical response over the century helps us understand how much of the recent climate change is due to natural cycles.

Reply to  Barry
January 26, 2015 6:03 pm

Barry, I have to admit you aren’t making any sense. I agree with Mr. Steele’s comment; you aren’t showing much understanding of the points he’s making. So far your comments don’t add much to my understanding of the issue, and if you are trying to convince us that Jim Steele doesn’t know what he’s talking about, your comments honestly do the opposite. I have to admit I may be a bit biased as to Mr. Steele’s knowledge and expertise: I just finished his book “Landscapes and Cycles” and I’m convinced he has a lot of interesting information. (Highly recommended to anyone who wants to be able to explain many of the issues with biology studies and the global average temperature.) His point of view, research, and thoroughness in discussing all sides of the topic are remarkable.

Reply to  Louise Nicholas
January 26, 2015 6:55 pm

Thanks Louise!

Reply to  Louise Nicholas
January 27, 2015 1:26 am

The Git purchased two copies so that even if the loaner goes astray, he still gets to keep one. 🙂

Tom O
January 27, 2015 5:59 am

I like the thought that most temperature recording sites are now located in urban areas and are “adjusted” to compensate for the UHI of the specific site. Add to that the fact that if someone becomes concerned that a site is giving UHI-influenced readings and moves it to an area where it will again give genuinely accurate readings, the site is removed from the list of included sites because of its short period of record, and the accurate data are replaced by homogenized data. What a game, and it’s all played out to pretend there is global warming when it probably isn’t happening. It shows that we are not interested in accurate, scientific data; we are interested only in supporting a preconceived agenda with the reports. We are not being affected by global warming, we are being afflicted by global governance connivance.

January 28, 2015 11:05 pm

It seems pretty simple. Either it was the warmest year or it was not. If not, then CO2 isn’t the answer. It’s science. The premise is that CO2 causes warming. CO2 has gone up; has temperature? At some point the question is answered.

February 1, 2015 10:54 am

Gavin will be in Durham, NC this Friday, 2/6/2015. Is anyone here planning to attend? If so, please contact me.

February 2, 2015 10:03 am

The average temperature of Earth is not a temperature measurement.
It is an average of many local temperature measurements.
That means it is a statistic, not a temperature.
Sometimes the average temperature for a specific year is compared with the average temperature over an earlier 30-year span, to view anomalies — that would be one statistic compared to another statistic.
The average temperature of Earth is a complex statistic that can be calculated (estimated) in many ways.
Missing data points may be filled in with wild guesses.
Corrections may be made to compensate for known measurement errors.
Known measurement errors may be ignored.
Unknown measurement errors are obviously ignored, but usually implied to be small.
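The anomaly-and-baseline bookkeeping described above can be sketched in a few lines of Python. This is a minimal illustration with made-up station numbers, not any agency’s actual method; real products also grid the stations, area-weight them, and infill missing data:

```python
# Illustrative sketch (hypothetical numbers): a yearly "average temperature"
# anomaly compares each station's yearly mean against that same station's
# 30-year baseline mean, then averages the resulting anomalies.

baseline = {            # 30-year baseline mean per station (deg C, made up)
    "station_a": 11.2,
    "station_b": 24.7,
    "station_c": -3.1,
}
year_2014 = {           # yearly mean per station for 2014 (deg C, made up)
    "station_a": 11.9,
    "station_b": 25.0,
    "station_c": -2.5,
}

# Anomaly = station's yearly mean minus its own baseline mean,
# i.e. one statistic compared to another statistic.
anomalies = {s: year_2014[s] - baseline[s] for s in baseline}

# The "global average" is then a further statistic over these anomalies.
global_anomaly = sum(anomalies.values()) / len(anomalies)
print(round(global_anomaly, 2))  # 0.53
```

The point the comment makes survives even in this toy version: every step (baseline choice, station selection, infilling, weighting) is a methodological decision, so the final number is a constructed statistic rather than a direct measurement.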
The changes in the sun’s energy output are ignored.
“Climate” scientist funding is dominated by governments who want a “climate crisis” to fight.
“Climate” scientists receive government grants only if they predict a coming “climate crisis”.
“Environmentalists” have been predicting one false environmental crisis after another since the 1960s, with the same end result: Life on Earth will end as we know it unless everyone follows their directions.
4.5 billion years of Earth’s average temperature data were not collected.
“Global” average temperature statistics began in the 1800s with few thermometers, far from global coverage; most of the surviving thermometers from that era read low compared with modern accurate thermometers, and the data read from them probably had an accuracy of +/- 1 degree F.
All real time average temperature statistics were calculated during a warming trend, which is most likely still in progress, so record highs are not ‘news’ — they are to be expected until that warming trend ends, and no one knows when that will be.
No one knows what a “normal” average temperature is, or what a “normal” CO2 level is — but the mid-1800’s was decided to be “normal” by some people, with no logical explanation of why.
And even if all the potential measurements errors, and financial incentives to predict a crisis, and the desire to predict an environmental crisis to get attention, disappeared tomorrow, can ANYONE explain to me why the Average Temperature of Earth is important to know?
I can see why the trend of sea level rise might be a problem for people who live near the ocean.
I can see why a shorter and/or less productive food growing season would be important to many people.
I can see why people who lived in Florida would be concerned if the summers there were getting hotter and hotter.
But I’ve investigated the sea level rise trend, farming output, and temperature records in various US states that interest me … and I can’t find any climate-related problems at all.
So why should anyone care about a rough estimate of the average temperature of Earth, and the fact that it has changed slightly in the past 134 years?
Local weather conditions are not driven by the average temperature of Earth.
Why is average temperature of Earth important to know in the absence of any symptoms of REAL climate problems (that actually affect human health, comfort, and the health of plants on our planet)?
In my opinion, the average temperature of Earth is not important at all.
In my opinion, that statistic was selected for use as a political tool — technically known as a boogeyman — to scare people into giving their government more power to control their lives, and eventually to tax corporations for their energy use and transfer wealth from rich to poor nations as “climate reparations”.
The average temperature of Earth has been rising since the peak of the last ice age 18,000 years ago.
… It has been rising since 1850, if measurements since then are reasonably accurate
… But it has also been falling since the Greenhouse Ages hundreds of millions of years ago.
… And it has also been falling since 1998, according to weather satellite data.
The average temperature where you live may matter if there is a noticeable rising or declining trend.
In my opinion, the average temperature of Earth does not matter at all.
If Earth’s local temperatures really average one or two degrees F hotter today than in 1880, that only matters to people who want to use that statistic to make their predictions of a coming climate disaster sound real.
I always hope, but am probably over-optimistic, that people are smart enough to be very skeptical about predictions of the future — especially predictions of a coming catastrophe unless everyone does as the predictor says.
It used to be that religious leaders might say: ‘Do as I say or you will go to hell’.
That strategy worked to control religious people for centuries.
Since the 1960s not as many people are religious and really believe there is a hell.
So leaders needed a modified strategy to control people.
One new strategy is to tell them: ‘Do as I say or climate change will turn Earth into hell.’
The predictions of a coming climate change catastrophe are 99% politics and 1% science.
And the average temperature of Earth is not an important statistic, except for its use as a political tool.
One interesting article I read years ago on whether or not an average temperature is important:
My climate thoughts are posted here:

Reply to  Richard Greene
February 7, 2015 7:48 pm

Richard Greene,
Thanks for the link to “Does a Global Temperature Exist?” by McKitrick, Essex and Andresen
I totally agree, and when I wrote about the global average being a chimera ( http://landscapesandcycles.net/the-global-average-temperature-chimera.html ) of many varied climate dynamics, I was surprised that others had not written more on the subject. I am not surprised that McKitrick had already tackled the issue, just surprised there are not more papers on the problem.
