Satellite Temperatures and El Niño

By Steve Goddard

RSS April 14,000 Foot Anomalies

UAH and RSS satellite data have been showing record warm temperatures in 2010, despite the fact that many of us have been freezing – and CAGW types have been quick to jump on this fact. But Had-Crut surface data has 2010 at only #5 through March, which is the latest available. So Watts Up With That?

Had-Crut rankings through March

I don’t have any theories about root cause, but there are some very interesting empirical relationships correlating ENSO with satellite and surface temperatures. Satellite TLT data is measured at 14,000 feet and seems to get exaggerated relative to surface temperatures during El Niño events. Note the particularly large exaggerations during the 1998 and 2010 Niños below. UAH (satellite) is in red and Had-Crut (surface) is in green.

http://www.woodfortrees.org/plot/uah/from:1978/plot/hadcrut3vgl/from:1978/normalise

Looking closer at the 1998 El Niño, it can be seen that both UAH and RSS 14,000 ft. temperatures were highly exaggerated vs. normalised Had-Crut and GISS surface temperatures.

Below is a chart showing the normalised difference (UAH minus Had-Crut) from 1997-1999

The chart below shows the same data as the one above, but also plots ENSO. This one is very interesting in that it shows that the satellite data lags ENSO by several months. In 1998, ENSO was nearly neutral by the time UAH reached its peak, and the 1999 La Niña was at full strength before UAH recovered to normal.
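The lag described above can be illustrated with a toy cross-correlation check. Everything here is synthetic: a sine wave stands in for the ENSO index and a delayed, noisy copy stands in for the satellite series, so the numbers are illustrative only, not derived from the real datasets.

```python
import numpy as np

# Synthetic stand-ins (not real data): an "ENSO" sine wave and a
# "satellite" response that is the same signal delayed by 5 months.
rng = np.random.default_rng(0)
months = np.arange(240)
enso = np.sin(2 * np.pi * months / 48)
lag_true = 5
satellite = np.roll(enso, lag_true) + 0.05 * rng.standard_normal(months.size)

def best_lag(x, y, max_lag=24):
    """Return the shift of x (in months) that maximises correlation with y."""
    lags = range(-max_lag, max_lag + 1)
    corrs = [np.corrcoef(np.roll(x, k), y)[0, 1] for k in lags]
    return list(lags)[int(np.argmax(corrs))]

print(best_lag(enso, satellite))  # recovers the built-in 5-month delay
```

Run against the actual ENSO and UAH anomaly series, the same scan would estimate the lag the charts suggest; the sketch only shows the mechanics of finding it.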

Conclusions: ENSO is already headed negative, but it is likely that we will see several more months of high satellite temperatures. CAGW types will abandon their long-cherished Had-Crut, and declare 2010 to be the hottest year in the history of the planet. Yet another way to hide the decline.

206 Comments
Neven
May 15, 2010 1:56 am

Pssst, HadCRUT (not Had-Crut) leaves the Arctic out of its global average. That’s the place with the highest positive anomalies.
Keep ’em coming, Steve. Doing a great job.

Peter Miller
May 15, 2010 2:40 am

Off topic:
Anthony
It seems there is a recently published best selling (110,000 copies sold) book in France called Climate Imposture, written by the former Minister of Science. It has the alarmists in a fury and ranting and raving all over the place – see Real Climate. In France, the ‘climate scientists’ there want the book banned.
It seems there have been TV debates on this over the last couple of months which have had the effect of pushing the non-sceptical French into the sceptical camp.
Is there any chance of an analysis or comment on this book from you or one of your contributors?

May 15, 2010 3:09 am

I hope Roy Spencer has some way of telling if his incoming MSU data has … changed.

Mooloo
May 15, 2010 3:21 am

Of course the facts support the hypothesis. That’s why you won’t find a scientific society of standing that disputes it.
That could have been said about any number of hypotheses in the past too. Some have been shown right, some tragically wrong. One hundred and fifty years ago no scientific society of any note seriously disputed that God created man whole. Now, I believe, some do.
In any case, I believe that you might be wrong in the original assertion. What is the position on global warming of, say, the Russian Academy of Sciences?
The AGW cheer-leaders are a few specialist outfits (Hadley etc), some journals (Nature, New Scientist etc) and some people with no training at all (Gore etc).
Most scientific societies issue bland statements and stay out of the fight. They know that the issue is too divisive. Any attempt at entering the fray, either way, will alienate a significant number of their members. They are right – it is too politicised to be safe.
The “consensus” hangs and falls on the IPCC when it comes down to it. If the IPCC were to issue a sceptical document, the whole “consensus” would melt in a moment.
Compare this to the situation if the UN made up an IPE (International Panel on Evolution) and issued a statement, under pressure from religious fundies, that evolution was incorrect. In that situation there is a real consensus among biologists, and the UN would be ignored.

Joe
May 15, 2010 3:24 am

Something I don’t quite understand: if satellites are taking atmosphere temperatures, then when they scan to the sides they are looking through more atmosphere than with straight up-and-down measurements. Any measurements that are not constant in distance and path can show massive variability depending on where the satellite is positioned.

Donald Holdner
May 15, 2010 3:55 am

Good post!
I interpret the data to show:
1. The global temperatures respond to El Nino with lag of about 6 months.
2. The troposphere temperature is more sensitive to El Nino than the surface temperature.
Both look to be quite reasonable responses to an El Nino.

John Finn
May 15, 2010 3:57 am

John A. Marr says:
May 14, 2010 at 6:26 pm

Pearland Aggie
“Thunderstorms can reach many multiples of 14,000 ft into the atmosphere, so it seems plausible that UHI-induced heat could be transported to those levels in the atmosphere. Furthermore, the dispersion of that heat throughout the atmosphere would cover many times the actual area of the UHI itself. Afterall, we do live in a 3-dimensional world…”

Exactly right! I mean, you keep hearing the alarmists harping on about how all the CO2 from the population centres gets quickly dispersed, but they keep very quiet about how exactly the heat from those cities mixes into the atmosphere. It’s clear in the global map at the top of this page that the UHI effects created on the US eastern seaboard have spread northwards over northern Canada as some kind of red/yellow cloud of excess heat.
I think both you and PearlandAggie need to get the area of urban developments relative to the rest of the world into perspective. You could drop the top 20 largest cities in the world into Texas and not notice they were there. The reason that UHI *might* have an impact on surface temperature trends is because, for obvious reasons, the stations happen to be close to urban areas.

val majkus
May 15, 2010 3:58 am

Are you feeling warmer yet?
Crisis in New Zealand climatology
by Barry Brill
May 15, 2010
The warming that wasn’t
The official archivist of New Zealand’s climate records, the National Institute of Water and Atmospheric Research (NIWA), offers top billing to its 147-year-old national mean temperature series (the “NIWA Seven-station Series” or NSS). This series shows that New Zealand experienced a twentieth-century warming trend of 0.92°C.
The official temperature record is wrong. The instrumental raw data correctly show that New Zealand average temperatures have remained remarkably steady at 12.6°C +/- 0.5°C for a century and a half. NIWA’s doctoring of that data is indefensible.
http://www.quadrant.org.au/blogs/doomed-planet/2010/05/crisis-in-new-zealand-climatology

Geoff Sherrington
May 15, 2010 4:08 am

Bill Illis says:
May 14, 2010 at 5:07 pm “When I run the regressions on the different temperature series over the entire time periods available, I get more-or-less the same response to the ENSO.”
How do you determine which is the dependent variable?
If ENSO causes heat, where does it come from?

John Finn
May 15, 2010 4:14 am

Adrian O says:
May 14, 2010 at 8:25 pm
I have asked before the same question as Jantar above. Probably around 1998-2000 HadCrut must have shown the 1998 peak AT LEAST AS BIG as shown by satellites, to prove warming.
The same 1998 peak is now SMALLER than the satellites, so as to diminish the cooling 1998-2010.

What cooling would that be?
QUESTION: Does anyone have a record of HadCrut temperatures as put out around 2000 (web or publications), so we can check it against HadCrut temperatures now?
I’ve been using Hadcrut for about 6 years and there’s been very little change in the 1998 anomaly. There was always a bigger spike in the UAH record.

May 15, 2010 4:16 am

barry
Several flaws in your logic.
1. I am not discussing the 31 year satellite trend. I am discussing the 200% anomaly discrepancy during ENSO events.
2. The trend from 1979 to the present is anomalous. The long term trend is 0.65 C/century
http://docs.google.com/View?id=ddw82wws_627c4bc5tgq

Geoff Sherrington
May 15, 2010 4:16 am

Charles Wilson says:
May 14, 2010 at 6:01 pm
OOPS ! Somebody Normalized the Had-Cru but NOT the UAH.
I cannot comprehend why scientists do not abandon the “anomaly” method and use actual temperatures, even if they are degrees C and not K.
The anomaly method is sensitive to the selection of climate stations in the reference period. Stations added or removed after the reference period cause this. The effect can be tailored to suit preconceptions by those who open and close stations or select them for analytical studies. It’s not good science.
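Geoff's point about reference-period station selection can be shown with a toy calculation. The stations and temperatures below are entirely made up; the only thing the sketch demonstrates is that dropping a systematically warmer station from the baseline inflates every subsequent anomaly.

```python
import numpy as np

# Two hypothetical stations over a 4-year reference period (made-up numbers).
station_a = np.array([10.0, 10.2, 10.1, 10.3])  # cooler site
station_b = np.array([14.0, 14.1, 14.2, 14.0])  # warmer site

# Baseline computed from both stations vs. from station A alone.
baseline_both = np.mean((station_a + station_b) / 2)   # 12.1125
baseline_a_only = np.mean(station_a)                   # 10.15

current_temp = 12.5  # some later regional mean temperature
print(current_temp - baseline_both)    # modest anomaly with both stations
print(current_temp - baseline_a_only)  # anomaly jumps if B is dropped
```

The regional temperature has not changed between the two calculations; only the choice of stations defining "normal" has, which is exactly the sensitivity being objected to.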

A C Osborn
May 15, 2010 4:24 am

It is an obvious FACT that what the Satellite data calls Surface Temperature is not.
Apart from a small area in the west, the Northern Hemisphere has had its coldest ground-based temperatures for many years and has broken many records. The northern sea temperatures as measured by Argo buoys do not show the seas warmer now than in the past 10 years.
Yet despite all this REAL evidence we still have satellite global surface temperature anomalies at their highest ever, and not only that, all the PLUS anomalies are shown in the Northern Hemisphere.
It is definitely a case of 2 (Ground) + 2 (Sea) does not equal 4.

John Finn
May 15, 2010 4:50 am

stevengoddard says:
May 14, 2010 at 8:30 pm
Beth ,
I don’t really understand the description on the WFT website, but you get the identical result by shifting HadCrut -0.23
http://www.woodfortrees.org/plot/uah/from:1978/plot/hadcrut3vgl/from:1978/offset:-0.23

Steve
I think an offset of -0.23 applies to the GISS record not Hadley (different base periods). The mean Hadley anomaly for the 1979-98 period is ~0.15 deg. The UAH anomaly is, or should be, approximately zero. I’m assuming (I haven’t read the WFT blurb) an offset of -0.15 for Hadley would provide a reasonable comparison between the two, i.e.
http://www.woodfortrees.org/plot/uah/from:1978/plot/hadcrut3vgl/from:1978/offset:-0.15
The mean GISS anomaly for 1961-90 (Hadley base period) is ~0.08 which should, in theory, mean the GISS offset is -0.23. Unfortunately GISS and Hadley didn’t track each other as closely as they might during 1979-98 so the GISS average anomaly for 1979-98 is a bit higher – I think (can’t be bothered to check).
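The offset arithmetic John describes amounts to taking the mean difference between the two series over a common period. The sketch below uses synthetic monthly anomalies as stand-ins for UAH and Hadley, with the 0.15-degree base-period difference built in deliberately so the calculation can recover it.

```python
import numpy as np

# Synthetic monthly anomalies for a 240-month common period (1979-1998
# stand-in). A shared "climate signal" plus a fixed baseline difference.
rng = np.random.default_rng(1)
common = rng.standard_normal(240) * 0.2  # shared variability
uah = common                             # ~zero-mean on its own base period
hadley = common + 0.15                   # warmer base period => +0.15 shift

# The offset that aligns the two is the mean difference over the
# common period; here it recovers the built-in -0.15.
offset = np.mean(uah - hadley)
print(round(float(offset), 2))  # -0.15
```

With the real series the two records do not track each other perfectly, so the recovered offset depends slightly on the averaging window chosen, which is the complication noted above for GISS.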

frederik wisse
May 15, 2010 5:03 am

Hide the decline! Were these not the words chosen in the East Anglia Climategate emails? Apparently the American bureaucracies are showing more professional behaviour than East Anglia, where there is some conscience left and truth can be discussed. Please read the American mission statements well; they confirm that these are fully purpose-driven. Is the word truth ever mentioned here?
So it is obvious what is happening under our eyes. The cap-and-trade crap must be pushed through no matter what, even when the rest of the world is not doing a thing; on the contrary, it is trying to benefit from the situation developing in the USA, where pieces of paper are all of a sudden becoming highly valuable out of thin air. And who is going to pay for them? Nobody else than the US taxpayer, who already took up the burden laid upon them by a ruthless elite.
So in the short term dramatic temperature readings are needed to fool the innocuous part of our society, maybe the major share of the population, and get them to okay the life work of our highly esteemed leader Obama and his godfather Al Gore: a system to control every citizen in the USA, whilst they are able to live their luxury lives with private or governmental planes and nine-bathroom mansions, a life of unlimited spending, whilst the bulk of our population returns towards poverty.
Oh sure, they will claim that you wanted this, that these are costs and everybody must make a contribution, and oh well, see how temperatures are falling now that we are controlling the exhaust of combustion gases, not taking into consideration that the output increase in India and China will be many times bigger than the US savings.
Al Gore and friends know damn well how badly they are cheating the public, but they are also aware of the fact that it will be very hard to continue fooling his co-citizens given that the sun has nearly fallen asleep and the ENSO temperature modulations will become very negative in a short while. This is for them the ideal opportunity to install the system, so that the most natural events can then be explained as benefits of the system, whilst at a further stalling of the cap-and-trade implementation their cheating would become too obvious. A telltale sign is a recent change in the mission statement of NOAA, showing that the US bureaucracy is slowly but surely wakening up and individuals are claiming their rights and liberties.
Being a lawyer, for Mr Obama truth does not count, although there can be found truth with him in the words he is not speaking!

John Finn
May 15, 2010 5:33 am

stevengoddard says:
May 14, 2010 at 8:30 pm
Beth ,
I don’t really understand the description on the WFT website, but you get the identical result by shifting HadCrut -0.23
http://www.woodfortrees.org/plot/uah/from:1978/plot/hadcrut3vgl/from:1978/offset:-0.23
So whatever their normalisation algorithm is doing, it isn’t scaling.

Steve/Beth
I’ve just had a very quick look at the ‘normalise’ option. Select the normalise option for both sets of data, as below.
http://www.woodfortrees.org/plot/uah/from:1978/normalise/plot/hadcrut3vgl/from:1978/normalise
Note that the max and min for both datasets are at exactly the same points, i.e. about +0.5 for the max and -0.5 for the min. It seems as though normalisation, in this case, scales the data to span a range of 1 (e.g. -0.5/+0.5, -0.2/+0.8…). No idea why, but the cup final is on shortly so I’m not going to bother with it now.
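John's observation suggests a range normalisation. The function below is a guess at what the woodfortrees 'normalise' option might do, based only on the observed behaviour that both series end up spanning about 1; the actual algorithm on that site may differ.

```python
import numpy as np

def normalise(x):
    """Scale a series so its max-to-min span is exactly 1.
    Hypothetical reconstruction of the woodfortrees 'normalise' option,
    inferred from both plotted series spanning roughly -0.5 to +0.5."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / (x.max() - x.min())

data = np.array([0.1, 0.5, -0.3, 0.9, 0.2])
out = normalise(data)
print(round(float(out.max() - out.min()), 10))  # 1.0
```

Note that mean-centring does not force a symmetric -0.5/+0.5 result; a skewed series can land at, say, -0.2/+0.8 while still spanning exactly 1, consistent with the examples above.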

ShrNfr
May 15, 2010 5:38 am

Sexton: The brightness temperatures (the raw data) measured by these satellites are, to first order, the integral of the temperature of the surface and the atmosphere times the weighting function of the channel. To get an estimate of a temperature at a discrete pressure level, you have to go through all sorts of mathematical gymnastics to add and subtract the various brightness temperatures from all the channels. It can be done, and done reasonably well. You will not get the exact temperature you would have measured with a radiosonde, but then again the exact temperature measured by a radiosonde has errors too. The fact that the TLT channel peaks a bit up in height means it is most accurate at that level, but it does not mean that lower levels cannot be reliably estimated. It’s known as the inversion problem and there have been tons written about it. I did my PhD in it long ago and far away.
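The weighted-integral idea can be sketched in discrete form. All the numbers below are made up for illustration: a crude five-level temperature profile and an invented weighting function that peaks in the lower troposphere, roughly in the spirit of a TLT-like channel.

```python
import numpy as np

# Illustrative numbers only: the brightness temperature is a weighted
# average of the temperature profile, with the channel's weighting
# function deciding which layers contribute most.
pressure_levels = np.array([1000, 850, 700, 500, 300])  # hPa
temp_profile = np.array([288, 280, 272, 255, 230])      # K, made-up profile
weights = np.array([0.15, 0.30, 0.30, 0.18, 0.07])      # peaks low in trop.

brightness_temp = np.sum(weights * temp_profile) / np.sum(weights)
print(round(float(brightness_temp), 1))  # 270.8
```

The inversion problem ShrNfr mentions is the reverse direction: given brightness temperatures from several channels with overlapping weighting functions, solve for the profile, which is ill-posed and needs the mathematical gymnastics described.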

May 15, 2010 5:43 am

John Finn
You can theorize all you want, but the normalisation did not scale the data. It shifted HadCrut by -0.23 with no scaling.

May 15, 2010 5:49 am

John Finn
My last post was wrong. You were correct.

May 15, 2010 5:52 am

Wren says, May 14, 2010 at 9:33 pm:
“Do you have a specific null hypothesis in mind?”
Dr Roy Spencer explains the climate null hypothesis: No one has falsified the theory of natural climate variability. That is the null hypothesis.
The climate fluctuates naturally; always has, always will. The current climate is completely ordinary. What is happening on various parts of the globe has happened many times in the pre-SUV past.
Point out the human signal in the climate, if you can. And no models, please. Programmed speculation isn’t science.
The one human emitted CO2 molecule out of every 34 that the planet normally emits can not be shown to have any measurable effect — unlike scary scenarios, which have the measurable effect of increasing government grants.
I say this in all sincerity: wise up. CAGW is driven by money, not by science.

Jbar
May 15, 2010 6:12 am

James Sexton –
The map DOES reflect reality in Philadelphia. But of course it varies from week to week.

Henry chance
May 15, 2010 6:18 am

The new cap and trade is going to cost 7,700 dollars per household. It is another source of tax revenue we desperately need. The thinking is that taxing in the utility bill, at the gas pump and inside the grocery bill is more secure than a payroll tax.
You are questioning numbers and readings, but this discussion is actually a threat to pork and kickbacks. Maurice Strong and George Soros, coupled with Al Gore, comprise the dirty carbon trinity.
All I see is common fluctuation in the charts.

Jbar
May 15, 2010 6:19 am

The Hadcrut dataset does not contain data from the arctic region. Since the arctic is the fastest warming region on the planet, the Hadcrut index tends to understate the rising temp trend. As we see from the big red spot on the map, leaving out millions of square km can throw off a global average.

David L. Hagen
May 15, 2010 6:25 am

Wren
re “Do you have a specific null hypothesis in mind?”.
Try:
1) Average temperature rate of rise since the last ice age will continue.
2) Average temperature rate of rise, over all data since 1850, will continue.
3) Pacific Decadal Oscillations superimposed on the average temperature trend will continue.
Check out Don Easterbrook’s AGU paper on potential global cooling He projects Pacific Decadal Oscillations (PDO) dominating temperature trends through 2100.

The recurring climate cycles clearly show that natural climatic warming and cooling have occurred many times, long before increases in anthropogenic atmospheric CO2 levels. The Medieval Warm Period and Little Ice Age are well known examples of such climate changes, but in addition, at least 23 periods of climatic warming and cooling have occurred in the past 500 years. Each period of warming or cooling lasted about 25-30 years (average 27 years). Two cycles of global warming and two of global cooling have occurred during the past century, and the global cooling that has occurred since 1998 is exactly in phase with the long term pattern. Global cooling occurred from 1880 to ~1915; global warming occurred from ~1915 to ~1945; global cooling occurred from ~1945 to ~1977; global warming occurred from 1977 to 1998; and global cooling has occurred since 1998. All of these global climate changes show exceptionally good correlation with solar variation since the Little Ice Age 400 years ago.
The IPCC predicted global warming of 0.6° C (1° F) by 2011 and 1.2° C (2° F) by 2038, whereas Easterbrook (2001) predicted the beginning of global cooling by 2007 (± 3-5 yrs) and cooling of about 0.3-0.5° C until ~2035. The predicted cooling seems to have already begun. Recent measurements of global temperatures suggest a gradual cooling trend since 1998 and 2007-2008 was a year of sharp global cooling. The cooling trend will likely continue as the sun enters a cycle of lower irradiance and the Pacific Ocean changed from its warm mode to its cool mode. . . .
The announcement by NASA that the Pacific Decadal Oscillation (PDO) had shifted to its cool phase is right on schedule as predicted by past climate and PDO changes (Easterbrook, 2001, 2006, 2007) and coincides with recent solar variations. The PDO typically lasts 25-30 years, virtually assuring several decades of global cooling. The IPCC predictions of global temperatures 1° F warmer by 2011, 2° F warmer by 2038, and 10° F by 2100 stand little chance of being correct. “Global warming” (i.e., the warming since 1977) is over! . . .
The good news is that global warming (i.e., the 1977-1998 warming) is over and atmospheric CO2 is not a vital issue. The bad news is that cold conditions kill more people than warm conditions, so we are in for bigger problems than we might have experienced if global warming had continued. Mortality data from 1979-2002 death certificate records show twice as many deaths directly from extreme cold than for deaths from extreme heat, 8 times as many deaths as those from floods, and 30 times as many as from hurricanes. The number of deaths indirectly related to cold is many times worse.

The challenge to AGW is to show significant variations CAUSED BY anthropogenic effects that are significantly different from Easterbrook etc.
Global Warming Models have not included the PDO and other natural cycles. Consequently, they cannot differentiate between natural weather “noise” and anthropogenic causes.
e.g. See Credibility of climate predictions revisited G. G. Anagnostopoulos, D. Koutsoyiannis, A. Efstratiadis, A. Christofides, and N. Mamassis; Department of Water Resources and Environmental Engineering, National Technical University of Athens, (www.itia.ntua.gr)
European Geosciences Union General Assembly 2009, Vienna, Austria, 19‐24 April 2009

• The performance of the models at local scale at 55 stations worldwide (in addition to the 8 stations used in Koutsoyiannis et al., 2008) is poor regarding all statistical indicators at the seasonal, annual and climatic time scales. In most cases the observed variability metrics (standard deviation and Hurst coefficient) are underestimated.
• The performance of the models (both the TAR and AR4 ones) at a large spatial scale, i.e. the contiguous USA, is even worse.
• None of the examined models reproduces the over-year fluctuations of the areal temperature of the USA (gradual increase before 1940, falling trend until the early 1970s, slight upward trend thereafter); most overestimate the annual mean (by up to 4°C) and predict a rise more intense than reality during the later 20th century.
• On the climatic scale, the model whose results for temperature are closest to reality (PCM-20C3M) has an efficiency of 0.05, virtually equivalent to an elementary prediction based on the historical mean; its predictive capacity against other indicators (e.g. maximum and minimum monthly temperature) is worse.
• The predictive capacity of GCMs against the areal precipitation is even poorer (overestimation by about 100 to 300 mm). All efficiency values at all time scales are strongly negative, while correlations vary from negative to slightly positive.
• Contrary to the common practice of climate modellers and the IPCC, here comparisons are made in terms of actual values and not departures from means (“anomalies”). The enormous differences from reality (up to 6°C in minimum temperature and 300 mm in annual precipitation) would have been concealed if departures from mean had been taken.
Conclusions
Could models, which consistently err by several degrees in the 20th century, be trusted for their future predictions of decadal trends that are much lower than this error?

See further details in their numerous publications.
See also: Validity of Climate Change Forecasting for Public Policy Decision Making, Green, Kesten C., Armstrong, J. Scott and Soon, Willie, 20 February 2009, International Journal of Forecasting 25 (2009) 826-832

. . .We used the U.K. Met Office Hadley Centre’s annual average thermometer data from 1850 through 2007 to examine the performance of the benchmark method. The accuracy of forecasts from the benchmark is such that even perfect forecasts would be unlikely to help policymakers. For example, mean absolute errors for 20- and 50-year horizons were 0.18°C and 0.24°C. . . . Again using the IPCC warming rate for our demonstration, we projected the rate successively over a period analogous to that envisaged in their scenario of exponential CO2 growth—the years 1851 to 1975. The errors from the projections were more than seven times greater than the errors from the benchmark method. Relative errors were larger for longer forecast horizons. Our validation exercise illustrates the importance of determining whether it is possible to obtain forecasts that are more useful than those from a simple benchmark before making expensive policy decisions. . . .

We look forward to anyone providing counter studies that include these effects and still provide statistically clear evidence for anthropogenic global warming.
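The benchmark-versus-projection comparison described in the quoted forecasting paper can be sketched with synthetic data. The series below is invented (a weak trend plus noise), so the error values mean nothing in themselves; the sketch only shows the mechanics of scoring a "no change" benchmark against a fitted-trend extrapolation at a fixed horizon.

```python
import numpy as np

# Synthetic "temperature" record: weak trend plus noise (not real data).
rng = np.random.default_rng(2)
years = np.arange(150)
series = 0.005 * years + 0.15 * rng.standard_normal(years.size)

# Benchmark ("no change"): project the last observed value forward.
# Trend forecast: extrapolate a linear fit to the data so far.
horizon = 20
bench_err, trend_err = [], []
for t in range(50, len(series) - horizon):
    obs = series[t + horizon]
    bench_err.append(abs(series[t] - obs))
    slope, intercept = np.polyfit(years[:t], series[:t], 1)
    trend_err.append(abs((slope * years[t + horizon] + intercept) - obs))

print(round(float(np.mean(bench_err)), 3), round(float(np.mean(trend_err)), 3))
```

Which method wins depends entirely on how large the underlying trend is relative to the noise, which is the substance of the validation argument quoted above.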