Satellite data show the globe will stay below the 1.5°C target at the current rate of warming, resulting in about 1.1°C of warming per century
December 18: University of Alabama, Huntsville
The average temperature of Earth’s atmosphere has warmed just over four tenths of a degree Celsius (almost three fourths of a degree Fahrenheit) during the past 37 years, with the greatest warming over the Arctic Ocean and Australia, said Dr. John Christy, director of the Earth System Science Center at The University of Alabama in Huntsville. Microwave sounding units on board NOAA and NASA satellites completed 37 full years of collecting temperature data in November, giving us nearly global coverage of climate change during that time.
If that trend were to continue for another 63 years, the composite warming for the globe would be 1.1 C (about 2 degrees Fahrenheit) for the century, Christy said. That would put the average global temperature change over 100 years well under the 2.0 C (3.6 degrees F) goal set recently at the climate change summit in Paris.
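In other words, the century figure is just the warming already observed plus another 63 years at the current rate (using the +0.41 C composite change reported below):

$$0.41\ ^\circ\mathrm{C}\ \text{(observed, 1978\text{–}2015)} \;+\; 0.11\ \tfrac{^\circ\mathrm{C}}{\text{decade}} \times 6.3\ \text{decades} \;\approx\; 1.1\ ^\circ\mathrm{C}\ \text{per century.}$$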
Due in part (but not exclusively) to the ongoing El Niño Pacific Ocean warming event, the global temperature trend has been edging warmer over the past several months, Christy said. While the current global trend is just under 0.115 C (rounded down to 0.11), he expects the trend line to cross 0.115 C in the next several months, raising the global trend to 0.12 C per decade.
Two major volcanic eruptions in the first half of the 37-year satellite temperature record (El Chichon in 1982 and Mount Pinatubo in 1991) depressed temperatures over large portions of the globe. While natural events, their random eruptions during the first half of the dataset have the effect of tilting the trend upward more than it otherwise would have been.
While a 0.12 C trend isn’t exactly a sprint to climate catastrophe (the 1.2 C or 2.2 degrees F rise over 100 years would be roughly equal to the warming seen most spring days between 10 a.m. and noon), it nonetheless has been a steady trend for the past several years. Take away the random variations caused by warm and cold weather systems, and any long-term trend, no matter how small, will produce climate records on a regular basis. Add to that long-term warming the additional heat of a large El Niño, and record-setting monthly average temperatures should be both routine and expected.
Despite that, early indications are that 2015 will end as the third warmest year in the satellite temperature record, behind 1998 and 2010. That is the early indication. Typically, the warmest temperatures are seen in the second year of an El Niño warming event, although there have been exceptions. If the typical pattern holds true, the second year of the current El Niño would be expected to bring more record high temperatures in 2016, perhaps including a new record high temperature for the year.
The fastest warming place on Earth over the past 37 years has been in the Arctic Ocean north of the Svalbard archipelago, where temperatures have been rising 0.5 C (about 0.9 degrees F) per decade. The fastest cooling spot was over the eastern Antarctic near Dome C. Temperatures there have been falling at the rate of 0.41 C (about 0.74 degrees F) per decade.
Global Temperature Report: 12/1978 through 11/2015
Global climate trend since Nov. 16, 1978: +0.11 C per decade
37-year temperature trends:
Global average trend: +0.11 C (about 0.20 degrees Fahrenheit) per decade since December 1978.
Northern Hemisphere: +0.14 C (about 0.25 degrees Fahrenheit) per decade since December 1978.
Southern Hemisphere: +0.09 C (about 0.16 degrees Fahrenheit) per decade since December 1978.
Tropics: +0.10 C (about 0.18 degrees Fahrenheit) per decade since December 1978.
37-year composite changes:
Global Composite: +0.41 C (about 0.73 degrees Fahrenheit) composite change since December 1978.
Northern Hemisphere: +0.52 C (about 0.93 degrees Fahrenheit) composite change since December 1978.
Southern Hemisphere: +0.33 C (about 0.60 degrees Fahrenheit) composite change since December 1978.
Tropics: +0.37 C (about 0.67 degrees Fahrenheit) composite change since December 1978.
37-year warming
             Trend per decade   Composite
Globe            +0.11 C         +0.41 C
  Land           +0.20 C         +0.74 C
  Ocean          +0.08 C         +0.30 C
N.H.             +0.14 C         +0.52 C
  Land           +0.21 C         +0.78 C
  Ocean          +0.09 C         +0.33 C
S.H.             +0.09 C         +0.33 C
  Land           +0.18 C         +0.67 C
  Ocean          +0.07 C         +0.26 C
Tropics          +0.10 C         +0.37 C
  Land           +0.21 C         +0.38 C
  Ocean          +0.07 C         +0.26 C
NoPole           +0.23 C         +0.85 C
  Land           +0.19 C         +0.70 C
  Ocean          +0.26 C         +0.96 C
SoPole           -0.01 C         -0.04 C
  Land           -0.01 C         -0.04 C
  Ocean          -0.01 C         -0.04 C
USA48            +0.19 C         +0.70 C
Australia        +0.24 C         +0.89 C
The complete version 6 beta lower troposphere dataset is available here:
http://vortex.nsstc.uah.edu/data/msu/v6.0beta/tlt/uahncdc_lt_6.0beta4.txt
Archived color maps of local temperature anomalies are available on-line at:
As part of an ongoing joint project between UAHuntsville, NOAA and NASA, Christy and Dr. Roy Spencer, an ESSC principal scientist, use data gathered by advanced microwave sounding units on NOAA and NASA satellites to get accurate temperature readings for almost all regions of the Earth. This includes remote desert, ocean and rain forest areas where reliable climate data are not otherwise available.
The satellite-based instruments measure the temperature of the atmosphere from the surface up to an altitude of about eight kilometers above sea level. Once the monthly temperature data are collected and processed, they are placed in a “public” computer file for immediate access by atmospheric scientists in the U.S. and abroad.
Neither Christy nor Spencer receives any research support or funding from oil, coal or industrial companies or organizations, or from any private or special interest groups. All of their climate research funding comes from federal and state grants or contracts.
— 30 —
“If me aunt had Bxxx she’d be me uncle”, assuming it even warms that much. It could actually cool dramatically.
No worries for alarmists. The IPCC will just declare that 1.1 degrees C per century will have catastrophic consequences, and point to scary “warmest year ever” in cooked book “surface data”.
Note the agreement with our host’s data from pristine surface stations.
UAH USA48 ———- .19 C / decade (11/1978-11/2015)
Unperturbed USA — .20 C / decade (1979-2008)
Bingo.
That was the first number that JUMPED out at me… along with the following fact, and this is just weird: land, whether it be NH, SH, or tropics, is all warming at the same rate. Similarly with the oceans; they are all warming at the same rate, with the exception of the Arctic.
Globe land – .20 / decade
NH land – .21/ decade
SH land – .18/ decade ** relatively close
tropics land – .21/ decade
No Pole land – .19/ decade
contiguous 48 – .19/ decade
I find it not only weird but highly unlikely that all the land would be warming at the same rate. Could there be a problem with how the wavelengths are interpreted….
I did find it notable that the satellites and unadjusted/well-sited thermometers have a similar rate of warming… However, once again, don’t read too much into it: thermometers measure temperature at 1.5 meters, while satellites measure at all levels of the troposphere.
Brian
Brian, your list excludes So Pole land, whose rate was -.01/decade.
Completely unrelated; it’s disconnected from all other major landforms. All the ‘NoPole’ land is connected to either Asia, Europe, or North America, so it is part of the continental landforms, minus of course the islands.
Remember, the S Pole is an island that has a completely different weather system. That said, you are correct, there is a huge difference between that and the rest.
Well, it is a rate per decade. So maybe on the decadal time scales the land and atmosphere are in equilibrium? The oceans, however, would take much longer (60 years to centuries? or millenia?). If the rate of ocean warming (not just sea surface temp’s) was the same, that would be a problem.
Nice point.
I must admit I am having difficulty reconciling that we all seem to accept that the earth is warming .19C or .20C per decade with the Pause, which states there has been NO warming for 18+ years. Can those that know more than me please comment?
Let us suppose we have a man who was conceived 37 years ago and who is now 6 feet (72 inches) tall. And let us suppose he reached this height 18 years ago. His average growth rate would be 72 inches/37 years = 1.9 inches per year. However at the same time, he did not grow in the last 18 years.
Werner Brozek
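Werner’s analogy is easy to put in numbers. A toy sketch in Python (the growth series is invented to match his example, not taken from any data set):

```python
import numpy as np

# 19 "years" of steady growth to 72 inches, then 18 flat "years".
years = np.arange(37)
height = np.concatenate([np.linspace(0.0, 72.0, 19), np.full(18, 72.0)])

overall = np.polyfit(years, height, 1)[0]            # slope over all 37 years
recent = np.polyfit(years[19:], height[19:], 1)[0]   # slope over the last 18

print(f"overall: {overall:.2f} in/yr, last 18 years: {recent:.2f} in/yr")
# The overall slope is strongly positive even though the recent slope is zero:
# a long-term trend and a long pause can coexist in one series.
```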
Best explanation I have seen. You make it so simple. Thankyou.
Eugene WR Gallun
That’s neat. I was actually thinking about asking the question: how well does Anthony’s best data match the satellite record?
If the global match is as good as your figures for the USA, then that’s a major boost for Anthony – and the satellite/radiosonde records.
The massive divergence between the satellite and ground records is surely the elephant in the room. At some point it will become completely unsustainable – and something will have to give.
Chris
May I humbly suggest that the criteria for comparison between this work and the satellite record concerns only the land figures.
A glance at the UAH data shows that there is something ‘odd’ about land v oceans over the satellite period.
The land temperature has risen much faster over that period than the oceans. Perhaps they have compensated for that and thereby messed up prior data?
https://www.dropbox.com/s/gixvd270vdrl4z4/rlh-full-uah-triad%20plots.r
I suppose that these might be more directly relevant.

Anyone care to comment on the phase of the signals thus displayed? Causation, anyone? The Ocean looks like it was pushing the land much earlier than the ‘peak’ La Nina in the middle. And now we are pretty much in phase. Care to guess which way this all might go?
El Nino damn it. Up not down.
Nice reality check.
Hi Werner,
Yes, the obvious explanation did also occur to me. However, in some of the graphs published on WUWT that show the actual warming rate lagging well below the climate models’ overinflated rates, the last 18 years are still shown as small increases and not as a flat line. This leaves me a bit vulnerable when trying to argue with my warmist colleagues.
RSS is very slightly negative for 18 years and 9 months, so starting with March 1997 gives a flat line on WFT.
UAH6.0beta4 gives a negative slope for 18 years and 6 months, so you need to start in June 1997. However WFT only shows version 5.6 which is a huge problem. You have to go to the following site to verify what I said about UAH6.0beta4:
http://moyhu.blogspot.com.au/p/temperature-trend-viewer.html
Drawing linear trend lines on various parts of a cyclic pattern is willful self-deception (such as from 37 yrs of data on what is probably ~60 yr cycles). Mr Nyquist said something about that.
Yep, he most certainly did. And since the underlying signals (e.g. PDO) are quasi-periodic, you need far more than 2 periods to see a signal. More like 5. Hope we keep the satellite record consistent for ~300 years.
Peter
The 21st century will have two negative periods fitting in from the ~60 year cycles. It means there will be little warming during the 21st century, not counting how solar activity declines in future. Regarding the true indication of climate, it really should cover two 60 year cycles, so we can compare how both positive and negative phases have changed. It is clear that the natural ~60 year cycle has been in charge over the recent period since the 1940’s.
Here’s my take on that, (from elsewhere on WUWT).
The problem of people using straight line ‘trends’ of any climate time series is that I think that Nyquist might have uttered the rather dry observation
“The width of the data capture window available does not support the bandwidth required to get to that frequency. You can’t draw a straight ‘trend’ line on a time series graph as that implies an infinite lower bandwidth which you do not have in that data”.
Straight lines on time series graphs are propaganda at best. Not science. IMHO. He might have agreed.
The width of the data capture window
===========
That is an interesting observation. With a small window to look out of, you can’t tell if the data is a straight line or a curve. As such, it is equally valid to fit a long-period curve to the data as it is to fit a straight line.
Which means that your trend can exist anywhere along the area swept by the curves, with the straight line no more likely than any other curve. The only thing that sets the straight line apart is that it is the average of all the possible trends. This however does not make it most likely.
In point of fact, it would likely be more instructive to draw two additional curves on the trend, representing the maximum and minimum trend possible given the size of the sample window, and remove the straight line trend altogether, because it provides no additional information, and may well be misleading.
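To see how little a short window constrains the shape, consider a toy example (all numbers invented for illustration):

```python
import numpy as np

# 37 samples taken from one limb of a slow cycle with a 150-unit period.
t = np.arange(37.0)
cycle = np.sin(2 * np.pi * (t - 18.0) / 150.0)  # zero long-term trend by construction

slope, intercept = np.polyfit(t, cycle, 1)
max_resid = np.abs(cycle - (slope * t + intercept)).max()
print(f"fitted 'trend': {slope:.4f}/unit, max residual: {max_resid:.4f}")
# The straight line fits almost perfectly, yet the underlying signal is a
# pure cycle: inside this window, line and curve are indistinguishable.
```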
I don’t know that you can ever sort the mess out at the bottom end. The amount of noise in the signal will allow almost any phase and some magnitude changes to occur.
This essentially means that you can have no clue about ‘cycles’, present or not, in the record much before the record itself. The phase signal which you require to attach to is lost in the dust, I believe.
I don’t see how this question of a trend line involves the Nyquist criterion in any way.
The text books for statistical mathematics, describe the specific algorithm for computing the trend line for ANY finite data set of finite elements. It matters not a jot, whether the members of the data set are related in any way whatsoever, or whether they are related by a very specific closed form equation.
M of B uses such an algorithm to compute the trend line that he uses in his “pause” analysis. It’s one of those algorithms that I describe as “numerical origami”.
You follow the formula (the folding sequence) and what results is by definition the trend line for that data set.
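For concreteness, the ‘folding sequence’ in question is the ordinary least-squares recipe from those textbooks; a minimal sketch (names are illustrative):

```python
def trend_line(t, y):
    # Textbook least squares: works on ANY finite set of (t, y) numbers,
    # whether or not the members are related in any way whatsoever.
    n = len(t)
    t_bar, y_bar = sum(t) / n, sum(y) / n
    slope = (sum((ti - t_bar) * (yi - y_bar) for ti, yi in zip(t, y))
             / sum((ti - t_bar) ** 2 for ti in t))
    intercept = y_bar - slope * t_bar
    return slope, intercept
```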
Now in the case of climatic data, such as Temperature anomaly data records, there is some expectation that the numbers in the data set are in fact related. They are related in the sense that they are the observational result of some particular physical system; the earth’s climate or Temperature anomaly if you wish.
Now those experimental recorded numbers are likely to be contaminated by noise, but it is also likely that they are a record, albeit with some uncertainties, of what that system really is doing.
The trend line calculated by that algorithm, however, has NO error whatsoever. It IS the trend line for that written set of exactly known numbers. Whether those exactly known numbers are the exact measure of the system is an entirely different question. We KNOW the numbers; they are a published record. So the trend line has no error (if the practitioner is capable of doing essentially 4-H club arithmetic without error.)
However, that accurate trend line has no real meaning whatsoever, other than what the origamist chooses to assert that it means.
If (s)he says it is a jumping frog, or a crane, or a swan, that is his(er) prerogative. But whatever the creator says it is, the algorithm (folding sequence) if followed without error, does produce the required end result.
So a plot of Temperature anomaly data over say 60 years or so, may look anything but a straight line approximation, and in my view it is just plain dumb to assert that the trend line is a valid approximation to what the actual data says.
But there is no way that we are looking at samples of a continuous data function, and asserting that the trend line in some fashion is a corruption due to a failure of any Nyquist sampling strategy.
Nyquist has nothing to do with trend lines. If we were trying to reconstruct some band limited continuous function that the recorded data set is supposed to be samples of, that would be a different question. And we don’t need to actually try to reconstruct that function, to incur a Nyquist penalty. if we have under-sampled the continuous function, there will be aliasing noise in the reconstructed function, and if we under-sample by two or more compared to the Band limited function requirement (of Nyquist), then the aliasing noise spectrum has folded back all the way to zero frequency, which is the AVERAGE of that continuous function over the recorded interval, so the average will be corrupted.
But all of that is unrelated to the trend line algorithm. It may be a totally dumb thing to calculate (I think it is), but the result of performing that prestidigitation IS the trend line for that data set, no matter what the numbers in the set actually represent (if anything).
So don’t invoke Nyquist for an issue that has nothing to do with sampled data theory.
g
Let me put it this way. You have no idea how the straight line you draw is made up. It is the result of a combination of a set of low frequency waves which you do not have the power to resolve because the time window is too short. You do not know how many, how big, or what phase they are in; 1 or 1000 combined together. A straight line is a propaganda exercise (or you fool yourself anyway).
By definition on a time series graph you need an infinite bandwidth to draw a straight line. A straight line is defined that way.
“But there is no way that we are looking at samples of a continuous data function, and asserting that the trend line in some fashion is a corruption due to a failure of any Nyquist sampling strategy.”
Then tell me what taking a temperature reading of a thermometer is. A discrete sampling of a continuous function. Every reading you take is a digitisation.
I suppose that a valid treatment, if you want straight ‘trend’ lines, is to increase the width of the line in some way to try and compensate. The fact that the ‘line’ may then become taller than the data it is covering may then prove awkward.
The climate alarmist will say, “but, but, but UAH is DENIER temperature data…listen to NOAA instead…”.
Why listen to NOAA? They are constantly ‘adjusting’ the past temperature record:

NASA does the same thing:
They might be credible if the adjustments were split between both warmer and cooler temperatures. But they always make the temperature charts look hotter/scarier.
That cannot be a coincidence.
But it could be right.
If you look at this presentation of the UAH data (same data as above) then it would appear that the land was cooler than the ocean until the ‘big event’. Since then it has been warmer on average. Why?
https://www.dropbox.com/s/gixvd270vdrl4z4/rlh-full-uah-triad%20plots.r
They look like the same data as Steven Goddard’s (Tony Heller).
Interesting also to compare the run I did last year with this one.
RichardLH,
The land/ocean breakdown could explain why recent GISS data are warmer than UAH, but doesn’t explain why the earlier GISS published data that dbstealey shows was lowered from the raw data.
Too bad we don’t have satellite data prior to 1979. The 1998 big event could be just a coincidence occurring at the inflection point of a cycle where the land temps exceed the ocean temps on either side of the middle. The whole series from 1979 could be just noise superimposed on a longer term cycle.
What do you make of the discrepancies between your two graphs from 2001 to 2005?
For that I will have to defer to UAH. It obviously shows two things. One, that some of the changes reflect the move to v6.
Others reflect that as the series extends, the lower frequencies come more and more into their correct phase and magnitude alignment. As to which is which, ask UAH I’m afraid.
RichardLH said:
But it could be right.
=================
I could also win the state lottery 5 times in a row. The problem is that every adjustment increases the average rate of warming. This suggests confirmation bias in selecting the adjustment methodology.
Adjustments should be conducted on a double blind basis. As soon as the person making the adjustments is allowed to see the results, the temptation is to tweak the adjustments to get an answer closer to what the person making the adjustments expects to be the correct answer.
This is how bad science is created. As soon as you allow the results of an experiment to feed back into the methods, your experiment is no longer searching for the truth. It is searching to satisfy the expectations of those people conducting the experiment.
The simplest example is proof-reading your own writing. If you expect a word to be there, your brain will fill in the word while you are proof reading. It is only much later, when you have forgotten what you meant to say, that you are able to see the missing word.
Our brains fool us all the time. They fill in missing information with made up information, to allow us to operate with imperfect senses and limited information. We are very good at finding the answer to the problem, even if the problem is poorly understood and the answer is ultimately wrong, as most answers are. Our brains will convince us we are right, track record to the contrary.
RichardLH,
Temperature anomalies don’t tell whether one place is warmer or cooler than another, they only tell us the change relative to some baseline period. Because the baseline period for UAH v6 is 1981-2010, we’d expect all three time series to intersect somewhere around 1995 — which they do.
“Temperature anomalies don’t tell whether one place is warmer or cooler than another, they only tell us the change relative to some baseline period.”
Indeed they do. I cannot conceive how, if you continue to add data to the right of the series, you will ever get the two traces to swap sides of the total to the left. I rather suspect that they will stay the side they are on. Some bouncing up and down, as I said, as phase and magnitude actually are displayed properly as longer and longer periods come into view.
YMMV.
“I could also win the state lottery 5 times in a row. The problem is that every adjustment increases the average rate of warming. This suggests confirmation bias in selecting the adjustment methodology.”
No, it suggests that they have seen the same warming as observed by satellite and used that information in a way that makes sense of what they see and what they believe is there. The difference between land and ocean in UAH will show up badly if you are JUST looking hard at the ground data and using the rest for support.
RichardLH,
That doesn’t explain why the ‘adjustments’ done by NOAA, NASA, etc. always show increased warming, but never more cooling. What would explain it is ‘tweaking the data’ to get the results they want the public to see.
Ferd Berple is being polite when he gives NOAA an excuse like confirmation bias — which is a very real effect. I would go a step farther though, and point out that the heads of these government agencies are appointed at-will by the President, who doesn’t even have to threaten their employment. They’re smart guys. They know the score.
When they see the President constantly expounding on “climate change” being the most dangerous problem we’re facing, they naturally get on board with that narrative, and help it along by providing what they amusingly refer to as ‘data’ that fraudulently shows more and more global warming, when real satellite and radiosonde data shows nothing of the sort.
It doesn’t require someone to be much of a student of human nature to see what’s happening here. Those folks know they can be replaced for any reason — or for no reason at all. They serve at the pleasure of the President, and rationalize what they’re doing as ‘being prudent’.
But can we really blame them? Like most of us, they have families to take care of, and bills to pay, etc. No, the buck stops at the President’s level. In the past Administrations, Presidents always left NASA alone to do honest science. Now we see how very easily science can be corrupted into Muslim Outreach-style politics.
“confirmation bias”
That’s exactly what I think is happening. Tunnel vision.
RichardLH,
Indeed temperature anomalies do not. I’ll show you:
http://vortex.nsstc.uah.edu/data/msu/v6.0beta/tlt/uahncdc_lt_6.0beta4.txt
Surely the absolute temperature of the North Pole was not warmer on average than the tropics AND the entire globe in October 2015.
Change the reference period to 1979, and all three series will converge to zero on that year and diverge from there. Conversely, set the reference period to 2015, and that year’s anomalies will be at or near zero whilst all prior years will trend to negative values.
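The baseline effect is simple to demonstrate with two synthetic series (the trend numbers below are assumptions, chosen only to mimic a faster land series and a slower ocean series):

```python
import numpy as np

years = np.arange(1979, 2016)
land = 0.020 * (years - 1979)    # assumed faster-warming series
ocean = 0.008 * (years - 1979)   # assumed slower-warming series

def anomalies(series, mask):
    # Anomalies are just the series minus its mean over the reference period.
    return series - series[mask].mean()

for lo, hi in [(1981, 2010), (1979, 1979), (2015, 2015)]:
    mask = (years >= lo) & (years <= hi)
    gap = anomalies(land, mask) - anomalies(ocean, mask)
    print(f"baseline {lo}-{hi}: traces cross nearest {years[np.argmin(np.abs(gap))]}")
# The crossing follows the midpoint of whatever baseline you choose:
# an artifact of the reference period, not a physical event in that year.
```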
“The simplest example is proof-reading your own writing. If you expect a word to be there, your brain will fill in the word while you are proof reading. It is only much later, when you have forgotten what you meant to say, that you are able to see the missing word.”
Yes. When you read something you have just written you are actually not ‘reading’ it but merely using the words to prompt your brain into remembering what it was you thought you wrote. But I have a tip for those who do not have time to wait before going to press. Select the text in question and change it all to a wildly different font style. This tricks the brain into having to actually read the work and you will notice errors. Edit, and then change the font back.
Brandon, DB, Ferd, and Richard:
I think there is some conflating going on here. Richard’s data seems to be anomalies from the same baseline. Notice that the green is the weighted average of the ocean and land points. The confluence on all three curves happens whenever the land and ocean anomalies reverse their respective magnitudes. This means that prior to 1998, land temperatures usually averaged lower than ocean temperatures, while the reverse applies after 1998. This is likely a true phenomenon if satellite data are to be trusted. DBStealey’s NOAA data doesn’t show satellite data and his NASA data doesn’t show temperatures prior to 2000. So from what is shown, we can’t tell whether the “flip” at 1998 occurs because of data tampering or whatever.
If we looked at untampered NOAA/NASA data covering the same time period as the satellite data, it might be possible to better speculate on Richard’s “it might be right [that NOAA/NASA data is methodically biased].”
Why bother to control the CO2 knob when it’s infinitely easier to just fudge the numbers?
RichardLH,
“Indeed they do.”
Indeed temperature anomalies do not. I’ll show you:
What those graphs show is that there is a difference between two sets of anomaly figures (when plotted on the same graph).
A comparison, anomaly to anomaly. Absolute is not in this battle.
“If we looked at untampered NOAA/NASA data covering the same time period as the satellite data, it might be possible to better speculate on Richard’s “it might be right [that NOAA/NASA data is methodically biased].”
I don’t think that ‘tampering’ are quite the words to use. ‘Tunnel vision’ might be closer.
Chic Bowdrie,
As I explained upthread, the convergence of temperatures near 1995 is mainly an artifact of the 1981-2010 baseline … 1995 is the midpoint. Change the baseline period, and the point of convergence will change. It tells us nothing about anything.
We already know that NOAA apply bias adjustments to raw temperature data because they tell us they do. And how they do it: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/
“As I explained upthread, the convergence of temperatures near 1995 is mainly an artifact of the 1981-2010 baseline … 1995 is the midpoint. Change the baseline period, and the point of convergence will change. It tells us nothing about anything.”
Actually all that does is bounce the lines up and down a bit (in concert). The base line period is merely a ‘DC’ reference.
RichardLH,
Let’s review what you wrote upthread:
http://wattsupwiththat.com/2015/12/19/uah-un-climate-change-goal-new-trend-analysis-shows-were-there-now/comment-page-1/#comment-2102177
If you look at this presentation of the UAH data (same data as above) then it would appear that the land was cooler than the ocean until the ‘big event’. Since then it has been warmer on average.
“Cooler than” and “warmer than” read to me as expressions of absolute temperature, as in: a cup of ice water is cooler than a pot of freshly brewed coffee, or temperatures in summer are usually warmer than temperatures in winter.
In that case my words have failed to convey my meaning.
The relative warmth and cold, i.e. a comparison between anomalies.
RichardLH,
Yes, and it ALSO affects the point of convergence in this case:
http://3.bp.blogspot.com/-53w3KgQ8p-U/VncYHIQWshI/AAAAAAAAAgk/Rim0VMZuS1A/s1600/UAH%2BTLT%2Bglobal%2Bland%2Bocean%2Bmultiple%2Bbaselines%2B01.png
I think what you’re trying to get at is the relative rates of change. The regression lines in the above plot may or may not help with that for your purposes.
All that appears to be doing is to change the amount of warmth and when it was moved. Same observation with different parameters.
RichardLH,
That’s just it … surely you and I agree that the “amount of warmth” is the same regardless of how it’s plotted. If it appears to you that the amount of warmth is different in those three plots, then I suggest there’s either a problem in how you interpret the meaning of a temperature anomaly or that, again, I don’t understand your meaning from how you have written it.
This is all actually beside the point. My main thing is that I don’t consider it possible to tell whether NOAA/NASA are inappropriately adjusting temperatures by comparing GISSTemp to UAH. The way to attempt to do that is to go read NOAA’s papers and download their data.
No. See below which I accidentally detached from this portion of the thread.
Anyway, the centre of the plot is the data capture length divided by 2. It’s not up for grabs.
And the centre of the reference period is defined. They are not the same.
I’ve updated the plot in light of your comments. I’ll have to add it to the R.
There’s no reason to believe that a 37-year trend which happens to begin at the end of a cooling period will continue, but it’s significant that even within the Alarmist ideology, the reason for alarm, when examined, goes poof.
Correct. Sampling theory would suggest that the most we can talk about is an 18.5 year trend. Perhaps this is where the talk (Santer?) about the models being inconsistent with a Pause of more than 18 years came from.
“The fastest warming place on Earth over the past 37 years has been in the Arctic Ocean north of the Svalbard archipelago”.
I did often wonder why the BBC seemed so interested in producing reports about “global” warming, in Svalbard.
Obviously this is not cherry-picking on their behalf.
However, the BBC is also oddly silent on the topic of Antarctica. And they completely forgot to report the latest NASA study from Zwally.
Which seems to be oddly negligent.
I expect that they blind-folded David Shukman and had him stick a pin in a map of the globe.
“Well, it looks like you happened to pick Svalbard, David. Off you go, to provide a totally non-representative impression of the dramatic changes afflicting the planet”.
http://www.bbc.co.uk/news/world-europe-19508906
Look at Wikipedia to find out the academic, scientific and technological qualifications and work experience of Shukman, the BBC’s Science Editor, and of that fellow alarmist Harrabin, the BBC’s Environmental Analyst. The former has a degree in geography followed only by journalism, and the latter a degree in English and similar journalism. Yet they parade in front of us in the UK on TV regularly, full of their self-importance and talking down as if they were Newton’s prodigies! They talk arrant nonsense and are totally incapable of really understanding even the very basic science and technology involved. They effectively also control all the BBC’s output on this subject.
It’s laughable if it wasn’t so serious a matter.
As an example of Harrabin’s misinformation and false assessment on this subject, they flew him to the northern coast of Australia earlier this year to report on warming oceans and increasing CO2 concentrations in the sea. He showed us film of depleted fish stocks and destroyed corals, standing up to his waist in clear sea. There were massive amounts of bubbles pouring out of the sandy sea bed and spiralling up in columns to the surface. The direct implication, and he confirmed this, was that CO2 was being released and causing this devastation. In fact what was being shown was a typical gas release from subsea volcanic areas: H2S, CO, NOx, and many other extremely poisonous gases. No wonder the coral and fishes were dying. Shukman’s inputs are no more honest or accurate.
Yet again, however, the BBC never allowed any statements or available evidence showing that Harrabin was blatantly misinforming us.
Look up Christopher Booker’s evidence and his ongoing saga, attempting to get the BBC not to back “denialists theories” but simply to provide the balanced reporting on the subject that is enshrined within the BBC’s public broadcasting Charter! Needless to say they won’t debate this, let alone agree to it!
Errant nonsense.
“The former has a degree in geography followed only by journalism, and the latter a degree in English and similar journalism. ”
Yes indeed Cassandra! It seems that in order to board the Climate Gravy Train, a journalist must take a three day Science course in genuflection at the Altar of Catastrophic Climate Change in the Temple of Doom.
Only WUWT would keep a 2013 post about the undersea volcanoes near Svalbard. Hot stuff!
Thanks Anthony.
http://wattsupwiththat.com/2013/08/02/hot-times-near-svalbard-volcanic-range-discovered/
The BBC might not be so keen on Svalbard as its “canary in the mine” if its researchers actually did a bit of research. It seems Svalbarders were on the beach in their swimwear in the 1100s [perhaps a bit of hyperbole but hey..]
See for instance: ftp://ftp.gfz-potsdam.de/home/cegit/egu/pdf/EGU2010-2624-1.pdf
Add to all this the fact that the effect of increased CO2 is logarithmic, and you get less of a temperature rise in the coming decades.
Atmospheric O2 has been increasing about 5% per decade for the last twenty years. If it continues as an exponential increase at this rate, the temperature response will be linear in time ( log exp –> linear).
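For what it’s worth, the “log exp → linear” shorthand unpacks like this, writing S for the response per doubling of concentration C:

$$\Delta T = S \log_2\!\frac{C(t)}{C_0}, \qquad C(t) = C_0\,e^{kt} \;\Rightarrow\; \Delta T(t) = \frac{S\,k}{\ln 2}\,t,$$

which grows linearly in time t.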
“If it continues as an exponential increase at this rate, the temperature response will be linear in time ( log exp –> linear).”
Or the temp response will be nothing, as it has been for nearly twenty years.
“Atmospheric O2 has been increasing about 5% per decade for the last twenty years. If it continues as an exponential increase at this rate…”
I assume you’re talking about CO2. If I look at the Mauna Loa graph it doesn’t look particularly exponential to me.
One more thing: I’ve never heard anybody explain how (what looks to be about) a 5ppm summer/winter variation in CO2 is completely overwhelmed by a 4ppm human contribution? Especially since that 5ppm seasonal drop must be solely due to the summer/winter difference in the northern hemisphere forests which occupy such a small percentage of the overall globe. Also, especially since the majority of the atmospheric O2 doesn’t derive from those same forests but, instead, from photosynthesis in the ocean.
Interesting but the alarmists will throw a stream of yabuts at it.
Yabut, the oceans will be acidic if we don’t reduce emissions by 50%.
Oh well, they still have “extreme weather”, “melting icecaps” “acidic oceans”, “island-swallowing oceans”, “species extinctions”, “carbon pollution”, etc. etc.
Here is a basic question that has not been touched on (to my knowledge). Can temperatures be averaged? Or is it necessary to use enthalpies, to take into account the water content of the mass of air being measured (and the vastly greater heat capacity of water compared with air)?
Ian M
It’s been touched on, Ian, but pretty much completely ignored. Temperature is an intensive property. Which means averaging temperatures from two distinct places doesn’t give you anything physically meaningful. All of these exercises in global anything related to temperature are moot.
Agree that temperatures from different sites cannot be averaged meaningfully. It is meaningful to compare the derivative of temperatures (e.g. the slopes of change) to assess patterns over a geographical area of interest.
https://rclutz.wordpress.com/2015/04/26/temperature-data-review-project-my-submission/
A single point location, Ron, yes. But even regional averaging is meaningless. I’ve seen a 27 F difference between two places 13 miles apart within 20 minutes of each other. Averaging that region would be physically meaningless.
Thus, when NOAA says “hottest year/month ever”, it’s utterly meaningless.
Ian, you are confounding temperature with heat content (thermal mass) at that temperature. That is how minuscule ARGO temperature differences get translated into zettajoules of ocean heat content. Temperatures can be averaged. Whether the resulting average is climate meaningful is a different discussion.
Rud, I’ve been wondering about the meaningfulness of the “global temperature average”. One problem is that the process of averaging throws away a lot of useful information, e.g. whether an increase in the average temperature is due to increases in daily maximum temperature or daily minimum temperature. The cynical side of me thinks that “global average temperature” is bandied about because it doesn’t require much thought, whereas describing what’s actually going on requires a good understanding of atmospheric physics as well as biology and geology. It would also force people to differentiate the warming caused by GHG’s as opposed to warming caused by land use changes or natural phenomena.
There is also the question of which temperatures you are going to average and how much “weight” to put on them. Ideally, each station would integrate the temperature over a 24 hour period to get the average temperature. Is that how it’s done, or is it the high and low, or some other paradigm?
As far as global temperatures, how close can the average of the station temperatures be to the theoretical average of every point on the globe at 5 feet above ground?
2/3 of the globe is covered in water, which has two very energy-intensive phase changes, from ice to liquid and liquid to vapour. The extreme non-linear ratio between energy and temperature during these phase changes means temperatures cannot be averaged. Check out Arctic air temperatures when the temperature is near freezing (http://ocean.dmi.dk/arctic/meant80n.uk.php); the temperature volatility drops very low as the ice can absorb or release large amounts of energy with insignificant temperature changes. When temperatures are below freezing, temperature swings become much more volatile as temperature and energy become linearly proportional.
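Ian’s enthalpy point can be made concrete with the standard moist-air approximation (a sketch; the two parcels and their humidity ratios are invented for illustration):

```python
def moist_air_enthalpy(t_c, w):
    # Standard psychrometric approximation, kJ per kg of dry air.
    # t_c: dry-bulb temperature in Celsius; w: kg water vapour per kg dry air.
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)

humid = moist_air_enthalpy(25.0, 0.018)   # muggy coastal parcel
dry = moist_air_enthalpy(25.0, 0.002)     # arid desert parcel
print(f"humid: {humid:.1f} kJ/kg, dry: {dry:.1f} kJ/kg")
# Both thermometers read 25 C, yet the humid parcel holds more than twice
# the energy: averaging the two temperatures hides that entirely.
```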
John Christy,
Your article in the lead post is a benchmark example of clear communication of climate-focused science by a scientist. That kind of clarity in expression of science by a scientist is wonderful. Feynman had it and so do you.
John
I believe an earlier version (Beta2 or Beta3) agreed closely with RSS, but Beta4 disagrees particularly in the last few years. Does anybody know why?
Are you sure the differences are significant? The reason I am asking is that when 6.0 first came out, the pause on UAH was one month less than RSS; 18 years and 4 months versus 18 years and 5 months. Now, the difference is only 3 months: 18 years and 9 months for RSS and 18 years and 6 months for UAH.
Since there’s no global temperature, and therefore no global trend, where specifically is UN goal to be effective?
Jeff Alberts December 19, 2015 at 8:39 am
The UN is expecting their goal to be most effective in the Cayman Islands, Switzerland, the Bahamas, and a few other countries noted for “don’t ask, don’t tell” banking policies. I’m pretty sure the rest of the world is on their own.
What is the standard deviation of this 0.12 trend? Does the post’s title/conclusion mean anything? Assuming an average El Nino, if there is such a thing, temps could be back at the post-1998 and 2010 level again. So what is all the fuss about? No one knows where temperatures will be in 10 years or anytime in the future.
According to:
http://moyhu.blogspot.com.au/p/temperature-trend-viewer.html
At the 95% level:
Temperature Anomaly trend
Jan 1979 to Nov 2015
Rate: 1.416°C/Century;
CI from 1.008 to 1.824;
(I do not know why the 0.14 is different from 0.12. Perhaps Nick can weigh in.)
Werner,
Yes, it’s an error – sorry about that. I’ve been tinkering to try to add UAH V5.6 to the set of data that can be graphed, and it seems I have mistakenly pointed that diagram to the V5.6 data. Will fix – I think the error arose in only the last few days.
Werner,
It’s fixed now, thanks. Shows 1.13°C, as is correct for V6.0.
I find the graphic extremely illuminating. This should be the standard for tracking temperature changes from here on out IMO. So much better than a single line on a graph that is supposed to encompass all of these measurements.
The reality check above is simply awesome. So now we have balloon, satellite, and land based measurements that agree.
Given the warmist hype, I find it very surprising to see all that white, even if it is over the oceans, never mind the cooling of the Antarctic.
Put another way, the energy retained by the earth during the recent period has been 0.04% per decade if you ignore the AMO. Yes, I am assuming a constant specific heat for the whole shebang for simplicity. Obviously that is a very crude assumption about a very complex heat transport problem. Somehow, somehow, I would rather think that a small increase in cloud cover that would reflect the incoming insolation back in a return to sender would more than counteract that. Further, we can see the 20th century as a large Forbush event on average given the increase in the solar magnetic fields during that period. My prediction is colder by 2020 and downright chilly by 2030, if only due to the AMO. Add a bit of inverse Forbush and the wind turbines may have a large ice problem in the not so far future.
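Presumably the 0.04% comes from setting the decadal trend against the absolute temperature; with a constant specific heat, the fractional change in thermal energy equals the fractional change in absolute temperature:

$$\frac{\Delta E}{E} = \frac{\Delta T}{T} \approx \frac{0.11\ \mathrm{K}}{288\ \mathrm{K}} \approx 0.04\%\ \text{per decade.}$$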
Assuming constant specific heat sounds reasonable. While it ignores the rapid change in enthalpy of humid air with temperature, the heat content of the atmosphere is small compared to that of the oceans.
Yes, I ignore that entirely for simplicity. If I was designing a heat engine of some sort, it would have to be there.
You might assume a constant specific heat, but to convert temperature to energy requires a mass estimate too. How deep into the ocean did you go and how do you know the temperature gradient has been constant all these years? Is it possible that the deep ocean has cooled while the surface has warmed?
So we will get the thermal inertia from the current El Niño to add to the overall temps the next 24 months or so. Then we will probably move to a pretty substantial La Niña by the end of 2016 if history plays out. If this happens, along with other cycles coming into play, the decade starting 2020 will be pretty interesting.
If it turns cold to much colder by then, global demographics says we will be fighting wars over food and shelter, not worrying about crocodiles and polar bears. If we think we have immigration issues now, it will only get worse.
Most of our food is grown under the influence of continental air masses, and if the cold grows over the Arctic during the growing months, early frosts on soybeans, corn and other food crops will severely limit production.
Unfortunately, as long as the general population is warm and well fed, they will put up with a lot of nonsense. But a hungry and miserable population just might bring back the guillotines.
Can someone please help an interested non-scientist? The SOI is back into positive figures. What does this mean considering everyone seems to still be expecting a “Godzilla” El Nino?
Hey, how come the IPCC’s map of the world shows everything in various shades of RED, in other words, from hot to extremely hot, except the temps are actually the same as this map!! Wonder why?? LOL
Need you ask?
No, not really !!
Thanks, Anthony, UAH.
“Global Composite: +0.41 C (about 0.73 degrees Fahrenheit) composite change since December 1978.” This tells the story of the global warming alarm; nothing much to it.
The interesting thing about the last several years being the warmest ever is that I can recall a substantial number of news stories of exceptional cold in North and South America, Europe, and even unusual snow storms in the Middle East and North Africa. What I can’t recall are any large quantities of widespread heat waves.
Dr Christy doesn’t mention what the error margins are in the UAH v 6.0 (beta) trend and whether these overlap with the ‘best estimate’ trends shown in the surface data. In RSS the trend is 0.122 ±0.063 °C/decade (2σ): http://www.ysbl.york.ac.uk/~cowtan/applets/trend/trend.html
This overlaps with all the ‘best estimate’ trends of the main surface data sets from 1979. Seems likely UAH v6.0 will have similar very wide error margins.
This is why there was no statistically significant warming in the first half of the data set between launch in 1979 and the run up to the Super El Nino of 1997/98. A slight positive trend during this period, but not statistically meaningful.
Ditto, post the Super El Nino of 1997/98. Since then there has been a slight cooling, but not statistically significant.
We are left with essentially two flat trends: temperatures trending flat between 1979 and the Super El Nino of 1997/98, then a step change in temperature of about 0.27degC coincident with that event, and then temperatures once again trending essentially flat.
Zero correlation between CO2 and temperature in this data set. Only a one-off isolated step change coincident with what appears to be an entirely natural event, ie., the Super El Nino of 1997/98.
“we are left with essentially two flat trends.”
_______________
Only if we divide the data set into two parts (for what purpose?).
Left alone as a single series, both RSS and (probably) UAH show statistically significant warming since 1979.
DWR54
Sure they do, but not because there is a straight line linear trend in and throughout the time series.
There is a step change of about 0.27degC half way through the series; this is where the significant warming occurred. But for that one-off and isolated event (coincident with the Super El Nino of 1997/98) there would be no statistically significant trend in the series at all.
Of course, one has to consider carefully how the data series should be analysed, but that said, it is clear that there is no steady straight line linear trend in the data series.
It will be interesting to see whether there is another long-lasting step change coincident with the current 2015/16 strong El Nino (similar to that seen with the Super El Nino of 1997/98), or whether the current El Nino simply results in a short-term spike, say similar to 2010, and, following the La Nina that is likely to take place in 2017, whether the temperature anomaly into 2018 to 2020 stabilises again at around the 2001 to 2003 anomaly level.
I consider that the next 5 years are likely to tell us a lot more about what is going on.
From:
http://moyhu.blogspot.com.au/p/temperature-trend-viewer.html
Temperature Anomaly trend
Jan 1979 to Nov 2015
Rate: 1.130°C/Century;
CI from 0.724 to 1.536;
t-statistic 5.454;
Temp range -0.197°C to 0.219°C
So in other words, 0.113 +/- 0.0406
Thank you Nick!
Just to be sure we are comparing apples with apples, here are Nick’s number for RSS:
Temperature Anomaly trend
Jan 1979 to Nov 2015
Rate: 1.221°C/Century;
CI from 0.828 to 1.614;
t-statistic 6.087;
Temp range -0.118°C to 0.331°C
So in other words: 0.1221 +/- 0.0393.
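For anyone wanting to reproduce numbers of this shape, the core calculation is an ordinary least-squares slope with its standard error. A minimal sketch (it omits the autocorrelation correction Nick’s calculator applies, so its intervals will come out narrower than his):

```python
import numpy as np

def decadal_trend_ci(y):
    # y: monthly anomaly series.  Returns (trend, 95% half-width) in C/decade.
    t = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    sigma2 = (resid ** 2).sum() / (len(y) - 2)          # residual variance
    se = np.sqrt(sigma2 / ((t - t.mean()) ** 2).sum())  # slope standard error
    return 120.0 * slope, 120.0 * 1.96 * se             # per month -> per decade
```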
Not sure how to read this, Werner, as I’m not familiar with Nick’s calculator or how it works.
From what I can make of the above, the trend in UAH v6.0 (beta) since 1979 is 0.11 C/dec with an error margin of 0.04C. If so, this means that UAH v6 shows statistically significant warming since 1979 and an upper error band of 0.15C.
This compares to the surface average of ~0.16C over the same period (NOAA is exactly 0.15C).
All the surface and satellite data sets agree that global warming since 1979 is statistically significant. There are disagreements as to extent, but the error margins in each set are very close to one another.
This is statistical gibberish which bears no relation to measurement error or even the variability of the rate of change. It is a dimensionally inconsistent numerological artefact similar to the number of angels on a pin.
That is correct.
Here is part of my next report that will come after Hadcrut4 comes. As for the other questions, perhaps Nick will weigh in again.
On several different data sets, there has been no statistically significant warming for between 10 and 22 years according to Nick’s criteria. CI stands for the confidence limits at the 95% level.
The details for several sets are below.
For UAH6.0: Since January 1993: CI from -0.000 to 1.668
This is 22 years and 11 months.
For RSS: Since April 1993: CI from -0.016 to 1.575
This is 22 years and 8 months.
For Hadcrut4.4: Since January 2001: CI from -0.048 to 1.334
This is 14 years and 9 months.
For Hadsst3: Since December 1995: CI from -0.002 to 2.035
This is an even 20 years.
For GISS: Since January 2005: CI from -0.029 to 2.505
This is 10 years and 11 months.
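The criterion behind each of those dates can be sketched the same way: step the start month forward until the lower confidence limit first clears zero. This uses the decadal_trend_ci sketch above; again illustrative, without the autocorrelation correction:

```python
def insignificance_start(y, min_len=120):
    # Earliest start index from which the 95% CI for the trend still
    # includes zero, i.e. the longest span of "no significant warming".
    for start in range(len(y) - min_len):
        trend, half_width = decadal_trend_ci(y[start:])
        if trend - half_width <= 0.0:   # interval straddles zero
            return start
    return None
```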
Overall, however, the brightness temperature of a channel will be a function of the temperature of the atmosphere multiplied by the weighting function of the channel, plus the temperature of the surface multiplied by its emissivity. The trend of the various channel brightness temperatures will correspond to the energy temperature of the layer the channel is tuned for, plus a tad from other levels. Window channels can estimate the surface emissivity/temperature fairly well. A lot of the globe is covered by a saline solution whose emissivity is pretty well known in the absence of wind-driven foam.
A trend in the tropo channel for instance would indicate a trend in temperature over a layer subject to an error band of the surface emissivity/temperature assumptions. I worked on those things in the 1970s. I trust them a heck of a lot more than I trust the crew “adjusting” the surface data. Radiosonde data should have uncorrelated errors over time. There is not a lot of it, but what ever trend it shows is probably real.
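A schematic of that bookkeeping, with every number an illustrative assumption rather than an actual MSU channel specification:

```python
import numpy as np

z = np.linspace(0.0, 16.0, 161)           # altitude, km
T = 288.0 - 6.5 * np.minimum(z, 11.0)     # toy lapse-rate profile, K
W = np.exp(-(((z - 4.0) / 3.0) ** 2))     # toy weighting function peaking ~4 km

surface_fraction = 0.10                   # assumed share of signal from the surface
emissivity = 0.99                         # calm saline ocean is close to 1

W_atm = (1.0 - surface_fraction) * W / np.trapz(W, z)   # normalise the column part
Tb = np.trapz(W_atm * T, z) + surface_fraction * emissivity * T[0]
print(f"channel brightness temperature: {Tb:.1f} K")
# The channel reports a weighted blend of the layer it is tuned for plus a
# surface term scaled by emissivity, as described above.
```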
And now for a little light humor……..
MSNBC is the darling of the liberal left, and just about as intelligent !!
Notice anything wrong in the graph below ??
http://americanlibertypac.com/wp-content/uploads/2015/11/vlcsnap-2015-11-05-08h24m13s010-650×366.png
I didn’t realize Trump had such a dark tan !!!
and Rubio and Cruz were separated at birth…
The only one they recognized was Bush
Everyone with a Hispanic name (Rubio, Cruz) looks alike, doncha know.
Perhaps a shade of wishful thinking.
Numbers only add up to 78%?? Oh, as pointed out, the pictures. Guess I am as blind as the guy who put it together.
Eugene WR Gallun
When they made the target 1.5 degrees instead of 2, my immediate reaction was that this would be the warmists’ way of saying how effective the measures they are putting in place are. In fact they will probably argue that if even more money is spent then they could get the trend lower. The reality being all along that they can’t affect the temperature and it was never going to reach 1.5 anyway. Without really knowing about the hole in the ozone layer, I suspect that whether they banned CFCs or not, the hole in the ozone layer would be exactly as it is now. By having successfully convinced the world of the evil of CFCs they can also claim success, as we don’t really know if the hole really existed in the first place. The best way to solve a problem is to create the illusion of a non-existent one, then pretend that your non-existent solution has solved it. Who cares how much it cost; we have saved the world.
Don’t forget that the 1.5degC figure is 1.5degC above pre-industrial times, not 1.5degC per century nor 1.5degC warming by 2100.
According to the warmists, it is already about 1degC above pre-industrial times. That means that the cap they want is just a further 0.5degC of warming and no more. I emphasise that they are seeking to cap future warming, as from now, at 0.5degC.
So if the warming trend is about 0.12 degC per decade, that will be reached in 4 decades, ie., around 2055.
Didn’t know they had radiosondes in the pre-industrial times. Ya learn something every day.
I love the smell of science in the morning.
+ 2,000