Guest Post by Willis Eschenbach
As many of my posts start out, “I got to thinking about …”.
In this case, I got to thinking about the Berkeley Earth global temperature dataset. So I got their gridded file “Monthly Land + Ocean Average Temperature with Air Temperatures at Sea Ice” covering 1850 to 2021 and took a look at it. I started at the first month’s data, January 1850 … and my jaw hit the floor.

Figure 1. Berkeley Earth surface temperature, January 1850. White areas have no data.
What shocked me was the red-orange circle centered north of New Zealand, as well as the half-circle in northern South America.
Clearly, what they are doing is taking one temperature reading at one point, and extrapolating it to a surrounding area. How big an area? Well, the circle north of Kiwiville has a radius of ~ 1,600 km (~ 1,000 mi). It covers an area of 8,700,000 square km (3,360,000 sq mi). That’s about the area of the continental US … estimated from one temperature reading.
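In case anyone wants to check the arithmetic, here’s a back-of-envelope sketch in R. The contiguous-US land area of roughly 8.1 million square km is my own figure for the comparison, not something from the Berkeley Earth file.

# Back-of-envelope check on the size of the extrapolation circle
radius_km   <- 1600                 # approximate radius of the circle, per the text
circle_area <- pi * radius_km^2     # ~8.0 million square km
conus_area  <- 8.1e6                # contiguous US land area, roughly, in square km

circle_area                         # ~8,042,000 square km
circle_area / conus_area            # ~1, i.e. about the size of the contiguous US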
And there’s no island anywhere near the center of that circle, so it would have been a temperature taken from a ship …
Now, if you look carefully you’ll see that the southern part of the circle is more orange, it’s a bit cooler. That makes me think that they’ve used modern measurements of the temperature gradient around the center, and adjusted them to fit the single surface temperature measurement. To check that, let me go take a look at the January temperatures of that region over time … I’m writing this as I’m analyzing the data, so I’ll be back soon.
…
OK, here’s what I find.

Figure 2. January temperatures from 1850 to 2021 of a vertical (North/South) slice through the middle of the red circle north of New Zealand in Figure 1. The slice runs from 16°S to 46°S. Temperatures are expressed as anomalies around the temperature at the center of the circle, at 31° South latitude.
Looks like my guess was not too wild, they’re using some kind of procedure like that.
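For anyone who wants to poke at the same slice, here’s a rough sketch of how it could be pulled out of the Berkeley Earth NetCDF file using the ncdf4 package. The file name, the variable names (“temperature”, “climatology”, “latitude”, “longitude”), the 1° x 1° grid, and the longitude-latitude-month ordering are my assumptions about the file’s layout, not Willis’s actual code; check them with print(nc) after opening. The 175°E meridian is just an illustrative longitude through the circle.

# Sketch: pull a north-south slice through the circle north of New Zealand
library(ncdf4)

nc   <- nc_open("Land_and_Ocean_LatLong1.nc")  # assumed local file name
lat  <- ncvar_get(nc, "latitude")
lon  <- ncvar_get(nc, "longitude")
anom <- ncvar_get(nc, "temperature")           # assumed dims: lon x lat x month
clim <- ncvar_get(nc, "climatology")           # assumed dims: lon x lat x 12

# Absolute January 1850 temperatures = January climatology + first anomaly field
jan1850 <- clim[, , 1] + anom[, , 1]

# Slice from 16°S to 46°S along ~175°E, expressed as anomalies around
# the value at the center of the circle, ~31°S
ilon   <- which.min(abs(lon - 175))
ilat   <- which(lat >= -46 & lat <= -16)
center <- jan1850[ilon, which.min(abs(lat - (-31)))]
jan1850[ilon, ilat] - center

nc_close(nc)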
But is extrapolating the temperature of an area of the ocean the size of the continental US from one single temperature measurement a reasonable procedure?
Having spent a good chunk of my life at sea, I’d have to wonder. I’ve seen areas where the ocean changed temperature by a few degrees or more in a few hundred meters. Where a cold current hits a warm current, there is often a clear dividing line and little mixing across the line.
And over the land the changes are much larger, like say over northern South America in Figure 1.
So … the whole of the US from one thermometer? Where I live, for example, it almost never freezes. But a kilometer (~ six-tenths of a mile) away, it freezes a number of times per year. Here’s the freeze warning for tomorrow. I live near the coast north of San Francisco, in the narrow sliver of green near the coast to the left of the “S” in “Santa Rosa” … it probably won’t freeze here. The stretch along the coast in this area on the western side of the first range of hills, from about 600′ to 900′ (180m to 270m) in elevation, is known locally as “The Banana Belt” because it hardly ever freezes. We grow lemons, limes, and avocados on our patch of soil.

Figure 3. Freeze warning for Wednesday, February 23, 2022
So I’ll leave it to the reader to decide if one thermometer is enough to estimate the temperature of the entire continental US … and while you consider that, here’s a video loop of the coverage of the first twenty years (240 months) of the Berkeley Earth global surface temperature data.

Figure 4. Video loop of the first 240 months of the Berkeley Earth global surface temperature.
I find the changing coverage of Australia over time most perplexing.
At the end of the day I got to wondering … just when did Berkeley Earth finally achieve complete global coverage? Here’s the sad answer.

Figure 5. Percent coverage, Berkeley Earth surface temperature, divided by land and ocean.
Interesting. The effects of the wars on the temperature reports from oceanic shipping are quite visible. And even with extrapolating out so that a single thermometer covers an area the size of the continental US, land coverage didn’t exceed 90% until after WWII … and total coverage wasn’t achieved until 1978.
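For completeness, here is one way a coverage curve like Figure 5 could be computed from the same gridded file, weighting each cell by the cosine of its latitude. As before, the file name, variable names, and dimension ordering are my assumptions, not Willis’s actual code, and this version doesn’t split land from ocean.

# Sketch: area-weighted percent coverage, month by month
library(ncdf4)

nc   <- nc_open("Land_and_Ocean_LatLong1.nc")  # assumed local file name
lat  <- ncvar_get(nc, "latitude")
anom <- ncvar_get(nc, "temperature")           # assumed dims: lon x lat x month
nc_close(nc)

# Cosine-of-latitude weight for every grid cell (same for each longitude)
w <- matrix(cos(lat * pi / 180),
            nrow = dim(anom)[1], ncol = dim(anom)[2], byrow = TRUE)

# Percent of the globe with data in each month
coverage <- apply(anom, 3, function(m) 100 * sum(w[!is.na(m)]) / sum(w))

plot(1850 + (seq_along(coverage) - 1) / 12, coverage, type = "l",
     xlab = "Year", ylab = "Coverage (%)")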
I have no overarching insights from this research, other than that the spotty nature of not just this Berkeley Earth dataset but most climate data is a continual thorn in the side of researchers, and it makes all conclusions about the climate very tentative.
Oh, yeah … about the title of the post, “SWAG”.
A “WAG” is a “Wild Ass Guess”. And no, I didn’t make that up.
And a “SWAG”, on the other hand?
That’s a far superior creature, a “Scientific Wild Ass Guess” … like say the various estimates of global average temperatures in the 1800s.
My best wishes to all, blessed rain here, what’s not to like?
w.
As Is My Custom: When you comment, I ask that you quote the exact words you’re discussing, so we can all be let in on the secret of just who and what you are on about.
Well, it’s instructive also to see the very wide cherry-red equatorial region (~30°C) in 1850. I guess a cooler planet means fewer clouds, so your governor also works to restrict cooling. I was in Lagos, Nigeria in 1966 and again in 1997, and it was 30°C both times.
In my opinion there were simply not enough temperature stations in the world to make meaningful statements about world temperature in 1850. Who was monitoring temperature in Africa, south America, or Antarctica or over the oceans (70% of the earth’s surface)? And did the stations that exist meet the requirements for locating the thermometer?
For scientific work I believe radiosondes (judiciously), satellites and ARGO are the only datasets to be used. Each shows that nothing unusual is occurring within the Earth’s climate.
not fit for purpose … period … they are NOT measuring the globe … they have a tiny number of thermometers (relative to the globe) that they have extrapolated to “cover” the globe …
Goodness, the maximum and minimum surface temperatures were recorded accurately and daily over ~95% of the global oceans in 1860, that’s amazing!
I am not shocked but I wonder how on Earth they managed to get readings that claim to cover 50% of the globe in 1850.
There is one reliable SST data set on the Australian coast for 1871. It was produced by measuring bucket samples taken every hour from 6am to 6pm during the voyage. I have attached a chart showing the up and down tracks compared with 2019 satellite tracks over the same latitudes and longitudes for the separate weeks corresponding to the up and down trips.
An interesting detail is the spike in the Brisbane River. It gives a clue to why cricket-ball-sized hailstones have been observed in Brisbane.
If SST could exceed 30C in open oceans then boats in tropical waters could be showered with massive hailstones. The maximum convective potential is a function of the surface temperature and updraft velocity is a function of the potential. If ocean surface got to 40C, as some models predict, then hailstones would be the size of footballs.
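The relation being invoked there is the usual parcel-theory bound, in which the maximum updraft speed is roughly the square root of twice the convective available potential energy (CAPE), ignoring entrainment and water loading. A quick illustration with made-up CAPE values:

# Parcel-theory upper bound on updraft speed: w_max = sqrt(2 * CAPE)
cape  <- c(1000, 2500, 5000)   # illustrative CAPE values, J/kg
w_max <- sqrt(2 * cape)        # m/s, ignoring entrainment and water loading
data.frame(CAPE_J_per_kg = cape, w_max_m_per_s = round(w_max, 1))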
Doesn’t this still point to the same lack of data? In Hansen’s 1998 paper he showed USA and world temps since 1880 graphed side by side. The USA, with much better data, shows little warming, while the world looks completely different, with a much steeper rise. But assuming that is not adjusted data, doesn’t it just reflect that most of the historical world readings are from major cities only, and those few samples are just extrapolated out to homogenize the data, such that what we are really seeing is all urban heat island effect?
@Willis: and then there is the even more esoteric…SEWAG….. Scientifically Educated Wild Ass Guess…. which I suspect the good folks at Berkeley Earth Systems are employing.
Let’s extend that to SEWAGE….. Scientifically Educated Wild Ass Guess (Egregious)
“(Egregious)” -> “(Extrapolated)”
It’s my guess that the ‘Scientists’ at Berkeley Earth got paid a goodly fee for their Bag of SWAG, even if it was totally inaccurate.
A swag bag?
As an engineer I often modified a Confucius saying about watches to one about temperature measurement to: Confucius say, “Man with one thermometer knows what the temperature is. Man with two thermometers has no idea what the temperature is.”
Here is a new one. Confucius say, “Climate Scientist with no temperature measurement can make temperature anything he wants.”
I had no idea that the historical temperature record is this sketchy. The corruption is worse than I thought!
It would be an encouraging gesture of transparency were Eschenbach to provide the ACTUAL data set and the ACTUAL chart parameters he uses to create his graphs. His refusal to do so in addressing previous requests is telling.
Willis always provides the data. The two previous times when you made this demand, the article in question wasn’t even by Eschenbach, however the article did contain a link to the actual article in which the source of the data was given. For this article, every single chart is labeled with where it was acquired. Perhaps if for once, you actually bothered to read the article, you would already know that.
As a troll, you are going to have to work much harder to reach the standards set by our existing menagerie.
Barry, there is a link at the bottom of each graph to the exact source of the data that I used. In this case, it’s the Berkeley Earth NetCDF file. Go get it, it’s on the web, and you can confirm or falsify my claims.
And no, as far as I know, I have never “refused” to provide data.
w.
>>Barry, there is a link at the bottom of each graph to the exact source of the data that I used.<< Allow me to clarify. I would be very grateful if you can provide a package of the EXACT data file used in your charts AND the exact parameters you used in your charting software of choice to render those charts. I can work with virtually any spreadsheet, MathCAD, ModTRANs, etc. My email address is barryanthony35@aol.com. There should be no concern about sending a file 5 MB or smaller. Thanks in advance.
Mr. Anthony: You are not “clarifying”, you are making a new request. Your first post didn’t ask for anything, it whined about previous requests. MarkW showed you that your previous requests were, in fact, misdirected. Maybe you should “clarify” that while you wait for Mr. E’s polite reply.
Troll wants everything handed to it on a silver platter. The source of the data is given. Get it yourself.
BTW, AOL? LOL? I didn’t know they were still in business.
Barry, you don’t seem to understand, so I’ll go over it again.
The EXACT data file I used is the NetCDF listed at the bottom of each of the charts. It contains both the month-by-month gridded temperature data, as well as the gridded 12-month climatological temperature data used to create the anomalies.
I do not use “charting software”. I use the computer language “R”. Here, for example, is the program I wrote to make the animated GIF shown as Figure 4.
# Animation -----
library(animation)
clrconsole()
animap = function() {
  for (i in 1:(12 * 20)) {
    print(i)
    (thedate = paste0(month.name[(i - 1) %% 12 + 1], " ",
                      1850 + floor((i - 1) / 12)))
    toanetmap = climatplus[, , i]
    drawworld(
      toanetmap, point.cex = 6, cex.title = 1.6,
      legend.cex = 1.2,
      subsize = 1.2,
      mincolor = -50, maxcolor = 30,
      subline = -2.7, theunits = "°C",
      subtext = subtextberkgridall,
      titleline = -3, titlespacing = 4,
      titletext =
        paste0(thedate,
               ", Berkeley Earth Coverage")
    )
  }
}
oopt = ani.options(interval = .25,
                   ani.width = 960,
                   ani.height = 720)
saveGIF(
  animap(),
  movie.name = "berkeley earth coverage.gif",
  img.name = "Rplot",
  convert = "convert",
  cmd.fun = system,
  clean = TRUE
)
# end -------------------
This in turn depends on a couple other functions that I’ve written, which are “clrconsole” and “drawworld”.
“clrconsole” just clears the console. “drawworld” does the heavy lifting. Here it is:
# drawworld ---------------------
# Note: uses maps::map(), mapproject() from the mapproj package, and readPNG()
# from the png package, along with helper functions and objects not shown here
# (redo.map, arraymeansall, oneto100value, resetplot, latmatrix, latmatrixold,
# seamask, landmask).
drawworld=
function(
themap,
maxcolor = NA,
mincolor = NA,
titletext = "Dummy Two-line\nTitle",
roundto = 0,
cex.title = .9,
titlespacing = 2,
titleline = -3,
# cex.title = 1,
# titleline = -1.7,
# titlespacing = 1.6,
printavgs = TRUE,
isweighted = FALSE,
printunits = TRUE,
theunits = "W/m2",
thebox = NA,
printlegend = TRUE,
reversecolors = FALSE,
rotation = 0,
oceanonly = FALSE,
point.cex = 3,
whichglobe = "l",
linecol = "black",
thepch = ".",
colorlist = c("blue", "cyan", "green3", "yellow", "orange", "red3"),
uselist = F,
thebias = 1,
latlines = NA,
headlist = NA,
legend.cex = .85,
subline=-.3,
subsize=.9,
subtext="DATA: CERES EBAF 4.1 https://ceres.larc.nasa.gov/data/",
plotreset=T
) {
# oldmar=c(0,0,0,0)
par(font=2)
if (whichglobe != "l")
latmatrix = latmatrixold
if (length(dim(themap)) == 3)
themap = arraymeansall(themap)[, , 1]
rowcount = dim(themap)[1]
columncount = dim(themap)[2]
yint = 180 / rowcount
xint = 360 / columncount
legendlabel = paste0(" ", theunits)
if (printunits == FALSE)
legendlabel = " "
par(mai = c(.05, .05, .05, .05), cex = 1)
masklong = seq(-179.5, 179.5, 1)
masklat = seq(89.5, -89.5, -1)
globallongs = matrix(rep(masklong, 180), nrow = 180, byrow = TRUE)
globallats = matrix(rep(masklat, 360), nrow = 180)
if (is.na(maxcolor))
maxcolor = max(themap, na.rm = TRUE) # ————lowlow
if (is.na(mincolor))
mincolor = min(themap, na.rm = TRUE)
colormin = mincolor
colormax = maxcolor
# choose which trends
if (reversecolors)
colorlist = rev(colorlist)
# colorlist=c("blue4","blue","cyan","yellow","red","red4")
color.palette = colorRampPalette(colorlist, bias = thebias)
mycolors = color.palette(100)
# mycolors=rainbow(27)
cvalues = matrix(oneto100value(themap), nrow = 180)
colormat = matrix(mycolors[oneto100value(themap, mymax = colormax,
mymin = colormin)], nrow = 180)
# themap=testmap;rotation=180
avgdec = roundto + 1
# calc averages——–
if (!isweighted) {
# bylatmedians=apply(themap,1,median,na.rm=TRUE)
# gavg=round(weighted.mean(bylatmedians,latmatrix[,1],na.rm=TRUE),avgdec)
# nhavg=round(weighted.mean(bylatmedians[1:90],latmatrix[1:90,1],na.rm=TRUE),avgdec)
# shavg=round(weighted.mean(bylatmedians[91:180],latmatrix[91:180,1],na.rm=TRUE),avgdec)
# tropavg=round(weighted.mean(bylatmedians[67:114],latmatrix[67:114,1],na.rm=TRUE),avgdec)
gavg = round(weighted.mean(themap, latmatrix, na.rm = TRUE), avgdec)
gavg
nhavg = round(weighted.mean(themap[1:90, ], latmatrix[1:90, ], na.rm =
TRUE), avgdec)
shavg = round(weighted.mean(themap[91:180, ], latmatrix[91:180, ], na.rm =
TRUE), avgdec)
tropavg = round(weighted.mean(themap[68:113, ], latmatrix[68:113, ], na.rm =
TRUE), avgdec)
arcavg = round(weighted.mean(themap[1:24, ], latmatrix[1:24, ], na.rm =
TRUE), avgdec)
antavg = round(weighted.mean(themap[157:180, ], latmatrix[157:180, ], na.rm =
TRUE), avgdec)
# bylatland=apply(themap*seamaskarr[,,1],1,median,na.rm=TRUE)
# bylatlandweights=apply(seamaskarr[,,1],1,sum,na.rm=TRUE)*latmatrix[,1]
gavgland = round(weighted.mean(themap * seamask, latmatrix, na.rm =
TRUE),
avgdec)
# bylatsea=apply(themap*landmaskarr[,,1],1,median,na.rm=TRUE)
# bylatseaweights=apply(landmaskarr[,,1],1,sum,na.rm=TRUE)*latmatrix[,1]
gavgsea = round(weighted.mean(themap * landmask, latmatrix, na.rm =
TRUE),
avgdec)
} else {
gavg = round(mean(themap, na.rm = TRUE), avgdec)
nhavg = round(mean(themap[1:90, ], na.rm = TRUE), avgdec)
shavg = round(mean(themap[91:180, ], na.rm = TRUE), avgdec)
gavgland = round(mean(themap * seamask, na.rm = TRUE), avgdec)
gavgsea = round(mean(themap * landmask, na.rm = TRUE), avgdec)
tropavg = round(mean(themap[67:114, ], na.rm = TRUE), avgdec)
arcavg = round(mean(themap[1:23, ], na.rm = TRUE), avgdec)
antavg = round(mean(themap[168:180, ], na.rm = TRUE), avgdec)
}
# rotateit —————————————————-
# rotation=-45
therot = 180
if (rotation != 0) {
# if (rotation>0){
if (rotation > 0) {
themap = themap[, c((rotation + 1):360, 1:rotation)]
} else {
themap = themap[, c((360 + rotation + 1):360,
1:(360 + rotation))]
}
therot = rotation + 180
if (therot > 360)
therot = therot - 360
# } else{
#
# }
if (therot == 360)
therot = 0
}
newworld = redo.map("world", therot)
maps::map(newworld,
projection = 'mollweide',
interior = F,
col = linecol)
#,orient=c(90,therot,0),interior=F,wrap=T)# draws the map outlines
temp = mapproject(globallongs,
globallats,
"mollweide",
orientation = c(90, therot, 0)) #translates to map coordinates
lines(
temp$y ~ temp$x,
type = "p",
pch = thepch,
col = colormat,
cex = point.cex
)#10*latmatrix) #colors the map
maps::map(
newworld,
projection = 'mollweide',
interior = T,
col = linecol,
add = T
) #redraws the lines
if (length(thebox) > 1) {
temp = mapproject(thebox$longs,
thebox$lats,
"mollweide",
orient = c(90, therot, 0))
lines(temp$y ~ temp$x,
lwd = thebox$linewidth,
col = thebox$boxcolor)
}
mylats = seq(-90, 90, 5)
mylongs = rep(90, length(mylats))
temp = mapproject(mylongs, mylats, "mollweide",
orientation = c(90, therot, 0))
newlongs = temp$x * 2.005
lines(temp$y * 1 ~ newlongs, lwd = 2.5)
newlongs = newlongs * -0.9999
lines(temp$y * 1 ~ newlongs, lwd = 2.5)
temp = mapproject(c(-90, 90), c(0, 0), "mollweide", orientation = c(90, therot, 0))
newlongs = temp$x * 2
lines(temp$y ~ newlongs)
temp = mapproject(c(-90, 90),
c(66.55, 66.55),
"mollweide",
orientation = c(90, therot, 0))
newlongs = temp$x * 2
lines(temp$y ~ newlongs, lty = "dashed")
lines(-temp$y ~ newlongs, lty = "dashed")
temp = mapproject(c(-90, 90),
c(23.45, 23.45),
"mollweide",
orientation = c(90, therot, 0))
newlongs = temp$x * 2
lines(temp$y ~ newlongs, lty = "dashed")
lines(-temp$y ~ newlongs, lty = "dashed")
if (is.finite(latlines)) {
temp = mapproject(c(-90, 90),
c(latlines, latlines),
"mollweide",
orientation = c(90, therot, 0))
newlongs = temp$x * 2
lines(temp$y ~ newlongs, lty = "dotted")
lines(-temp$y ~ newlongs, lty = "dotted")
}
}
# make the legend =========
multiplier = 1
colorcount = 6
mintext = round(colormin, roundto)
if (is.na(colormin))
mintext = round(min(themap, na.rm = TRUE) * multiplier, roundto)
maxtext = round(colormax, roundto)
if (is.na(colormax))
maxtext = round(max(themap, na.rm = TRUE) * multiplier, roundto)
steptext = (maxtext - mintext) / (colorcount - 1)
midtext = round((mintext + maxtext) / 2, roundto)
#midtext=round(median(themap),roundto)
q1 = round((mintext + midtext) / 2, roundto)
q3 = round((midtext + maxtext) / 2, roundto)
p1 = round(mintext + steptext, roundto)
p2 = round(mintext + steptext * 2, roundto)
p3 = round(mintext + steptext * 3, roundto)
p4 = round(mintext + steptext * 4, roundto)
plusminus = ""
legendtext = c(
paste(plusminus, mintext, legendlabel, sep = ""),
paste(plusminus, p1, legendlabel, sep = ""),
paste(plusminus, p2, legendlabel, sep = ""),
paste(plusminus, p3, legendlabel, sep = ""),
paste(plusminus, p4, legendlabel, sep = ""),
paste(plusminus, maxtext, legendlabel, sep = "")
)
newcolors = mycolors[seq(1, 100, length.out = 6)]
if (printlegend) {
legend(
"bottom",
inset = .1,
legend = legendtext,
col = newcolors,
fill = newcolors,
horiz = TRUE,
cex = legend.cex
)
}
# title(sub=paste(“Global Average:”,round(weighted.mean(themap,latmatrix,na.rm=TRUE),1),”W/m2 per °C”),line=-3.9)
if (is.na(titleline))
if (printavgs)
titleline = -2.5
else
titleline = -4.4
title(main = titletext,
line = titleline,
cex.main = cex.title)
if (oceanonly == TRUE) {
gavgland = NaN
gavgsea = NaN
}
if (uselist == T) {
gavg = headlist$gavg
nhavg = headlist$nhavg
shavg = headlist$shavg
gavgland = headlist$gavgland
gavgsea = headlist$gavgsea
tropavg = headlist$tropavg
arcavg = headlist$arcavg
antavg = headlist$antavg
}
theaverages = paste(
"Avg Globe:",
gavg,
" NH:",
nhavg,
" SH:",
shavg,
" Trop:",
tropavg,
"\nArc:",
arcavg,
" Ant:",
antavg,
" Land:",
gavgland,
"Ocean:",
gavgsea,
theunits
)
if (printavgs) {
title(theaverages,
line = titleline - titlespacing,
cex.main = cex.title)
} else {
# mtext(text=theaverages,line=-16, cex=.6,adj=0.5)
}
par(mai = c(1.3, .25, 1, .25))
title(sub=subtext,cex.sub=subsize,line=subline,font=2)
if (plotreset) resetplot()
invisible(
data.frame(
gavg = gavg,
nhavg = nhavg,
shavg = shavg,
gavgland = gavgland,
gavgsea = gavgsea,
tropavg = tropavg,
arcavg = arcavg,
antavg = antavg
)
)
ima <- readPNG(paste0("~/Pictures/willis logo 3.png"))  # readPNG() is from the png package
(usr=par(“usr”))
(theleft=usr[1]+.49)
(thebot=usr[4]-.76)
(thetop=thebot+.30)
(theright=theleft+.30)
rasterImage(ima,theleft,thebot,theright,thetop)
}
# END OF DRAWWORLD =================================
The last bit, starting with “ima” (for image) just adds my personal logo, the two whales yin-yang symbol, to the picture.
Hope this helps …
w.
Send the files via email, please. barryanthony35@aol.com.
Get them yourself.
Barry, the data is freely available on the web. I’ve given you the functions in a copy-pasteable format, as well as the links to the data.
If you can’t figure it out from there, I’m sorry, but I’m not going to hold your hand. Get “the files” yourself.
w.
You appear to misunderstand my request. Please email a copy of the specific file(s) you used to create these charts. Any format will do, really.
This would be a helpful gesture in the name of transparency by allowing independent verification. barryanthony35@aol.com.
Willful obtuseness does not increase your credibility or status.
>>Willful obtuseness does not increase your credibility or status<< To the contrary. This is in no way being obtuse. I’m simply asking that Eschenbach provide the *exact* same materials/files his charting/spreadsheet software is using when creating these graphs. This is in the interest of transparency and independent verification.
He has you troll. It’s not his job to package it so you can send to one of your handlers.
Personal insults are unseemly, Charles. I’m simply asking Eschenbach to demonstrate transparency and, as a result, integrity. I would hope you’d understand and value those qualities.
I don’t have time to suffer fools. Especially those that want to make up their own terminology. You will not find greater transparency anywhere than what has just been displayed to you.
The fact that you are either too ignorant, too ideological, too willfully obtuse, or are simply acting stupid does not change that fact.
I have no problem calling you a stupid troll when that is the behavior you demonstrate.
Your questions are insincere. You choose to harass rather than engage.
After I’ve provided you both the data and the code, you dare accuse me of lacking transparency and integrity? Get stuffed! I’ve provided everything you need to replicate my findings.
I’ve been totally transparent on this side of the screen. The only thing transparent on your side of the screen is an x-ray of your cranium.
As the saying goes … “learn to code”.
w.
I already told you that I’m NOT using either charting or spreadsheet software. I’m using the computer language R. Either you are being obtuse, or you’re not paying attention.
And if you cannot take the data and the code I’ve provided and verify my results, don’t blame me. Instead, get a computer, learn the R language, and verify them for yourself.
w.
Barry, I have given you both the link to the data and the actual computer code used to create the charts. I have no idea what else besides the code and the data you think you need.
I have explained it to you, but I fear I can’t understand it for you. That you have to do yourself.
w.
Mr. E’s polite and complete reply above somehow falls short for Mr. Anthony. Mr. Anthony is indeed fortunate to be at a site where there are so few bad actors that nobody will send a “file” to the email he published. No matter how well deserved.
I showed YOU the links to the data he used in the other thread, even posted them for YOU twice, yet you go on and on pretending you never saw them.
Now I see that in this post he posted this:
“In this case, I got to thinking about the Berkeley Earth global temperature dataset. So I got their gridded file “Monthly Land + Ocean Average Temperature with Air Temperatures at Sea Ice” covering 1850 to 2021 and took a look at it. I started at the first month’s data, January 1850 … and my jaw hit the floor.”
Each of the charts has the information showing what he used to generate it; how hard can that be to understand?
Don’t forget about this.
HADCRUT DATAGATE
HadCrut4 Global Temperature, 1850 – 2018.
Absurdity everywhere in Hadley Met Centre data
Scandal: First ever audit of global temperature data finds freezing tropical islands, boiling towns, boats on land
CRUTEM — HADCRUT
Climategate’s “Harry Read Me” File is a Must Read!
As another pundit said: this isn’t just the smoking gun pointing to the fraud of global warming, it’s a mushroom cloud!
http://bit.ly/2KAglcK
https://www.climatedepot.com/2018/10/07/scandal-first-ever-audit-of-global-temperature-data-finds-freezing-tropical-islands-boiling-towns-boats-on-land/
“Climategate” was nothing but a contrived controversy by fossil fuel shills desperate to distract from the reality of AGW at all costs, including criminal activity. Even those who took part in it are now not only apologizing publicly, but admitting that their own research agrees with the science. https://www-bbc-com.cdn.ampproject.org/c/s/www.bbc.com/news/uk-england-norfolk-59176497.amp?fbclid=IwAR3i5a3xyAtnW4Kd1PjthFqk1OcSGv1gngWusUMPw-RYNuxh9jL2QLirvFU
Mr. Anthony: “those” and “their” are plural, but the propaganda article you cite only has one “sceptic” saying he was sorry. I warrant there is not another single soul on earth who regrets any role in exposing phil jones and the team. But the funniest part is this- you just called Steve Mosher a fossil fuel shill!!!
Should be fun kicking you down ’til you choose some other name.
Barry, that’s so far from being true that it is totally laughable. How do I know? Because I’m the man who made the first FOIA request to Phil Jones that eventually ended up in Climategate, and I’m mentioned in the Climategate emails. Those emails prove that Phil flat-out lied to my face.
The true story of Climategate is here, and it had nothing to do with “fossil fuel shills”. That’s a damn lie.
w.
Willis,
“And a “SWAG”, on the other hand?
That’s a far superior creature, a “Scientific Wild Ass Guess“ … like say the various estimates of global average temperatures in the 1800s.”
“SWAG” may be superior to “WAG”, but what tops them both is “SSWAG”. “Settled Science Wild Ass Guess”. Why? The “Settled” part is always shifting as the past “Settled Science” turns out to be wrong.
Colleagues and I have studied the public Australian temperature data from BOM since (for me) 1992 and we know a thing or two about it. I supplied Steve Mosher with early data that went into BEST, for example, and discussed problems with urban versus rural stations.
Rutherglen BOM station has been a poster child. There are many versions of what happened there, many in conflict. Here is a deep study:
https://www.bomwatch.com.au/bureau-of-meteorology/rutherglen-victoria/
In short, nothing useful is known today about screen shifts back then.
Willis,
Re the circles on the global map, for Australia, the historic time slices reflect data from a succession of new sites starting off. Here is a table of the earliest 25 stations to open, from 1856 to 1889 incl.
http://www.geoffstuff.com/bomopen.xlsx
We are always happy to provide neutral, informed links to data that raises questions that might not be answered on official sites. There is plenty of official myth. Geoff S
Thanks, Geoff.
The video shows the situation from 1850 to 1870. The curious part to me about Australia is that you would expect the coverage to get better and better over that period, as new stations are opened …
… but it doesn’t. WUWT?
w.
It seems that the message is that the uncertainty in the historical meteorological data is much larger than officially recognized.
Part of the problem is that sparsity can create a spurious warming trend as stations are added.
Example.
Station 1 for five years ->
15, 15, 15, 15,15
Station 2 added in year six ->
20, 20, 20, 20, 20
What do the averages show? ->
15, 15, 15, 15, 15, 18, 18, 18, 18, 18
Is this a warming trend?
How about anomalies?
-3, -3, -3, -3, -3, +2, +2, +2, +2, +2
Is this a warming trend?
Think about what lack of stationarity causes and how Simpson’s Paradox may apply.
Has the addition of tropical stations been offset by an equal number of stations in colder regions? Does this add an unexpected bias to the GAT?
Sorry, not true. The anomalies are taken about the station average, not the global average. So the anomalies in both cases are 0, 0, 0, 0, 0, and there is no trend.
w.
You are correct and I didn’t use appropriate values.
However the problem still occurs. People are averaging station anomalies with different unseen variances and those different variances can cause spurious trends as stations come and go. It is one reason that “long” records are desired and entice the use of inappropriate “corrections” to measured data.
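To make the exchange above concrete, here is a tiny R sketch with hypothetical station values. Averaging the raw temperatures produces a step the year the warmer station appears, while anomalies taken about each station’s own average stay flat, which is Willis’s point.

# Hypothetical ten-year record: station 1 reports all ten years,
# station 2 (a warmer site) only starts reporting in year six
st1 <- rep(15, 10)
st2 <- c(rep(NA, 5), rep(20, 5))

# Averaging raw temperatures: a spurious step appears when station 2 arrives
raw_avg <- rowMeans(cbind(st1, st2), na.rm = TRUE)

# Anomalies about each station's OWN average: flat, no trend
a1 <- st1 - mean(st1, na.rm = TRUE)
a2 <- st2 - mean(st2, na.rm = TRUE)
anom_avg <- rowMeans(cbind(a1, a2), na.rm = TRUE)

data.frame(year = 1:10, raw_average = raw_avg, anomaly_average = anom_avg)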
I was in a program management review for a missile program back in the late 1980s, being briefed by a contractor, the Program Manager for one of the missile stages. He cited a number (for what, I don’t recall), and the customer immediately pounced on him, demanding to know the accuracy of said number.
He calmly replied, “It’s an IPIDOOMA.” We on the customer side were clearly bewildered. Noting this, the PM went on to say “You’ve heard of a WAG and a SWAG – a Wild Ass Guess and a Scientific Wild Ass Guess. Well, an IPIDOOMA is a ‘I Pulled It Directly Out Of My Ass.'”
It would have been funnier if his part of the program hadn’t been a complete catastrophe – not unlike climate “science.”
Another great acronym. Thanks for the story. Some day the whole charade is going to collapse. Maybe not this generation but it’s inevitable.
Only if Marxism does not win out in end
That’s ShittyWAG.
Swag?
I thought I would be purchasing “Watts Up With That” T-shirts or a coffee mug.
SWAG is Super Wild Ass Guess. That’s the term when the boss comes in and asks for a budget estimate to do a task with just 5 minutes of thought. It’s used all the time at Boeing engineering meetings.
No science involved. Just two orders of magnitude more uncertainty than a WAG.
Been there, suffered that.
BEST is nothing of the sort.
So much for years of trollops insisting on the glories of BEST.
As a psychometric/biometric statistician, the first thing I look for is the reliability of any measure claimed to represent a construct, because reliability limits what you can understand by the numbers. Reliability is related to the correlation between alternative measures of the same construct. Reliable measures generally have correlations of .9 and above. From memory, the correlation between temperature measures on the same days 1000 km apart was tested and was described glowingly as 0.6. Which means only 36% of the variance is shared in common, which means 64% of what is measured is error (incorporating measurement and random error and everything else, including all local conditions, heat island effects, height above sea level, weather, etc).
God knows what the error (1 – reliability [aka correlation-squared]) is, given the wide coverages described here, of daily sea and land measures covering the whole globe (but wait, we don’t even measure sea temperatures daily!)
In other words terrestrial temperature measures (even before they are “remodelled” by scientists with an agenda) are almost certainly highly error prone. It is notable that it is very hard to obtain details of correlations between alternative daily measures of even nearby locations ( eg coastal vs inland, vs sea, vs satellite based), but these seem generally low (less than .5).
So to claim to be able to then remodel these unreliable measures (using secret sauce algorithms) then come up with a reliable “global” measure to 2 decimal places suggests a discipline that is kidding itself.
And as for claiming we know the “global” temperature in the 1850s, when we had virtually no data for the Southern Hemisphere at all (much less systematic data), and when only a tiny proportion of the northern hemisphere was systematically measured – that is bonkers.
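The arithmetic behind that reliability argument is just the squared correlation: two measures correlating at 0.6 share only 36% of their variance, leaving 64% as error plus local effects.

# Shared vs unshared variance for two measures correlating at r
r <- 0.6
c(shared = r^2, not_shared = 1 - r^2)   # 0.36 shared, 0.64 error plus local effects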
I appreciate your comments. They follow many of my own thoughts based on measurement assessment.
Indeed, there may be no frost northwest of Santa Rosa.
It will be interesting to see when it dawns on people in the mid-to-high latitudes how thin the troposphere becomes during the winter season, and how much the circulation is affected by the stratospheric polar vortex and the temperature in the stratosphere.


Thank you for this eye-opening analysis.
Here’s the illustration for “wag” in my climate glossary:
Willis,
OT but when I read this paper, I thought it would be of interest to you.
CRE of multi-layer clouds over the Pacific.
https://geoscienceletters.springeropen.com/articles/10.1186/s40562-020-00156-6
And it is in opposition to what Corti published in 2009:
https://www.researchgate.net/publication/26640018_A_simple_model_for_cloud_radiative_forcing
Thanks, D., I’ll take a look.
w.
The GHG hypothesis is wrong if it says adding CO2 must raise average temperature. That is a simple matter to prove false mathematically.
There is a 4th power relation between radiation and temperature but a linear relation between average temperature and actual temperature.
Thus by changing the distribution of temperature while decreasing the average temperature you can actually increase the radiation!!!! Completely opposite to GHG theory.
This follows the Hölder inequality for sums, for those seeking a formal proof.
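A quick numerical illustration of the commenter’s point, using made-up temperatures and the Stefan-Boltzmann law: because emission goes as the fourth power of temperature, a more spread-out distribution can radiate more even though its average temperature is lower.

# Stefan-Boltzmann: emitted flux = sigma * T^4 (temperatures in kelvin)
sigma   <- 5.67e-8
uniform <- c(288, 288)   # uniform 288 K everywhere
skewed  <- c(250, 320)   # half cold, half hot; average only 285 K

c(mean_T_uniform = mean(uniform), mean_T_skewed = mean(skewed))
# 288 vs 285 K: the skewed case has the LOWER average temperature ...

c(flux_uniform = mean(sigma * uniform^4), flux_skewed = mean(sigma * skewed^4))
# ~390 vs ~408 W/m2: ... yet it radiates more, because T^4 is convex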
Frost persists for several days in New Mexico and Texas.