From NOT A LOT OF PEOPLE KNOW THAT
By Paul Homewood
Hottest evah!

According to NOAA, there was record warmth in much of the world last month, including Greenland and Africa:

The reality is somewhat more mundane, as NOAA has no temperature data at all in much of Greenland and Africa.

Much of the world was also much colder than usual.
NOAA also say they know what the global temperature was in July 1881:

Which is quite miraculous given that they only had a handful of stations in North America and Europe:

GHCN Station Network
https://journals.ametsoc.org/view/journals/atot/29/7/jtech-d-11-00103_1.xml
Still, at least they did not site their weather stations next to airport runways and electricity sub-stations in those days!
Just another case of POOMA—pulled out of my @ss.
Little do you know that NASA can see all the brown stuff boiling the oceans-
Nasa captures giant swirling gases in the sky | Watch (msn.com)
That brown would be the fault of all the sea cows….? They must poop hydrates
Sounds like the perfect way to describe “climate models” and “data adjustment algorithms.”
I’m stealing it!
You are forgetting homogenization. What a joke, as if a temperature measurement miles away can tell you something about what is happening locally. I have felt temperature differences of 40°F less than a hundred miles apart on the Great Plains of the USA. Throw in mountains and valleys and you have no idea, from one weather station to the next, what is going on other than the measurements that are taken. Changing such measurements is, in my opinion, fraud; NOAA and NASA do it all the time, and it tends to be hotter every time.
I get a 10-20 F delta on my 40 mile commute.
I’ve seen a 27F difference just 13 miles apart.
It turns out that they do. For example, KSTL and KCOU are about 100 miles apart and yet for the monthly average temperature anomaly R^2 = 0.90.
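A minimal sketch of the calculation being described, using made-up numbers rather than the actual KSTL/KCOU records: two stations roughly 100 miles apart that share a regional anomaly signal, and differ only by independent local noise, end up with a high R² between their monthly anomalies.

```python
# Illustration with synthetic data, not the actual KSTL/KCOU records:
# two nearby stations share a regional anomaly signal (large-scale
# weather) and differ only by local noise, so their monthly anomalies
# correlate strongly even though absolute temperatures may differ.
import random

random.seed(42)

months = 240  # 20 years of monthly anomalies
regional = [random.gauss(0.0, 1.0) for _ in range(months)]  # shared signal (C)
station_a = [r + random.gauss(0.0, 0.3) for r in regional]  # plus local noise
station_b = [r + random.gauss(0.0, 0.3) for r in regional]

# Pearson correlation computed by hand (stdlib only)
mean_a = sum(station_a) / months
mean_b = sum(station_b) / months
cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(station_a, station_b))
var_a = sum((a - mean_a) ** 2 for a in station_a)
var_b = sum((b - mean_b) ** 2 for b in station_b)
r = cov / (var_a * var_b) ** 0.5
r2 = r * r

print(f"R^2 = {r2:.2f}")  # high, because the shared regional signal dominates
```

With the noise level chosen here the R² comes out in the same ballpark as the 0.90 cited for the two real stations; the point is only that anomaly correlation over ~100 miles is unsurprising when the regional signal dominates local noise.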
Another Fake Data fraudster, playing games with (i.e. abusing) statistics.
Deserts are the best… with no water vapour, the CO2 does not stand a chance of keeping things warm… Hot and cold deserts are the same!!
Hottest Evah Pulled Out Of Thin Air!
The Guardian has a spin for that…
“How climate crisis made this UK summer feel like a letdown
There has been a widespread feeling that this summer was a big letdown, unusually cool and even cold at times. But was it really so bad?
…
The difference now is that extreme heat is taken for granted”
https://www.theguardian.com/news/article/2024/aug/22/how-climate-crisis-made-this-uk-summer-feel-like-a-letdown-weatherwatch
You have to laugh. On the one hand, while we know it has been a total letdown, we are still required to believe it’s been the hottest evah. Anything else is square peg, round hole; it doesn’t fit the narrative.
So it’s all down to your erroneous perceptions (senses and observations). And nothing to do with the highly variable British weather. Whatsoever.
What is their definition, or anyone’s definition, of “extreme”. My wife and I were visiting Nottingham last summer during a “heatwave”. Maybe for England, but by our standards it was a lovely summer’s day.
It isn’t what they would have you ‘believe’.
In the very early 2000s, my USAF Reserve unit (from San Antonio, TX) was doing our two-week summer training in England at Lakenheath AB. Mid July. We were warned that outdoor work could be dangerous, as the temperature could reach 80F.
1984, but with 2024 ignorance… do not believe your lying eyes and other four senses; only Big Brother knows the truth.
Out of interest here’s all the places that UAH shows a record July temperature in 2024.
Nowhere near the chilly UK.
But then, we knew that.
Indeed. Met Office has it about 0.7°C below the 1991–2020 average. UAH shows it in the −0.5 to +0.5°C band, but with a colder spot to the north west.
Yep, that El Niño warming is still hanging in, isn’t it.
No evidence of any human causation, though, is there!
Looks like a lot more places that didn’t have a “record July temperature.”
And what does that even mean?! An “average,” I suppose?
So, summer? Got it.
Yes, we know there has been a totally natural El Niño event.
Not even you are dumb enough to call it AGW… or are you!
Here is the Berkeley data for records in July.
https://berkeleyearth.org/july-2024-temperature-update/
NOAA show rather fewer records set than either UAH or BEST.
Most of the world is covered in water, not land surface. But this aside, the map is showing stations with 10 years of data within the 30-year window. The map does not show many stations that were actively recording in 1881.
And yet we’re expected to believe the global average temperature has increased catastrophically over the last 150 years despite only having a fraction of the data.
We are using point-level measurements to inform us about changes to the whole-earth surface. We don’t need to have every inch blanketed in thermometers to obtain robust estimates. In fact, it’s pretty easy to show empirically that you only need about 60-90 stations to get a good estimate:
https://andthentheresphysics.wordpress.com/2018/08/18/you-only-need-about-60-surface-stations/
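The linked claim can be illustrated with a purely synthetic example (no real data): anomalies vary smoothly over large spatial scales, and the mean of a smooth field on a sphere can be estimated well from a few dozen well-spread sample points.

```python
# Sketch of why relatively few well-spread stations can track a global
# *anomaly*: anomalies are spatially smooth (large correlation length),
# so a sparse sample of a smooth field estimates its mean closely.
# The field below is made up for illustration.
import math
import random

random.seed(1)

def anomaly(lat_deg, lon_deg):
    """Smooth, large-scale synthetic anomaly pattern (degrees C)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return 0.8 * math.sin(lat) * math.cos(lon) + 0.4 * math.cos(2 * lon)

# "True" area-weighted global mean on a fine 1-degree grid
num, den = 0.0, 0.0
for i in range(180):
    lat = -89.5 + i
    w = math.cos(math.radians(lat))  # cells shrink toward the poles
    for j in range(360):
        lon = -179.5 + j
        num += w * anomaly(lat, lon)
        den += w
true_mean = num / den

# Estimate from only 60 random "stations", sampled uniformly on the sphere
n = 60
est = 0.0
for _ in range(n):
    lat = math.degrees(math.asin(random.uniform(-1, 1)))
    lon = random.uniform(-180, 180)
    est += anomaly(lat, lon)
est /= n

print(f"true = {true_mean:.3f} C, 60-station estimate = {est:.3f} C")
```

The estimate lands within a few hundredths of a degree of the true mean here; the real-world question is whether actual anomaly fields are smooth enough, which is what the linked analysis addresses empirically.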
There is no global temperature dataset extending back 150 years
There is a dataset comprised of temperature measurements from around the globe that does indeed extend back for 150+ years. From this dataset, scientists can estimate global temperature change.
Utter bullshit
What kind of value do you think your comments are bringing to the discussion?
Redge is right, you are a bullshitter.
Far more value than any of your LIES and MAL-INFORMATION
They can estimate, but how close is their estimate to reality? I would suggest that the error bars would be much larger than you would like to admit.
Berkeley Earth goes back 170+ years.
Berkeley Earth can claim anything they like.
Can they show 170+ years of empirical measurements covering the earth that doesn’t have huge gaps in the data?
Yes. Their grids are available here. There are no gaps in their grids.
And Beserkley Earth IGNORES instrumental measurement uncertainty and inserts dry-labbed Fake Data.
Par for the trendology ruler monkeys.
If there are no gaps in their grids for 170 years then they are making up most of their data. You can call it estimates and adjustments, but it is made up data.
They are using an interpolation method that allows for grid cells without stations in them to use information from nearby stations in adjacent cells.
In other words, they are using Fake Data that does not exist.
Denying that kriging is a valid technique is an odd form of contrarianism.
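Kriging proper requires fitting a variogram, but the underlying principle at issue — that nearby stations carry information about unobserved locations — can be sketched with the simpler inverse-distance weighting, using hypothetical station values:

```python
# Inverse-distance weighting (IDW): a simpler stand-in for kriging that
# illustrates the same principle of spatial interpolation. Station
# coordinates and anomaly values below are made up for illustration.
import math

stations = [  # (lat, lon, anomaly in C) -- hypothetical values
    (40.0, -95.0, 1.2),
    (42.0, -93.0, 1.0),
    (38.0, -97.0, 1.4),
]

def idw(lat, lon, power=2.0):
    """Estimate the anomaly at an unobserved point as a distance-weighted
    average of nearby station values (closer stations weigh more)."""
    num = den = 0.0
    for slat, slon, val in stations:
        d = math.hypot(lat - slat, lon - slon)  # crude planar distance
        if d < 1e-9:
            return val  # exactly at a station: return its value
        w = 1.0 / d ** power
        num += w * val
        den += w
    return num / den

est = idw(40.5, -95.5)
print(f"estimate at (40.5, -95.5): {est:.2f} C")
```

The estimate is necessarily a weighted average of the surrounding station values, which is why interpolated cells cannot invent anomalies outside the range of the observations feeding them; kriging additionally weights by the fitted spatial covariance and supplies an uncertainty for each estimate.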
Why do you defend these fraudulent data manipulations?
That means making up data. Your textbook statistics using simple examples does not mean anything in the real world. I don’t care how many fools wrote papers justifying their contentions.
Again, FAKED from data from a tiny part of the world’s surface.
Show us where their stations were in the 1850s, bet you can’t.
The fact you think anyone can FABRICATE a global temperature when there is data from such a tiny number of stations, shows how mathematically INEPT and how GULLIBLE some of these AGW clowns really are.
So what?
170 years ago the number of global weatherstations was very limited.
Most stations were located in Europe, especially in the UK, France, and Germany. Some stations existed in other regions such as North America, parts of Asia, and a few in coastal areas of South America and Africa.
Ocean-based measurements were few and far between.
How is that a meaningful measure of global temperature?
If you believe a few weather stations in Australia represent the whole continent, you’re delusional.
BEST provides uncertainty ranges, which are quite large in the earliest part of the record, owing precisely to the lack of coverage you cite:
No one is trying to argue that there is perfect coverage for all periods.
If you were honest (which you are not), the uncertainty intervals for those old thermometers would be at least ±3°C.
But this would make your hockey stick look pretty stooopid.
How is it not? That’s a serious question. When you respond, avoid argument by incredulity, because that isn’t convincing. Instead use evidence. And by evidence I mean I want you to calculate a value representing the “meaningfulness” of the global average temperature given the varying sets of available observations at different times. Define your threshold for “meaningful” objectively and then report which years qualified as “meaningful” and which ones didn’t. We’ll then review what you come up with together and see if your methodology makes sense universally.
I do. And I can back up my position with data and a computed R^2 value like I did above. If you disagree then compute average temperature for Australia using whatever method you feel is “right” and then compare that with a method that only uses a handful of stations and demonstrate that there is no correlation (R^2 = 0).
It’s true then, you are delusional.
We can agree Australia is a big continent. Perth to Sydney is 4000km.
Let’s imagine a hundred years ago someone records the temperature in Sydney at 20C, while in Perth the temperature is recorded at 24C.
50 years later the temperature in Darwin is recorded at 36C, so the average temperature jumps to 26.67C.
If I’m understanding you correctly, you think the average temperature in Australia is 22C in 1924 based on two data points 4000 km apart and in 1974 the average temperature in the Australian continent is almost 5C hotter.
You can’t infill temperature data that doesn’t exist.
A hundred years ago, temperature data for vast areas of the earth were simply not available.
“We are using point-level measurements”
Are those similar to ZPMs? Zero Point Modules are Sci-Fi stuff.
“Global” temperature stations didn’t exist in 1881, on land or water.
No way NOAA can know. They made it up.
Because a computer program came up with some numbers are not actual measurements.
It is not science fiction, it is basic geospatial analysis. In many types of geospatial analysis, it is standard practice to take numerous point-level measurements and use them to estimate or model the continuous surface field. Care has to be taken to ensure that you are not over-representing the geographic area with the highest measurement density, or that the measurements do not contain signals inherent to the composition of the measurement network, but these are problems that scientists have worked on for many years, and very good solutions exist for global surface temperature change.
This is an argument from your personal ignorance, it is not compelling.
RUBBISH.
The data is way to sparse to create anything remotely real.
It is all just scientific FAKERY.
To think otherwise shows your complete mathematical ignorance and gullibility.
It’s “sciency” fakery. There is no real science involved.
Just so we can test how robust these models actually are, can we run them using data only from stations that existed in 1881?
Might help to minimise the scepticism if we could see the results of that.
In the real world you can not make up data between measured points. Are you in 3rd grade or did you just not learn anything in school?
Interpolation is a basic element of geospatial analysis and there is abundant literature on the subject.
Just no abundance of reporting stations in 1881.
It’s called interpolation. And like AlanJ said, it is standard practice in all disciplines of science. The thing a lot of people don’t understand about it is that interpolation is performed on a domain no matter what. It’s just a matter of which strategy you use. Do you use a more advanced local interpolation strategy like Berkeley Earth, or do you just assume the unfilled grid cells behave like the average of the filled grid cells, as UAH does? I think I can easily convince you that assuming the unfilled grid cells behave like the filled grid cells is a subpar strategy.
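The contrast between the two infill strategies can be sketched on a synthetic one-dimensional field (the field, station spacing, and values are all made up for illustration):

```python
# Two infill strategies for unobserved cells on a smooth 1-D "field":
#   (a) fill every gap with the mean of the observed cells
#       (the "unfilled cells behave like the average" assumption), vs.
#   (b) fill each gap from the nearest observed cell (local infill).
# Synthetic data; local infill tracks a smooth field far better.
import math

truth = [math.sin(x / 10.0) for x in range(100)]      # smooth target field
observed = {x: truth[x] for x in range(0, 100, 10)}   # sparse "stations"

mean_obs = sum(observed.values()) / len(observed)

def nearest_fill(x):
    """Value of the nearest observed cell."""
    return observed[min(observed, key=lambda s: abs(s - x))]

err_mean = sum((truth[x] - mean_obs) ** 2 for x in range(100)) / 100
err_local = sum((truth[x] - nearest_fill(x)) ** 2 for x in range(100)) / 100

print(f"MSE filling with global mean : {err_mean:.4f}")
print(f"MSE filling from nearest cell: {err_local:.4f}")
```

On a smooth field the local strategy wins by a wide margin, which is the intuition behind preferring local interpolation; whether real anomaly fields are smooth enough at a given station density is the empirical question.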
No, its called Fake Data fraud.
Oh, I agree it’s a subpar strategy. Interpolation is great for some purposes but I don’t think it works well when 99.99% of your points have no data.
Modellers jargon for we make shit up to demonstrate our fantasies
Do you want to buy a bridge?
Who are “we”, data fraudster?
Humans.
A self-selected few “humans” who live is charlatans and fraudsters.
And their low-IQ gullible followers.
“We are using point-level measurements to inform us about changes to the whole-earth surface.”
The point measurements are intensive properties of those points, and should not be averaged with measurements from other points. Yes, you CAN average them, but the result is utterly meaningless.
Scientists average intensive properties, with meaningful and useful applications, all of the time. Remember that the mean value theorem for integrals tells us that you can compute ΔU using the 1LOT equation ΔU = Q − W and the heat capacity equation ΔT = ΔU/(m·c) for a body using average values for the change in temperature ΔT and specific heat capacity c, instead of doing the full integration. That is an undeniably meaningful and useful application of an average temperature. And generally speaking, the mean value theorem for integrals is one of the most powerful theorems in all of mathematics and can be used to transform any average of an intensive property into an extensive property.
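The claim can be checked numerically with made-up values: for equal mass elements with uniform specific heat, summing ΔU element by element gives exactly the same answer as computing it from the average ΔT.

```python
# Numeric check with illustrative numbers: for a body of equal-mass
# elements with uniform specific heat c, the element-by-element sum of
# m_i * c * dT_i equals (total mass) * c * dT_avg exactly, so an average
# of an intensive property (temperature change) yields a meaningful
# extensive one (internal energy change, in joules).
c = 4184.0                    # J/(kg K), water-like specific heat
dT = [0.5, 1.5, -0.2, 2.0]    # per-element temperature changes (K)
m_elem = 2.0                  # kg per element (equal masses)

dU_sum = sum(m_elem * c * t for t in dT)       # element-by-element
dT_avg = sum(dT) / len(dT)
dU_avg = (m_elem * len(dT)) * c * dT_avg       # via the average dT

print(dU_sum, dU_avg)  # the two agree
```

With unequal masses or temperature-dependent c the simple average would have to become a mass- or heat-capacity-weighted one, which is the integral form the comment alludes to.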
The usual pseudoscientific word salad to paper over your fraudulent use of Fake Data.
Good job, ruler monkey.
Copy paste of irrelevant gibberish.
You haven’t got a clue what any of what you just posted is about.
“The Mean Value Theorem for Integrals states that a continuous function on a closed interval takes on its average value at some point in that interval. The theorem guarantees that if f(x) is continuous, a point c exists in an interval [a,b] such that the value of the function at c is equal to the average value of f(x) over [a,b].”
What are you talking about? Different points on the earth are not a single body and you are not measuring heat capacity. The mean value theorem for integrals has nothing to do with averaging temperatures from different locations. What is the continuous function that you are measuring? You like to throw around mathematical terms with no understanding of their application.
I’m talking about ΔU of a body with a non-homogeneous temperature. It can be computed as ΔU = ΔTavg · m · c. This is obviously a meaningful and useful application of an average of an intensive property.
The GAT is a meaningless number that cannot represent “the climate”.
The point level measurements are representative of bodies of air, which are not themselves single points. And the mean is derived by taking the area weighted average of the individual regions, which is clearly meaningful (otherwise we can’t say that the mid-latitudes are warmer places than the poles).
Scientists are also not averaging absolute temperature, but the deviation in temperature from the local climatology. In effect asking, “how much warmer or cooler is this place than it typically is?” If the place gets warmer or cooler over time, and most of the places on the earth also do, clearly this warming trend is meaningful (otherwise we couldn’t say that the planet is cooler during ice ages than during interglacials).
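The area-weighted averaging mentioned above can be sketched with a toy grid of hypothetical anomaly values; each latitude band is weighted by the cosine of its latitude, because grid cells of fixed angular size cover less area toward the poles.

```python
# Minimal sketch of an area-weighted mean of gridded anomalies.
# Latitude bands are weighted by cos(latitude), since equal-angle grid
# cells shrink toward the poles. Anomaly values are made up.
import math

# (latitude in degrees, anomaly in C) for a toy 4-band "grid"
bands = [(75.0, 2.0), (25.0, 0.5), (-25.0, 0.4), (-75.0, 1.5)]

num = sum(math.cos(math.radians(lat)) * a for lat, a in bands)
den = sum(math.cos(math.radians(lat)) for lat, _ in bands)
weighted_mean = num / den

naive_mean = sum(a for _, a in bands) / len(bands)

print(f"area-weighted: {weighted_mean:.3f} C, unweighted: {naive_mean:.3f} C")
```

Here the unweighted mean overstates the global figure because the (small-area) polar bands happen to carry the largest anomalies; the cosine weighting corrects exactly that kind of over-representation.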
The fact that you condone the wholesale FABRICATION of data tells us all we need to know about your scientific honesty, credibility and integrity …
You have negative values for all.
You’re forgetting that daily air temperature readings are influenced by more than just the immediate atmospheric conditions. Averaging these samples implicitly averages the effects of all factors that influenced those daily temperatures.
An example is the heat capacity of the soil at the field site, which, along with changes in soil moisture due to droughts or heavy rains, impacts air temperature readings. How much of the observed air temperature trends might be obscured by these factors?
A long-term change in these factors that would influence long term temperature trends is de facto change in the regional climate, which is the thing we are trying to measure.
Air temperature averages are not climate.
These changes can be a symptom of broader climate shifts, but they can also happen independently of any change in climate.
Imagine in a forested microsite, an infestation of bark beetles kills a large number of trees, leading to a significant reduction in canopy cover. With fewer trees, the ground is exposed to more direct sunlight, causing the soil to warm up during the day. This loss of tree cover also reduces the cooling effect of evapotranspiration, as fewer trees release moisture into the air.
These changes result in higher local air temperatures, independently of broader climate trends.
If this time series is included in the regional grid anomaly, this localized phenomenon will also be propagated, preventing further reduction of error.
Land cover changes around a station can certainly introduce signals that are not representative of the broader region, and these signals are identified and removed as part of the analysis via sophisticated homogenization algorithms, such as that found in Menne and Williams, 2009:
https://journals.ametsoc.org/view/journals/clim/22/7/2008jcli2263.1.xml
So you’ve identified an important consideration, but not one that is unknown to the scientists who have spent decades solving these problems and continually refining the solutions.
“signals” — /snort/
That is still an example of a change we want represented in a global average because it is a real effect.
The UHI effect is an another example of a change that is real and should be included in a global average.
What you don’t want to include in the global average temperature are effects that aren’t real. For example, the UHI bias (not be confused with the UHI effect) is not a real effect. The UHI bias is when you over/under sample urban/rural spaces whereas the UHI effect is the increase in temperature due to land use changes.
You are fooling no one except yourself.
Not sure how a reliable average can be calculated when the samples you are using have different physical characteristics…
There are currently 8.2 billion people each with different physical characteristics. We can still provide reliable averages of height, weight, etc. of the population. The point…having different physical characteristics does not prevent the calculation of an average. In fact, having different physical characteristics makes an average more meaningful and useful than it would be otherwise. Think about it…why compute an average at all if every element that went into the calculation were exactly the same?
And it is all just as meaningless as the GAT—you trendology ruler monkeys never learned the lesson of the average USAF pilot.
“In fact, having different physical characteristics makes an average more meaningful and useful than it would be otherwise.”
If you’re researching the long term impact of smoking cigarettes on the respiratory system, are you suggesting that instead of dividing your test subjects into cohorts based on their smoking patterns (e.g., light, moderate, heavy), it would be more useful to calculate a single average for the entire sample population?
No. I’m just saying that if all smokers smoked exactly the same number of cigarettes, then calculating the average number of cigarettes smoked wouldn’t add any more meaning or usefulness than was already there. Or if all smokers weighed exactly the same, then calculating the average weight wouldn’t add any more meaning or usefulness than was already there.
Don’t argue against what isn’t being said. I’m not saying an average is the be-all-end-all metric. I’m not saying dividing a population into cohorts isn’t useful. I’m just saying that if there aren’t any differences in the physical characteristics of the population, then calculating an average is pointless.
BTW…the fact that you mention a scenario in which smokers can be divided into cohorts implies that the physical characteristics of each smoker are different otherwise it wouldn’t be possible to divide them into different cohorts in the first place.
Sorry if I wasn’t clear enough on this:
You’re right that in real-world scenarios, measured samples are rarely identical. Competent clinical researchers try to control for confounding variables to minimize error by categorizing test subjects based on factors like age, weight, gender, and ethnicity.
But, Hansen et al. seems to overlook that air temperature samples can vary significantly from one another, particularly when they use the global average temperature index to infer equilibrium climate sensitivity.
Anyone who hasn’t concluded by now that the probity and provenance of temps “data” are abysmally unfit for scientific research purposes has either been living under a rock or is obstinately, deliberately denying the facts.
As for “averaging” the indicative weather readings from disparate points from the hundreds of unique climatic localities all around the world –
“What Madness is this?”
-Thomas Jefferson in a letter to James Madison, 1801
Well, they had a tree ring to rule them all …
Even fewer ocean measurements.
Only a complete FAKER and LIAR thinks they can produce a “global” temperature map from such sparse data.
But they can tell global temperature back to 1881 to 7 decimal places. They wouldn’t lie to us would they? 😉
HadCRUT.5.0.2.0.analysis.summary_series.global.monthly.csv
Uncertainty for temperature indexes is determined probabilistically, thus the number of digits in the reported values shouldn’t be taken to represent the number of significant digits, and one should consider the reported 95% (or 97.5%) uncertainty ranges, which indeed grow significantly as one goes further back in time:
(I took the HadCRUT series and uncertainty ranges from the file you cited in your comment).
What is the instrumental uncertainty of those thermometers in 1860? I don’t see it in your nice hockey stick.
The usual bullshit from the trendology bullshitter.
You don’t understand Lesson One about measurement uncertainty, but this doesn’t stop you from regurgitating the climate pseudoscience nonsense about “error”.
The uncertainty range of individual thermometer measurements will not be represented in this graph because it is a graph of the mean global temperature anomaly. The uncertainty shown is for the global temperature index.
Liar, subtraction does NOT cancel uncertainty.
You might understand this if you knew anything about the subject (which you don’t).
You’ll be a dear and point out exactly where I said that it did.
An anomaly is a temperature difference, you disingenuous fool.
And those old readings are integer degrees F, but somehow trendology has manufactured resolutions down to milli-Kelvin.
More climatology lies.
The resolution is not down to the millikelvin. You have misinterpreted the graph if you think so.
And more lies, you can’t even read your own hockey stick. I’d say they are claiming a resolution of 10 mK, way smaller than 0.6K.
Trendology (and you) don’t understand significant digit rules.
Your own source says it is between ±0.16 C and ±0.23 C in 1881. That’s not even remotely close to the ±0.0000001 C you claim they say.
His point flew right over your trendology-addled head.
And you are still delusional if you think you can manufacture ±0.1°C out of integer Fahrenheit data.
“Which is quite miraculous given that they only had a handful of stations in North America and Europe”
Here are all the GHCN land stations with records for 1881.
Here’s the same only showing those with data for July 1881.
There are 859 stations with data for July 1881. More than a “handful”, and not just in North America and Europe.
Truly global. Amazing.
Yes, and undoubtedly UHI’d beyond all recognition, or as colloquially referred to as FUBAR.
Yeah, that covers about 3% of the planet’s surface.
LOL
I don’t see any stations in Antarctica.
(Can’t tell if there were any in the Arctic.)
None for the northern part of South America. And maybe one for the whole African continent.
At least bellboy can’t complain if we use USA temperatures to represent the world.
Thing is, USA temperatures show a much warmer period in the 1930s and ’40s, as do most NH sites… and Africa.
But those were old thermometers and you can’t trust them. Made up data is much better.
At that time they thought there was a tropical paradise at the North Pole just past the sea ice. The United States launched an expedition with the USS Jeannette in 1879 to go through the ice and find a sea route through the warm water. Unfortunately the ship was stuck and finally crushed by the ice on June 12, 1881.
Yes, and those temperature records show it was just as warm in the recent past as it is today. Temperature charts from all around the world, in both hemispheres, show it was just as warm in the past as it is today, which demonstrates that CO2 has had little effect on temperatures: it is no warmer today than in the recent past, while there is much more CO2 in the air today than there was then.
U.S.
China
Australia
The written, unmodified land surface temperature record tells the true story: There is NO unprecedented warmth today. It was just as warm in the past according to the written records. CO2 is along for the ride.
You keep pulling this nonsense every few months, and never listen to me or anyone else.
But for the record, the graphs you show are not “unmodified” surface data. It’s the data produced by Berkeley Earth. The same data you claim is junk. The fact you are happy to accept it when you think it proves your point, but reject it when you don’t like what it says, is telling.
You are only looking at a small part of the global average temperature. Only maximum temperatures, and only those from the hottest month of the year.
You still haven’t updated this since 2012, making your claim that it was just as warm in the past as today dubious. There has been warming over the last 12 years.
You are not showing all regions, just a select few that you think agree with your claim – that is, it’s cherry-picking.
Here for example is Berkeley data for Europe. This is showing Summer (June – August) average max temperatures. (Data up to 2018. Red line is 10 year average.)
Here’s the same for India.
Here’s India just for the month of May – usually the hottest month.
Tell me you don’t believe you can compare the surface readings in 1800 to today and consider that they have any meaning regarding the climate.
Berkeley Earth
ALL the JUNK urban sites combined using JUNK methodology. !
Great indication of just how strong the UHI effect is in India and Europe, though.
As you should know, this is exactly the same data set as Abbott is using.
More than a handful but not enough to say what the global average temperature was.
So covering basically a tiny fraction of the land area, and absolutely nothing for the oceans.
It takes a completely gullible fool to think you could create a credible global temperature from that.
They claim they only need 3 stations to get it right!
The fact that they condone the wholesale FABRICATION of data, tells us all we need to know about their scientific honesty, credibility and integrity …
They have NONE. !
red thumbs .. is one of them.
ZERO honesty, credibility or integrity..
Just a mindless low-brain follower.
Yep. And yep.
NASA can see all the brown stuff boiling the oceans-
Nasa captures giant swirling gases in the sky | Watch (msn.com)
Hottest Evah Pulled Out Of Thin Air!
Pulled out of what, now?
The “hot air” likely contained some methane. 😎
Same place they got the 2C tipping point, then changed to 1.5C.
Rectum !!
The high temperature record in Laramie, Wyoming is (or was) 94F, set on three different occasions – June 1954, August 1979, and June 2021. However, on July 13, 2024 the AWOS named KLAR at the regional airport showed a temperature that jumped from 91.9F to 96.1F and back down again. The 3-day summary just shows the highest temperature in each hour. However, I could pick up the 5-minute data over at MesoWest at the University of Utah. There it showed that over a 15-minute period (20:45 UTC to 21:00 UTC) the temperature rose from 93.2F to 96.8F and back to 93.2F…. Cause? Who knows, but KLAR is located between a runway and a taxiway.
It could have been a heat burst. I checked the conditions on July 13th, 2024 at KLAR. There were thunderstorms in the area with an inverted-V sounding environment.
Oh, that must be it then. Presumably, by doing away with fossil fuels (and a significant portion of today’s populace), we’ll be rid of such weather phenomena, right?
“heat burst” = jet turbine exhaust.
Nah. Heat bursts are a weather phenomenon where the dry outflow of a decaying elevated thunderstorm gets compressed and warmed adiabatically before reaching the surface. The resulting undular bore is brief but accompanied by gusty winds and significantly higher temperatures.
Downdrafts from thunderstorms are cold air, ruler monkey.
The only thunderstorm occurred late in the day and around here thunderstorms, even dry ones, produce cool air at the surface.
According to the radar there were thunderstorms in the vicinity of KLAR during the time you mentioned. Most thunderstorms do produce cool outflows. However, elevated thunderstorms in the decay phase in a high dewpoint-depression environment will evaporate the precipitation very high up, and then the outflow will follow the dry adiabats down to the surface and warm. Heat bursts that are only a few degrees warmer than the environment are common enough that they usually don’t get attention. It’s only the ones that go significantly higher that get publicity. I’m not saying it was a heat burst; just that environmental conditions were consistent with those of heat bursts in the past.
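For scale, the compressional warming involved can be estimated from the dry adiabatic lapse rate (illustrative numbers only, not the actual KLAR sounding):

```python
# Rough scale check on the heat-burst mechanism: unsaturated air
# descending warms at roughly the dry adiabatic lapse rate, ~9.8 K per
# km of descent. The descent depth below is illustrative, not taken
# from the actual KLAR sounding.
DRY_ADIABATIC = 9.8  # K per km of descent

descent_km = 2.0  # outflow originating ~2 km above the surface
warming_k = DRY_ADIABATIC * descent_km
warming_f = warming_k * 9.0 / 5.0

print(f"~{warming_k:.1f} K ({warming_f:.1f} F) of compressional warming")
```

A parcel descending a couple of kilometres dry arrives tens of degrees Fahrenheit warmer than where it started, so ending up a few degrees above the ambient surface temperature, as in the observed spike, is physically plausible when evaporation aloft leaves the outflow dry.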
Question…on the 5 minute observations did you see an increase in wind speed or a change in direction?
Your “heat bursts” yarn is about as convincing as your “oven door heats the oven” fiction.
Jet turbine exhaust is sorta warm. Whose planes were there that day? 😉
I checked the commercial flights and none were near the odd time, but a lot of cross-country chartered flights and general aviation use Laramie to refuel. I am thinking the issue might be hot air from a hangar that was opened, or another location over pavement just to the north. Wind speed and direction provide no hint.
Which reading entered into the global average?
Neither. The official high recorded at KLAR on July 13th, 2024 was 90 F.
The four 6-hour high temperatures show one at 96.1F at 6 pm. Are you sure you are not reading the final 6-hour period at midnight? It lists 90F.
I took a closer look at the data. 90 F is indeed the official high for July 13th, 2024 at KLAR. But I do see a 96 F (35.6 C) METAR at 13:53 LST (19:53 UTC) and several other 91+ F observations. It’s a little more effort to get the 5-minute METARs, so I’ll have to do that later. Anyway, the 13th has the “s” flag on it, which means the data is suspect and that the observations were quality controlled.
You can view the official data sheets for KLAR here.
Very interesting. I’ve bookmarked the link for future reference. You will notice that the sky conditions on the hourly show clear all through the time in question. My memory isn’t perfect but I recall thunderstorms only later that evening. It was a very warm afternoon for sure and 93-96F at the airport would not have surprised me. In fact in the five minute data you are going to see a lot of 93F.
Based on the hourly observations I don’t think the 96 F observation was caused by a heat burst. I say that because winds were gusty most of the day, which is atypical of a heat burst environment. OTOH the dewpoint depression was 72 F, which is insane. I don’t see how any outflow could be moist enough to prevent at least some adiabatic compression and heating with a dewpoint depression that high. Also, clear skies and lack of rain don’t preclude heat bursts. Heat bursts are caused by decaying thunderstorms away from the station, which produce precipitation that evaporates before it reaches the surface. Having clear skies and no rain isn’t enough to eliminate a heat burst. Anyway, the fact that some of the data is flagged with an “s” suggests the 96 F reading may not be correct.
We know from UAH that, globally, it IS very hot. The fact that, here in the UK anyway, it has been cold and wet pretty much all the time for the last three months should tell us something. And that is that global warming (shown by Willis’s analysis of the CERES database) takes place mainly in the winter, at night, and in the Arctic. You do not FEEL global warming. Heat waves take place independently of global warming. If it’s 40 degC outside, it wouldn’t be a balmy 22 without global warming: it might be 39.2 – beyond human detection.
There is no “globally”. Local temps are what we feel, not global.
Additional Chart for 1891 to 1920
The Honest Climate Science and Energy Blog: Sparse coverage of Earth’s land surface with land weather stations in the old days
Michael Mann claims the 1881 average temperature is accurate to +/-0.1 degrees C. because it was a number pulled out of a climate scientist’s hat. Not any old random hat.
Michael Mann is a Fake Data fraudster, idolized by all the trendology ruler monkeys who infest WUWT.
Michael Mann is politically (NOT scientifically) driven from start to finish. Anything that dribbles from his cake hole is barely laughable.
When all they have is lies they just screech them louder.
NOAA needs to be drastically downsized from the top down. Keep getting rid of the managers until someone at the top learns to be honest and forthright.
And the FAKERY and MAL-ADJUSTMENTS continue in the US
Fake US Temperature Data | Real Climate Science
There you have it, in B&W.