By P Gosselin on 1. January 2022
There’s been no warming in Tokyo in decades!
By Kirye
and Pierre
The December 2021 mean temperature data from the Japan Meteorological Agency (JMA) for Tokyo and its Hachijō-jima island in the Pacific are in, so first we look at the December trends side by side:

Data source: JMA
The mean monthly temperature for December in Tokyo has been flat since 1975.
The island of Hachijō-jima, located some 287 kilometers south of Tokyo and thus far away from all the urban sprawl and heat island effects, has also seen a flat mean temperature trend – since 1923!
Nothing unusual is happening at these two very different stations in December.
Annual mean temperatures show no warming!
But that’s only a trend for a month. Now that December data are available, it is possible to update the annual temperature trend for these two stations.
First we plot the JMA mean annual temperature for Tokyo going back to 1994:



Data source: JMA.
Here we see that the city has been cooling moderately, thus defying the predictions of warming. Despite being a sprawling megalopolis of steel, asphalt and concrete, Tokyo has been cooling off.
Obviously there’s something other than trace gas CO2 driving the trends there.
Hachijō-jima
Moving offshore to the Tokyo island of Hachijō-jima, away from all the urban heat island effects, we look at the newest mean annual temperature data going back to 1950.
Has it been warming or cooling?



Data source: JMA.
If we look closely at the chart, we see there’s something natural and cyclic behind the temperature at the island in the Philippine Sea, but overall no warming in over 70 years!
Found it… Finally… The spot that’s warming twice as fast as everywhere else
Since there has been no warming at all, everywhere else is infinitely hotter – experts say!
A report in the Boston Globe says New England is the winner – it’s warming twice as fast as anywhere else.
Tony Heller has a video listing the countless other places also warming twice as fast as anywhere else – apparently everywhere is warming twice as fast as everywhere else, and it’s settled science!
😉
Gee, and I shoveled 3 inches of snow off my front steps this morning, in 16°F temps, and we’re supposed to be warming?
Are the people who make those pronouncements actually living on this planet? Are they space aliens or something?
Funny, but I was thinking the same thing. As we’ve discussed in previous comments I live near you. It took me an hour this morning to shovel out the snow from my driveway.
I suspect what is happening is the Japanese scientists have too much integrity to “adjust” their temperature measurements. These are probably the real temperature measurements. In other countries like the United States, United Kingdom, Germany, and so forth the records are being kept by global warming fanatics and they alter or manipulate the numbers to show the results they want to see. Bias is a terrible problem in the softer sciences.
I don’t think anything at all is happening to the climate. And my memory goes back more than seventy years.
Same here in Australia. Temperature adjustments by our BOM were so inexplicable that conservative Prime Minister Tony Abbott proposed in 2014 to conduct a formal investigation.
This was taken the following year from JoNova’s website (scientific author and TV presenter):
“Last year, Graham Lloyd [environmental editor] wrote in ‘The Australian‘ newspaper about how the BOM had made whopping two degree adjustments to data which turned cooling trends to warming trends and instead of improving the data, it created discontinuities. The BOM’s eventual explanation lamely exclaimed that the stations “might” have moved. (And they might not, too. Who knows, but remember this is what 95% certainty looks like.) Lloyd wrote about how historical records of extreme heat at Bourke had effectively been thrown in the trash. Who cares about historical records?
In response to the embarrassment and revealing questions, Tony Abbott wanted an investigation. But our progressive Environment Minister, Greg Hunt, and his Dept of Environment, opposed the investigation and also opposed doing “due diligence”. What are they afraid of? Instead, Hunt helped the BOM set up a one-day-wonder investigation with hand-picked statisticians that wasted another nine months before admitting that the BOM methods would never be publicly available or able to be replicated. If it can’t be replicated, it isn’t science.”
Someone has to be the Lake Wobegon of climate change, where every trend is above average!
That was a great show – I loved when he talked about his Swedish uncle, or whatever, and the other Swedes. Too bad Garrison Keillor flirted with a woman one day and became a victim of the “me too” movement. Another white man beaten down over nothing.
Simple math.
Every place on earth is warming twice as fast as the rest of the world.
If we take Tokyo as the standard base and multiply 0 warming by 2, the result is 0.
And the alarmists will quickly say, “Regional only, but it’s still worse than we thought! Low sunspot activity is masking the current unprecedented warming. Wait until the next cycle! It’s already too late to escape massive climate damage, but we can still save the planet if we act now (huge carbon tax).”
Just send money!
The problem with the whole warming narrative is quite simple: all the data is coming from simple thermometers that measure only surface temperature.
No narrow-band IR insolation equipment has been used to measure whether CO2 has been causing what they claim it does… except for the evaporation tanks that we’ve been using globally for hundreds of years, which I guarantee would have started showing a discrepancy had a second gas other than water vapor been driving the climate in any detectable way. Since they’re biased towards water, due to using water and being driven by evaporation, they would very quickly have started to veer off their known patterns had the “rapid” “increase” in CO2 been a problem.
August 4th, 1944, Black Forest, GGR; 554ppm CO2
There are few places where UHI reached its full extent half a century ago. Tokyo is one of them.
So this makes sense. If AGW is mainly UHI.
Can someone please look up figures for Istanbul to test it?
Berkley Earth examined many thousands of temperature series worldwide and conclusively proved that global warming is NOT just UHI.
Really griff ? You might sound more compelling to idiots if you learned how to spell Berkeley ….. or maybe not.
“NOT just”…..only 99.7% LOL!
GBM
Griff’s Binary Mind. A thing is one thing OR the other.
Vaxxes either stop you getting it, or they don’t.
Etc.
Tell me, O Grifflette were you really an ArtStudent™?
It IS or it ISN’T
Black or White
Good or Bad
Saint Greta counts among those that can’t see in shades of Gray
Like Arctic sea ice Griff? It’s at its highest level in 18 years.
2021 ended with a mean of 10.55e6 km2 of sea ice extent in the NH. This is still below the 11.08e6 km2 1991-2020 climatological average. Anyway, 2021 is the 9th lowest in the 42-year satellite record (or the 170-year Walsh et al. 2016 record) and is only the highest in 6 years (2015 was higher) [1].
Honest question. Why is this a bad thing?
I don’t know that it is bad. “bad” is a subjective thing. What might be considered “bad” by some might be considered “good” by others.
I don’t know what you’re quoting from, but NSIDC has Arctic Sea Ice extent at 13.061 million Squ Km on December 31st, well above 2015 at 12.744 million:
https://nsidc.org/arcticseaicenews/charctic-interactive-sea-ice-graph/
As Bob Boder points out correctly, you have to go back to 2003 to find a higher number for December 31st, making it the third highest this century, which could actually portend a bad thing that is going to happen in humanity’s future.
I’m quoting from NSIDC (I even linked to it in my post). The annual mean for 2021 was 10.55e6 km2. Dec. 31st, 2021 is included in the mean for 2021.
Berkeley Earth defined “rural” in a way that included towns large enough to have a UHI. This means that the rural component of Berkeley’s data was affected by UHI. Their definition may have been driven partially by convenience, since the number of truly rural weather stations as a percentage of the total is dramatically lower today than it was a century ago.
What Kirye did was find weather stations which were not impacted by increasing UHI over a long period of time.
Did Griff use Berkley Earth and Conclusively in the same statement 🙂
UAH TLT is considered by most on WUWT to be immune from UHI and it shows a warming rate of +0.14 C/decade. Also keep in mind that UHI warming and UHI bias are two different concepts. Make sure you are not conflating them. UHI warming is the warming in and around cities caused by urbanization. UHI bias is the contamination of primarily rural grid cells with urban observations (positive bias) or primarily urban cells with rural observations (negative bias) when doing a spatial average on a grid mesh. Berkeley Earth determined the bias on the global mean surface temperature is -0.10±0.24 C/century since 1950. In other words it is statistically equivalent to zero with slightly higher odds of being negative. Yet the UHI warming itself is still likely positive over the same period though it is unlikely to be a detectable amount.
> UHI bias is the contamination of primarily rural grid cells with urban observations
It is also rural stations overrun by development. It is loss of rural stations. It is increasing reliance upon airport stations unsuited for the task. It is massive adjustments to urban stations’ actual measured temperatures. It is retroactive upward adjustments to old historical records. It is grid infill. So many things wrong, and all biased trending upwards.
Rob_Dawg said: “It is also rural stations overrun by development.”
No. You’re conflating the UHI effect itself with the UHI bias. Rural stations getting urbanized is a real effect. And any warming that occurs because of that urbanization should be included in spatial averages because it is real. It’s only when those same urbanized once-rural stations are used as proxies for predominately rural grid cells that the bias occurs because that warming is not real.
Rob_Dawg said: “It is loss of rural stations.”
Correct. All other things being equal, the loss of rural stations may (depending on grid meshing and averaging methodology) cause a positive UHI bias. Likewise, a loss of urban stations causes a negative UHI bias. Similarly, an increase/decrease in the rural-to-urban station ratio (again depending on grid meshing and averaging methodology) causes a negative/positive UHI bias.
Rob_Dawg said: “It is increasing reliance upon airport stations unsuited for the task”
Correct. All other things being equal, this typically (but not always) causes a negative UHI bias, since the movement of the station from the city center to the airport places the station in an environment less affected by the UHI. This is an important factor in why the UHI bias (not the UHI effect itself) is near zero or even slightly negative after 1950.
Rob_Dawg said: “It is retro active upward adjustments to old historical records.”
The myth that never dies.
Rob_Dawg said: “It is grid infill.”
Grid infilling is a completely different topic and has little influence on the UHI bias either way.
Rob_Dawg said: “So many things wrong and all biased trending upwards.”
There are definitely problems with existing methodologies. We’ll never be able to measure the global mean temperature with zero uncertainty. But the biases make the warming trend higher than the bias corrected trend over the entire instrumental record. In other words, the adjustments made to correct these biases actually make the warming trend lower; not higher.
“Berkeley Earth determined the bias on the global mean surface temperature is -0.10±0.24 C/century since 1950.”
In other words, the noise is greater than the signal.
If you read through the Berkeley paper, you will see that the language is filled with doubt, qualification and hope (they “expect that this process will remove most of the urban warming.)
They also acknowledge that they are only interested in putting a number to the global average.
In other words, the UHI is a can of worms and will remain a mystery. Any attempt to quantify it to two decimal places is magical thinking at best.
Greg B said: “In other words, the noise is greater than the signal.”
I don’t know how you concluded that. The signal is the warming trend since 1950. It is +0.155 C/decade. The UHI bias upon this trend is -0.034 C/decade to +0.014 C/decade. So the trend could be anywhere between +0.141 and +0.189 C/decade with the value +0.165 C/decade being the most likely. In other words, the UHI bias is more likely than not to be causing Berkeley Earth to underestimate the warming rate. The lower bound is still +0.141 as compared to the published +0.155 C/decade. That’s hardly what I’d call noise being greater than the signal especially considering the signal is literally many times that of the noise (assuming the use of “noise” is in reference to their uncertainty here).
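The arithmetic in the comment above can be sketched directly (a minimal sketch, using the figures as stated by the commenter, not independently verified): a UHI bias range on a measured trend translates into an envelope for the bias-corrected trend by subtracting the bias from the measurement, which flips the sign of the bias bounds.

```python
# Sketch of the quoted arithmetic (figures as stated by the commenter).
trend = 0.155                      # published warming trend, C/decade, since 1950
bias_lo, bias_hi = -0.034, 0.014   # assessed UHI bias on that trend, C/decade
bias_central = -0.010              # central bias estimate, C/decade

# If the measured trend contains the bias, correcting removes it,
# so the bias range flips sign when applied to the trend.
corrected_lo = round(trend - bias_hi, 3)        # bias at its most positive
corrected_hi = round(trend - bias_lo, 3)        # bias at its most negative
corrected_central = round(trend - bias_central, 3)

print(corrected_lo, corrected_central, corrected_hi)  # 0.141 0.165 0.189
```

This reproduces the +0.141 to +0.189 C/decade envelope with +0.165 C/decade as the central value quoted above.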
Greg B said: “They also acknowledge that they are only interested in putting a number to the global average.”
They make a good point. Their focus is not on publishing the UHI bias contribution from each and every station. Their focus is on publishing the combined UHI bias on the global mean temperature trend from all stations. That’s not to say that knowing the individual contributions isn’t important and useful. It is. It’s just that the publication of those values won’t change the -0.010 ± 0.024 C/decade figure.
Greg B said: “In other words, the UHI is a can of worms and will remain a mystery.”
That’s not what Berkeley Earth is saying. In fact, they are saying the opposite. It isn’t a mystery. It is more likely than not to be slightly negative and constrained to within ±0.024 C/decade of -0.010 C/decade. Could it be having an effect on the warming trend? Yep. Is the effect big? Not really. That’s hardly what I’d call a can of worms.
Greg B said: “Any attempt to quantify it to two decimal places is magical thinking at best.”
Note that 0.24 C/century is equivalent to 2.4 C/millennium, 0.024 C/decade, or 0.0024 C/year. So saying 2 decimal places could be overinterpreted by some at best and could be misleading at worst. It would have been better to say 2 significant figures. But then that would have been incorrect, since the uncertainty allows for possible values in which even the first significant digit is different. But even assuming the quantification is good to 1 significant figure, that’s still enough to constrain the influence on the warming trend to a reasonably small envelope of possibilities.
And don’t hear what I never said. I never said there were not concerns with their methodology and that you should believe their assessed figure without question. I happen to have my own concerns with their methodology namely the use of the low resolution MODIS grid which might be underestimating the urbanization of some rural classified stations. But I’m not just going to dismiss their result altogether either because that wouldn’t be consistent with a pragmatic and skeptical approach in dealing with lines of evidence.
I’m saying that the Urban Heat Island phenomenon is a phenomenon of such complexity that it can’t be measured with any credibility. I would dismiss the Berkeley result altogether. It is an attempt to solve a complex problem with a single, magic number.
What bdgwx won’t admit to is that UHI is a temperature rise from undeveloped natural terrain. Expecting that temp rise to be constrained to the cylinder of air above the urban area is ignoring all kinds of air currents. It is based on a fundamental rationale of radiation only. No conduction or convection or advection, etc.
The energy added to the atmosphere will be spread across a large area like it or not.
Strawman. I never said that. I never thought that. And I don’t want other people thinking that either. In fact, I’ve posted a few studies on here showing how the UHI effect itself can alter wind, precipitation, etc. in addition to temperature around metropolitan areas. But we’re not discussing the UHI effect itself but the UHI bias upon the global mean temperature. Those are two related but different topics. Don’t conflate them.
And yet it was measured and that measurement has survived many years of scrutiny without the identification of egregious mistakes that would justify dismissing it altogether. If you’ve made it a practice to dismiss lines of evidence without justification then you aren’t being very skeptical.
You are destroying your argument that CO2 is the cause of heating. If UHI is a “bias” then it DOES affect temperatures and other climate phenomena. Trying to adjust it away is ignoring the fact that it exists and is something other than CO2.
If GAT is to be a fair representation of the temperature of the atmosphere, then it should include UHI and be defined as being caused by both GHG’s and UHI. Trying to remove a “bias” that is real only causes more uncertainty.
This whole system is screwed up and needs to move back to accepted scientific treatment of data and its predictions.
I’ll say it again, what is the variance in the absolute temperature data set used to calculate GAT. If you can’t provide that, you are not doing science.
You are conflating the UHI effect with the UHI bias.
UHI effect – The amount of warming above and beyond what would otherwise be expected without urban land use changes. This effect really does cause locally higher temperatures.
UHI bias – The amount of error injected into global mean temperature measurements due to inadequate spatial averaging methodologies that incorporate observations influenced by the UHI effect itself. This bias creates the illusion that spatially averaged temperatures are higher/lower than they really are.
The UHI effect and the UHI bias can be simultaneously positive and negative respectively. They are different things. Do not conflate them.
Nobody is removing the UHI effect because it represents a real component on a spatial average. What they are trying to do is remove the UHI bias because it represents an unreal component on that average. This has nothing to do with CO2 or any other forcing agent’s attribution on the trend of that average.
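The distinction drawn above can be illustrated with a toy spatial average (hypothetical numbers, not from any dataset): the UHI effect legitimately warms an urban grid cell, while the bias arises only when an urban observation stands in for a rural cell.

```python
# Toy illustration (hypothetical numbers) of UHI effect vs. UHI bias.
# Two grid cells: one urban, one rural. Urbanization has really warmed
# the urban cell; the true rural temperature is unaffected.
true_urban = 16.0   # C, includes real UHI warming
true_rural = 14.0   # C

# Correct spatial average: the UHI effect is real and belongs in the mean.
true_mean = (true_urban + true_rural) / 2        # 15.0

# UHI bias: the rural cell has no station, so an urban observation is used
# as its proxy, injecting warming into a cell where none occurred.
biased_mean = (true_urban + true_urban) / 2      # 16.0
uhi_bias = biased_mean - true_mean               # +1.0 C of unreal warming

print(true_mean, biased_mean, uhi_bias)
```

The +1.0 C here is purely an artifact of the averaging methodology; the urban cell’s own warming remains a real component of the mean in both cases.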
I don’t want to be too pedantic amongst all the cherry picking and ignoring moved stations, but how is 1994 to 2021, 45 years?
Your argument is itself a cherrypick … that was only one start date of 4 possible … hardly a glowing beat down :-).
I counted 96 45-year periods possible for the December data if you count the preliminary 2021 data. Similarly there are 115 27-year periods. I don’t know if you are considering the claimed 45-year period or the actual 27-year period, but either way I don’t see how you are getting only 4 possible start dates. Maybe I didn’t understand the point you were making?
It’s not, but if you refer to the first graph, a lightbulb may come on
It may, but it will be very dim.
It was the headline figure, specifically referring to annual data. The first graph is for just December.
Isn’t it obvious just from the graph that something happened to the Hachijō-jima station around 2004?
Checking the supplied JMA data, sure enough there’s a red line with a footnote flagging an inhomogeneity.
Fascinating how that is only important when the data doesn’t support the global warming narrative.
No, it’s important when looking at any temperature time series. But it’s especially important when you are trying to base an entire argument on just two locations, both of which have seen a change in the last two decades.
It’s important regardless of what the data shows. If there are inhomogeneities then they need to be addressed and adjustments made to correct any biases they cause, even though the net effect of those adjustments actually reduces the Tokyo warming trend and the overall global warming trend relative to the unadjusted data. If these dataset maintainers were truly working toward an alarmist narrative, you’d think they would ensure their adjustments made the overall warming trend higher instead of lower.
Bellman, it’s not obvious to me that something unusual happened at the Hachijō-jima station around 2004 – I see similar changes in other parts of the record.
On the data page, it appears the inhomogeneity marked by the red line runs from August 2002 through July 2003, but the drop from 18.6 °C to 17.6 °C is from 2004 to 2005.
How was it determined there was an inhomogeneity? Were there changes in instrumentation, observation methods and/or site location? Or was it natural variability?
Without further information (“Remarks” on the data page offered no clarification), I could say that someone (or an algorithm?) thought the inhomogeneity was “obvious” and guessed at possible causes.
Now follow what MarkW said: whenever you see a site relocation, the data is invalid as per your claim. You will be consistent in this, won’t you, because it’s a big problem?
I didn’t say the data was invalid, just that it’s worth pointing out if you insist on using it alone for your trend line.
It is obvious from the remark that someone THOUGHT there was a problem, but had NOT ONE IOTA of evidence to show what the reason was for the inhomogeneity. That by itself makes me doubt the OPINION that one exists.
But, assuming there is an inhomogeneity, what is the solution? As I have pointed out, the unethical thing to do is to create from whole cloth a replacement set of information showing what you think it should show. Why would you want to adjust the data? To continue the record so one can claim that this is a “long” series of high accuracy. Ha!
Is there another possibility? Of course! The ethical thing to do is to end the previous set of data and start a new series. Does this mean the new series is long enough to use? Probably not. Too bad, so sad. The ethical thing to do is make do with what you have, not what you wish you had.
By the way, the “inhomogeneity” at 2004 looks eerily similar to the early 1960’s. Shouldn’t this be marked similarly?
“It is obvious from the remark that someone THOUGHT there was a problem, but had NOT ONE IOTA of evidence to show what the reason was for the inhomogeneity. That by itself makes me doubt the OPINION that one exists.”
This is obviously SOP, when such inhomogeneity exists. I’m guessing that every other station with similar, otherwise documented inhomogeneities highlights them in this way. They don’t bother with pages of explanation, because they have already warned us off from evaluating pre and post red line data together, as our poster did.
FYI, the 1975-2013 December data – the valid portion of the time period referenced in this post – results in a trend with an expected value of 1.59 deg/century and an admittedly high standard deviation of 1.39 deg/century. That results in a ~7/8 chance that the trend is, in fact, positive.
Of course this all raises the question of why in the world our poster would make anything out of statistically insignificant trends for one station, using bad evaluative techniques. Scraping the bottom of the barrel for reasons to avoid the larger truths, I suppose….
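The “~7/8 chance” quoted above can be checked under a normality assumption (a sketch, taking the commenter’s mean and standard deviation at face value): the probability the true trend exceeds zero is the standard normal CDF evaluated at mean/sd.

```python
import math

# Check of the "~7/8 chance the trend is positive" figure, assuming the
# trend estimate is normally distributed with the stated mean and
# standard deviation (1.59 and 1.39 deg/century).
mean, sd = 1.59, 1.39
z = mean / sd                                          # ~1.14 sigma above zero
p_positive = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))  # standard normal CDF

print(round(p_positive, 3))   # ~0.874, close to 7/8 = 0.875
```

So the ~7/8 figure is consistent with the stated trend and its standard deviation, though it equally implies a ~1/8 chance the trend is not positive.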
Did you not read the explanation of the claimed inhomogeneity?
“The red lines, if any, indicate data nonhomogeneity caused by changes in instrumentation, observation methods and/or site location.”
Which one of these are the real cause? Instrumentation? Observation methods? Site location? All three? Maybe something else?
I would like to point out that many claims like this require the assumption that the people actually making the observations were deficient in their ability to make readings. Even to the point of unethically making up data!
“Which one of these are the real cause? Instrumentation? Observation methods? Site location? All three? Maybe something else?”
What does it matter? Any of these reasons is sufficient to avoid using pre and post red line data together.
Folks, lots of hypocrisy in WUWT posters w.r.t. data quality. If it proves their point anything goes. If not, just use the piece of it that does.
Actually most of us were saying the reverse: if you are going to exclude it, then there are hundreds of sites that need to be excluded for the same reasons.
It matters what the cause of the failures was. If you cannot identify an actual reason for a failure, you can’t just assume that there was one.
I am glad you have taken my suggestion (made some time ago) that if data are broken then it is unethical to create new information by using other guesses. It is only appropriate that the record should be stopped and a new one started. Trying to make a “long” record by adjusting data to remove any discontinuity is unethical.
“I would like to point out that many claims like this require the assumption that the people actually making the observations were deficient in their ability to make readings. Even to the point of unethically making up data!”
You’ve gone beyond mere fact free denialism to Foxian “Just make **** up” territory….
Just exactly what is “make shite up territory”?
All of this assumes that you can recognize problems in the data better than the folks actually making the readings years ago!
You are claiming that these folks were mere dummies taking measurements with no understanding of what they were doing and not dedicated professionals trying to do the best they could by following best procedures as outlined by the various meteorology agencies in their operating practices.
That sounds like yet another strawman. No one is saying observers were incompetent or challenging their professionalism. What is being said is that stations moved, instruments changed, observation practices changed, etc., and that those effects cause biases when individual observations are aggregated in space and time to produce a timeseries of the global mean temperature.
It’s not a straw man! Do you believe established procedures for calibration were followed? How about maintenance procedures? If you think all the data is properly gathered, then a “change” is due to an uncontrolled condition. You can not simply erase this by creating new information to make the record “look” like it is uncontaminated. You should stop the record and create a new one. If you have no evidence that mistakes were made then the data is valid and should not be changed.
I definitely do not think established procedures for station operation were followed to a tee with perfection. In fact, it is quite easy to find cases where there is shoddy siting and maintenance. What is not easy is finding evidence that there is widespread incompetence among observers or of a significant bias that is introduced into the global mean temperature anomaly timeseries as a result of poor station operation. If you know of evidence then please post it. I only ask that it is peer reviewed, that the bias has been quantified in terms of a specific C/decade value upon the currently assessed trends, and that there is commentary as to which datasets contain the bias and which do not.
Sounds like Karlization to me
“It is obvious from the remark that someone THOUGHT there was a problem, but had NOT ONE IOTA of evidence to show what the reason was for the inhomogeneity.”
That’s not at all obvious. I don’t know what happened to Hachijō-jima in 2003, but I assume they didn’t just mention it on a whim. They say the same for the Tokyo data, and as I showed previously that was because the Tokyo station was moved to a cooler location.
Matched by USCRN 2006 – 2021, until just before Cop26 when they altered the graphic and like the sorcerers apprentice conjured up an uptick.
I call it the Nasa Giss effect.
If you select “all months” you can see the uptick is just noise.
https://www.ncdc.noaa.gov/temp-and-precip/national-temperature-index/time-series/anom-tavg/1/0
Yes that is the new user unfriendly graphic they have. I have read that they intend to pollute the USCRN network by adding non compliant temperature stations.
The results of the network must be an embarrassment to them.
I have lived in Tokyo for many years, and the city has done a lot to try to reduce UHI effects. The most obvious thing is painting the city streets a light grey color so the asphalt doesn’t absorb heat. You can walk on this painted asphalt in bare feet in the summer, something which was not possible previously. I wish they would do this in Vegas, where the asphalt gets so hot in the summer that it becomes soft enough to feel squishy when you cross the street.
The big question- how long will that grey paint last? Maybe the color could be put into the asphalt?
Fairly useless data analysis, with invalid conclusions.
The data sets here were used uncorrected. After you correct the data sets, a positive warming trend emerges. Then, and only then can the correct conclusion be drawn.
Its Worse Than We Thought.(TM)
As many here at WUWT know, one of the techniques used for data correction is homogenization. I have been developing a companion correction technique which I call Pasteurization. Like dairy products which are frequently homogenized and pasteurized, I wanted to develop an algorithm which would benefit temperature data as well.
As the formal definition of pasteurization is to add heat to achieve a desired result, this is a natural for Climate Science.
I hope other climate researchers here at WUWT will consider pasteurization when adjusting and correcting climate data sets.
I’m guessing you’re attempting satire … because –
“As the formal definition of pasteurization is to add heat to achieve a desired result” is NOT the definition of pasteurization … formal or otherwise.
The two salient components of pasteurization are High Temperature, Short Time (HTST), sufficient to kill dangerous bacteria. You missed both criteria. Your method would do nothing useful but prepare the milk for a simple bechamel sauce.
Well, most Climate Scientology ages like fine milk, anyway.
Both end on a sour note? 🙂
I tried developing an algorithm which produced the “High Temperature, Short Time (HTST)” methodology you described.
The result produced temperature plots with implausibly large temperature spikes.
Such an algorithm may be useful for precipitation data. The spikes would correspond to flooding events.
Remember, I am not attempting to kill bacteria here, as bacterial infection is not considered to be a problem with climate data sets.
Research continues apace on this topic.
Whoooosh …
You keep missing the point. The term ‘homogenize’ means to smooth out or make uniform and is therefore germane to a statistical process. Pasteurization is a h/t to Pasteur’s process of making whole milk safe and preventing spoilage in other perishable products without altering its flavour. Other than a reference to temperature and time, the process has no relevance to weather.
I don’t see how you can develop a useful algorithm based on an inaccurate understanding of the concept. “Climate science” is already rife with such wrong ideas.
Your first guess was correct, I believe…
It certainly does alter its taste, or something else makes a big difference between fresh from the cow, with NO processing, and bottled from a store.
That depends mostly on which method is used. Most people wouldn’t know the difference because milk can vary from cow to cow, breed to breed and season to season not to mention what their feed is at the time.
I grew up on raw milk and worked on a Caribou ranch during the summer, as a kid. One of my jobs was milking the freshened cows for the ranch house needs. Check out my post below.
Rory, the milk you purchase in plastic bottles is pasteurised at temperatures high enough to kill vegetative organisms. It has a shelf life of less than two weeks.
In the UHT process, spore formers are killed as well, and the milk is filled aseptically into pre-sterilised cartons. Its life is over a year, until opened. The milk usually has a slight caramelised taste due to the high cooking temperatures.
God I’m old.
I can remember back when milk came from cows.
Don’t forget the goats’, baaaaaa and the asses’ for bathing, of course.
Yes I know. My family were close friends with dairy farmers on Sumas Prairie, near Rosedale (the place that was in the news near Vancouver Canada where the floods occurred). He was head of the BC Dairyman’s Assoc. and the 1st to install pasteurization, over 70 years ago. The ultra high temp (UHT) alters the natural flavour of the milk too much for most tastes and makes it unusable for many applications.
…a slight caramelised taste…
You mean that it tastes scorched. Just like the temperature record after it’s been ‘homogenised’.
The US climate reference network shows the same for the US going back 17 years – as far back as it goes. I expect NOAA will soon have to come up with an excuse for making some adjustments to the data or perhaps declare the system to be fatally flawed and delete it all.
The spikes at 1998 in both charts are associated with El Niño. Here in Texas, I measured very high total water vapor that year. Moreover, there was a major flood on the Guadalupe River that caused serious property damage.
So reality shows no warming, but we should not be swayed by this “reality” monkey business. It’s getting warmer because the models say so, and if the models conflict with reality, then reality must be wrong. /sarc
Besides, Joey Biden wants to create millions of jobs for landscape-destroying wind and solar “farms”.
It’s quite clear to anyone with even the slightest understanding of settled climate science that the Japanese are creating CO2 atmospheric holes here and pushing more warming onto the rest of us. Why isn’t the IPCC all over this with an emergency proclamation and sanctions?
They are not using world’s best practice data homogenisers.
The record keepers in Japan should hire in the data fudgers from the Bureau of Meteorology in Australia. They lead the world in this advanced technology and can transform any recorded temperature trend into an upward trend. Australia now on version 2.2 as the weather fails to comply with the climate models. But never let reality get in the way of a good climate model.
I’ve just been listening to a podcast in which the guest, a historian, was raving on about how there are more natural weather disasters now. He kept droning on about Katrina, but correct me if I’m wrong, aren’t there now fewer weather disasters?
Too many people’s history only extends 10 years back.
Few people even remember 10 months back.
The climate models predict increasing disaster. Sadly the empirical data does not support the models, so one must be corrected.
Global Warming Solved – eat more sushi.
Uh?
Well temperatures here in the UK set a new New Year’s day record of 61F yesterday… only one frost where I live this winter.
you can cherry pick all you like, but the world is warming!
Griff: is that how you conclude we’re warming? You might note that those records go back only a few years, and only for parts of the UK.
At this rate the MET Office will claim global warming because summer is warmer than January.
Isn’t using “Where I live” the ultimate Cherry Pick?
“where I live”
and
“you can cherry pick all you like, but the world is warming!”
That could be your greatest post ever griff. Fabulous. Congratulations.
I am genuinely wondering if they ARE putting something in the water supply… Really, these days people seem completely mad.
Duh!
Well, O Griffalo,
You can cherry pick all you like but the world is cooling.
The last two days are just weather.
I have had frost on many occasions where I live this winter, but I don’t live in a town.
A few weeks ago it snowed. Unusual for December.
How’s that Arctic melting going, O Great Griffalo?
“Well temperatures here in the UK set a new New Year’s day record of 61F yesterday”
you can cherry pick all you like… lol!
The previous record was set in 1916, over 100 years ago, at 15.6°C; now they’ve recorded 16.3°C.
The problem for the Grifter is that on the 30th Dec 1995 the UK saw -27.2°C. That’s within the current climate window of 30 years, so why is that less important as a cherry?
How many Brits are bitching about that pleasant temperature?
Griff Fact Check: Statement deliberately misleading.
Actually, it’s a self-parody.
There’s good news on the way for your CAGW Doomsday Death Cult, griff! The UK temperature is set to plummet to a balmy -10C.
You’ll be able to cheer on as plenty of poor and unfortunate people die from bitter cold, because they can’t pay electricity bills which have been increased by 1/3 by ‘green taxes’ mandated by your CAGW Doomsday Death Cult.
Ignoring the fact, already pointed out, that JMA indicate ‘data inhomogeneity’ in the Tokyo data set since December 2014, we can still ask “why start the trend in 1994”? It’s 28 years of full annual data, but the Tokyo data set goes back to the 1870s. Why not use 30 years, the WMO recommended normal period of ‘climatology’?
The answer, of course, is that if you go back the full 30 years you get a warming trend again, as you do, in fact, for any year starting before 1994.
So some cherries here. First, pick data from a single location (there are dozens available in Japan alone); second, don’t mention that the producer urges caution in the use of that particular set after 2014; third, carefully select your start year to produce the longest possible period with no warming trend.
Take the above cherries and declare that there is no global warming because Tokyo didn’t warm since 1994 (possibly). Be applauded for your stunning insight by the WUWT faithful and be carried shoulder high from the building!
Has the climate changed in Tokyo?
We don’t exactly know, because JMA is hedging its bets; but Tokyo is a single location and statistically significant change wouldn’t be expected on that scale at every single location. Better to ask has Japan warmed significantly over the past 30 years? Better still, has that entire region warmed significantly over the past 30 years? If so, then yes, the climate has changed slightly.
If “statistically significant change wouldn’t be expected on that scale at every single location”, then statistically significant change elsewhere would be required to make a mean that shows global warming. Just where is that statistically significant change occurring to end up with a 1.5 deg change?
The data used by the various sources are widely and freely available. Now, if someone was just making it up (actually, hundreds of people from at least a half-dozen global temperature data producers) then I imagine this would have been discovered long before now.
Some stations show little warming, some even show cooling; but many, many more, spread out globally, show statistically significant warming. Hence the statistically significant global warming trend.
Meaning…you don’t like that data so you ignore it.
No, meaning that we include it with all the other data and see what comes out. That’s what the scientists do. Check out JMA’s global warming rate.
Bristle cone data too 😉
A component of climate change occurs when there is sustained divergence from a natural range (excluding impulse events) over a 30-year period.
Agreed. That’s what is observed in most regions and globally.
If so, then many lives have been saved due to a lack of cold. Remember, the global warming hypothesis will only materially affect cold temperatures, i.e. winters, and mainly by preventing extreme night-time lows.
Of course, your CAGW Doomsday Death Cult doesn’t like the fact that your non-C, and probably non-A GW saves lives.
Except the second data set covers 60 years at Hachijō-jima island, and as pointed out earlier, Tokyo has done much to minimize its UHI effect, thus eliminating that warming, so the earlier warming may be associated with it.
But it really doesn’t mean anything because no one is saying there hasn’t been any warming coming out of the LIA.
The second data set plucks an even riper cherry. It counts December data only. If you count annual data for that location over the past 60 years (available via the link) there is a warming trend of +0.06C per decade. However, being a single location this is also of little or no significance.
You are free to find a location in Japan which shows a strong secular warming trend and post it here.
The adjusted warming rate at Tokyo from 1888-2021 is +0.11 C/decade.
The unadjusted warming rate at Tokyo from 1888-2021 is +0.24 C/decade.
So Tokyo is 0.24 x 13 = 3.12 degrees warmer than it was in 1888? Are you sure?
Graemethecat said: “So Tokyo is 0.24 x 13 = 3.12 degrees warmer than it was in 1888? Are you sure?”
That is the increase using the unadjusted data.
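The arithmetic can be checked in a couple of lines (a sketch; the comment above used 13 decades, while 1888–2021 actually spans 13.3):

```python
# Quick check of the total warming implied by a constant linear trend.
# The +0.24 °C/decade figure is the unadjusted Tokyo rate quoted above.
trend_per_decade = 0.24            # °C per decade
decades = (2021 - 1888) / 10       # 13.3 decades

total_warming = trend_per_decade * decades
print(round(total_warming, 2))     # → 3.19, close to the 3.12 quoted
```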


How can this graph be consistent with the data at the top of this article?
The graph at the top is only showing December temperatures. The one showing annual temperatures only starts in the early 1990s.
So no warming in December at least.
The unadjusted trend over the same period (1888-2021) just for December is +0.28 C/decade which is higher than the annual trend of +0.24 C/decade. NTZ got a near zero trend by picking 1975-2021.
If you want to base your argument on a single month from a single uncertain station, selecting a specific period that will give you no trend, then yes, you could say there was no warming in December. It just doesn’t seem like a very useful observation.
Meanwhile in the rest of the world, UAH shows Decembers warming at the rate of 0.13°C / decade since 1978.
Forgot the graph.
For the year as a whole the trend since 1979 is much the same at 0.14°C / decade.
Ahh the cold 70s
Adjusted
😉
“You are free to find a location in Japan which shows a strong secular warming trend and post it here.”
Oshima, ID: 47675
Warming trend since 1992 is 0.37°C / decade.
I used 1992 as a start point as that’s the first year after a “red line” data inhomogeneity. But over the whole data set starting in 1939 the trend is 0.27°C / decade.
FUJISAN Station ID:47639
Trend over entire length, excluding incomplete years, starting in 1939 is 0.13°C / decade.
Trend since 1994, the starting date used for Tokyo in this article, is 0.26°C / decade.
TAKAMATSU Station ID:47891
Trend since 1942 is 0.36°C / decade.
Trend since 1970 (51 years) is 0.46°C / decade.
I see more pauses
I see a pause
Oooooo! A warming of +0.6C over the next 100 years! Exactly when does the global warming kick in at this location? You have now dismissed two locations as not being statistically significant. Is that enough to accomplish your goal of proving global warming? How many other places have no statistically significant warming? Sooner or later you are going to have to have very, very significant warming in some places to offset those with no warming. Where are these locations? Don’t show averages of stations, show the individual stations data, just like these two.
But, but, but… it was so warm during Christmas, surely that means we are all going to die in a fiery flood!!!
A lot of cherry picking accusations that may well be true, but had the writer shown upward trends at the two stations it would quickly be accepted as settled science and absolute proof of global warming.
One interesting stat (if someone has the time) would be the number of countries with warming in recent years. If warming has happened mainly in the northern regions, I am guessing that number won’t be too high. There could be interesting uses for that number…
Using GHCN-M adjusted data the warming rate at Tokyo from 1976-2021 (45 years) is +0.33 C/decade and from 1888-2021 of +0.11 C/decade. Note that the unadjusted warming rate for Tokyo is +0.24 C/decade or more than 2x the adjusted rate. Using JMA unadjusted it is +0.24 C/decade from 1976-2021 (45 years) and +0.26 C/decade from 1888-2021. The headline of “New Annual Temperature Data Show No Warming In Tokyo In 45 Years” is incorrect regardless of whether GHCN or NTZ’s own source of JMA is used.
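For readers wondering how these per-decade rates are derived, a minimal sketch: an ordinary least-squares slope fitted to annual mean temperatures, scaled by ten. The temperatures below are invented for illustration, not actual JMA or GHCN values.

```python
# Least-squares warming trend, expressed per decade.
def trend_per_decade(years, temps):
    """OLS slope of temps vs. years, scaled to degrees C per decade."""
    n = len(years)
    mx = sum(years) / n
    my = sum(temps) / n
    num = sum((x - mx) * (y - my) for x, y in zip(years, temps))
    den = sum((x - mx) ** 2 for x in years)
    return (num / den) * 10  # per-year slope x 10 -> per decade

# Invented annual means for a 10-year span, for illustration only.
years = list(range(1990, 2000))
annual_means = [15.8, 16.0, 15.7, 16.1, 16.2,
                15.9, 16.3, 16.1, 16.4, 16.2]
print(f"{trend_per_decade(years, annual_means):+.2f} C/decade")
```

Note how sensitive the result is to the chosen start and end years, which is the cherry-picking point being argued in this thread.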
It’s because global warming is so, so . . . local.
How is the temperature reported determined for a city the size of Tokyo? Where I live I notice variations as great as 7 degrees Fahrenheit when I drive a distance as little as 5 miles.
Exactly. This is why inhomogeneities exist in the observational record. That is, when a station moves location it also moves to a different microclimate. Other sources of inhomogeneity include instrument changes, time-of-observation changes, and microclimate changes (like urbanization). These inhomogeneities must be addressed to mitigate the effects of non-climatic biases when analyzing the observational record for climatic purposes.
As I said elsewhere, your assumption implicitly asserts that the people responsible did not properly address these problems at the time and were probably in contravention of accepted procedure. That’s a serious assertion with no evidence.
It’s not an assumption. We have little or no evidence that observers analyzed each and every changepoint to determine its magnitude at the time of the change. There was little motivation to do so at the time since they did not know their observations would be used for climate research decades later. Of course, if you can post convincing evidence that this analysis did, in fact, occur for each and every changepoint or even just a significant percentage of them at the time the change occurred and it somehow got lost along the way then you should present it to the maintainers of the data repositories so that it can be included.
It is not on me to argue that the required procedures for maintaining a temperature station, as outlined in the applicable methods and procedures, were not followed. I am not the one claiming errors occurred and that replacement information for existing data should be created anew.
It is incumbent on you to prove that the data collection did not meet the accepted criteria the meteorological agencies required at the time. If you cannot prove otherwise, then the data should be closed at the point you feel an error occurred and a new record started. Creating new information to replace existing data just so a “long” record can be claimed is not ethical.
Yet another strawman. No one is saying that procedures for maintaining a temperature station were not followed. And no one is saying that data collection did not meet accepted criteria. What is being said is that stations moved, that these moves introduce changepoints in those stations’ records, and that the magnitude of the changepoints was not quantified in most cases at the time of the change.
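The changepoint idea can be illustrated with a toy sketch (invented numbers, not any agency’s actual homogenization algorithm): difference the station against a stable neighbour, then compare the mean difference before and after a documented move.

```python
# Toy changepoint estimate: a station move introduces a step in the
# record, and its magnitude can be estimated from the shift in the
# station-minus-neighbour difference series. All data are invented.
station  = [15.0, 15.1, 14.9, 15.2, 15.8, 15.9, 16.0, 15.7]  # step at index 4
neighbor = [15.1, 15.2, 15.0, 15.2, 15.1, 15.3, 15.2, 15.0]  # stable reference

diff = [s - n for s, n in zip(station, neighbor)]
move = 4  # index of the documented station move

before = sum(diff[:move]) / move
after = sum(diff[move:]) / (len(diff) - move)
step = after - before
print(f"estimated step: {step:+.2f} C")
```

Real pairwise homogenization also has to detect undocumented breakpoints statistically, which is where most of the disagreement in this thread lies.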
Microclimate change is an existential crisis. It needs to be micromanaged in order to save the earth.
The Earth does not need saving just because Tokyo warmed more than surrounding areas due to urbanization.
You need a thermometer that reads .0001 😉
It’s recorded as record highs and lows, obfuscating their causes and effects a la global warming/climate change.
Climate extinction 😉
Warming, change, observed and modeled trends, records with impulse impurities, and propagation of errors including attribution.
Seems to me that deserts would be the ideal place to look for warming. Islands would be the worst place to look for warming.
You can look for warming anywhere. If you focus on deserts you’ll determine the warming rate for deserts. If you focus on islands you’ll determine the warming for islands. If you focus on land you’ll determine the warming for land. If you focus on oceans you’ll determine the warming for oceans. So on and so on. If you consider all areas you’ll determine the warming globally. It just depends on which area you want to determine the warming. Different areas will have different warming rates. You’ll even find that some areas are actually cooling.
You just described why it is important to know the absolute temperature of an average AND THE VARIANCE expected with a specific mean. The “uncertainty” (it is not) stated with a Global Average Temperature, when quoted with precision out to 1/100ths or even 1/1000ths of a degree, is not a statement of the standard deviation of the data.
People deserve to know what the true standard deviation of the entire absolute temperatures in a data base actually is. They would be surprised to see the range from the winter in Antarctica to summer in the tropics, all on the same day of the year.
“People deserve to know what the true standard deviation of the entire absolute temperatures in a data base actually is.”
For every time period evaluated, there is the sum of the variances between the expected value of each station reading and its calculated value from spatially weighted averaging. There is also the sum of the variances obtained by squaring the standard deviation of each station’s measurement error. These can simply be added, and then, using the normal statistical rules for weighted averaging, the standard deviation of each “entire” temp/date point can be found.
The Berkeley Earth temp data is the best example I know of where that parameter is provided.
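The variance bookkeeping described above can be sketched in a few lines, assuming independent errors (the weights, station errors, and interpolation variance below are invented for illustration):

```python
import math

# Variance of a weighted station average is sum(w_i**2 * sigma_i**2);
# an independent spatial-interpolation variance adds to it, and the
# combined standard deviation is the square root of the total.
weights = [0.5, 0.3, 0.2]            # spatial weights (sum to 1)
measurement_sds = [0.5, 0.3, 0.4]    # per-station measurement error, degrees C
interp_variance = 0.20 ** 2          # variance from spatial interpolation

meas_variance = sum(w**2 * s**2 for w, s in zip(weights, measurement_sds))
combined_sd = math.sqrt(interp_variance + meas_variance)
print(f"combined sd: {combined_sd:.2f} C")
```

The combined figure is dominated by whichever variance term is largest, which is why the spatial-sampling term matters so much in sparsely observed regions.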