Improving Climate Literacy: 'Temperature Homogenization Activity'

Guest essay by Jim Steele

In 2012 the National Academy of Sciences published A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas. Although the framework often characterizes climate change as a negative and disruptive phenomenon, it clearly states that students need to understand all the factors causing climate change, writing, “Natural factors that cause climate changes over human time scales (tens or hundreds of years) include variations in the sun’s energy output, ocean circulation patterns, atmospheric composition, and volcanic activity.”

However, instead of promoting textbooks that critically analyze and debate the relative contributions of a diverse array of climate factors, the American Association for the Advancement of Science (AAAS) has been attacking any state that wants to adopt textbooks promoting climate debate. Alan Leshner, a psychologist and CEO of the AAAS, and Camille Parmesan (whose debunked climate claims have already been published in textbooks) argued, “From the scientific perspective, there are simply no longer ‘two sides’ to the climate-change story: The debate is over. The jury is in, and humans are the culprit.”

Whatever the outcome of that political battle, the science framework is absolutely correct to state, “The actual doing of science or engineering can pique students’ curiosity, capture their interest, and motivate their continued study.” So I am creating a series of activities (which educators can download for free from my website) to supplement any Climate Literacy science text. Hopefully these lessons will engage students in critical thinking and simulate how NOAA/NASA climate scientists think and “do science” as it is currently practiced.

The current pedagogy argues teachers should not be the “Sage on Stage” spewing dogma for students to regurgitate. A teacher should be the “Guide on the Side,” alternating short bits of input followed by time slots (such as “Think, Pair and Share”) that allow students to investigate and discuss the issue with their peers. The first supplemental piece is my beta version of the “Temperature Homogenization Activity,” and I would appreciate any comments (and typo corrections) to help improve this and subsequent lessons before I post it to my website. If you know of any high school or college educators who might give this lesson a trial run, please pass it on.

“Temperature Homogenization Activity.”

Teaching Objective I: Understanding Trends and Anomalies

Students must be able to read and create quality graphs.

Teacher input: Give the students the graph “Maximum Temperature USHCN Raw Data” illustrating trends from 3 quality weather stations in the US Historical Climatology Network, all located at a similar latitude in southern California. In pairs or small groups, have them discuss the following questions and formulate questions of their own.

Think, Pair, Share:

1) What factors might make Brawley so much warmer than the other stations?

2) Which station(s) experienced the warmest temperatures between 1930 and 1950?

3) Which station(s) experienced the most similar climate trend from 1900 to 2010?

4) Is climate change affecting all stations equally?

[Figure 1. “Maximum Temperature USHCN Raw Data”: raw maximum temperatures for Brawley, Chula Vista, and Cuyamaca, 1900–2010.]

Teacher input: Brawley is farther inland and, unlike the stations closer to the coast, doesn’t experience the ocean’s cooling effect. Drier desert conditions with few clouds and little vegetation create a microclimate that heats up much more quickly than at the other stations. Brawley and Cuyamaca shared the most similar trends, but that similarity may have been difficult to see due to micro-climate differences. To better extract climate trends that can be compared between stations experiencing varied micro-climates, scientists graph anomalies.

Instruct students to visit the USHCN website and download the raw data for the 3 stations into a spreadsheet program such as Excel. An anomaly is simply the difference between a given year’s temperature and the station’s average temperature over a chosen baseline period. To determine anomalies relative to the 1951–1980 baseline, calculate each station’s average temperature for that period, then subtract the station’s average from the raw data for each and every year. This will produce negative values for years cooler than average and positive values for warmer years. Have students create their own anomaly graph and compare their charts with the anomaly graph below.

(Teacher note: Do not use an average from years later than 1980. During the mid-1980s there was a massive change in equipment that also required relocations that brought the weather stations closer to buildings.)
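(Optional note for classes that code: the same anomaly calculation can be scripted rather than done by dragging spreadsheet formulas. Below is a minimal Python sketch; the file name “brawley_tmax.csv” and the “year”/“tmax” column names are assumptions, so adapt them to whatever format the USHCN download actually provides.)

```python
# Minimal sketch of the 1951-1980 anomaly calculation described above.
import csv

def load_station(path):
    """Read (year, tmax) pairs from a simple two-column CSV."""
    years, temps = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            years.append(int(row["year"]))      # assumed column name
            temps.append(float(row["tmax"]))    # assumed column name
    return years, temps

def anomalies(years, temps, base_start=1951, base_end=1980):
    """Subtract the station's baseline-period mean from every year's value."""
    base = [t for y, t in zip(years, temps) if base_start <= y <= base_end]
    baseline = sum(base) / len(base)
    return [t - baseline for t in temps]

years, temps = load_station("brawley_tmax.csv")  # hypothetical file name
for y, a in zip(years, anomalies(years, temps)):
    print(y, round(a, 2))  # negative = cooler than baseline, positive = warmer
```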

[Figure 2. Maximum temperature anomalies (relative to 1951–1980) for the three stations.]

Think, Pair, Share:

Have students discuss the value of using anomalies to extract regional trends.

Brainstorm: what factors could cause only Chula Vista’s micro-climate to suddenly warm relative to the other stations?

Teaching Objective II: Understanding Artificial Inhomogeneities

Teacher input: Because each weather station exists in a unique micro-climate, individual differences cause each station to exhibit small cooling or warming trends that might not be seen at the other stations. For example, changes in vegetation, sheltering or waste heat from various building configurations, or a difference in topography that funnels wind differently can all affect short-term temperature trends. Changes to the landscape, such as removal of trees, a fire, or increased pavement, affect how the soil holds moisture and how much moisture is transpired into the air. The resulting differences between weather stations are called inhomogeneities. Natural inhomogeneities are expected and are an integral part of local climate change. However, scientists must eliminate any artificial inhomogeneities caused by a growing population that alters the surface and adds waste heat, or by stations relocating to a different micro-climate. All 3 stations exhibited trends that were reasonably similar until 1982. So what caused Chula Vista to artificially warm, or, conversely, did Brawley and Cuyamaca suddenly cool?

To answer that problem, instruct students to first visit the USHCN website and acquire the ID# for each station. Then have them visit NOAA’s Historical Observing Metadata Repository, plug in the ID#, and look for information regarding any changes at that station and the year in which those changes occurred.
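(Optional note for classes that code: a data-driven way to see the same break the metadata documents is to subtract one station’s anomalies from another’s. The shared regional climate largely cancels in the difference, so a persistent step suggests an artificial change at one of the two stations. The sketch below only illustrates the idea behind such pairwise comparisons; it is not NOAA’s actual algorithm, and the demo data are synthetic.)

```python
# Sketch: a pairwise difference series between two stations' anomalies.
def difference_series(anoms_a, anoms_b):
    """Element-wise difference of two equal-length anomaly lists."""
    return [a - b for a, b in zip(anoms_a, anoms_b)]

def step_size(years, diffs, break_year):
    """Mean of the difference series at/after a candidate break year
    minus the mean before it."""
    before = [d for y, d in zip(years, diffs) if y < break_year]
    after = [d for y, d in zip(years, diffs) if y >= break_year]
    return sum(after) / len(after) - sum(before) / len(before)

# Tiny synthetic demo: station B jumps 3 degrees in 1982.
years = list(range(1970, 1995))
a = [0.1 * (y - 1970) for y in years]                      # smooth shared trend
b = [v + (3.0 if y >= 1982 else 0.0) for v, y in zip(a, years)]
diffs = difference_series(b, a)
print(round(step_size(years, diffs, 1982), 2))             # -> 3.0
```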

Think, Pair, Share:

Which station(s) moved, and in which year? Compare the changes in temperature at any station that moved with the changes at stations that did not move. Then determine whether the relocation caused warming or cooling, and how it affected the temperature trend.

Teacher input: Confirm the students’ research. According to the Historical Observing Metadata Repository, Chula Vista moved 2.5 miles in 1982, from a location alongside salt evaporation ponds to an urban setting surrounded by buildings. In 1985, new instruments were installed that required new cable connections, so the weather station was moved another 190 feet, presumably closer to a building.

After the 1982 relocation, the temperature at Chula Vista rose by 6°F, in contrast to a drop in temperatures at the other 2 stations, so Chula Vista’s move very likely caused its temperature to rise artificially. An interpretation of artificial warming is also consistent with the relocation to a warmer urban setting.

There were no verifiable relocations or changes of instrumentation at the other 2 stations. However, 7 months of 1992 temperature data for Brawley were reported via a hygrothermograph (HTG), but that would not affect the 1982 comparisons.

Teaching Objective III: Homogenizing Data to Create Meaningful Regional Climate Trends

Teacher input: Good scientists do not blindly accept raw data. Data must undergo quality control analyses that adjust for documented events known to create changes unrelated to climate. After accounting for artificial inhomogeneities, scientists adjust the data to create what they believe is a more realistic regional trend.

Based on what they have learned so far, ask the students to create a graph that best exemplifies southern California’s regional climate change. Simplify their task by using the graph (below) for just Chula Vista and Cuyamaca (which are only 15 miles apart). Students are free to adjust the data in whatever manner they feel best represents real climate change and corrects for artificial inhomogeneities.

[Figure 3. Raw maximum temperature data for Chula Vista and Cuyamaca.]
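(Optional note for classes that code: one simple way students might implement an adjustment is to subtract a documented step change from every value after a known station move, as sketched below. This is only a classroom illustration of the idea, not the pairwise homogenization method NOAA actually uses, and the temperatures shown are hypothetical.)

```python
# Sketch: remove a documented step change after a known station move.
def adjust_for_move(years, temps, move_year, step):
    """Subtract `step` degrees from every year at or after `move_year`."""
    return [t - step if y >= move_year else t for y, t in zip(years, temps)]

# Hypothetical values showing a 6 degree F jump at the 1982 move.
years = [1980, 1981, 1982, 1983]
temps = [70.0, 70.5, 76.5, 76.8]
print(adjust_for_move(years, temps, 1982, 6.0))  # [70.0, 70.5, 70.5, 70.8]
```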

Teacher input: After students have graphed their own temperature trends, have them compare their results with the graph below illustrating how USHCN climate experts actually homogenize the data. (The comparison should promote lively discussions as most students will create trends for both stations that resemble Cuyamaca.)

[Figure 4. USHCN homogenized data for Chula Vista and Cuyamaca.]

Think, Pair, Share: Discuss why climate experts created such different trends. Why did scientists lower the high temperatures at Cuyamaca during the 1930s to 1950s by 3 to 5°F? What other concerns may affect scientists’ expectations about how best to homogenize data?

Teacher input: Clearly the data was adjusted for reasons other than Chula Vista’s relocation. Adjusting data for unknown reasons is different from quality control adjustments and is called homogenization. The use of homogenization is contentious because a change in a station’s trend is often assumed to be caused by unknown “artificial” causes. However, the natural climate is always changing due to cycles of the sun, ocean oscillations like El Niño and the Atlantic Multidecadal Oscillation that alter the direction and strength of the winds, or natural landscape successions. So how can scientists reliably separate natural climate changes from undocumented “artificial” changes?

One method compares data from the more reliable weather stations, those that have undergone the fewest known artificial changes, to determine a “regional expectation.” That regional expectation can then serve as a guide when adjusting trends at other, less reliable stations. However, as we have seen, the most reliable stations can undergo the greatest adjustments. So what other factors are in play?
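(Optional note for classes that code: the sketch below gives a concrete, minimal picture of a “regional expectation”: average the anomalies of the stations judged most reliable, then measure how far another station strays from that average each year. The numbers are hypothetical, and real methods are far more elaborate.)

```python
# Sketch: a "regional expectation" from trusted stations, and the
# year-by-year departures of a suspect station from it.
def regional_expectation(anomaly_lists):
    """Year-by-year mean across several equal-length anomaly lists."""
    return [sum(vals) / len(vals) for vals in zip(*anomaly_lists)]

def departures(station_anoms, expectation):
    """How far a station sits from the regional expectation each year."""
    return [s - e for s, e in zip(station_anoms, expectation)]

# Hypothetical anomalies: two trusted stations and one suspect station.
reliable = [[0.1, 0.2, 0.3], [0.0, 0.2, 0.4]]
suspect = [0.1, 1.2, 0.3]
exp = regional_expectation(reliable)
print([round(d, 2) for d in departures(suspect, exp)])  # flags the year-two spike
```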


Many scientists working for NOAA and NASA believe that rising CO2 explains recent temperature trends. In contrast, other scientists suggest that the proximate cause of regional climate change is driven more by natural changes in ocean circulation. In 2014 climate scientists published a peer-reviewed paper (Johnstone 2014) suggesting that climate change along the coast of North America could be best explained by natural cycles of the Pacific Decadal Oscillation (PDO), due to its effects on sea surface temperatures in the eastern Pacific. Give the students the graph below from Johnstone 2014 and ask them to compare changes in sea surface temperatures (SST in red) with the raw and recently homogenized temperature data from southern California.

[Figure 5. Graph from Johnstone 2014, showing sea surface temperatures (SST, in red).]

Think, Pair, Share: Which data set (raw or homogenized trends) best agrees with the hypothesis that ocean temperatures drive regional warming trends? Which best agrees with the hypothesis that rising CO2 drives regional warming trends? Could a belief in different hypotheses affect how temperature trends are homogenized?

Teacher input: Have students compare the temperature trends for the northern hemisphere (below; created by the Japanese Meteorological Society and published by the National Academy of Sciences in 1977) with the new global trends presented by NASA’s Gavin Schmidt, who argued 2014 was the warmest year on record. Point out that the earlier scientific records suggested temperatures dropped by 0.6°C (1.1°F) between 1940 and 1980, with 1980 temperatures similar to 1910. Compare those temperatures with Schmidt’s 2014 graph, which suggests 1980s temperature anomalies were 0.5°C higher than the 1910s.

[Figure 6. Northern hemisphere temperature trends, created by the Japanese Meteorological Society and published by the National Academy of Sciences in 1977.]

[Figure 7. NASA’s global temperature trends through 2014, presented by Gavin Schmidt.]

Think, Pair, Share: Why does the period between 1940 and 1980 in the 2 graphs disagree so dramatically? How might differences in coverage (northern hemisphere versus global) contribute? Does the new graph by NASA’s Gavin Schmidt represent real climate change or an artifact of homogenization? If the difference was due to homogenization, is that a valid reason to alter older trends? If the starting point of Gavin Schmidt’s temperature data from 1980 to 2014 were lowered, so that 1980 temperatures were still similar to 1910 as suggested by earlier research, how much higher than the 1940s would the 2014 global temperature be?

Teaching Objective IV: In‑filling

Teacher input: As seen for the USHCN weather stations, raw data is often missing. Furthermore, extensive regions around the world lack any weather stations at all. To create a global temperature, climate scientists must engage in the art of in-filling. A recent scientific paper (Cowtan and Way 2014) used in-filling to contradict other peer-reviewed research that had determined a pause in global warming for the past 15 years or more. By in-filling, these scientists argued that there was no hiatus and the warming trend continued.
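(Optional note for classes that code: Cowtan and Way used kriging, a fairly sophisticated interpolation. The sketch below uses much simpler inverse-distance weighting just to convey the basic idea that an unmonitored location inherits a value from whatever stations happen to be nearby; the coordinates and anomalies are hypothetical.)

```python
# Sketch: in-fill an unmonitored location from nearby stations using
# inverse-distance weighting (a simple stand-in for kriging).
import math

def idw_estimate(target, stations, power=2):
    """Estimate the anomaly at target = (lat, lon) from
    stations = [((lat, lon), anomaly), ...]."""
    num = den = 0.0
    for (lat, lon), anom in stations:
        d = math.hypot(lat - target[0], lon - target[1])  # crude planar distance
        if d == 0:
            return anom  # target sits exactly on a station
        w = 1.0 / d ** power
        num += w * anom
        den += w
    return num / den

# Hypothetical Arctic stations and an empty grid cell between them.
stations = [((70.0, -50.0), 1.2), ((65.0, -40.0), 0.8)]
print(round(idw_estimate((75.0, -45.0), stations), 2))
```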

Give the students a map of the world and an amber colored pencil. Instruct them to lightly shade all the continents to show they have all warmed. Now provide the map (below) of Global Historical Climatology Network stations showing the station locations. Instruct students to simulate in-filling by darkening the regions on all the continents wherever weather stations are sparse (the whitest areas). Then give them NOAA’s 1950–2014 map modeling the continents’ warmest regions.

[Figure 8. Locations of Global Historical Climatology Network stations.]

[Figure 9. NOAA’s 1950–2014 map modeling the continents’ warmest regions.]

Think, Pair, Share: Can in-filling reliably represent local temperature trends? The warmest regions appear to be related to in-filling. What other reasons could cause greater warming in the in-filled regions? Should regions without a sufficient density of weather stations be included in a global average temperature?

Extended Lesson Activities:

Have students pick a group of stations within a 500-mile area within similar ecosystems (e.g., the Great Plains, Southeastern forests, or New England forests) and examine the differences between raw and homogenized temperature data for all those stations. (See an example of such a comparison for Massachusetts in an analysis of moose migrating southwards.) Use the metadata for those stations to identify any documented changes.

I suggest looking at just the maximum temperatures in this extension activity because minimum temperatures are much more sensitive to landscape changes and other microclimate changes. The next activity will examine differences between maximum and minimum temperatures and the effects of the landscape on temperature trends.

Related Teaching moment: Tolerance and Respectful Debate

Have students write an essay about how our nation is fostering greater tolerance and compassion toward different ethnicities, races, religions and people with different sexual preferences. Then contrast that tolerance with the condoned hatred that’s been hurled at people for having different climate beliefs. Often those very different beliefs are simply a result of trusting different virtual realities created by statistical temperature trends. Would more respectful debate, between scientists who trust homogenized trends and those who don’t, help the public better understand climate change?

Sweet Old Bob
January 28, 2015 6:25 pm

Great material. Hope “Party Line” instructors are brave enough to use this.
Not holding my breath….(8>()

Lance Wallace
January 28, 2015 6:29 pm

Very thoughtful. I suggest adding figure numbers to make it easier to discuss. Under your Objective 3, the figure might benefit from an arrow showing the move of Chula Vista station in 1982.

Reply to  Lance Wallace
January 28, 2015 9:02 pm

Agreed

u.k.(us)
January 28, 2015 6:51 pm

Wait….what ?
Your final paragraph, reads like a call to arms.

clipe
January 28, 2015 6:56 pm

Related Teaching moment: Tolerance and Respectful Debate

Not likely
http://bishophill.squarespace.com/blog/2015/1/28/tol-on-radical-greens.html

Reply to  clipe
January 28, 2015 9:05 pm

clipe, I understand your pessimism, but I can only hope that as students witness the tremendous complexities and uncertainties of climate change, they will be more sympathetic toward diverging viewpoints and help stave off the growing attempts to impose intellectual tyranny.

January 28, 2015 6:58 pm

2 + 2 = global warming

Reply to  Max Photon
January 28, 2015 7:36 pm

no. In the Church of CAGW, 2+2 = 5, just as Orwell predicted.

garymount
January 28, 2015 7:20 pm

Typo : XCEL should be Excel.

Global cooling
January 28, 2015 7:27 pm

Well, it is always a good idea in science to go to the original raw data and research. In this case, students should think about whether the given samples (of weather stations) represent the whole population.
I have been wondering if we really need the global average temperature if we are only interested in trends. Is one tree in Yamal enough, or should we infill missing data in paleoclimatology as well?
Confirmation bias is very strong. The IPCC needed research to support its political agenda and found Mann. Researchers noticed the demand for results showing warming and created them. Selection bias in funding and publicity does the rest. No need for conspiracy.

True Conservative
January 28, 2015 7:34 pm

Brilliantly done, Mr. Steele, brilliant!

John Pickens
January 28, 2015 7:34 pm

In addition to the correct spelling of Excel, you could suggest a free spreadsheet program like Open Office. Something like: “import the data to a spreadsheet program like Excel or Open Office”. Encouraging schools to use open source operating systems and software could save thousands of dollars per student over their 13 year public school cycle.

Reply to  John Pickens
January 28, 2015 7:56 pm

That’s a good idea John. I would need to try out the open source programs first, before I recommend them. Is Open Office the best open source?

garymount
Reply to  jim Steele
January 28, 2015 8:26 pm

This is what William Briggs writes in his book “Breaking the Law of Averages” :
If you want to create your own data set, open a spread sheet (like OpenOffice.org) …
Please don’t mention open source operating systems.
Microsoft will be offering free Office software to buyers of inexpensive computing devices in the near future.

John Andrews
Reply to  jim Steele
January 28, 2015 8:29 pm

Open Office and Libre Office are essentially the same set of programs. Libre Office is a fork of Open Office and is frequently used in Linux distributions. Both and others are available at no or low cost. I would select Libre Office first because many of the contributors to that program set have left Open Office to work with less control by Open Office’s commercial supporter over how and why they do things. Software politics going on.

Kit
Reply to  jim Steele
January 29, 2015 3:16 am

LibreOffice is better than Open Office, although either will work for this purpose.

Reply to  jim Steele
January 29, 2015 9:18 am

OpenOffice and its fork LibreOffice are both seemingly about the same quality-wise and are generally considered better than other open source office suites. OpenOffice and LibreOffice definitely open Excel documents better than Excel does with Open Document Format files; frequently OpenOffice and LibreOffice will handle older versions of Excel documents better than Excel will. Puritanical open source advocates will recommend LibreOffice over OpenOffice mostly for political reasons.
I use it at work as much as I can. Excel has some annoying bugs, where you can’t display blanks in cells with zero, not-a-number values, or blanks as easily as you should be able to.

John Pickens
Reply to  jim Steele
January 29, 2015 1:14 pm

I use Libre Office, which is the version of Open Office used in the Ubuntu Linux operating system I use. Libre Office and Apache Open Office both share much of the same code. Libre Office seems to be updated by its user base more often. They are both free and open source.

January 28, 2015 7:35 pm

“Why does the period between 1940 and 1980 in the 2 graphs disagree so dramatically? Does the new graph by NASA’s Gavin Schmidt represent real climate change or an artifact of homogenization?”
A very reasonable answer would be that one is Northern Hemisphere, the other is global. If, that is, the students are ever told that.

Michael 2
January 28, 2015 7:55 pm

Really excellent. I can appreciate adjusting data when you have a good reason to do so and a reasonable baseline; in other words, an obvious step change in the trend.
It would be better not to adjust data but one then has to discard much of the record entirely. So, “adjusted data” can be used with caution.
“Infilling” is in my opinion improper science. If you don’t have data, don’t invent it.

jimmyjoe
January 28, 2015 7:56 pm

You probably should include an explanation of what an ‘anomaly’ is and how to calculate it for the high schoolers.

Reply to  jimmyjoe
January 28, 2015 9:07 pm

Agreed a definition of an anomaly would improve things.

January 28, 2015 8:00 pm

Nick Stokes says, “A very reasonable answer would be that one is Northern Hemisphere, the other is global. If, that is, the students are ever told that.”
That seems a tad snarky, Nick, in light of the fact that I said, “Have students compare the temperature trends for the northern hemisphere (below; created by the Japanese Meteorological Society and published by the National Academy of Sciences in 1977) with the new global trends presented by NASA’s Gavin Schmidt”
Also consider that all climate scientists acknowledge that the most reliable data exists only for the northern hemisphere.
2 demerits for you Nick.

Reply to  jim Steele
January 28, 2015 8:09 pm

It’s still a very reasonable answer. Especially as one is NH land only, the other global land/ocean. It’s a very obvious difference. Why would it be attributed to homogenisation?
I and others have made indices of non-homogenised data. Zeke Hausfather surveys that here. Homogenisation makes very little difference. Students might want to get quantitative instead of hand-waving.

Mike the Morlock
Reply to  Nick Stokes
January 28, 2015 8:17 pm

The purpose of the exercise is to get students to think for themselves just as you do. By the way Nick Congratulations you are becoming a skeptic. You that came here drank the water breathed the air …the change is inevitable it is just a matter of time…
michael

Reply to  Nick Stokes
January 28, 2015 8:39 pm

Nick, my lesson emphasizes scrutinizing the data, yet you label it hand waving?!? The activity simply asks them to explore and discuss the data. You get 2 more demerits for false accusations.
Nick, it was not clear to me that the National Academy of Sciences graph of 1977 was land only. Where did you find that information? Because I agree it is an important distinction. However, many climate scientists will suggest that changes in land temperatures should follow changes in ocean surface temperatures. I remember a video of Phil Jones justifying altering data to make land and ocean temperatures more homogeneous. So based on your criticism I will ask students to discuss correlations between ocean and land temperatures, and add it to the discussion of the Johnstone paper.
Of additional interest might be that the National Academy of Sciences discussed Budyko’s climate model in some detail, writing, “Budyko assumed that the release of waste heat increased by 4 percent each year so that after 200 years waste heat, rather than solar energy, would be the controlling factor in climate.” Such a historic scientific discussion of waste heat is a valuable addition to CO2 warming.
The National Academy of Sciences’ discussion of waste heat preceded arguments on WUWT by 20 years. Curiously, like CO2 advocates, they also argued that waste heat could melt Arctic ice, writing, “If we follow through Budyko’s arguments, we would expect the polar ice covers to retreat and eventually disappear.”

Reply to  Nick Stokes
January 28, 2015 9:14 pm

Nick, how much infilling was required to generate ocean surface temperatures before the advent of Argo? And after Argo? I would expect greater interannual variations due to El Niño/La Niña. I went to your link and didn’t see any error bars on your graphs. Would your lack of error bars qualify as “hand-waving”? And lastly, what is the method of quantifying error bars for infilled data that may easily differ 100% from reality?

Reply to  Nick Stokes
January 28, 2015 9:26 pm

Jim,
There were no suitable collections of SST data available to Budyko in 1967. There was precious little land data. Very little had been digitised. I think he mentioned 260 stations. Even in the late 1980’s, GIStemp was using a Met Stations only index.

Reply to  Nick Stokes
January 28, 2015 9:55 pm

Jim,
“Would your lack of error bars qualify as “hand-waving”? “
There are no error bars on any of your graphs here either.
In fact that exercise was about methods. Different methods of calculating an index. We showed that alternative methods to GISS etc gave the same result. With or without homogenisation.

Reply to  Nick Stokes
January 28, 2015 10:08 pm

Nick, forgive my skepticism regards your sincere efforts. However based on the gross dissimilarities between raw data trends and homogenized trends, especially between the gross dissimilarities between raw data and homogenized for stations that have not relocated or changed instrumentation, plus the gross dissimilarities between homogenized data in 2011 vs 2015, I honestly don’t understand how you can truly claim, “We showed that alternative methods to GISS etc gave the same result. With or without homogenisation.”

Reply to  Nick Stokes
January 28, 2015 10:22 pm

Nick says, “There were no suitable collections of SST data available to Budyko in 1967.”
I totally understand. But the lack of data for the pre-1970s could not improve by much without a time machine. How much did digitization add? Whatever the case may be, the absolute lack of data suggests any homogenization of trends before the 1970s should be viewed with suspicion. On the other hand, Budyko’s graph, and NAS’ updated graph, are supported by many proxy studies showing that temperatures have not exceeded the 1930s to 50s, depending on location. To some, a homogenization process that lowers the peak warming suggested by many sources is just another example of “hiding the decline.”

Robert B
Reply to  Nick Stokes
January 28, 2015 10:38 pm

The differences seem to only occur for land temperatures in the NH after 2000. The problem from 1910 to 1980 is still there.
http://www.woodfortrees.org/graph/crutem3vnh/plot/hadsst3nh/plot/hadcrut4gl

Reply to  Nick Stokes
January 28, 2015 10:51 pm

Jim,
“How much did digitization add?”
It’s the whole thing. Before digitisation, the information sat on handwritten log books and typed forms. Current indices like CRUTEM use of order a million daily data a year. Easy if you can get it into a computer, even in the 1970’s. But a lot of typing, even if you can get the docs together. It was Jones and GISS in the ’80s that made land data usable, although the Met Offices did the brunt of the typing. And more slowly, Folland etc for SST. The GHCN project of the early ’90s capped it off and put the land data on CD.
‘I honestly don’t understand how you can truly claim, “We showed that alternative methods to GISS etc gave the same result. With or without homogenisation.”’
Well, we did it. Those are the results.
The point is that again, the global average is made up of thousands of stations. Some are adjusted up, some down. It evens out. In fact, more mathematically, much homogenisation involves partly replacing station data with a mean of neighbors. In the average, that is just a minor reweighting of the same data. That is what would have happened to your Cuyamaca data. It’s a mess, and the adjustment would have introduced a lot of other station data. That makes a big difference to the Cuy plot, but not to the average.
The second point, of course, is that the land/ocean data is dominated by ocean.

Reply to  Nick Stokes
January 29, 2015 1:28 am

“Homogenisation makes very little difference.”
Then stop doing it!

Solomon Green
Reply to  Nick Stokes
January 30, 2015 6:25 am

I do not see anything in the site to which we are referred by Nick Stokes that indicates Zeke was using raw data.

Mike the Morlock
Reply to  jim Steele
January 28, 2015 8:21 pm

oops cut the “that” after “You” I really must proof read. Tsk

Reply to  Mike the Morlock
January 28, 2015 8:40 pm

Mike we share the same affliction.

mothcatcher
Reply to  jim Steele
January 29, 2015 1:23 am

Jim – it would do no harm to add a pointer to (or lead the students to find) the cautionary logic that Nick seems to be pointing out (especially about the SH uncertainties). Would add a bit of balance in my view, and wouldn’t detract at all from your excellent piece. In fact, I think it would add something to the whole. Sure, Nick is primarily picking holes, but that’s what he’s there for!

rooter
Reply to  jim Steele
January 29, 2015 4:13 am

This shows that this teacher does not know much about the subject at hand. He does not know that there is a difference between global land-ocean indexes and land-only indexes. He also just seems to ignore the fact that the number of stations is much lower in the first graph than in the second. And there is a different geographical distribution of those stations. How could the students know this?
And considering the teacher’s stressing of the sparsity of stations later, this becomes even more puzzling. The teacher could better ask why the indexes are not more different than they are.

Reply to  rooter
January 29, 2015 10:31 am

Rooter, I am not sure why your emphasis is on denigrating “the teacher”. When you say the teacher does not know the difference between land and ocean-plus-land indexes, that is your own fabrication. You must hail from the Slandering Sou school of sniping, as she likes to create such straw dogs and false attributions. People erroneously validate their anger by hurling different graphs for different time periods, and different conditions, at each other.
So I want to focus your attention on the last extension activity regards tolerance and respectful debate, “contrast that tolerance with the condoned hatred that’s been hurled at people for having different climate beliefs. Often those very different beliefs are simply a result of trusting different virtual realities created by statistical temperature trends. Would more respectful debate, between scientists who trust homogenized trends and those who don’t, help the public better understand climate change?”
Perhaps writing down your thoughts will help you be sincerely introspective and help you understand that snarky words of anger will not improve the lesson.

rooter
Reply to  rooter
January 29, 2015 11:59 pm

Jim Steele wants to focus on tolerance and respectful debate. By writings like this:
” You must hail from the Slandering Sou school of sniping as she likes to create such straw dogs and false attributions.”
Thanks to Steele for the lecture.
It is a fact that the teacher in question did not know this difference between data for land only and data for land-ocean.
Or perhaps the teacher wanted to draw attention to the fact that even with these differences in data type and difference between amount of data the difference was actually small?

Reply to  jim Steele
January 29, 2015 7:29 am

Jim,
What Nick (and I) did was simply download the raw GHCN-M data and use it to calculate global land temperatures. We got results similar (though not identical) to NASA/NOAA/Hadley results. So did skeptics Jeff Id and Roman M. You can see a discussion of all the land temp reconstructions that were being done back in 2010 here: http://rankexploits.com/musings/2010/another-land-temp-reconstruction-joins-the-fray/
We’ve improved methods a bit since then, but the general conclusion remains. While the impact of homogenization is particularly strong in the U.S, its impacts globally are much smaller, as shown in the first figure here: http://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/
Finally, I’m not entirely sure what you are trying to imply with your discussion of infilling. Land temperature stations are not evenly distributed (in fact, about 2/3rds are in the U.S. or Western Europe). Simply averaging all the anomalies would give you a representation of a world where 2/3rds of the global land area is in the U.S. or Western Europe, something I hope you’d agree is rather incorrect. That’s why groups like NCDC or Hadley use grid cells (usually 5×5 lat/lon) to ensure that each station’s temperatures are weighted proportionate to the land area they represent in global reconstructions. Groups like GISS, Cowtan and Way, and Berkeley Earth do more complex interpolation and tend to infill areas further from stations, though this mainly matters in the Arctic where station coverage is sparse.

Reply to  Zeke Hausfather
January 29, 2015 8:39 am

Zeke says “While the impact of homogenization is particularly strong in the U.S, its impacts globally are much smaller”.
Another way of conceptualizing the problem is that in places with a greater concentration of stations, as well as more stations that extend over longer time periods, like the USA and western Europe, those are the stations that are distorted the most by homogenization. That suggests homogenization is distorting the most reliable data. It is a red herring that anyone is suggesting “averaging all the anomalies” so “2/3rds of the global land area is in the U.S. or Western Europe.”
It also suggests the global average is now dominated by stations that have shorter life spans of 60 years or less, and averaging those stations will indeed give a nice warming trend. But those stations can not provide information about the previous warming trends in the 20s through 50s, so they do not suffer from homogenization like Cuyamaca, where earlier warming peaks are shaved off more and more with each round of homogenization. The short-lived stations can not provide insight about the cyclical nature of climate change that one would expect from dominating influences like the Pacific and Atlantic Oscillations.
It seems obvious when we examine the California raw data that there has been a definitive impact from the PDO, but the homogenization process obliterates it. Short term stations are tainted by the population boom. Tucson comes to mind, and other more urban Arizona stations. The heat waves of the 30s fostered a boom in air conditioning, which in turn opened much of Arizona to rapidly expanding retirement communities beginning in the 60s, and right about that time we see rapid warming of minimum temperatures in those regions.
To repeat, the homogenization process obscures local dynamics and creates the greatest errors in the most reliable stations. Assuming the errors will simply cancel out is folly, obscures the real underlying issues, and leads to incorrect diagnoses. It reminds me of the joke about the man with his head in the oven and feet in the freezer. The doctor attending to his pain suggested his problem was psychological because on average his temperature was just right. That’s what canceling out gets you.

Reply to  Zeke Hausfather
January 29, 2015 10:11 am

Jim,
Most of the net effect of homogenization in the U.S. is to deal with two things: CRS to MMTS transitions, and TOBs changes. Both of these introduce strong (and measurable) cooling biases. Other more minor factors that tend to be picked up during homogenization are station moves and urbanization biases. This is why the residual impact of homogenization on U.S. temperature trends is small when you exclude TOBs-related adjustments and the post-1980s MMTS transition, as shown in the figures here: http://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/
Homogenization doesn’t have nearly as large an effect in Europe as in the U.S. The major reason is that temperature measurement systems in Europe were traditionally state-run rather than volunteer-run, and didn’t have large-scale TOBs changes or other system-wide changes that introduce systemic biases like in many U.S. stations. Generally speaking, homogenization has the largest impact where individual stations tend to do things radically different than their near neighbors. This is unrelated to station density, apart from the fact that denser networks allow for better pair-wise comparisons. This is why the Berkeley record (with 42,000 stations and 16,000 in the U.S. alone) ends up with a nearly identical homogenized series as the NCDC record (~7,000 stations and 1,218 in the U.S.).

Reply to  Zeke Hausfather
January 29, 2015 10:12 am

Jim,
If you object to what you term “infilling”, how exactly do you propose to calculate global mean land temperatures from a discrete number of stations that are not evenly geographically dispersed? It seems like you would have to use some form or another of either gridding or more explicit spatial interpolation.

RACookPE1978
Editor
Reply to  Zeke Hausfather
January 29, 2015 10:23 am

Zeke Hausfather (replying to Jim)

If you object to what you term “infilling”, how exactly do you propose to calculate global mean land temperatures from a discrete number of stations that are not evenly geographically dispersed?

Why use in-filling at all? UNLESS it is your specific political goal to “spread out” as many – and as few! – hot (red) areas across as many parts of an exaggerated Northern-hemisphere Cartesian-coordinate map as possible, to present the most effective propaganda possible?
The methods chosen by Hansen and Mann and Nye and the others need to show “red areas” across as much of a map as possible to create their hysteria and thus their power. A Cartesian coordinate map is the worst possible way to show scientific data or impacts or relationships, and it skews all interpretation of all secondary data and all data trends over time.
For example, what are the actual “grids” that GISS uses as they skew data and smear it across the tundra and oceans? Where are their data “points”, and in what areas are there no data points? We don’t know, and they use this deliberately.

Reply to  Zeke Hausfather
January 29, 2015 10:57 am

Zeke asks, “If you object to what you term “infilling”, how exactly do you propose to calculate global mean land temperatures from a discrete number of stations ”
As an ecologist, there is no denying that all organisms respond locally. The importance of understanding climate change is at the local and regional level. That’s where changing weather exerts its impact. Teach people about the more concrete examples of regional climate change. If you have concerns about heat stress and heat accumulation, then talk about the maximum temperatures, instead of averaging in a minimum temperature that is tainted by landscape changes and amplifies the average without accumulating heat. So I propose looking at climate change from a local and regional perspective based on actual data.
If you want to create a global average for the purpose of illustrating the effect of CO2, then make it absolutely clear that much of the data is hypothetical. Subtract the warming from ventilating Arctic heat, instead of using it to amplify and adjust for theoretical missing heat.
Indeed, regions with greater station density will yield greater inhomogeneities. But that proves my point that more stations with smaller life spans create short term trends that obscure the dynamics of the past 100 years. Furthermore, studies have shown homogenization has skewed European data similarly, removing peak warming in the 40s and creating a similar warming trend, which suggests a systematic bias in the way homogenization is applied.
http://landscapesandcycles.net/image/88852915.png
Chylek 2006 used the data from Greenland’s only two long term stations, so no homogenization possible and he illustrated the cyclical nature of Greenland’s climate and a faster rate of warming in the 30s.
http://landscapesandcycles.net/image/78283826.jpg

Reply to  Zeke Hausfather
January 29, 2015 11:18 am

Zeke says, “Most of the net effect of homogenization in the U.S. is to deal with two things: CRS to MMTS transitions, and TOBs changes. Both of these introduce strong (and measurable) cooling biases.”
First, your claim does not explain why the homogenized trend in 2015 is much steeper than the homogenized trend in 2011, long after the MMTS moves in the 1980s. It does not explain the change in trends in stations with no MMTS moves.
Second, I do not see your claim manifest itself in all weather stations, so homogenization needs to be done station by station. Perhaps there is a sampling bias, but many stations show a warming after MMTS deployment. The Doesken Ft. Collins study illustrated a warming bias for minimum temperatures and a cooling bias for maximums, but the degree of that bias varied with each month and over the years.

DHF
Reply to  Zeke Hausfather
January 29, 2015 1:36 pm

Zeke,
The intentions of most actors may have been the best, but anyhow, now there is near perfect linear correlation between adjustments from raw to final temperatures for USHCN and the rise of the CO2 content in the atmosphere.
Ref: NCDC Breaks Their Own Record For Data Tampering In 2014
https://stevengoddard.wordpress.com/2015/01/05/ncdc-breaks-their-own-record-for-data-tampering-in-2014/
In my opinion data trumps good explanations. I think that scientific minds should feel inclined to stop telling how fabulous the adjustments are, and start asking: How can it be?
Do you have an understanding of why there can be so good correlation between the increase in adjustments and the CO2 increase?

Reply to  Zeke Hausfather
January 29, 2015 7:35 pm

“If you object to what you term “infilling”, how exactly do you propose to calculate global mean land temperatures from a discrete number of stations that are not evenly geographically dispersed?”
Or one might ask people like Zeke why they take temperature readings of a tiny point in space and then smear that temperature reading over huge distances. If the temperature reading is in a valley or on a hill, or near the sea, or inland, or in an urban area, or in the country, you’ve completely destroyed any meaning that temperature reading might have had. Stop infilling. It’s STUPID.

rooter
Reply to  Zeke Hausfather
January 30, 2015 12:17 am

Jim Steele is still wrong:
“It seems obvious when we examine the California raw data that there has been a definitive impact from the PDO, but the homogenization process obliterates it.”
The only land temperature data he has shown is homogenized data. It is the homogenized data that shows the close match to sea surface temperatures.
Why does Steele continue in repeating the same error?

rooter
Reply to  Zeke Hausfather
January 30, 2015 12:25 am

Steele is trying this one:
“Indeed regions with greater station density will yield greater inhomogeneities. ”
He lost the explanation from Zeke. In the US the stations were run by volunteers and there were changes in time of day for reading the measurements that created the largest inhomogeneities. In Europe with great station density that is not so and the inhomogeneities in those data are smaller. Steele’s claim is just not true.

rooter
Reply to  Zeke Hausfather
January 30, 2015 12:32 am

Steele writes:
“Chylek 2006 used the data from Greenland’s only two long term stations, so no homogenization possible and he illustrated the cyclical nature of Greenland’s climate and a faster rate of warming in the 30s.”
What is Steele trying to say by this? Two stations is enough for Greenland?
Might be. But whatever happened to the problem with small density of stations and infilling? Not a problem in Greenland?
(.. and we do not want to use updated data, do we?)

Reply to  Zeke Hausfather
January 30, 2015 11:43 am

Rooter bombs away “The only land temperature data he has shown is homogenized data. It is the homogenized data that shows the close match to sea surface temperatures.
Why does Steele continue in repeating the same error?”
Rooter, you are truly a Slandering Sou reincarnation. Why do you keep ranting about the wrong point? Over the past year the homogenization has increasingly amplified the warming trend. Instead of critically examining the sequential changes in the trend due to homogenization, you blather about nebulous homogenization as if you are proving a point. Your false accusations and choice of words will not change the documented amplification of warming trends that after serial homogenization obliterated the cyclical nature of climate change. Somehow you think if you keep bombing away that someone will believe that the 2015 homogenization looks like the PDO and NE Pacific SST? You’re biting off your nose trying to save your homogenized face.

Reply to  Zeke Hausfather
January 30, 2015 2:41 pm

Zeke Hausfather
January 29, 2015 at 7:29 am

Finally, I’m not entirely sure what you are trying to imply with your discussion of infilling. Land temperature stations are not evenly distributed (in fact, about 2/3rds are in the U.S. or Western Europe). Simply averaging all the anomalies would give you a representation of a world where 2/3rds of the global land area is in the U.S. or Western Europe, something I hope you’d agree is rather incorrect. That’s why groups like NCDC or Hadley use grid cells (usually 5×5 lat/lon) to ensure that each station’s temperatures are weighted proportionate to the land area they represent in global reconstructions.

Well, I think it’s closer to what the measurements say, and I think you can make more use of the measurements than what’s done.
For a baseline I use the same station’s prior day’s reading. So I calculate a station’s day-to-day min temp change, and a day-to-day max temp change. Now with the NCDC GSOD data set I have ~120 million samples of how much the temp changed day over day. I average this anomaly by the day for the stations in an area, from 1 x 1 degree cells all the way up to the globe. I also take a year’s average for the same areas.
With the daily data for the 2 hemispheres you can see the average daily rate of temperature change as the length of day changes, and with a larger area you can average out a lot of the effect of weather. Then you can do things like look at the rate of change in an area as the amount of Sun per day changes, for each year, for both warming and cooling.
which would look like this
http://www.science20.com/sites/all/modules/author_gallery/uploads/543663916-global.png

rooter
Reply to  Zeke Hausfather
January 31, 2015 4:45 am

Jim Steele asks:
“Why do you keep ranting about the wrong point.”
Answer: Because you keep on repeating your error after you should know it is wrong. Johnstone compares SST (of course adjusted and homogenized), SAT (land surface, homogenized) and SLP (I believe those are actually reanalysis, i.e. modelled). And you keep on repeating like this:
“It seems obvious when we examine the California raw data that there has been a definitive impact from the PDO, but the homogenization process obliterates it.”
You should change that part of section 1:
“Give the students the graph below from Johnstone 2014 and ask them to compare changes in sea surface temperatures (SST in red) with the raw and recently homogenized temperature data from southern California”
There are no raw unadjusted land data in that graph. No raw unadjusted SST data or SLP data either.

rooter
Reply to  Zeke Hausfather
January 31, 2015 4:53 am

Jim Steele writes:
“Your false accusations and choice of words will not change the documented amplification of warming trends that after serial homogenization obliterated the cyclical nature of climate change. Somehow you think if you keep bombing away that someone will believe that the 2015 homogenization looks like the PDO and NE Pacific SST? You’re biting off your nose trying to save your homogenized face.”
Jim Steele did not know that he was trying to show his point about the cyclical nature of climate change by using homogenized and model data. So how could homogenization (and modelling…) remove the cyclical nature of climate change when those were the type of data Steele used?

Geoff Shorten
January 28, 2015 8:16 pm

“Teaching Objective III: Homogenizing Data to Create Meaningful Regional Climate Trends
Teacher input: Good scientists do not blindly accept raw data. Data must undergo quality control analyses that adjust data for documented changes known to create changes unrelated to climate change. After accounting for artificial inhomogeneities, scientists adjust the data to create what they believe is a more realistic regional trend.”
Suggest replacing the first occurrence of ‘changes’ with e.g. ‘events’.

John Andrews
January 28, 2015 8:21 pm

A couple of thoughts after reading about Cuyamaca where I went fishing as a boy, and mostly skimming the rest of the article. First, Cuyamaca is in the mountains behind San Diego and the campgrounds there are at 4000 and 5000 feet elevation. Cuyamaca Lake is lower. I don’t know where the weather station is located, but elevation differences should be noted in the teaching materials. Chula Vista is at sea level along the coast right beside San Diego Bay.
Homogenization is a tricky subject, and I think it is misused when it is used to change recorded data. Proper use might be to estimate what the temperature might be at an unmonitored location between similar monitored sites. This is a computed value and not raw data. Emphasis is needed here. The data is the raw measurements, and this must be preserved. Important point. Anomalies are not data because they are based on an average (a calculated value, not data) subtracted from each reading.
In-filling is useful but there are a number of ways to do it. The choice of method must be defended. Again, the in-filled results are not data, but are only estimates.
Very good start. I hope it is accepted widely.

Reply to  John Andrews
January 28, 2015 8:53 pm

John writes “but elevation differences should be noted in the teaching materials.”
Absolutely agree. I am in the midst of developing additional activities evaluating how small changes in topography, elevation, vegetation, etc. create different microclimates and affect maximum and minimum temperatures in different ways. The great complexities of local and regional climate change can not be addressed all at once. Nonetheless, I can simply add that when students visit the USHCN website they should note the elevational difference as an important variable. The next activity in this sequence will ask students to zoom in with the satellite view for each USHCN station to view the surroundings.

Mike the Morlock
Reply to  jim Steele
January 28, 2015 9:25 pm

Jim, the problem you are going to run into is class time. How many hours can a teacher devote to this? One, two, three periods? Just to teach the basics requires a full semester. Plus not all kids have computers. That means they would have to use school ones. And you know who blocks the sites at school….
For this to work, skeptics must have a presence on school boards and in the local PTAs.
If teachers are going to use this they must first be safe and protected.

Bill_W
Reply to  jim Steele
January 29, 2015 3:02 am

I like the exercise but it seems as if it would only be suitable for an advanced high school class. It would be beyond the comprehension of most high school students and you will be lucky to find 1 in 5 teachers who could handle it either.

Dr. S. Jeevananda Reddy
January 28, 2015 8:34 pm

Excellent job — one of my friends from IITM/Pune did such a job with rainfall data. He published the data series [monthly, seasonal, annual] sub-division-wise for 1871 to 1994. He is now retired.
Dr. S. Jeevananda Reddy

mebbe
January 28, 2015 8:40 pm

“Teacher input: Good scientists do not blindly accept raw data.”
This enjoinder is too moralistic for my taste.
It’s not good science to accept unquestioningly raw data.
A scientist should not accept raw data without subjecting it to scrutiny.

Richard G
Reply to  mebbe
January 29, 2015 2:30 pm

Students should be asked to collect their own data. This should sensitize them to the problems involved in collecting good data, and to how easily field work can yield bad data through bad technique.

Marty Cornell
January 28, 2015 8:47 pm

Brilliant sarcasm! I especially liked the infilling section. I’m still laughing at the absurdity of politically correct climate science that your piece so wonderfully exposes.

mebbe
January 28, 2015 8:55 pm

Is climate change affecting all stations equally?
The only thing climate change can change (affect) is the mental state of those that conceive it.
Climate is an abstraction, so a revised version of that abstraction is also an abstraction and cannot cause one molecule anywhere to zig where it would have zagged. (This is slightly overstated since brains are molecules, too).
Are the changes in all stations equally indicative of climate change?

Reply to  mebbe
January 29, 2015 5:09 am

“Is climate change affecting all stations equally?”
No.
It affects the stations that don’t exist more than it does the ones that do.
On the other hand, the Urban Heat Island effect seems to affect stations not under the UHI.
Amazing how that works, isn’t it?
/sarc

masInt branch 4 C3I in is
January 28, 2015 9:00 pm

“Climate Modeling” by GISS, NOAA, the UK Met Office and UEA, and arithmetic fudging by GISS, NCDC and NOAA are giving Applied Physics a very bad name.
The immediate result is that the Applied Physics community will be forced to rise up and kill GISS, NOAA, the UK Met Office and NCDC (including Executive and Operative Personnel), stating that their activities amount to Government Sponsored Fraud that includes defrauding the IRS.
“Obama’s Last Stand at Little Big Horn” is taking shape.
Good!

January 28, 2015 9:02 pm

Typo corrections:
What factors might make Brawley so much warmer than the other stations.
Have a ?
3: Which station(s) experienced the most similar climate trend from 1900 to 2010? 4. Is climate change affecting all stations equally?
Start 4 on a new line.
Give the students a map of the world and an umber colored pencil. 
amber colored pencil?
by darkening the regions on all te continents where 
the continents
[Done. .mod]

Reply to  Werner Brozek
January 28, 2015 9:18 pm

Thanks Werner

mebbe
Reply to  Werner Brozek
January 29, 2015 8:38 am

Umber is as much a colour as amber!

Reply to  mebbe
January 29, 2015 10:05 am

… and in the spirit of Global Warming wasn’t there a “burnt umber” in the Crayola box?

Reply to  mebbe
January 29, 2015 11:25 am

Actually I was thinking of burnt umber, and I need to dig out a color chart and compare umber and amber to see which best describes the charts’ colors.

January 28, 2015 9:58 pm

I want to thank everyone for the helpful comments. Within 24 hours I will make all the needed corrections and revise the final lesson plan on my website at:
http://landscapesandcycles.net/temperature-homogenization-activity.html
After that time please forward this first activity to any educators that you think might use it in their classroom and encourage them to provide feedback in order to improve the lesson. Thanks again.

Jake
Reply to  jim Steele
January 29, 2015 7:24 am

Jim; I am an AP Chemistry teacher in the Northeast, where our school schedule leaves a substantial gap of time between the test and the end of our school year. I usually fill that time doing cool experiments so that the kids leave high school with a thirst for more science, but I’m certainly entertaining the idea of working on this. I’m blessed to work with truly talented/gifted kids, so the technical content would provide a valuable learning experience for them. If I do have an opportunity to follow through, I will do my best to provide feedback for you.
As an aside, the “religious” aspect of the CAGW movement has had an amazing negative impact on the students I work with today. The brainwashing I have observed in students over the last five years, as far as I can tell, will be debilitating for them. Questioning and free thought are at an all-time low, and blind acceptance is the norm. I wonder how this young generation will go about research, and I fear bias-driven conclusions will be a larger issue than we already understand them to be. In this sense, the fanatics have won, but I am hopeful that (over time) a greater acceptance of all angles of research will once again become the norm.

Reply to  Jake
January 29, 2015 8:00 am

I love to hear you may try this. And I agree about the brainwashing problem. Throughout the USA the standards promote a one-size fits all type of teaching so kids can score high on standardized tests. Because there is so much depth and breadth to be covered, rarely can teachers cover the whole suggested curriculum. So students memorize the key jargon and move to the next topic with little time to dig deeper.

Victor Frank
January 28, 2015 10:09 pm

For what age group are your lesson plans intended? My experience is that most students in public colleges taking their first introductory meteorology or oceanography labs are woefully unprepared for reading or drawing graphs or using Excel, or solving formulas, or using statistics. You would hope they had picked up these skills in high school, but this is unfortunately not the case, at least in the SF area. They even have trouble interpreting the simple graphs in the CAHSEE Math tests (which start next week).

Reply to  Victor Frank
January 28, 2015 10:36 pm

Victor, indeed there is a large segment of the college population that is “woefully” unprepared. When I taught the introductory biology-for-majors labs at San Francisco State University, I would estimate 30% were clueless regarding using a spreadsheet. But for my purposes I could bring them up to speed within a period. Regarding this lesson, I think any student from high school on up can quickly be shown how to use the spreadsheet functions.
They downloaded data in text form from USHCN, which can be converted quickly via the text-to-columns menu. Students can then copy and paste select data from 1950 to 1980 into a new column and easily get the average. To get the anomaly, they can quickly be shown how to create a new column of anomalies by writing an equation that subtracts the calculated average from the first year's temperature and then dragging that formula down the entire length of the column from, say, 1900 to 2015. Graphing that data can be easily taught as well. The procedure is easy enough that I think an 8th grader could master it all in no more than two 50-minute periods.
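For teachers who prefer a scripted version of the same steps, here is a minimal sketch in Python. It assumes a hypothetical two-column text file of year and annual mean temperature (the real USHCN files need the text-to-columns style parsing described above); the filename and column names are made up for illustration.

```python
# Minimal sketch of the anomaly workflow described above.
# Assumes a hypothetical whitespace-delimited file "station.txt" with
# two columns, year and annual mean temperature; real USHCN downloads
# need extra parsing.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("station.txt", sep=r"\s+", names=["year", "temp"])

# Baseline: average temperature over the 1950-1980 reference period.
baseline = df.loc[df["year"].between(1950, 1980), "temp"].mean()

# Anomaly: each year's temperature minus the baseline average.
df["anomaly"] = df["temp"] - baseline

df.plot(x="year", y="anomaly", legend=False)
plt.ylabel("Temperature anomaly")
plt.show()
```

The same subtract-a-baseline logic is what the dragged spreadsheet formula does, one row at a time.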

Juan Slayton
Reply to  jim Steele
January 28, 2015 10:51 pm

Hmmm…. Could you get Phil Jones to enroll?
: > )

Chuck Bradley
January 28, 2015 10:44 pm

I think you will get better comments by being clear about what skills and knowledge the students have. Some 9th grade students are proficient with elementary algebra and some 12th graders are not.
Do something with smoothing, using real data. Hourly data vs daily high and low. Weekly, monthly, annual averages. Where did the wild swings go?
Look at the basics of data integrity. Where is the original data? What was changed, in detail, and why? Then repeat the questions for each subsequent change of the previous data.
What do you think of someone that will not show you the data or how they changed the data, kids?
Explain the scientific method, using Bacon, Einstein, Feynman, others.
If you make it available, make sure the downloader cannot be traced by anything on your site. I expect most of the downloads will be by pseudonyms. Be prepared for denial-of-service attacks.
Thanks for the effort, and good luck with it.

Robert B
January 28, 2015 10:46 pm

Just a note for any budding teacher
“The current pedagogy argues teachers should not be the “Sage on Stage“ spewing dogma for students to regurgitate. A teacher should be the “Guide on the Side,” alternating short bits of input followed by time slots”
Sounds good in theory. The truth is that you need to be a Sage on Stage for classroom control. You need to feed them material and get them to rote-learn in order to control them, where there is zero chance of cocking the activity up.
You also need to give them discovery activities in order for them to learn, but you risk rebellion because they risk cocking things up. The timing and balance are not only difficult to get right; they change for the same class during the day.

MikeB
Reply to  Robert B
January 29, 2015 3:57 am

The Seven Myths of Modern Teaching:
* Facts prevent understanding
* Teacher-led instruction is passive
* The 21st century fundamentally changes everything
* You can always just look it up
* We should teach transferable skills
* Projects and activities are the best way to learn
* Teaching knowledge is indoctrination.

Reply to  Robert B
January 29, 2015 7:52 am

Robert, I agree with your concerns. There is a need for some rote learning from a sagely teacher to prevent students from “cocking” things up, and I am not advocating otherwise. The new pedagogy is aimed at secondary teachers who lecture too much. The Teacher Inputs are the needed short sage-on-stage moments. BTW, “cocking things up” suggests males are the problem. In America we use the F word, as it has a broader application. Or is your phrase a way of skirting the censors?

old construction worker
January 28, 2015 11:07 pm

Jim, I may have missed it but you should add a link to surfacestation main and surfacestation gallery.

Reply to  old construction worker
January 28, 2015 11:20 pm

old construction worker,
You didn't miss it. I hope to add that link for the next activity, which looks more closely at the effects of micro-climates, but the link to the gallery doesn't appear to be working.

January 29, 2015 12:24 am

It seems to me the way forward on the long march to critical reasoning. Thanks for this antidote, which is a welcome contribution to common sense.

Peter Miller
January 29, 2015 12:50 am

The Climate Inquisition is not going to like this, you must not question the adjusted numbers – you must have faith, trust the models not the facts.
The Lumpen Proletariat repeatedly have to be told the same consistent story of: “We are doomed because CO2 is such an evil gas. We must repent, destroy our economies and return to the more enlightened lifestyles of medieval times.”
There is to be no more debate, the science is settled and as one of the grand inquisitors states “Homogenisation makes very little difference.”
And if you believe that, I have a bridge to sell you.

Editor
January 29, 2015 1:53 am

Nick Stokes:
“Homogenisation makes very little difference”.
“…alternative methods to GISS etc gave the same result. With or without homogenisation”.
“Some are adjusted up, some down. It evens out.”.
Well, the obvious conclusion is : don’t homogenise. It adds nothing but error and distrust. It clearly isn’t needed (because Nick Stokes was able to get a result without homogenising).

Reply to  Mike Jonas
January 29, 2015 2:02 am

Well, I don't. But there are genuine inhomogeneities, and they can be detected. They may have a cancelling effect, and seem to (so no difference), but you can't be sure. That is why they correct.

knr
Reply to  Nick Stokes
January 29, 2015 2:24 am

Correcting data from years ago because that data fails to support the idea you're selling is not ‘correcting’ in any real sense. The trouble is not that past measurements have had issues, as indeed current ones do, but the notion that by some form of magic you can offset them when you don't even know their magnitude or direction. And when the person doing the ‘correcting’ has a strong vested interest in getting the ‘right’ results, right for them rather than right as in honest, you can see why this area has so many problems.
At the bottom of this is an old problem that has very recently been seen again: our ability to predict the weather, or fully understand it, is nowhere near as good as some claim it to be. In the past this was accepted as a reality, but with AGW we have seen skyscrapers of the ‘unprecedented’ and the ‘settled’ raised on the same swamp of the unknown as our past inabilities in this area. It is hardly a surprise, then, to find these skyscrapers have so many issues with ‘settling’.

Reply to  Nick Stokes
January 29, 2015 2:48 am

“but the notion that by some form of magic you can offset them when you don’t even know their magnitude nor direction”
The Chula Vista correction shown above is a clear case. Menne's algorithm will compare that station with its neighbors and tell you that there is a discontinuity of a clear amount. And yes, the record shows that there was a 2.5-mile station shift at that time.
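For readers unfamiliar with the idea, here is a toy sketch of neighbour-comparison breakpoint detection. It is not Menne's actual pairwise homogenization algorithm, which is far more involved; all the series and numbers below are invented.

```python
# Toy breakpoint detection via neighbour differencing.
# Everything here is simulated; Menne's published algorithm is far
# more sophisticated (pairwise comparisons, significance testing, etc.).
import numpy as np

rng = np.random.default_rng(0)
n = 60                                # years of annual anomalies
neighbour = rng.normal(0.0, 0.1, n)   # hypothetical neighbour average
target = neighbour + rng.normal(0.0, 0.05, n)
target[35:] -= 0.8                    # simulated station move: a step down

diff = target - neighbour             # the shared climate signal cancels

# Pick the split point that maximizes the mean shift in the differences.
shifts = [abs(diff[:k].mean() - diff[k:].mean()) for k in range(5, n - 5)]
k = int(np.argmax(shifts)) + 5
step = diff[k:].mean() - diff[:k].mean()
print(f"breakpoint near year index {k}, step of about {step:.2f}")
```

The point of differencing against neighbours is that the common regional signal cancels, leaving station-specific discontinuities exposed.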

Reply to  Nick Stokes
January 29, 2015 3:59 am

That is why they distort. By your own admission, they even out. So they are useless for anything but hiding bad science. Science relies on raw data – not manufactured data. By destroying raw data, you destroy the ability for new insights to be gleaned from it.

Reply to  Nick Stokes
January 29, 2015 5:14 am

“philjourdan
January 29, 2015 at 3:59 am
That is why they distort. By your own admission, they even out. So they are useless for anything but hiding bad science. Science relies on raw data – not manufactured data. By destroying raw data, you destroy the ability for new insights to be gleaned from it.”
I was thinking the same thing. We don’t need a process that “evens out” data. If the process doesn’t accurately reflect what the data should be, then the process is not beneficial.

Reply to  Nick Stokes
January 29, 2015 7:39 am

Nick says, “there are genuine inhomogeneities, and they can be detected. They may have cancelling effect, and seem to (so no difference), but you can’t be sure. That is why they correct.”
Therein lies the problem. A process that creates more errors, even ones that happen to cancel out, only obscures the real underlying data and leads to incorrect diagnoses. You remind me of the joke about the man with his head in the oven and his feet in the freezer. The doctor attending to his pain suggested his problem was psychological, because on average his temperature was just right.

mebbe
Reply to  Nick Stokes
January 29, 2015 8:46 am

Nick,
If you can’t decide what you think, how are we supposed to make sense of your opinion?

Bill Illis
Reply to  Mike Jonas
January 29, 2015 4:02 am

How about we just lock in the historical records now, Nick? Declare victory, call it a day.
No more adjustments to any pre-2010 records. They are all supposed to be fixed now. There have been at least 5 different processes used to incrementally fix the historic records and that should be all finished now.
Lock it in.
I get the feeling this would not be acceptable to those in control of the adjustment processes because then, if temperatures continue to rise far less than predicted, at some point they will have to admit the theory is not fully correct.
By continuing the temperature adjustment processes, they get to delay that day as far as needed into the future because the science finds this delay process to be perfectly acceptable.
To me, it's like saying unemployment in the 1930s depression was only 3.2%, or in other words full employment. There was no depression, there was no WWII, Communism collapsed because it wasn't communist enough. Rewriting history works against the interests of mankind because we then do not know what works, what doesn't, what is right and what is wrong. Humans incrementally make society better by learning what works and what is true about the nature of the universe; we go backwards when history is rewritten.

rooter
Reply to  Bill Illis
January 29, 2015 5:59 am

Make your own index with unadjusted data.
Like Stokes and others have done. Did you miss that part?

Reply to  Mike Jonas
January 29, 2015 10:23 am

I’m saying that the effect of the inhomogeneities will often cancel out. Some stations move to warmer places, some to cool, etc. That’s what I find when I do the calc without homogenisation. It doesn’t make much difference.
But you don’t know that until you’ve tried (homogenising).

knr
Reply to  Nick Stokes
January 29, 2015 1:20 pm

“Some stations move to warmer places, some to cool, etc.” And how do you know this, given that you don't have an accurate measure of the situation the station has moved into? How can you correct for that which you do not know?
It's guesswork; no matter how much maths you throw at it, you're still making assumptions based on facts you do not know that well. It's frankly hilarious to instrument people that others claim levels of precision not actually possible from the very means used to take the measurement, and that is without any error bars for the data.
If you want to make a good claim to three-decimal-point precision, then make sure you can measure to four decimal points, which no tree ring, no matter how magic, is going to give you. It may well be that in the land of models you can have as high a degree of precision as you like, but the real world does not work like that.

Bill Illis
Reply to  Nick Stokes
January 29, 2015 4:29 pm

That is proven to be completely incorrect, Nick, because the adjustments have “increased” the trend by 0.6C or more. I believe about 0.3C of this is unjustified padding.

rooter
Reply to  Nick Stokes
January 30, 2015 12:41 am

Illis:
“That is proven to be completely incorrect Nick because the adjustments have “increased” the trend by 0.6C or more. I believe about 0.3C of this is unjustified padding.”
Perhaps you could tell us how you know this? Let me guess: the world consists of the continental US only, and it makes no difference at what time of day the measurements are recorded.

richardscourtney
Reply to  Nick Stokes
January 30, 2015 1:04 am

rooter
Your thread bombing is annoying.
One of your posts which wastes space on the thread responds to Bill Illis having said

That is proven to be completely incorrect Nick because the adjustments have “increased” the trend by 0.6C or more. I believe about 0.3C of this is unjustified padding.

by saying

Perhaps you could tell how do you know this? Let me guess; the world consists of continental US only and it makes no difference at what time of day recording of measurements are done.

WHEN YOU DON’T KNOW THEN ASK; DON’T “GUESS”.
You have already displayed too much of your ignorance on WUWT.
This shows you the global changes for GISS with one mouse click.
And, assuming you can read at a level above ‘Noddy Goes to Toytown’, then read all of this for a discussion covering the frequent changes to all the global temperature data sets.
Richard

rooter
Reply to  Nick Stokes
January 30, 2015 6:06 am

richardscourtney:
Your link is to graphs of land-only measurements, mostly in the northern hemisphere, from 1980. It was even before Gistemp. The second is Gistemp met stations only. The third is the land-ocean index. Different types of data. Different numbers of observations. Different coverage.
But of course: you did know that. Or did you not? Why they are not more different is a better question.

January 29, 2015 2:31 am

just some typos:

That regional expectation can serve as a guide when adjusting trends at other les reliable stations.

should be “less”
and

by darkening the regions on all the continents where ever weather stations are sparse (the whitest areas)

I’d go with “wherever”
Pedagogically – geez, I dislike that word – I'd provide the data sets with source info so they can get right to it. And I think an intro might be good: “many are concerned that our world may be warming due to our actions and influences upon it (industry, technology, population, emissions . . . )” – well, you get the idea.
I’d also pose some other questions (softly) toward the end – why is there such disagreement, passion about all this? What differences could this all make? – in your life, your home, your school???
Best of luck.

January 29, 2015 3:06 am

All 3 stations exhibited trends that were reasonably similar until 1982. So what caused Chula Vista to artificially warm, or conversely did Brawley and Cuyamaca suddenly cool?

This is misleading because it's two against one. At first glance two is greater than one, therefore the warming is taken to be the spurious phenomenon, not the potential cooling.
It would be better if the teacher’s notes included a discussion of why that logic is false.
Are the two independent? Or are they connected in some way and subject to the same events or errors? They are both coastal, of course.
This would also introduce another part of investigating the raw data.
Are the raw data independent of each other?

rooter
January 29, 2015 3:51 am

This teacher input is confusing:
“Give the students the graph below from Johnstone 2014 and ask them to compare changes in sea surface temperatures (SST in red) with the raw and recently homogenized temperature data from southern California.”
And it goes on to show a comparison between SST and homogenized temperature data. There are no raw data in that graph.
And just to confuse the students even more he says:
“Which data sets (raw or homogenized trends) best agrees with the hypothesis that ocean temperatures drive regional warming trends? Which data sets best agrees with the hypothesis that rising CO2 drives regional warming trends? Could a belief in different hypotheses affect how temperature trends are homogenized?”
What is the teacher trying to achieve here? Trying to demonstrate his own fallibility? There still are no non-homogenized data in that graph.

January 29, 2015 4:01 am

Tremendous stuff! Well done, Jim Steele, for this really positive contribution for schools and teachers. The awful impact of climate screechers and their absurdly agitated followers has gotten into schools, where it has surely been adding to the psychological and educational harm done to children by the widespread promotion of a narrow, one-sided, politically-loaded, emotionally unhinged view of the climate system and our impact on it.

Alx
January 29, 2015 4:12 am

This lesson plan would require skilled and talented teachers and advanced students. Which is fine; not all classes are designed equally.
It is interesting that it uses climate studies to teach the critical thinking so often lacking in climate studies. At any rate, I see critical thinking and the practice of basic scientific principles as the primary benefit of these lesson plans.
In that regard, this training should be required at NASA and NOAA.

zemlik
January 29, 2015 5:14 am

When I was a child, aliens would visit me as I slept and explain stuff, which is why I was always lucky: I had that understanding and would never feel a requirement to tell a lie. I would say, “Why would I need to lie to my inferiors?”

Reply to  zemlik
January 29, 2015 8:37 am

Why did they stop?

Kelvin Vaughan
Reply to  M Courtney
January 29, 2015 9:52 am

As my Dad used to say to me “I’ve taught you all I know and still you know nothing”.

January 29, 2015 5:30 am

Jim, I am not sure what ages/grades this is aimed at, but as a parent of recently graduated high school students, this seems to me to be too much in a short period of time. I would peg this teaching program at 1-2 months of time minimum, not the sort of thing to be done in a week or a single class. And while kids are better at computers than I am (a huge understatement), you assume programming knowledge and math skills that are pretty advanced. Finally, you assume teacher skills that are pretty advanced as well.
Other than that, I love the concept. Teach thinking, not just raw facts.

Doug S
January 29, 2015 5:31 am

A great post Mr. Steele, thank you. This is the approach I favor for exposing the political and religious agenda behind the global warming movement. If we teach children how science is done and create a love for science and math in them, these children will be well equipped to defend themselves and their country against the con men and witch doctors that promote these false religions.

rooter
January 29, 2015 6:05 am

Section on in-filling:
“Can in-filling reliably represent local temperatures trends? The warmest regions appear to be related to infilling. What other reasons would cause greater warming in the in-filled regions? Should regions without a sufficient density of weather stations be included in a global average temperature?”
The question should rather be: what method of infilling will best match the local temperatures and temperature trends? Infilling from neighboring areas, or infilling with global or hemispheric averages? Those are the alternatives. If you ignore areas where there are no measurements, those areas will not influence the global or hemispheric averages; those areas are then, in effect, infilled with the global or hemispheric averages.
The answer could in this case very well be found in the very same paper from which the first figure was copied.
http://en.climatechange.cn/EN/article/downloadArticleFile.do?attachType=PDF&id=8468
There they increase the number of stations and reduce the areas with no stations.
http://i.imgur.com/1vKT8Bj.png
They are doing infilling by adding stations. The result is this:
http://i.imgur.com/V7xd1iz.png
See what happens during the last period for the integrated data set and Crutem3. Compare to this:
http://woodfortrees.org/graph/crutem3vgl/compress:12/plot/crutem4vgl/compress:12
Same story.
But do not show this to the students. It might give them the impression that infilling with help of adjacent areas is better than infilling with global or hemispheric averages. We don’t want that to happen do we?
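To make rooter's arithmetic point concrete, here is a tiny sketch (with made-up anomaly values, not real gridded data) showing that leaving uncovered cells out of an average is equivalent to infilling them with the average of the covered cells.

```python
# Toy illustration: omitting empty grid cells from an average is
# arithmetically the same as infilling them with that average.
# The anomaly values below are invented for demonstration only.
covered = [0.2, 0.5, -0.1, 0.4]      # cells with measurements
n_missing = 2                        # cells with no stations

mean_covered = sum(covered) / len(covered)

# Average after explicitly infilling the empty cells with the mean:
infilled = covered + [mean_covered] * n_missing
mean_infilled = sum(infilled) / len(infilled)

print(mean_covered, mean_infilled)   # identical: 0.25 0.25
```

Whether that implicit infilling is better or worse than interpolating from neighbouring cells is exactly what the rest of this thread disputes.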

Reply to  rooter
January 29, 2015 7:26 am

Rooter, your graph does not match your narrative. Your Figure 3 shows a decline in the number of stations since 1970, which would mean more infilling is now needed.

rooter
Reply to  jim Steele
January 29, 2015 8:15 am

Jim Steele did not get it. It is Crutem3 that has the lowest number of stations during the latest years. GHCNv3 has more stations, and the integrated data set has even more. The result: the index with the lowest number of stations has the lowest temperature during those years. Increase the number of stations and coverage, and the result is a higher trend.
A comparison of coverage for Crutem3 and Crutem4, 2000-2010:
http://i.imgur.com/2VJGWjX.png
For some reason not understood by some, the gridcells with missing observations in Crutem3 have trends that are close to the trends of the neighbouring gridcells.
Again: exactly the opposite of your claims. Infilling with actual measurements increases the trend. You will get a result closer to that by doing statistical infilling with values from the regions nearest the missing observations. Your solution of infilling with hemispheric or global averages is the most wrong method.

Reply to  jim Steele
January 29, 2015 8:55 am

Rooter, you bait and switch again. You first show a graph with a decline in stations since the 70s and then use maps from 2000-2010, a period during which there has been no warming.
What Rooter fails to understand is that the question is: how reliable is infilling, and why do it at all?
Studies in Yosemite (which measured trends between areas far closer together than neighboring stations in Eurasia and other infilled areas) revealed that changes in the wind caused one area to warm, one to cool, and one to show no trend. Assuming similar issues apply to Eurasia, depending on the station I could be infilling very different trends. Furthermore, how much of the warming in Eurasia is due to advection of heat ventilating from a cooling Arctic Ocean? An anomalously warm infilled Eurasia tells us nothing about sensitivity to rising CO2; it only creates the illusion of that attribution.

rooter
Reply to  jim Steele
January 29, 2015 1:20 pm

Jim Steele is saying that there was no warming 2000-2010 for land stations.
How does he know that? He does not. There is no warming when the infilling is done with hemispheric averages. When measurements are supplemented that changes. Easy to see why. Just compare the coverage in Crutem3 vs Crutem4.
Steele asks why do infilling. He fails to understand that the choice is between doing infilling with values from the nearest areas vs infilling with hemispheric averages. The difference between Crutem3 and Crutem4 illustrates this.

Reply to  jim Steele
January 30, 2015 11:19 am

Rooter, satellite data shows no warming!

rooter
Reply to  jim Steele
January 31, 2015 6:06 am

Jim Steele says:
“Satellite data shows no warming!”
That is 2000-2010. This time Steele did not understand that the troposphere is different from the surface. That comparison is not unproblematic, even when comparing the lower troposphere over land with the surface. But we can try. We can compare the lower troposphere over land with the land surface:
Trends 2000 – 2010:
Crutem4: 0.24 C/dec
Gisstemp land masked: 0.26 C/dec
BEST (land): 0.27 C/dec
RSS over land: 0.12 C/dec
UAH over land: 0.36 C/dec
Did Steele not check before making his claim? Also note the closer match among the surface series than between the lower-troposphere series.

TYoke
Reply to  rooter
January 29, 2015 4:18 pm

Rooter, Jim Steele writes, “Can in-filling reliably represent local temperatures trends?”, and you respond, “The question should rather be: What method of infilling will best match the local temperature and temperature trends? Infilling from neighboring areas or infilling with global or hemispheric averages? Those are the alternatives.”
Your re-framing of the question is quite substantial. I think that we can all agree that the absence of decent historical global temperature coverage is a problem, and that a good global average temperature plot would be good thing to have. However, those two facts do not oblige us to simply accede to one or the other of your pair of alternatives.
What if both of the proposed infilling methods are badly inaccurate? Note that in either case, the temperature records for vast regions of the globe are in effect being modeled rather than being the result of observation. How accurate, really, are the infilling models?
Whether “global averages” or “neighboring areas” are used to guide the infilling model, the possibility of systemic error is simply rife. For instance, one suspects that the records that do exist are biased towards lower latitudes than the low-coverage areas are. Consequently, infilling using the low-latitude records will have a warm bias. Also, with skimpy data, UHI effects seem likely to be exaggerated.
The point is that Jim Steele’s question at top is not answered by merely insisting that we simply accept one of your two choices. If both of your alternatives are bad, then the answer produced by those alternatives is bad.

rooter
Reply to  TYoke
January 31, 2015 6:15 am

We know the answer to which method is best. Methods with infilling (Gistemp, BEST) match Crutem4 better than they match Crutem3. Crutem4 vs Crutem3 is a demonstration of the effect of increasing coverage. Crutem3 differs from the infilling methods much more than Crutem4 does. See my answer to Steele above with trends for 2000-2010.

Reply to  rooter
January 31, 2015 7:57 am

@rooty
“We know the answer to what method is best. Methods with infilling (gistemp, Best)”
Are you dense? The best method is to actually measure local temps; making it up by infilling is not better.

rooter
Reply to  TYoke
January 31, 2015 9:22 am

Mi Cro:
So you think it best to have actual measurements.
Totally agree. That is why we can check which infilling method is best: infilling with hemispheric means where there are large measurement gaps, as in Crutem3, vs. interpolating to do the infilling, as in Gistemp. Gistemp ends up being close to Crutem4, where there are more measurements and better coverage (actual measurements). BEST has both more measurements (more than Crutem4) and local infilling.
If you have some other evidence for why making up the infilling with the hemispheric average is best, please tell us.

Reply to  rooter
January 31, 2015 9:46 am

If you live someplace where the weather is almost always the same, infilling so you can determine climate might make sense. If you live someplace where the weather changes all the time, infilling makes little sense, as the space between stations can contain significant weather that just disappears. How do you include weather you don't measure in your climate and think your climate is even close to right?
All we know is what the stations register.

rooter
Reply to  TYoke
January 31, 2015 10:59 am

Mi Cro:
Hard to make sense of your local changes, etc.
Make it simple. Would you prefer Crutem3 or Crutem4?
http://i.imgur.com/2VJGWjX.png
For the 2000-2010 trend, Gistemp's is closest to Crutem4's.
Is infilling with hemispheric averages (Crutem3) or infilling from the nearest areas (Gistemp) most like the trend from a series with more actual measurements (Crutem4 or BEST)?

Reply to  rooter
January 31, 2015 1:30 pm

It’s easy, they are both wrong.

TYoke
Reply to  TYoke
January 31, 2015 6:52 pm

Rooter,
Your reply was entirely non-responsive to my point. You try to insist at the starting gate that we MUST choose from one of your two alternatives. You permit no other option. That is my objection.
What if BOTH models of the missing data are lousy models? You’ve proved nothing by arguing that one modeled set of “observations” is better than the other.
That is only evidence that one may be less lousy than the other, but it certainly does not prevent both of them from sucking. Even if recent years, where more complete data is available, shows a better fit for one infilling method than the other, that is in no way a proof that that same method provides accurate results for earlier decades.
The data is missing, and models of that missing data certainly contain error. When we then also note that the “adjustments” and “homogenization” always have the effect of cooling the past and warming the present, lots of suspicion is justified.

rooter
Reply to  TYoke
February 1, 2015 2:32 am

Mi Cro’s last resort is of course “they are both wrong”
Well, Mi Cro, 100% right will never happen with observational data. That is so.
But how do you know that they are both wrong? Do you know the right answer?

Reply to  rooter
February 1, 2015 3:25 am

Of course not; we haven't measured it, so we don't know what temp all the bits are.

rooter
Reply to  TYoke
February 1, 2015 2:55 am

Ignoring TYoke's “what if the Moon is a cheese” argumentation, TYoke seems to be capable of saying that infilling from nearby areas is less wrong. But he complains that there must be other alternatives.
Obviously there are different methods of infilling from nearby areas, and those are being used. And they end up producing results in agreement. But of course one method can be better and preferred.
But if TYoke has even a better method, that is excellent. Show us that better method.
TYoke’s claim that adjustments and homogenization always cool the past and warm the present is just wrong. There is a massive warming adjustment of past sea surface data. And even after 2000 the NCDC’s met station homogenization method reduces the warming:
http://www-users.york.ac.uk/~kdc3/papers/coverage2013/update.140404.pdf

rooter
Reply to  TYoke
February 1, 2015 7:07 am

Mi Cro says:
“Of course not we haven’t measured it to know what temp all the bits are.”
And stops there. The question Mi Cro avoids is what kind of values are most likely to be closest to the temperature in those areas: the hemispheric average, or an estimate of the temperature from nearby areas?

Reply to  rooter
February 1, 2015 7:21 am

If temperature were spatially linear you could approximate it, but it's not.
City A is 200 miles west of city B; city A is 60F, city B is 70F, so what is the average temp between city A and city B?
We don't know what the average is, and without any measurements we will never know, as it could be anywhere from maybe 55 to 75F; 65F is just as likely to be wrong as anything else.

rooter
Reply to  TYoke
February 1, 2015 12:29 pm

Mi Cro is trying this:
“We don’t know what the average is, and without any measurements we will never know what the average is as it could be anywhere from maybe 75 to 55F, 65F is just as likely wrong as anything else.”
Still has not grasped the concept of anomalies. The anomaly between those cities is certainly closer to an interpolated value between those cities than to the hemispheric average. And of course, that can be checked empirically.

Reply to  rooter
February 1, 2015 2:19 pm

“Still has not grasped the concept of anomalies.”
I prefer the derivative of an anomaly based on the station's previous day's measurement.
“And of course, that can be checked empirically.”
Not if there isn't a station there! That's the problem: there isn't.

rooter
Reply to  TYoke
February 2, 2015 2:21 am

“Not if there isn’t a station there! That’s the problem, there isn’t.”
Easy. Pick two stations with a third station between them. Estimate the anomaly of that mid-station from the two outer stations. Check against the mid-station's actual record. Repeat as much as you can, there and elsewhere, and then see which infilling method is best.
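A minimal sketch of the hold-one-out check rooter describes, with invented station anomalies (all names and numbers are hypothetical, and the neighbour estimate is a simple midpoint rather than a distance-weighted interpolation):

```python
# Toy hold-one-out test of two infilling methods.
# Anomalies (deg C) are invented for illustration only.
stations = {
    "west": 0.30,   # station west of the held-out site
    "mid": 0.42,    # the held-out site we pretend is unmeasured
    "east": 0.50,   # station east of the held-out site
}
hemispheric_mean = 0.10  # hypothetical hemispheric average anomaly

actual = stations["mid"]

# Method 1: infill from the nearest neighbours (simple midpoint here).
neighbour_estimate = (stations["west"] + stations["east"]) / 2

# Method 2: infill with the hemispheric average.
hemispheric_estimate = hemispheric_mean

for name, est in [("neighbours", neighbour_estimate),
                  ("hemispheric", hemispheric_estimate)]:
    print(f"{name}: estimate={est:.2f}, error={abs(est - actual):.2f}")
```

Repeated over many real held-out stations, the method with the smaller average error wins; whether that settles the dispute is, of course, what the rest of this thread argues about.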

Reply to  rooter
February 2, 2015 6:15 am

rooter commented

Easy. Pick two stations where there is a station between those two. Estimate the anomaly of that mid-station from the two other stations. Check with the result. Repeat so much as you can there and elsewhere. And then see which infilling method is best.

Neither method is acceptable.
There are whole countries without any stations; most of the Amazon, Greenland, and both poles aren't measured. Mosh even said that as they add more stations the temp changes; if they were doing a good job estimating, it wouldn't change at all.
The stations in the GSoD and CRU datasets:
http://maps.google.com/maps/ms?msid=218403362436074259636.0004aab6c505418fa54c7&msa=0

rooter
Reply to  TYoke
February 2, 2015 6:32 am

“Neither method is acceptable.”
Says Mi Cro in response to how to check which infilling method is closest to actual measurements.
That is: it is not acceptable to use actual measurements!
But as Mi Cro is digging a hole for himself, Mi Cro can check this again:
http://i.imgur.com/2VJGWjX.png
Look at gridcells without measurements in Crutem3 and look at the same gridcells in Crutem4. Do the trends in those gridcells look like the trends in the nearest gridcells, or do they look like the hemispheric average?
Of course the results will change with added observations! And some complain about that and call it “adjustments”. The results change most when you use global or hemispheric averages for the infilling; they change less with local interpolation.
Which of course has been the point all along. It gives a better answer.
Agreed?

January 29, 2015 6:23 am

Re the last paragraph of the lesson and “different climate beliefs”:
Aside from the conviction that the scientific method is a valid way of enhancing our understanding of the natural world, the term ‘belief’ has no place in science. When a conclusion becomes a matter of belief, and doubters are condemned as heretics, it is a sure sign that a line of inquiry has departed from the scientific method.
/Mr Lynn

mwh
January 29, 2015 7:00 am

Jim – you asked for comments, but please be aware I am with you 100%. I will try to be objective:
Are you serious about the way you have presented this, or is there a (sarc) element here? I ask because I think the take-up of this session is going to be next to nil.
I would love this to be taken up by the UK school where my 16-year-old is; he is studying geography, maths, ICT and economics for his A-levels. To get this onto the curriculum there, it would have to present a warmist ideal. He thinks I am a complete nutter because I take a sceptical view, and when a sea-level-rise series of questions was put to him for his homework I couldn't engage with him, as he thought my view would contaminate the accepted view and get him a lower mark. The sad thing is, I think he was probably right. I just wish he were in an environment where scientific debate was encouraged rather than frowned upon; he should receive an equal mark for presenting a factual sceptical view or a factual warmist one, best of all his own uncontaminated one. That would be good education.
Now, I don't know how you go about what I am going to suggest here, but I think the only way you can present this and get a decent uptake is to present an overall warmist argument rather than what I have taken here to be an overall sceptical one. The knack is to encourage the students to have a sceptical view without it being presented to them. A difficult one, I know, but I really think the great teaching aid you have presented here in our own little community won't get used unless you use a more subtle tactic.
I do enjoy your posts and reasoning, Jim, but I sincerely think you will need to throttle back a bit and try a little less stick and a little more carrot. I won't go on; I hope you know what I mean.

mwh
Reply to  mwh
January 29, 2015 7:13 am

There was something else: session time is fairly limited in high school curriculums. Have you tested the expected time this would take? I would say it would take in the region of 2 hours, or 3 sessions, to complete. If so, how does that fit in, and what would it cover within the programme? To attract uptake, the session length and which items within the subject matter it covers would be vital information, IMHO.

rtj1211
January 29, 2015 8:41 am

‘The current pedagogy argues teachers should not be the “Sage on Stage“ spewing dogma for students to regurgitate. A teacher should be the “Guide on the Side,” alternating short bits of input followed by time slots (such as “Think, Pair and Share”) that allows students to investigate and discuss the issue with their peers.’
What happens to the freaks who no-one will pair and share with??
Or has America ‘written that bit out of the script too’ in its race to the politically correct nirvana on earth which fails to match up to the reality of far, far too many people born on this earth??

January 29, 2015 9:40 am

Where are the principles? A core idea? Is that what it has become? Without basic principles to work from, this “think-pair-share” exercise is, in my opinion, a weird exercise in scientific democracy…in other words, consensus-building. You cannot have critical thinking without an epistemological baseline that illustrates the necessity of that process. You cannot ‘foster’ critical thinking, you have to demonstrate how it is a necessary part of any scientific process.
“The actual doing of science or engineering can pique students’ curiosity, capture their interest, and motivate their continued study.”
The actual doing of science REQUIRES critical thinking, scepticism, and inclusion of all data in the explanation. You cannot “do” science without curiosity and interest…and the motivation that those fuel. The above quote basically puts the cart before the horse, much in the manner of the IPCC.
There, that’s my take on it. I think that this exercise is being presented as a propaganda antidote, not an exercise in climate literacy. Climate literacy, per se, does not replace a good grounding in SCIENCE…chemistry, physics, biology, geology….and the PRINCIPLES involved in that grounding. ALL of climate science is founded on the earlier principles the ‘core’ sciences teach.
To say “it’s OK to be sceptical” is to acknowledge that it has been increasingly NOT OK…for political reasons. Think-pair-share THAT.

Reply to  Mike Bromley the Kurd
January 29, 2015 10:27 am

“…this “think-pair-share” exercise is, in my opinion, a weird exercise in scientific democracy…”

We used to call them “lab partners,” and it was done in every science class I took (that I can remember) in the 1960s and 1970s. You had someone else to bounce ideas off who did things a little differently. Not only could you learn from one another, but a lot of the “real” world requires working with other people.

Reply to  Mike Bromley the Kurd
January 29, 2015 11:06 am

Mike, the lessons are meant to be supplements to textbooks that will present the core ideas. Agreed: if “think, pair, share” is done with no grounding in the basic principles, students will float aimlessly.

January 29, 2015 9:46 am

….and rooter, you miss the point entirely. Completely. Your ‘method’ outlined above is a propagandist overprint on an already unprincipled exercise, one which essentially creates a reciting robot out of the person doing the exercise: a robot who is seeking YOUR approval. I shudder to think where that approach would lead us.

Alan McIntire
January 29, 2015 10:06 am

Regarding that first paragraph:
“,…students need to understand all the factors causing climate change writing,”
Why should students waste their time learning about “climate change writing”?

mwh
Reply to  Alan McIntire
January 29, 2015 10:47 am

That's what I meant in my post above: if you want this to be taken up by students and teachers alike, especially those without a sceptical view, then it has to be presented in an attractive manner or they won't go near it. It's great that one teacher above is considering it, but only as a ‘spare time’ project. For it to be mainstream it would have to tick the right boxes, and no matter how approving of the subject material we all are, it will only rattle around its own echo chamber here and on Jim's own site unless it covers parts of the subject matter within the curriculum. Therefore it would have to be approached from the teaching point of view rather than from a sceptical climatologist's point of view. This article nearly gets it right, in my opinion, but it does rather say ‘look what those naughty scientists did with the data’; if you say that for them in any way, then you are not allowing the students to think it for themselves.
To me it depends on Jim's intent: is this to be a token attempt at educating those who are being brainwashed, or is he really attempting to engage with students on a grander scale? There's a lot of pulling the facts apart on this thread, but a lot of it is just semantics. Yes, the facts have to be correct and available from mainstream scientific sources, but the main intention is to encourage discussion and true scepticism. If that is truly the aim, then the topic has to fit square into the teaching curriculum.

Reply to  Alan McIntire
January 29, 2015 11:01 am

Teachers are going to be required to teach climate change. Indeed, the great majority will be ill-prepared and seeking lessons that will allow both teacher and student to explore the topic and help the teacher learn on the go.

MikeUK
January 29, 2015 10:52 am

A quick reaction is that the material is way too advanced for high school students. Even the first lesson would be tricky (even for some experienced data analysts) due to the problem of missing data. How do you deal with that in spreadsheets, and in calculating averages?
My suggestion would be to forget about climate CHANGE, since to a very good approximation it doesn't, despite all the hype. I'd focus on teaching climate DIFFERENCES, and the fact that such differences should change even less than the absolute climate; that is the key fact that enables homogenisation of “faulty” data.
Just making basic sense of climate data would be a tough assignment, but I very much like the general idea. Maybe it would be best to pick the local data for each school.

Alan McIntire
Reply to  MikeUK
January 29, 2015 2:07 pm

I agree completely. Students should be taught some astronomy (months and seasons), some biology, geography, and simple physics, like measuring velocity and temperature changes. Actually studying climatology seriously would be like trying to teach nuclear physics: not practical and very difficult to cover in a reasonable manner. Current proposed curricula are more political-science propaganda than science.

comose
January 29, 2015 11:08 am

This is extremely significant as it shows the bias in USHCN data. At the USHCN site I examined the data for one rural station at Davenport WA (452007), population 2,500, where an urban heat island effect is very unlikely. It has a continuous record since 1910, and I have lived there for 70 years. The town has had the same population for 60-70+ years and has likely decreased since the 1920s. Looking at the temperatures, one can see high temps in the 1930s and 1940s, then a downward trend in Traw. I then charted the Tmean (adjusted) and the Traw (unadjusted) in Excel. I found a negative least squares trend in Traw of -0.0095x and a positive least squares trend in Tmean of +0.0056x. I then subtracted Traw from Tmean and found a SIGNIFICANT adjustment bias in this difference plot of +0.1510x (1.5 deg F per century), which confirms the adjustment (oh sorry, in government-speak it's “homogenization”; it's “bias” or “adjustment” to the rest of us). So a cooling trend is blatantly adjusted to show a warming trend where that is “impossible”.
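For anyone who wants to reproduce this kind of raw-vs-adjusted comparison outside Excel, a minimal sketch follows. The arrays are placeholders (real values would come from the USHCN download), so the printed numbers are illustrative only.

```python
# Sketch: least-squares trends of raw vs. adjusted annual temperatures,
# and the trend of their difference (the net adjustment).
# The series below are placeholders; real data would come from USHCN.
import numpy as np

years = np.arange(1910, 2014)
traw = np.random.normal(50.0, 1.0, years.size)   # placeholder raw temps (F)
tmean = traw + 0.015 * (years - years[0])        # placeholder adjusted temps

def trend(y):
    """Least-squares slope in degrees per year."""
    return np.polyfit(years, y, 1)[0]

print("raw trend:       ", trend(traw))
print("adjusted trend:  ", trend(tmean))
print("adjustment trend:", trend(tmean - traw))  # ~0.015 F/yr = 1.5 F/century
```

The third number is the slope of the Tmean minus Traw difference plot comose describes; a nonzero slope there is the signature of a trend introduced by adjustment.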

knr
Reply to  comose
January 29, 2015 1:28 pm

Adjustments on their own are not a problem; they are sometimes needed. However, three things are required:
What adjustments are made must be made clear;
Why the adjustments were made must be justified;
And the old, unadjusted data must be retained.
In climate ‘science’ they often fail all three of these, and when you add that those making the adjustments have a strong personal/professional interest in getting the ‘right’, if not honest, results, you can understand why people view this area with such concern.

comose
January 29, 2015 11:34 am

The book “Whole Story of Climate” provides a lay treatment of climate change over the last 1,000 to 2,000,000 years. Its treatment is appropriate for high school, where the “Homogenization” exercise may be too tall an order, and for anyone just wanting an interesting history of the earth that explains the other side of climate that we never hear. It gives numerous examples, with lots of detail, of how and when climate has changed over the last thousand to million years, using a football field as a clever measure of time. The author is E. Kirsten Peters, Professor of Geology at Washington State University. As a geologist, I read it, bought extra copies, and tried to promote it within politician circles, a waste of my time. Amazon for $18, Kindle for about $9. I give it 5 stars for information, quality and enlightenment.
Dr. Peters is also the “Rock Doc”; she writes insightful weekly columns about the world around us, not always geology.
http://www.amazon.com/Whole-Story-Climate-Science-Reveals/dp/1616146729
Correction on my last post: 0.151x should be 0.0151x (1.5 deg F per century)

Reply to  comose
January 29, 2015 4:45 pm

Thanks for the good tip. I put it in my wish list.

jeo
January 29, 2015 12:28 pm

Don't use the term “sexual preference.” That's a bit passé. “Sexual orientation” is more accurate (and accepted).

January 29, 2015 1:22 pm

“Instruct students to visit the USHCN website and download the raw data for the 3 stations into a spreadsheet like EXCEL.”
If this is to teach someone new, rather than being an exercise for someone already experienced, there are a dozen steps (maybe three dozen steps?) missing. I suspect that by spending a few hours I could figure out what data is under what labels and what I need to do to get it, but this is far from obvious when one reaches the web page and attempts to execute a few of the actions that are possible there.
I think this is a poor approach for high school students. The exercise is about the data, about what they will do with the data and learn from the data. They should not have to spend extensive class time just figuring out how to get the data.
I have written user documentation quite a few times, mostly for office workers who are not familiar with the (usually quite new) procedures I am presenting. I've learned that not even the smallest step of how to do the thing should be missing from the documentation, regardless of how obvious it seems to me, the programmer who created the product. Everything should be there, in sequence, with the identical labels, icons, etc. that the user will actually see when doing the task. It can also be helpful to include comments, in the appropriate places, about the meaning of, and reason for, what is being done at particular steps.
This approach allows everyone to participate and actually get results. If the task will be undertaken often, almost everyone will learn and, sooner or later, no longer need the documentation, but even the slowest won’t be left out, feeling too confused about step A to ever start to get their mind around steps B, C, and D.

Reply to  AndyH
January 29, 2015 2:03 pm

Andy, that's a good point. Although downloading the data to EXCEL and creating graphs is really a very simple exercise once you have been around the block once, I shouldn't assume the teacher or students have any such experience. So I will add an addendum with step-by-step instructions. Once a student has been led through the steps, an 8th grader could create an anomaly graph comparing 2 stations in less than 30 minutes.

1sky1
January 29, 2015 2:13 pm

Very few, if any, of the global index manufacturers ever conducted any intelligent analysis of regional data, preferring to homogenize relative to urban records.

January 29, 2015 4:56 pm

Thanks, Dr. Steele, for a valiant effort.
I think the mechanics of the data-gathering stage should by simplified by the use of downloadable spreadsheets in LibreOffice format.

Reply to  Andres Valencia
January 29, 2015 6:31 pm

Sorry, I meant “be simplified”.
And these downloadable spreadsheets should contain the data.

Bill Illis
January 30, 2015 4:38 am

A few weeks ago, I showed the adjustments made to Reykjavik, Iceland over the past year. I checked again, and guess what: they have adjusted it yet again in just the last two weeks.
Quality controlled station temperature trend since 1900 (Iceland Met says nothing should be done to this data) —> +0.2C of warming.
NCDC adjusted trend as of February 21, 2014 —> +1.4C of warming.
NCDC adjusted trend as of January 17, 2015 —> +2.0C of warming.
NCDC adjusted trend as of January 29, 2015 —> +2.3C of warming.
What happened in the last two weeks that made 1900 temperatures 0.3C colder than they were just two weeks ago? Nothing. What changed in the last few years that made Reykjavik 1.1C colder in 1900 than the records indicate, and 1.0C warmer today than today's records indicate? Nothing.
These guys will not stop. They are getting away with it and are even being encouraged by their fellow climate scientists. We need a new law and a prosecutor or something.

Reply to  Bill Illis
January 30, 2015 11:15 am

Good catch, Bill. We need to create a page that compares all the homogenizations over the past 3 years.

Bill Illis
Reply to  Bill Illis
January 30, 2015 5:08 pm

I made a Gif animation of the changes.
The top right panel is the quality-controlled temperatures from Iceland Met, which they insist require no further adjustment for any station moves, TOBs, polar bears or anything. The second right panel is the temperatures the NCDC reports to the public, and the bottom right panel is the adjustment they apply to each year. And by “reports to the public” I mean a government agency supplying temperature data to the whole world. 75% of the stations from the NCDC show this same pattern.
Note there are only 13 months of change recorded here, and only 10 days between the last two parts of the animation. We need to stop these people now.
http://www.loogix.com//img/res/1/4/2/2/6/3/14226332222823775.gif

Mervyn
January 31, 2015 5:34 am

“Temperature Homogenization Activity” … that’s NASA/NOAA speak for “fudging the data”!
If the figures of a corporation are “massaged” and “cooked”, it’s called fraud.

Reply to  Mervyn
January 31, 2015 8:03 am

“If the figures of a corporation are “massaged” and “cooked”, it’s called fraud.”
In climatology it gets you grants!

David Wojick
February 1, 2015 11:51 am

The overall concept is good but strictly college level. There are far too many advanced concepts (anomaly, micro-climate, inhomogeneities, USHCN, NOAA, etc.) for this to be done in high school. Moreover, it would take several class sessions to get through all of this. The average high school science class meets only about 60 hours a year and has to cover everything called for in the state standards, so time is precious.
These are the standard problems with most advocacy lessons. They are far too advanced and way too long. A good lesson teaches just one basic concept in 35 minutes or less. I have designed a number of skeptical lessons that do this, but I had the help of high school teachers doing it. Plus I have my catalog of all the technical concepts that are taught in high school as a guide. One major concept in 35 minutes should be the standard.
David
http://www.stemed.info/index.html

David Wojick
Reply to  David Wojick
February 2, 2015 9:23 am

One of my high school/middle school lessons, on solar activity, is here: http://climatescienceinternational.org/images/pdf/lesson_on_solar_activity_and_gw-20-06-2013.pdf My view is that the scientific debate is too technical for K-12 so I show them the debate rather than involving them in it.

David Wojick
February 2, 2015 5:45 am

Just to amplify what I said above: if you use any technical term or procedure that the students have not already been taught, then you must teach it. Jim's lesson probably uses dozens of terms and procedures that are not normally taught in high school. I suggest looking carefully at the standards to see what has been taught. This lesson might work for a sophomore- or junior-level college course in climate science, certainly not in high school.

David Wojick
Reply to  David Wojick
February 2, 2015 8:25 am

If anyone is interested in developing skeptical teaching materials for high school (or middle school) I will be happy to advise them. Email me at . My team did a $600K project for DOE in which we cataloged the technical terms used in K-12 and estimated the average grade in which each is taught. Our catalog is used to rank teaching materials by grade level in the search engine on http://www.scienceeducation.gov/. We call it grade level stratification.
The web is larded with alarmist K-12 teaching materials, many Federally funded. Skeptics have next to nothing and I would like to change that. But the materials have to be grade level appropriate.