Guest essay by Jim Steele
In 2012 the National Academies of Science published A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas. Although the framework often characterizes climate change as a negative and disruptive phenomenon, it clearly states that students need to understand all the factors causing climate change, writing, “Natural factors that cause climate changes over human time scales (tens or hundreds of years) include variations in the sun’s energy output, ocean circulation patterns, atmospheric composition, and volcanic activity.”
However, instead of promoting textbooks that critically analyze and debate the relative contributions of a diverse array of climate factors, the American Association for the Advancement of Science (AAAS) has been attacking any state that wants to adopt textbooks promoting climate debate. Alan Leshner, a psychologist and CEO of the AAAS, and Camille Parmesan (whose debunked climate claims have already been published in textbooks) argued, “From the scientific perspective, there are simply no longer ‘two sides’ to the climate-change story: The debate is over. The jury is in, and humans are the culprit.”
Whatever the outcome of that political battle, the science framework is absolutely correct to state, “The actual doing of science or engineering can pique students’ curiosity, capture their interest, and motivate their continued study.” So I am creating a series of activities (which educators can download for free from my website) to supplement any Climate Literacy science text. Hopefully these lessons will engage students in critical thinking and simulate how NOAA/NASA climate scientists think and “do science” as it is currently practiced.
The current pedagogy argues teachers should not be the “Sage on Stage” spewing dogma for students to regurgitate. A teacher should be the “Guide on the Side,” alternating short bits of input with time slots (such as “Think, Pair and Share”) that allow students to investigate and discuss the issue with their peers. The first supplemental piece is my beta version of the “Temperature Homogenization Activity,” and I would appreciate any comments (and typo corrections) to help improve this and subsequent lessons before I post it to my website. If you know of any high school or college educators who might give this lesson a trial run, please pass it on.
“Temperature Homogenization Activity.”
Teaching Objective I: Understanding Trends and Anomalies
Students must be able to read and create quality graphs.
Teacher input: Give the students the graph “Maximum Temperature USHCN Raw Data” illustrating trends from 3 quality weather stations in the US Historical Climatology Network, all located at a similar latitude in southern California. In pairs or small groups, have them discuss the following questions and formulate questions of their own.
Think, Pair, Share:
1) What factors might make Brawley so much warmer than the other stations?
2) Which station(s) experienced the warmest temperatures between 1930 and 1950?
3) Which station(s) experienced the most similar climate trend from 1900 to 2010?
4) Is climate change affecting all stations equally?
Teacher input: Brawley is farther inland and, unlike stations closer to the coast, doesn’t experience the ocean’s cooling effect. Drier desert conditions with few clouds and little vegetation create a microclimate that heats up much more quickly than at the other stations. Brawley and Cuyamaca shared the most similar trends, but that similarity may be difficult to see due to micro-climate differences. To better extract climate trends that can be compared between stations experiencing varied micro-climates, scientists graph anomalies.
Instruct students to visit the USHCN website and download the raw data for the 3 stations into a spreadsheet program like Excel. To determine anomalies relative to the 1951-1980 period, calculate the average temperature for each station over that period, then subtract the station’s average from the raw data for every year. This will produce negative anomalies representing years cooler than average and positive anomalies representing warmer years. Have students create their own anomaly graph, and compare their charts with the anomaly graph below.
(Teacher note: Do not use an average from years later than 1980. During the mid-1980s there was a massive change in equipment that also required relocations that brought the weather stations closer to buildings.)
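(For classes working in Python instead of a spreadsheet, here is a minimal sketch of the anomaly calculation just described. The CSV filename and column names are hypothetical placeholders for data the students have already downloaded, not the actual USHCN file format.)

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical CSV with columns: year, brawley, chula_vista, cuyamaca
# holding one row of raw maximum temperature per year.
df = pd.read_csv("ushcn_raw_tmax.csv", index_col="year")

# Average each station over the 1951-1980 base period...
base = df.loc[1951:1980].mean()

# ...then subtract each station's average from every year's raw value.
# Negative anomalies are years cooler than the base period; positive
# anomalies are warmer years.
anomalies = df - base

anomalies.plot(title="Maximum temperature anomalies vs. 1951-1980")
plt.show()
```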
Think, Pair, Share:
Have students discuss the value of using anomalies to extract regional trends.
Brainstorm: what factors could cause only Chula Vista’s micro-climate to suddenly warm relative to the other stations?
Teaching Objective II: Understanding Artificial Inhomogeneities
Teacher input: Because each weather station exists in a unique micro-climate, individual differences cause each station to exhibit small cooling or warming trends that might not be seen at the other stations. For example, changes in vegetation, sheltering or waste heat from various building configurations, or a difference in topography that funnels wind differently can all affect short-term temperature trends. Changes to the landscape, such as removal of trees, a fire, or increased pavement, affect how the soil holds moisture and how much moisture is transpired into the air. The resulting differences between weather stations are called inhomogeneities. Natural inhomogeneities are expected and an integral part of local climate change. However, scientists must eliminate any artificial inhomogeneities caused by a growing population that alters the surface and adds waste heat, or by stations relocating to a different micro-climate. All 3 stations exhibited trends that were reasonably similar until 1982. So what caused Chula Vista to artificially warm, or, conversely, did Brawley and Cuyamaca suddenly cool?
To answer that problem, instruct students to first visit the USHCN website and acquire the ID# for each station. Then have them visit NOAA’s Historical Observing Metadata Repository, plug in the ID# and look for information regarding any changes at that station and the year in which those changes occurred.
Think, Pair, Share:
Which station(s) moved, and in which year? Compare the changes in temperatures at any station that moved with the temperature changes at stations that did not move. Then determine whether the relocation caused warming or cooling, and how it affected the temperature trend.
Teacher input: Confirm the students’ research. According to the Historical Observing Metadata Repository, Chula Vista moved 2.5 miles in 1982, from a location situated along salt evaporation ponds to an urban setting surrounded by buildings. In 1985, new instruments were installed that required new cable connections. So the weather station was moved 190 feet, presumably closer to a building.
After the 1982 relocation, the temperature at Chula Vista rose by 6°F, in contrast to a drop in temperatures at the other 2 stations, so Chula Vista’s move very likely caused its temperature to rise artificially. An interpretation of artificial warming is also consistent with the relocation to a warmer urban setting.
There were no verifiable relocations or changes of instrumentation at the other 2 stations. However, 7 months of Brawley’s 1992 temperature data were reported via a hygrothermograph (HTG), but that would not affect the 1982 comparisons.
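(A numerical way to see the relocation, for classes using code: difference a suspect station against a neighbor, so regional weather common to both cancels and a station move stands out as a step. A minimal sketch reusing the hypothetical anomalies table from the earlier snippet; this illustrates the idea only, not NOAA’s actual pairwise algorithm.)

```python
# Regional weather shared by both stations cancels in the difference,
# so a relocation at one station appears as a step change.
diff = anomalies["chula_vista"] - anomalies["cuyamaca"]

before = diff.loc[:1981].mean()  # mean difference before the 1982 move
after = diff.loc[1982:].mean()   # mean difference after the move

print(f"Estimated step at 1982: {after - before:.1f} deg F")
```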
Teaching Objective III: Homogenizing Data to Create Meaningful Regional Climate Trends
Teacher input: Good scientists do not blindly accept raw data. Data must undergo quality-control analyses that adjust for documented changes known to introduce shifts unrelated to climate change. After accounting for artificial inhomogeneities, scientists adjust the data to create what they believe is a more realistic regional trend.
Based on what they have learned so far, ask the students to create a graph that best exemplifies southern California’s regional climate change. Simplify their task by using the graph (below) for just Chula Vista and Cuyamaca (which are only 15 miles apart). Students are free to adjust the data in whatever manner they feel best represents real climate change and corrects for artificial inhomogeneities.
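(For code-based classes, one simple form of the adjustment being asked for is to shift the post-move segment by the step estimated in the previous snippet, making the record continuous across the documented relocation. Again, this is a sketch of the general idea using the hypothetical names defined earlier, not the USHCN procedure.)

```python
# Remove the documented 1982 relocation step from Chula Vista by
# shifting every value from 1982 onward by the estimated step size.
adjusted = anomalies["chula_vista"].copy()
adjusted.loc[1982:] -= after - before
```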
Teacher input: After students have graphed their own temperature trends, have them compare their results with the graph below illustrating how USHCN climate experts actually homogenize the data. (The comparison should promote lively discussions as most students will create trends for both stations that resemble Cuyamaca.)
Think, Pair, Share: Discuss why climate experts created such different trends. Why did scientists lower the high temperatures at Cuyamaca during the 1930s to 1950s by 3 to 5°F? What other concerns may affect scientists’ expectations about how best to homogenize data?
Teacher input: Clearly the data was adjusted for reasons other than Chula Vista’s relocation. Adjusting data for unknown reasons is different from quality-control adjustments and is called homogenization. The use of homogenization is contentious because a change in a station’s trend is often assumed to be caused by unknown “artificial” causes. However, the natural climate is always changing due to cycles of the sun, ocean oscillations like El Niño and the Atlantic Multidecadal Oscillation that alter the direction and strength of the winds, or natural landscape successions. So how can scientists reliably separate natural climate changes from undocumented “artificial” changes?
One method suggests comparing the data from more reliable weather stations that have undergone the least amount of known artificial change to determine a “regional expectation.” That regional expectation can serve as a guide when adjusting trends at other less reliable stations. However, as we have seen, the most reliable stations can undergo the greatest adjustments. So what other factors are in play?
Many scientists working for NOAA and NASA believe that rising CO2 explains recent temperature trends. In contrast, many scientists suggest that the proximate cause of regional climate change is natural changes in ocean circulation. In 2014 climate scientists published a peer-reviewed paper (Johnstone 2014) suggesting that climate change along the coast of North America could be best explained by natural cycles of the Pacific Decadal Oscillation (PDO) due to its effects on sea surface temperatures in the eastern Pacific. Give the students the graph below from Johnstone 2014 and ask them to compare changes in sea surface temperatures (SST in red) with the raw and recently homogenized temperature data from southern California.
Think, Pair, Share: Which data set (raw or homogenized trends) best agrees with the hypothesis that ocean temperatures drive regional warming trends? Which best agrees with the hypothesis that rising CO2 drives regional warming trends? Could a belief in different hypotheses affect how temperature trends are homogenized?
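(If students have the SST series as numbers rather than only as a graph, the “agreement” in these questions can be made quantitative with a simple correlation. A sketch assuming a hypothetical pandas Series named sst, indexed by year, alongside the anomalies table from the earlier snippets.)

```python
# Correlate each station's anomalies with the SST series; a higher r
# means that record agrees more closely with the ocean-driven hypothesis.
for station in anomalies.columns:
    r = anomalies[station].corr(sst)
    print(f"{station}: r = {r:.2f}")
```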
Teacher input: Have students compare the temperature trends for the northern hemisphere (below; created by the Japanese Meteorological Society and published by the National Academy of Science in 1977) with the new global trends presented by NASA’s Gavin Schmidt, who argues 2014 was the warmest year on record. Point out that the earlier scientific records suggested temperatures dropped by 0.6°C (1.1°F) between 1940 and 1980, with 1980 temperatures similar to 1910. Compare those temperatures with Schmidt’s 2014 graph, which suggests 1980s temperature anomalies were 0.5°C higher than the 1910s.
Think, Pair, Share: Why do the two graphs disagree so dramatically over the period between 1940 and 1980? Does the new graph by NASA’s Gavin Schmidt represent real climate change or an artifact of homogenization? If the difference was due to homogenization, is that a valid reason to alter older trends? If Gavin Schmidt’s starting point for the temperature data from 1980 to 2014 was lowered, so that 1980 temperatures were still similar to 1910 as suggested by the earlier research, how much higher than the 1940s would the 2014 global temperature be?
Teaching Objective IV: In‑filling
Teacher input: As seen for the USHCN weather stations, raw data is often missing. Furthermore, extensive regions around the world lack any weather stations at all. To create a global average temperature, climate scientists must engage in the art of in-filling. A recent scientific paper, Cowtan and Way (2014), used in-filling to contradict other peer-reviewed research that had determined a pause in global warming over the past 15 years or more. By in-filling, these scientists argued that there was no hiatus and the warming trend continued.
Give the students a map of the world and an amber-colored pencil. Instruct them to lightly shade all the continents to show they have all warmed. Now provide the map (below) of Global Historical Climatology Network stations showing the station locations. Instruct students to simulate in-filling by darkening the regions on all the continents wherever weather stations are sparse (the whitest areas). Then give them NOAA’s 1950-2014 map modeling the continents’ warmest regions.
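(The in-filling idea can also be simulated numerically. Below is a minimal sketch of inverse-distance-weighted in-filling on invented station data; real products work with anomalies, gridding, and proper great-circle distances, so treat this only as a classroom illustration.)

```python
import numpy as np

# Invented example stations: (latitude, longitude, anomaly in deg C).
stations = np.array([
    [34.0, -117.0, 0.6],
    [36.5, -119.5, 0.4],
    [40.2, -112.1, 1.1],
])

def infill(lat, lon, stations, power=2):
    """Inverse-distance-weighted estimate at an unmeasured point.
    Uses a crude planar distance in degrees, fine for illustration."""
    d = np.hypot(stations[:, 0] - lat, stations[:, 1] - lon)
    w = 1.0 / d**power  # nearer stations get more weight
    return np.sum(w * stations[:, 2]) / np.sum(w)

# Estimate the anomaly for a gridcell containing no station.
print(infill(38.0, -115.0, stations))
```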
Think, Pair, Share: Can in-filling reliably represent local temperature trends? The warmest regions appear to be related to in-filling. What other reasons would cause greater warming in the in-filled regions? Should regions without a sufficient density of weather stations be included in a global average temperature?
Extended Lesson Activities:
Have students pick a group of stations within a 500-mile area within similar ecosystems (e.g., the Great Plains, Southeastern forests, or New England forests) and examine the differences between raw and homogenized temperature data for all those stations. (See an example of such a comparison for Massachusetts in an analysis of moose migrating southwards.) Use the metadata for those stations to check for documented changes such as relocations or new instrumentation.
I suggest looking at just the maximum temperatures in this extension activity because minimum temperatures are much more sensitive to landscape changes and other microclimate changes. The next activity will examine differences between maximum and minimum temperatures and the effects of the landscape on temperature trends.
Related Teaching moment: Tolerance and Respectful Debate
Have students write an essay about how our nation is fostering greater tolerance and compassion toward different ethnicities, races, religions and people with different sexual preferences. Then contrast that tolerance with the condoned hatred that’s been hurled at people for having different climate beliefs. Often those very different beliefs are simply a result of trusting different virtual realities created by statistical temperature trends. Would more respectful debate, between scientists who trust homogenized trends and those who don’t, help the public better understand climate change?
Jim, I may have missed it but you should add a link to surfacestation main and surfacestation gallery.
old construction worker,
You didn’t miss it, but I hope to make that link for the next activity looking more closely at the effects of micro-climates, but the link to the gallery doesn’t appear to be working.
It seems to me the way forward on a long march to critical reasoning. Thanks for this antidote which is a welcome contribution to common sense.
The Climate Inquisition is not going to like this, you must not question the adjusted numbers – you must have faith, trust the models not the facts.
The Lumpen Proletariat repeatedly have to be told the same consistent story of: “We are doomed because CO2 is such an evil gas. We must repent, destroy our economies and return to the more enlightened lifestyles of medieval times.”
There is to be no more debate, the science is settled and as one of the grand inquisitors states “Homogenisation makes very little difference.”
And if you believe that, I have a bridge to sell you.
Nick Stokes:
“Homogenisation makes very little difference”.
“…alternative methods to GISS etc gave the same result. With or without homogenisation”.
“Some are adjusted up, some down. It evens out.”.
Well, the obvious conclusion is: don’t homogenise. It adds nothing but error and distrust. It clearly isn’t needed (because Nick Stokes was able to get a result without homogenising).
Well, I don’t. But there are genuine inhomogeneities, and they can be detected. They may have a cancelling effect, and seem to (so no difference), but you can’t be sure. That is why they correct.
Correcting data from years ago because that data fails to support the idea you’re selling is not ‘correcting’ in any real sense. The trouble is not that past measurements have had issues, as indeed do current ones, but the notion that by some form of magic you can offset them when you don’t even know their magnitude nor direction. And when the person doing the ‘correcting’ has a strong vested interest in getting the right results, that is ‘right’ for them, not right as in honest, then you can see why this area has so many problems.
At the bottom of this is the old problem that has very recently been seen again: our ability to predict weather or fully understand it is nowhere near as good as some claim it to be. In the past this was accepted as a reality, but with AGW we have seen the rise of the skyscrapers of the ‘unprecedented’ and the ‘settled’, built on the same swamp of the unknown as our past inabilities in this area. Therefore it’s hardly a surprise to find these skyscrapers often have so many issues with ‘settling’.
“but the notion that by some form of magic you can offset them when you don’t even know their magnitude nor direction”
The Chula Vista correction shown above is a clear case. Menne’s algorithm will compare that with neighbors and tell you that there is a discontinuity of a clear amount. And yes, the record tells us that there was a 2.5-mile station shift at that time.
That is why they distort. By your own admission, they even out. So they are useless for anything but hiding bad science. Science relies on raw data – not manufactured data. By destroying raw data, you destroy the ability for new insights to be gleaned from it.
“philjourdan
January 29, 2015 at 3:59 am
That is why they distort. By your own admission, they even out. So they are useless for anything but hiding bad science. Science relies on raw data – not manufactured data. By destroying raw data, you destroy the ability for new insights to be gleaned from it.”
I was thinking the same thing. We don’t need a process that “evens out” data. If the process doesn’t accurately reflect what the data should be, then the process is not beneficial.
Nick says, “there are genuine inhomogeneities, and they can be detected. They may have a cancelling effect, and seem to (so no difference), but you can’t be sure. That is why they correct.”
Therein lies the problem. A process that creates more errors that happen to cancel out only obscures the real underlying data, and leads to incorrect diagnoses. You remind me of the joke about the man with his head in the oven and feet in the freezer. The doctor attending to his pain suggested his problem was psychological because on average his temperature was just right.
Nick,
If you can’t decide what you think, how are we supposed to make sense of your opinion?
How about we just lock in the historical records now, Nick? Declare victory, call it a day.
No more adjustments to any pre-2010 records. They are all supposed to be fixed now. There have been at least 5 different processes used to incrementally fix the historic records and that should be all finished now.
Lock it in.
I get the feeling this would not be acceptable to those in control of the adjustment processes because then, if temperatures continue to rise far less than predicted, at some point they will have to admit the theory is not fully correct.
By continuing the temperature adjustment processes, they get to delay that day as far as needed into the future because the science finds this delay process to be perfectly acceptable.
To me, it’s like saying unemployment in the 1930s depression was only 3.2%, or in other words, full employment. There was no depression, there was no WWII, Communism collapsed because it wasn’t communist enough. Rewriting history works against the interests of mankind because we do not know what works, what doesn’t, what is right and what is wrong. Humans incrementally make society better by learning what works and what is the truth about the nature of the universe; we go backwards when history is rewritten.
Make your own index with unadjusted data.
Like Stokes and others have done. Did you miss that part?
I’m saying that the effect of the inhomogeneities will often cancel out. Some stations move to warmer places, some to cool, etc. That’s what I find when I do the calc without homogenisation. It doesn’t make much difference.
But you don’t know that until you’ve tried (homogenising).
“Some stations move to warmer places, some to cool, etc.” And how do you know this, given you don’t have an accurate measure of the situation the station has moved into before the move? How can you correct for that which you do not know?
It’s guesswork, no matter how much maths you throw at it; you’re still making assumptions based on facts you do not know that well. It’s frankly hilarious to instrument people that others claim levels of precision not actually possible from the very means used to take the measurement, and that is without any error bars for the data.
You want to make a good claim to three-decimal-point precision, then make sure you can measure to four decimal points. Which no tree ring, no matter how magic, is going to give you. It may well be that in the land of models you can have as high a degree of precision as you like, but the real world does not work like that.
That is proven to be completely incorrect Nick because the adjustments have “increased” the trend by 0.6C or more. I believe about 0.3C of this is unjustified padding.
Illis:
“That is proven to be completely incorrect Nick because the adjustments have “increased” the trend by 0.6C or more. I believe about 0.3C of this is unjustified padding.”
Perhaps you could tell us how you know this? Let me guess: the world consists of the continental US only, and it makes no difference at what time of day measurements are recorded.
rooter
Your thread bombing is annoying.
One of your posts, which wastes space on the thread, responds to Bill Illis’s comment above by guessing.
WHEN YOU DON’T KNOW THEN ASK; DON’T “GUESS”.
You have already displayed too much of your ignorance on WUWT.
This shows you the global changes for GISS with one mouse click.
And, assuming you can read at a level above ‘Noddy Goes to Toytown’, then read all of this for a discussion covering the frequent changes to all the global temperature data sets.
Richard
rooter
My second link did not work. It is this
http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/memo/climatedata/uc0102.htm
Richard
richardscourtney:
Your link is to graphs with land-only measurements mostly in the northern hemisphere from 1980. It was even before gistemp. The second is gistemp met stations only. The third is the land-ocean index. Different type of data. Different number of observations. Different coverage.
But of course: you did know that. Or did you not? A better question is why they are not more different.
just some typos:
should be “less”
and
I’d go with “wherever”
Pedagogically – geez I dislike that word – I’d provide the data sets with source info so they can get right to it. And I think an intro might be good – “many are concerned that our world may be warming due to our actions and influences upon it (industry, technology, population, emissions . . .)” – well, you get that.
I’d also pose some other questions (softly) toward the end – why is there such disagreement, passion about all this? What differences could this all make? – in your life, your home, your school???
Best of luck.
This is misleading because it’s two against one. At first glance, two is greater than one, therefore the warming is the spurious phenomenon, not the potential cooling.
It would be better if the teacher’s notes included a discussion of why that logic is false.
Are the two independent? Or are they connected in some way and subject to the same events or errors? They are both coastal, of course.
This would also introduce another part of investigating the raw data.
Are the raw data independent of each other?
This teacher input is confusing:
“Give the students the graph below from Johnstone 2014 and ask them to compare changes in sea surface temperatures (SST in red) with the raw and recently homogenized temperature data from southern California.”
And goes on to show a comparison between SST and homogenized temperature data. There are no raw data in that graph.
And just to confuse the students even more he says:
“Which data set (raw or homogenized trends) best agrees with the hypothesis that ocean temperatures drive regional warming trends? Which best agrees with the hypothesis that rising CO2 drives regional warming trends? Could a belief in different hypotheses affect how temperature trends are homogenized?”
What is the teacher trying to achieve here? Trying to demonstrate his own fallibility? There still are no non-homogenized data in that graph.
Tremendous stuff! Well done Jim Steele for this really positive contribution for schools and teachers. The awful impact of climate screechers and their absurdly agitated followers has gotten into schools, where it has surely been adding to the psychological and educational harm being brought to children by the widespread promotion of a narrow, one-sided, politically-loaded, emotionally unhinged view of the climate system and our impact on it.
This lesson plan would require skilled and talented teachers and advanced students. Which is fine; not all classes are designed equally.
It is interesting that it is using climate studies to teach critical thinking often lacking in climate studies. At any rate I see critical thinking and the practice of basic scientific principles as the primary benefit of these lesson plans.
In that regard, this training should be required at NASA and NOAA
Unusual winter temperatures in the stratosphere over the polar circle.
http://www.cpc.ncep.noaa.gov/products/stratosphere/strat-trop/gif_files/time_pres_TEMP_MEAN_JFM_NH_2015.gif
http://www.cpc.ncep.noaa.gov/products/stratosphere/strat-trop/gif_files/time_pres_TEMP_MEAN_ALL_NH_2014.gif
when I was a child aliens would visit me as I slept and explain stuff, which is why I was always lucky because I had that understanding and I would never feel a requirement to tell a lie. I would say ” why would I need to lie to my inferiors ? “
Why did they stop?
As my Dad used to say to me “I’ve taught you all I know and still you know nothing”.
Jim–I am not sure what ages/grades this is aimed at, but as a parent of recently graduated high school students, this seems to me to be too much in a short period of time. I would peg this teaching program at 1-2 months of time minimum, not the sort of thing to be done in a week or a class. And while kids are better at computers than me–a huge understatement–you assume programming knowledge and math skills that are pretty advanced. Finally, you assume teacher skills that are pretty advanced as well.
Other than that, I love the concept. Teach thinking, not just raw facts.
A great post Mr. Steele, thank you. This is the approach I favor for exposing the political and religious agenda behind the global warming movement. If we teach children how science is done and create a love for science and math in them, these children will be well equipped to defend themselves and their country against the con men and witch doctors that promote these false religions.
Section in-filling:
“Can in-filling reliably represent local temperature trends? The warmest regions appear to be related to in-filling. What other reasons would cause greater warming in the in-filled regions? Should regions without a sufficient density of weather stations be included in a global average temperature?”
The question should rather be: what method of infilling will best match the local temperature and temperature trends? Infilling from neighboring areas, or infilling with global or hemispheric averages? Those are the alternatives. If you ignore areas where there are no measurements, those areas will not influence the global or hemispheric averages. Then those areas are in effect infilled with global or hemispheric averages (a small numeric sketch of this point follows after this comment).
The answer could in this case very well be found in the very same paper from where the first figure was copied.
http://en.climatechange.cn/EN/article/downloadArticleFile.do?attachType=PDF&id=8468
There they increase the number of stations and reduce the areas with no stations.
http://i.imgur.com/1vKT8Bj.png
They are doing infilling by adding stations. The result is this:
http://i.imgur.com/V7xd1iz.png
See what happens during the last period for the integrated data set and crutemp3. Compare to this:
http://woodfortrees.org/graph/crutem3vgl/compress:12/plot/crutem4vgl/compress:12
Same story.
But do not show this to the students. It might give them the impression that infilling with help of adjacent areas is better than infilling with global or hemispheric averages. We don’t want that to happen do we?
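(A tiny numeric check of the point made above, that leaving empty gridcells out of an average is arithmetically the same as filling them with that average; the numbers are invented.)

```python
import numpy as np

measured = np.array([1.0, 0.5, 0.9])  # gridcells with data
mean_measured = measured.mean()       # average over measured cells only

# Fill two empty gridcells with that same average...
filled = np.append(measured, [mean_measured, mean_measured])

# ...and the overall mean is unchanged: omitting cells is equivalent
# to infilling them with the hemispheric/global mean.
print(mean_measured, filled.mean())   # both print (approximately) 0.8
```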
Rooter, your graph does not match your narrative. Your Figure 3 shows a decline in the number of stations since 1970, which would mean more infilling is now needed.
Jim Steele did not get it. It is Crutem3 that has the lowest number of stations during the latest years. GHCNv3 has more stations, and the integrated data has even more. The result: the index with the lowest number of stations has the lowest temperature during those years. Increase the number of stations and coverage, and the result is a higher trend.
A comparison in coverage for Crutemp3 and Crutemp4 2000-2010:
http://i.imgur.com/2VJGWjX.png
For some reason not understood by some, the gridcells with missing observations in Crutem3 have trends that are close to the trends of the neighbouring gridcells.
Again: exactly the opposite of your claims. Infilling with actual measurements increases the trend. You will get a result closer to that by doing statistical infilling with values from the regions nearest the missing observations. Your solution of infilling with hemispheric or global averages is the most wrong method.
Rooter, You bait and switch again. You first show a graph with a decline in stations since the 70s and then use maps from 2000-2010 during a period when there has been no warming.
What Rooter fails to understand is that the question is: how reliable is infilling, and why do it at all?
Studies in Yosemite (that measured trends between areas far closer than neighboring stations in Eurasia and other infilled areas) revealed that changes in the wind caused one area to warm, one to cool, and one to show no trend. Assuming that similar issues apply to Eurasia, depending on the station I could be infilling very different trends. Furthermore, how much of the warming in Eurasia is due to advection of heat ventilating from a cooling Arctic Ocean? An anomalously warm infilled Eurasia tells us nothing about sensitivity to rising CO2, but only creates the illusion of that attribution.
Jim Steele is saying that there was no warming 2000-2010 for land stations.
How does he know that? He does not. There is no warming when the infilling is done with hemispheric averages. When measurements are supplemented that changes. Easy to see why. Just compare the coverage in Crutem3 vs Crutem4.
Steele asks why do infilling. He fails to understand that the choice is between doing infilling with values from the nearest areas vs infilling with hemispheric averages. The difference between Crutem3 and Crutem4 illustrates this.
Rooter, Satellite data shows no warming !
Jim Steele says:
“Satellite data shows no warming !”
That is 2000-2010. This time Steele did not understand that the troposphere is different from the surface. That comparison is not unproblematic, even when comparing the lower troposphere over land and the surface. But we can try. We can compare the lower troposphere over land and the land surface:
Trends 2000 – 2010:
Crutem4: 0.24 deg C/dec
Gisstemp land masked: 0.26 C/dec
BEST (land): 0.27 C/dec
RSS over land: 0.12 C/dec
UAH over land: 0.36 C/dec
Did Steele not check before making his claim? Also watch the closer match for the surface series than for the lower troposphere series.
Rooter, Jim Steele writes, “Can in-filling reliably represent local temperature trends?”, and you respond, “The question should rather be: what method of infilling will best match the local temperature and temperature trends? Infilling from neighboring areas, or infilling with global or hemispheric averages? Those are the alternatives.”
Your re-framing of the question is quite substantial. I think that we can all agree that the absence of decent historical global temperature coverage is a problem, and that a good global average temperature plot would be a good thing to have. However, those two facts do not oblige us to simply accede to one or the other of your pair of alternatives.
What if both of the proposed infilling methods are badly inaccurate? Note that in either case, the temperature records for vast regions of the globe are in effect being modeled rather than being the result of observation. How accurate really, are the infilling models?
Whether “global averages” are used, or “neighboring areas” are used to guide the infilling model, the possibilities for systemic errors are simply rife. For instance, one suspects that the records that do exist are biased towards lower latitudes relative to the low-coverage areas. Consequently, infilling using the low-latitude records will have a warm bias. Also, with skimpy data, UHI effects seem likely to be exaggerated.
The point is that Jim Steele’s question at top is not answered by merely insisting that we simply accept one of your two choices. If both of your alternatives are bad, then the answer produced by those alternatives is bad.
We know the answer to which method is best. Methods with infilling (Gistemp, BEST) match Crutem4 better than they match Crutem3. Crutem4 vs Crutem3 is a demonstration of the effect of increasing coverage. Crutem3 differs from the infilling methods much more than Crutem4 does. See my answer to Steele above with trends 2000-2010.
@rooty
“We know the answer to what method is best. Methods with infilling (gistemp, Best)”
Are you dense? The best method is to actually measure local temps, making it up by infilling is not better.
Mi Cro:
So you think it best with actual measurements.
Totally agree. That is why we can check which infilling method is best: infilling with hemispheric means where there are large measurement gaps, as in Crutem3, vs. interpolating to do the infilling, as in Gistemp. Gistemp ends up being close to Crutem4, where there are more measurements and better coverage (actual measurements). BEST has both more measurements (more than Crutem4) and local infilling.
If you have some other evidence for why infilling with the hemispheric average is best, please tell.
If you live some place where the weather is most always the same, infilling so you can determine climate might make sense. If you live some place where the weather changes all the time, infilling makes little sense, as the space between stations can contain significant weather that just disappears. How do you include weather you don’t measure into your climate and think your climate is even close to right?
All we know is what the stations register.
Mi Cro:
Hard to make sense of your local changes etc.
Make it simple. Would you prefer Crutem3 or Crutem4?
http://i.imgur.com/2VJGWjX.png
For the trend 2000-2010, Gistemp’s trend is closest to Crutem4’s.
Is infilling with hemispheric averages (Crutem3) or infilling from nearest areas (Gistemp) most like the trend from a series with more actual measurements (Crutem4 or BEST)?
It’s easy, they are both wrong.
Rooter,
Your reply was entirely non-responsive to my point. You try to insist at the starting gate that we MUST choose from one of your two alternatives. You permit no other option. That is my objection.
What if BOTH models of the missing data are lousy models? You’ve proved nothing by arguing that one modeled set of “observations” is better than the other.
That is only evidence that one may be less lousy than the other, but it certainly does not prevent both of them from sucking. Even if recent years, where more complete data is available, show a better fit for one infilling method than the other, that is in no way a proof that that same method provides accurate results for earlier decades.
The data is missing, and models of that missing data certainly contain error. When we then also note that the “adjustments” and “homogenization” always have the effect of cooling the past and warming the present, lots of suspicion is justified.
Mi Cro’s last resort is of course “they are both wrong”
Well Mi Cro. 100% right will never happen with observational data. That is so.
But how do you know that they are both wrong? Do you know the right answer?
Of course not we haven’t measured it to know what temp all the bits are.
Ignoring TYoke’s “what if the Moon is a cheese” argumentation, TYoke seems to be capable of saying that infilling from nearby areas is less wrong. But he complains that there must be other alternatives.
Obviously there are different methods of infilling from nearby areas and those are being used. And ends up producing results in agreement. But of course one method can be better and preferred.
But if TYoke has even a better method, that is excellent. Show us that better method.
TYoke’s claim that adjustments and homogenization always cool the past and warm the present is just wrong. There is a massive warming adjustment of past sea surface data. And even after 2000 the NCDC’s met station homogenization method reduces the warming:
http://www-users.york.ac.uk/~kdc3/papers/coverage2013/update.140404.pdf
Mi Cro says:
“Of course not we haven’t measured it to know what temp all the bits are.”
And stops there. The question Mi Cro avoids is what kind of values are most likely to be closest to the temperature in those areas: the hemispheric average, or an estimate of the temperature from nearby areas?
If temperature were spatially linear you could approximate it, but it’s not.
City A is 200 miles west of city B, city A is 60F, city B is 70F, so what is the average temp between city A and city B?
We don’t know what the average is, and without any measurements we will never know what the average is as it could be anywhere from maybe 75 to 55F, 65F is just as likely wrong as anything else.
Mi Cro is trying this:
“We don’t know what the average is, and without any measurements we will never know what the average is as it could be anywhere from maybe 75 to 55F, 65F is just as likely wrong as anything else.”
Still has not grasped the concept of anomalies. The anomaly between those cities is certainly closer to an interpolated value between those cities than to the hemispheric average. And of course, that can be checked empirically.
“Still has not grasped the concept of anomalies.”
I prefer the derivative of an anomaly based on the station’s previous day’s measurement.
“And of course, that can be checked empirically.”
Not if there isn’t a station there! That’s the problem, there isn’t.
“Not if there isn’t a station there! That’s the problem, there isn’t.”
Easy. Pick two stations where there is a station between those two. Estimate the anomaly of that mid-station from the two other stations. Check against the actual result. Repeat as much as you can, there and elsewhere. And then see which infilling method is best.
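(A sketch of the leave-one-out check described here, reusing the hypothetical infill helper and invented stations array from the earlier in-filling snippet.)

```python
# Hide each station in turn, estimate its anomaly from the remaining
# stations, then compare the estimate against the actual value.
for i in range(len(stations)):
    lat, lon, actual = stations[i]
    others = np.delete(stations, i, axis=0)
    estimate = infill(lat, lon, others)
    print(f"station {i}: actual={actual:.2f}, estimate={estimate:.2f}")
```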
rooter commented
Neither method is acceptable.
There are whole countries without any stations, most of the amazon, greenland, both poles aren’t measured, Mosh even said that as they add more stations the temp changes, if they are doing a good job estimating it wouldn’t change at all.
The station in the GSoD and CRU datasets.
http://maps.google.com/maps/ms?msid=218403362436074259636.0004aab6c505418fa54c7&msa=0
“Neither method is acceptable.”
Says Mi Cro in response to how to check which infilling method is closest to actual measurements.
That is: it is not acceptable to use actual measurements!
But as Mi Cro is digging something for himself, Mi Cro can check this again:
http://i.imgur.com/2VJGWjX.png
Look at gridcells without measurements in Crutem3 and look at the same gridcells in Crutem4. Do the trends in those gridcells look like the trends in the nearest gridcells, or do they look like the hemispheric average?
Of course the results will change with added observations! And some complain about that and call that “adjustments”. The results change most when you use global or hemispheric averages for the infilling. They change less with local interpolation.
Which of course has been the point all along. Gives a better answer.
Agreed?
Re the last paragraph of the lesson and “different climate beliefs”:
Aside from the conviction that the scientific method is a valid way of enhancing our understanding of the natural world, the term ‘belief’ has no place in science. When a conclusion becomes a matter of belief, and doubters are condemned as heretics, it is a sure sign that a line of inquiry has departed from the scientific method.
/Mr Lynn
Jim – you asked for comment, but please be aware I am with you 100%, I will try to be objective:-
are you serious about the way you have presented this, or is there a (sarc) element here? I ask this because I think the take-up of this session is going to be next to nil.
I would love this to be taken up by the UK school where my 16-year-old is; he is studying geography, maths, ICT and economics for his A-levels. To get this onto the curriculum there it would have to be presenting a warmist ideal. He thinks I am a complete nutter because I take a sceptical view, and when a series of sea-level-rise questions was put to him for his homework I couldn’t engage with him, as he thought that my view would contaminate the accepted view and get him a lower mark. The sad thing is I think he was probably right – I just wish he was in an environment where scientific debate was encouraged rather than frowned upon – he should receive an equal mark for presenting a factual sceptical view or a factual warmist one, best of all his own uncontaminated one – that would be good education.
Now I don’t know how you go about what I am going to suggest here, but I think the only way you can present this and get a decent uptake is to present an overall warmist argument rather than what I have taken here to be an overall sceptical one. The knack being to encourage the students to have a sceptical view without it being presented to them. A difficult one, I know, but I really think that the great teaching aid you have presented here in our own little community won’t get used unless you use a more subtle tactic.
I do enjoy your posts and reasoning Jim, but I sincerely think you will need to throttle back a bit and try a little less stick and more carrot – I won’t go on, I hope you know what I mean.
There was something else – session time is fairly limited for high school curricula; have you tested the expected time this would take? I would say this would take in the region of 2 hours or 3 sessions to complete. If so, how does that fit in, and what would it cover within the programme? To attract uptake, the session length and what items within the subject matter it covers would be vital information IMHO.
‘The current pedagogy argues teachers should not be the “Sage on Stage” spewing dogma for students to regurgitate. A teacher should be the “Guide on the Side,” alternating short bits of input with time slots (such as “Think, Pair and Share”) that allow students to investigate and discuss the issue with their peers.’
What happens to the freaks who no-one will pair and share with??
Or has America ‘written that bit out of the script too’ in its race to the politically correct nirvana on earth which fails to match up to the reality of far, far too many people born on this earth??
Where are the principles? A core idea? Is that what it has become? Without basic principles to work from, this “think-pair-share” exercise is, in my opinion, a weird exercise in scientific democracy…in other words, consensus-building. You cannot have critical thinking without an epistemological baseline that illustrates the necessity of that process. You cannot ‘foster’ critical thinking, you have to demonstrate how it is a necessary part of any scientific process.
“The actual doing of science or engineering can pique students’ curiosity, capture their interest, and motivate their continued study.”
The actual doing of science REQUIRES critical thinking, scepticism, and inclusion of all data in the explanation. You cannot “do” science without curiosity and interest…and the motivation that those fuel. The above quote basically puts the cart before the horse, much in the manner of the IPCC.
There, that’s my take on it. I think that this exercise is being presented as a propaganda antidote, not an exercise in climate literacy. Climate literacy, per se, does not replace a good grounding in SCIENCE…chemistry, physics, biology, geology….and the PRINCIPLES involved in that grounding. ALL of climate science is founded on the earlier principles the ‘core’ sciences teach.
To say “it’s OK to be sceptical” is to acknowledge that it has been increasingly NOT OK…for political reasons. Think-pair-share THAT.
We used to call them “lab partners,” and that was done in every science class I took (that I can remember) in the 1960’s and 1970’s. You had someone else to bounce ideas off who did things a little differently. Not only could you learn from one another; a lot of the “real” world requires working with other people.
Mike, the lessons are meant to be supplements to textbooks that will present the core ideas. Agreed, if “think, pair, share” is done with no grounding in the basic principles, students will float aimlessly.
….and rooter, you miss the point entirely. Completely. Your ‘method’ outlined above, is a propagandist overprint on an already unprincipled exercise…which essentially creates a reciting robot out of the person doing the exercise…a robot who is seeking YOUR approval. I shudder to think where that approach would lead us.
Regarding that first paragraph:
“,…students need to understand all the factors causing climate change writing,”
Why should students waste their time learning about “climate change writing”?
that’s what I meant in my post above – if you want this to be taken up by students and teachers alike, especially those without a sceptical view, then it has to be presented in an attractive manner or they won’t go near it. It’s great that one teacher above is considering it, but only as a ‘spare time’ project. For it to be mainstream it would have to tick the right boxes, and no matter how approving of the subject material we all are, it will only rattle around its own echo chamber here and on Jim’s own site unless it covers off parts of the subject matter within the curriculum. Therefore it would have to be approached from the teaching point of view rather than a sceptical climatologist’s point of view. This article nearly gets it right in my opinion, but it does rather say ‘look what those naughty scientists did with the data’ – if you say that for them in any way then you are not allowing the students to think it for themselves.
To me it depends on Jim’s intent – is this to be a token attempt at educating those that are being brainwashed, or is he really attempting to engage with students on a grander scale? There’s a lot of pulling the facts apart on this thread, but a lot of it is just semantics. Yes, the facts have to be correct and available from mainstream scientific sources, but the main intention is to encourage discussion and true scepticism. If that is truly the aim then the topic has to fit full square into the teaching curriculum.
Teachers are going to be required to teach climate change. Indeed, the great majority will be ill-prepared and seeking lessons that will allow both teacher and student to explore the topic and help the teacher learn on the go.
A quick reaction is that the material is way too advanced for high school students. Even the first lesson would be tricky (even for some experienced data analysts) due to the problem of missing data. How do you deal with that in spreadsheets, and in calculating averages?
My suggestion would be to forget about climate CHANGE, since to a very good approximation it doesn’t, despite all the hype. I’d focus on teaching climate DIFFERENCES, and the fact that such differences should change even less than the absolute climate, that is the key fact that enables homogenisation of “faulty” data.
Just making basic sense of climate data would be a tough assignment, but I very much like the general idea. Maybe it would be best to pick the local data for each school.
I agree completely. Students should be taught some astronomy – learn about months and seasons – some biology, geography, and simple physics, like measuring velocity and temperature changes. Actually studying climatology seriously would be like trying to teach nuclear physics – not practical to teach and very difficult to cover in a reasonable manner. Current proposed curricula are more political-science propaganda than science.
This is extremely significant as it shows the bias of data by USHCN. At the USHCN site I examined the data for one rural station at Davenport WA (452007), population 2500, where an urban heat island effect is very unlikely. It has a continuous record since 1910. I have lived there for 70 years. This town has had the same population for 60-70+ years and has likely decreased since the 1920s. Looking at the temperatures one can see high temps in the 1930s and 1940s, then a downward trend in the Traw. I then charted the Tmean (adjusted) and the Traw (unadjusted) in Excel. I found a negative least-squares trend in the Traw of -0.0095x and a positive least-squares trend in the Tmean of +0.0056x. I then subtracted Traw from Tmean and found SIGNIFICANT adjustment bias in this difference plot of +0.1510x (1.5 deg F per century), which confirms the adjustment (oh, sorry, in government-speak it’s “homogenization”; it’s “bias” or “adjustment” to the rest of us). So a cooling trend is blatantly adjusted to show a warming trend where it’s “impossible”.
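(For anyone wanting to reproduce that kind of raw-vs-adjusted comparison in code rather than Excel, a minimal sketch; the filename and column names are hypothetical, and np.polyfit supplies the least-squares slope.)

```python
import numpy as np
import pandas as pd

# Hypothetical CSV with columns: year, t_raw, t_adj for one station.
df = pd.read_csv("davenport_wa.csv")

slope_raw = np.polyfit(df["year"], df["t_raw"], 1)[0]  # deg F per year
slope_adj = np.polyfit(df["year"], df["t_adj"], 1)[0]

# A positive difference means the adjustments added warming to the trend.
bias = (slope_adj - slope_raw) * 100
print(f"Adjustment bias: {bias:.2f} deg F per century")
```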
Adjustments on their own are not a problem; these are sometimes needed. However, three things are required:
What adjustments are made is made clear
Why the adjustments were made is justified
And the old unadjusted data is retained .
In climate ‘science’ they often fail all three of these, and when you add to that the fact that those making these adjustments have a strong personal/professional interest in getting the ‘right’, if not honest, results, you can understand why people view this area with such concern.
Many station moves:
http://www.ncdc.noaa.gov/homr/#ncdcstnid=20027768&tab=MSHR
http://berkeleyearth.lbl.gov/stations/42407
With corresponding breaks.
The book “Whole Story of Climate” provides a lay treatment of climate change for the last 1,000 to 2,000,000 years. Its treatment is appropriate for high school, as the “Homogenization” exercise may be too tall an order, and for others just wanting an interesting history of the earth to explain the other side of climate that we never hear. It gives numerous examples, using a football field as a clever measure of time, of how and when climate has changed in the last thousand to million years, with lots of details. The author is E. Kirsten Peters, Professor of Geology at Washington State University. As a geologist, I read it, bought extra copies, and tried to promote it within politician circles, a waste of my time. Amazon for $18 and Kindle for about $9. I give it 5 stars for information, quality and enlightenment.
Dr. Peters is also the “Rock Doc”, where she writes insightful weekly columns about the world around us, not always geology.
http://www.amazon.com/Whole-Story-Climate-Science-Reveals/dp/1616146729
Correction on my last post: 0.151x should be 0.0151x (1.5 deg F per century)
Thanks for the good tip. I put it in my wish list.
Don’t use the term “sexual preference.” That’s a bit passé. Sexual orientation is more accurate (and accepted).
“Instruct students to visit the USHCN website and download the raw data for the 3 stations into a spreadsheet program like Excel.”
If this is to teach someone new, rather than as an exercise for someone already experienced, there are a dozen steps (maybe three dozen?) missing. I suspect that by spending a few hours I could figure out what data is under what labels, and what I need to do to get it, but this is far from obvious when one reaches the web page and attempts to execute a few of the actions that are possible there.
I think this is a poor approach for high school students. The exercise is about the data, about what they will do with the data and learn from the data. They should not have to spend extensive class time just figuring out how to get the data.
I have written user documentation quite a few times, mostly for office workers who are not familiar with (the usually quite new) procedures I am presenting. I’ve learned that no smallest detail step of how to do the thing should be missing from the documentation, regardless of how obvious it seems to me, the programmer who created the product. Everything should be there, in sequence, with the identical labels, icons, etc that the user will actually see when doing the task. It can also be helpful to include comments, in the appropriate places, about the meaning of, and reason for, what is being done at particular steps.
This approach allows everyone to participate and actually get results. If the task will be undertaken often, almost everyone will learn and, sooner or later, no longer need the documentation, but even the slowest won’t be left out, feeling too confused about step A to ever start to get their mind around steps B, C, and D.
Andy, that’s a good point. Although downloading the data to Excel and creating graphs is really a very simple exercise once you have been around the block once, I shouldn’t assume the teacher or students have any such experience. So I will add an addendum with step-by-step instructions. Once a student has been led through the steps, an 8th grader could create an anomaly graph comparing 2 stations in less than 30 minutes.