# Test

SMPTE color bars – Click for your own test pattern kit

This page is for posters to test comments prior to submitting them to WUWT. Your tests will be deleted in a while, though especially interesting tests, examples, hints, and cool stuff will remain for quite a while longer.

Some things that don’t seem to work any more, or perhaps never did, are kept in Ric Werme’s Guide to WUWT.

WordPress does not provide much documentation for the HTML formatting permitted in comments. There are only a few commands that are useful, and a few more that are pretty much useless.

A typical HTML formatting command has the general form of <name>text to be formatted</name>. A common mistake is to forget the end command. Until WordPress gets a preview function, we have to live with it.
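For example (a made-up snippet), a correct command and the forgotten-end-command mistake look like this:

```html
<!-- Correct: the end command matches the start command -->
This is <b>bold</b> text.

<!-- Mistake: no </b> end command, so everything from here to the
     end of the comment renders in bold -->
This is <b>bold text.
```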

N.B. WordPress handles some formatting very differently than web browsers do. A post of mine shows these and less useful commands in action at WUWT.

N.B. You may notice that the underline command, <u>, is missing. WordPress seems to suppress it for almost all users, so I’m not including it here. Feel free to try it, but don’t expect it to work.

| Name | Sample | Result |
|------|--------|--------|
| b (bold) | `This is <b>bold</b> text` | This is **bold** text |
| i (italics) | `This is <i>italicized</i> text` | This is *italicized* text |

The command `strong` also does bolding, and the command `em` (emphasize) also does italics.
A URL by itself (with a space on either side) is often adequate in WordPress. It will make a link to that URL and display the URL, e.g. See http://wermenh.com.

Some sources on the web present anchor commands with other parameters beyond href, e.g. rel=nofollow. In general, use just href=url, and don’t forget the text to display to the reader.
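So a minimal anchor command looks like this (using Ric’s home page URL from above as the target):

```html
<a href="http://wermenh.com">Ric Werme's home page</a>
```

The text between the start and end commands is what the reader sees and clicks on; leave it out and there is nothing to click.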

blockquote (indent text). Sample:

    My text
    <blockquote>quoted text</blockquote>
    More of my text

Result:

My text

> quoted text

More of my text

Quoted text can be many paragraphs long.

WordPress italicizes quoted text (so inside a blockquote, the <i> command produces normal, non-italicized text).

strike: `This is <strike>text with strike</strike>` displays as: This is ~~text with strike~~

pre (“preformatted”, use for monospace display). Sample:

    <pre>These lines are bracketed<br>with &lt;pre> and &lt;/pre></pre>

Result:

    These lines are bracketed
    with <pre> and </pre>

WordPress generally renders preformatted text correctly. Use it when you have a table or something else that will look best in monospace. Each space is displayed, something that `<code>` (next) doesn’t do.

code (use for monospace display): `<code>Wordpress handles this very differently</code>` displays as: WordPress handles this very differently
See http://wattsupwiththat.com/resources/#comment-65319 to see what this really does.

Using the URL for a YouTube video creates a link like any other URL. However, WordPress accepts the HTML for “embedded” videos. From the YouTube page after the video finishes, click on the “embed” button and it will suggest HTML like:

    <iframe width="560" height="315"
     src="https://www.youtube.com/embed/yaBNjTtCxd4"
     frameborder="0" allowfullscreen>
    </iframe>



WordPress will convert this into an internal square-bracket command, changing the URL and ignoring the dimensions. You can use this command yourself, and use its options for dimensions. WordPress converts the above into something like:

[youtube https://www.youtube.com/watch?v=yaBNjTtCxd4&w=640&h=480]

Use this form and change the w and h options to suit your interests.

If WordPress thinks a URL refers to an image, it will display the image instead of creating a link to it. The following rules may be a bit excessive, but they should work:

1. The URL must end with .jpg, .gif, or .png. (Maybe others.)
2. The URL must be the only thing on the line.
3. This means you don’t use <img>; WordPress ignores it and displays nothing.
4. This means WordPress controls the image size.
5. <iframe> doesn’t work either; it just displays a link to the image.

If you have an image whose URL doesn’t end with the right kind of suffix, there may be two options if the URL includes attributes, i.e. if it has a question mark followed by attribute=value pairs separated by ampersands.

Often the attributes just provide information to the server about the source of the URL. In that case, you may be able to simply delete everything from the question mark to the end.

For some URLs, e.g. many from Facebook, the attributes provide lookup information to the server and can’t be deleted. Most servers don’t bother to check for unfamiliar attributes, so try appending “&xxx=foo.jpg”. This gives you a URL with one of the extensions WordPress will accept.
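For example, with a hypothetical URL (example.com and the attribute names here are invented for illustration):

```
Won't embed:  https://example.com/photos/lookup?id=12345&size=large
Will embed:   https://example.com/photos/lookup?id=12345&size=large&xxx=foo.jpg
```

The server ignores the unfamiliar xxx attribute, but WordPress now sees a URL ending in .jpg.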

WordPress will usually scale images to fit the horizontal space available for text. One place it doesn’t is in blockquoted text; there images seem to display full size, and large ones overwrite the right-side nav bar text.

Those of us who remember acceptance of ASCII-68 (a specification released in 1968) are often not clever enough to figure out all the nuances of today’s international character sets. Besides, most keyboards lack the keys for those characters, and that’s the real problem. Even if you use a non-ASCII but useful character like ° (as in 23°C), some optical character recognition software or cut-and-paste operation is likely to change it to 23oC or, worse, 230C.

Nevertheless, there are very useful characters that are most reliably entered as HTML character entities:

| Type this | To get | Notes |
|-----------|--------|-------|
| `&amp;` | & | Ampersand |
| `&lt;` | < | Less-than sign; left angle bracket |
| `&bull;` | • | Bullet |
| `&deg;` | ° | Degree (use with C and F, but not K (kelvins)) |
| `&#8304;` `&#185;` `&#178;` `&#179;` `&#8308;` | ⁰ ¹ ² ³ ⁴ | Superscripts (use 8304, 185, 178-179, 8308-8313 for superscript digits 0-9) |
| `&#8320;` `&#8321;` `&#8322;` `&#8323;` | ₀ ₁ ₂ ₃ | Subscripts (use 8320-8329 for subscript digits 0-9) |
| `&pound;` | £ | British pound |
| `&ntilde;` | ñ | For La Niña & El Niño |
| `&micro;` | µ | Mu, micro |
| `&plusmn;` | ± | Plus or minus |
| `&times;` | × | Times |
| `&divide;` | ÷ | Divide |
| `&ne;` | ≠ | Not equals |
| `&nbsp;` | | Like a space, with no special processing (i.e. word wrapping or multiple-space discarding) |
| `&gt;` | > | Greater-than sign; right angle bracket (generally not needed) |

Various operating systems and applications have mechanisms to let you directly enter character codes. For example, on Microsoft Windows, holding down ALT and typing 248 on the numeric keypad may generate the degree symbol. I may extend the table above to include these some day, but the character entity names are easier to remember, so I recommend them.
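Putting a few entities from the table together, a made-up comment line like:

```html
The CO&#8322; concentration is about 4 &times; 10&#178; ppm, i.e. 0.04%.
```

should display as: The CO₂ concentration is about 4 × 10² ppm, i.e. 0.04%.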

## LaTeX markup

WordPress supports LaTeX. To use it, write something like:

$latex P = e\sigma AT^{4}$     (Stefan-Boltzmann's law)

$latex \mathscr{L}\{f(t)\}=F(s)$

to produce

$P = e\sigma AT^{4}$     (Stefan-Boltzmann’s law)

$\mathscr{L}\{f(t)\}=F(s)$
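Simple exponents and subscripts work the same way; a couple of minimal forms (a sketch; the exponent form matches tests further down this page):

```latex
$latex {10}^{-7}$
$latex CO_{2}$
```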

Each comment has a URL that links to the start of that comment. This is usually the best way to refer to a comment in a different post. The URL is “hidden” under the timestamp for that comment. While details vary with operating system and browser, the best way to copy it is to right-click on the time stamp near the start of the comment, choose “Copy link location” from the pop-up menu, and paste it into the comment you’re writing. You should see something like http://wattsupwiththat.com/2013/07/15/central-park-in-ushcnv2-5-october-2012-magically-becomes-cooler-in-july-in-the-dust-bowl-years/#comment-1364445.

The “#<label>” at the end of the URL tells a browser where to start the page view. It reads the page from the Web, searches for the label, and starts the page view there. As noted above, WordPress will create a link for you; you don’t need to add an <a> command around it.

## One way to avoid the moderation queue

Several keywords doom your comment to the moderation queue. One word, “Anthony,” is caught so that people trying to send a note to Anthony will be intercepted and Anthony should see the message pretty quickly.

If you enter Anthony as An<u>th</u>ony, it appears to not be caught, so apparently the comparison uses the name with the HTML within it and sees a mismatch.

## 487 thoughts on “Test”

1. I think I discovered that I could get around the automatic spam trap by writing Anthony with an empty HTML command inside, e.g. Ant<b></b>hony .
What happens when I try that with underline?
Apologies in advance to the long-suffering mods; at least one of these comments may get caught by the spam trap.

2. Wun Hung Lo says:

I’m giving up on this
But the above code works at JSFIDDLE Code testing shop
see for yourself – http://jsfiddle.net/804j6fmd/
Why no work here – it’s nuts !

• LOVE that JSFIDDLE Code testing shop !!! – thank you

• Rick says:

I remember seeing this test pattern on TV late at night after the National Anthem and before the local station broadcast came on early in the morning while the biscuits, bacon and oatmeal were still cooking. The first show after a weather report was “Dialing For Dollars” and you had better know the count when your phone rang…. 1 up and 3 down… to get the cash.

3. I have been looking for a way to create a table.
How did you do it?

• He used the <pre> command; it’s described in the main article. Pre is for preformatted text; it displays in monospace with all the spaces preserved.

• Sasha says:
• Sasha says:
• Sasha says:
• Sasha says:
4. Owen in GA says:

$m_{H2O} \propto A_{surface}$
Is there something wrong with latex support on the test page?

• Owen in GA says:
• Owen in GA says:

Error in the third line can’t use \\ in the latex code.
$m_{H2O} \propto A_{surface}$
$E_{total} \propto \int_{A_{surface}}FdA \mbox{(where } F \mbox{ is the flux in watts per square meter)}$
$dT \propto \frac {E_{total}}{m_{H2O}}$

• Owen in GA says:

$E_{total} \propto \int_{A_{surface}}FdA \mbox{(where } F \mbox{ is the flux in watts per square meter)}$
a mistake in this line maybe?

• Owen in GA says:

The first two lines
$m_{H2O} \propto A_{surface}$
$E_{total} \propto \int_{A_{surface}}FdA \mbox{(where } F \mbox{ is the flux in watts per square meter)}$
Will they show?

• Owen in GA says:

$\frac{\partial T}{\partial t} = \frac{\int_{SA}FdA}{SA \times d \times \rho} \times \frac{\partial T}{\partial Q} =\frac{F \times SA}{SA \times d \times \rho} \times \frac{\partial T}{\partial Q} =\frac{F}{d \times \rho} \times \frac{\partial T}{\partial Q}$

• Sasha says:
5. Kip Hansen says:

test strong
test bold

• Kip Hansen says:

Reply to Ric W ==> Thanks — I was fielding comments on an essay using an unfamiliar tablet, and wasn’t sure which and/or both were part of HTML5. I usually use the old ClimateAudit comment Greasemonkey tool, even though its formatting is funky these days, for the tags. Don’t suppose you could update that add-in?

• IIRC, Greasemonkey was written for CA, which uses a different theme than does WUWT.
I don’t have the time to figure out the JavaScript code or whatever it’s written in, and I don’t have the ability to make changes that deep in WUWT.
Instead of Greasemonkey, I often use https://addons.mozilla.org/en-US/firefox/addon/its-all-text/ . It can open up an external editor, so it has saved my butt a few times when WP loses a post I was making.

6. Hey, what happened to the old smiley face?? When I tried to post it, this appeared:

I wonder if WordPress changed any others?
☹ ☻
The old smiley was more subtle; less in-your-face. The new one is way too garish.
If WP keeps that up, I’ll just have to use this lame replacement:
🙂
Or even worse:
😉

• The old ways are the best ways! 🙂

7. John F. Hultquist says:

This text has been underlined

• Ah, some buglets appear to be invariant over years.

• GoatGuy says:

Wait, what? (I’m trying underline) … but if it doesn’t work, how did you?

8. Hugs says:

$latex \frac{100}{20} = 5$

• Hugs says:

$P = e\sigma AT^{4}$

• Hugs says:

I’m at a loss why that works.
${10}^{4}$

• Hugs says:

$latex {10}^{-7}$ Finally right way to get an exponent?
$latex {1010}_{2} = {10}$

• Hugs says:

$latex {10}^{7}$

• Hugs says:

$latex {10}^{4}$

• Hugs says:

$latex {10}^{-7}$ Space added before ending dollar sign

• Hugs says:

$latex {10}^{7}$

• Hugs says:

$latex {10}^{4}$ 10⁴ 10⁷

• Hugs says:

Am I daft, or why didn’t I get this already? Does it depend on newlines?
$latex {10}^{7}$

• Hugs says:

One last try, then I’ll go and take a beer.
$latex \frac{100}{20} = {5}$

• GoatGuy says:

$latex \frac{100}{20} = 5$

9. John Ridgway says:

Test quote

End test

10. William Ward says:

        Source        Energy (J)      Normalized
        Atmosphere:   1.45x10^22 J        1
        Oceans        1.68x10^25 J    1,157

11. William Ward says:

        Source        Energy (J)      Normalized
        Atmosphere:   1.45x10^22 J        1
        Ice           1.36x10^25 J      935
        Oceans        1.68x10^25 J    1,157

12. William Ward says:

        Source        Energy (J)      Normalized
        Atmosphere:   1.45x10^22 J        1
        Ice:          1.36x10^25 J      935
        Oceans:       1.68x10^25 J    1,157

13. William Ward says:

        Source        Energy (J)      Normalized (E)
        Atmosphere:   1.45x10^22 J        1 J
        Ice:          1.36x10^25 J      935 J
        Oceans:       1.68x10^25 J    1,157 J

14. William Ward says:

In my previous post I use the example of the following over the next 100 years: 3 units of new energy goes to the oceans and 1 unit to the atmosphere – with all 4 units being equal in Joules. 1 unit raises the average temperature of the atmosphere by 4C or the average temperature of the oceans by 0.0003C. In this example the atmosphere warms by 4C and the oceans warm by 4 x 0.0003C or 0.0012C. It is exactly the higher heat capacity you mention that allows the heat energy to be absorbed with less movement of temperature. At the detail level maybe the top 2 inches of water gets much hotter and this will then support the physics of the more complex mechanisms you mention. But the beauty of this approach (I think – and hope) is that it doesn’t really matter how the energy gets distributed in the water with its corresponding temperature effect. Determine the mass of the ocean water you want to see affected in this model and apply the energy to it to get the temperature you would expect.

15. Kip Hansen says:

Is anyone using CA Assistant? I was using it before the migration — doesn’t work in the current version and I can’t figure out why.

    • Ric Werme says:

    IIRC, I think it was written for Climate Audit and only accidentally worked here. It may be broken for good. The ItsAllText add-on may also be broken in newer Firefoxes.

    • Kip Hansen says:

    Ric ==> CAsst had code that allowed it to function on Climate Etc, Climate Audit, WUWT and several others. The code was editable by the end-user to add additional sites using the standard WP format. Still works on Judith’s site. It is the shift to the new WP structure that has broken it. Any hot coders out there? CA Asst is editable in Firefox with GreaseMonkey.

16. Sam C Cogar says:

How come this FAQ doesn’t work for me?

Subject: Linking to past comments

“Each comment has a URL that links to the start of that comment. ….. the best way to copy it is to right click on the time stamp near the start of the comment, choose “Copy link location” from the pop-up menu, and paste it into the comment”

Is it because the “time stamp” is located at the end of the comment (at lower right-hand corner)?

Sam C

    • Ric Werme says:

    Things have changed; click on the link icon way to the right of your name to see the URL. I’ll update the main post in a bit.

17. Gunga Din says:

Testing “pre” in the new comment system

                7/10/2012                    4/18/2012
            High  TieH  Low  TieL        High  TieH  Low  TieL
        1998   7          0         1998    7          0
        1999   3          0         1999    3          0
        2000   3          0         2000    3          2
        2001   4          1         2001    4          1
        2002   3          2         2002    4          2
        2003   1          0         2003    2          0
        2004   0          1         2004    0          1
        2005   2          1         2005    2          1
        2006   0          0         2006    0          0
        2007   8     2    0         2007   10          0
        2008   4          0         2008    3          0
        2009   2          0         2009    0          0
        2010   8          0         2010    1          0
        2011   2          0         2011    0          0
        2012   1          0         2012    0          0
              48     2    5    0           39     0    7    0

    • Gunga Din says:

    It worked.
    PS Those are the number of Columbus Ohio record highs and lows set for each of the years according to the list from 4/18/2012 compared with the list from 7/10/2012, a bit less than 3 months later. Notice how, somehow, 7 additional record highs were set in 2010.

18. clipe says:

    • steve case says:

    [??] comment image comes up when I post a URL ending in .jpg

    • Ric Werme says:

    I see the URL http…ps.jpg, I don’t see the “comment image” link.

19. Sasha says:

20. Joe Born says:

Attempting to get my display name shown correctly.

    • Joe Born says:

    It still doesn’t work.

21. Dr. Strangelove says:

    • Jan Kjetil Andersen says:

    test table

        kol1  col2  col3
        v1    v2    v3

    • Jan Kjetil Andersen says:

    Car deaths per citizen for some counties:

        Country    Fatalities   Population   Fatalities per million citizens
        US:            37 461     325 mill                115
        UK:             1 792      66 mill                 26
        Germany:        3 214      83 mill                 39
        Sweden:           263      10 mill                 26
        France:         3 469      67 mill                 51

    • Jan Kjetil Andersen says:

    Car deaths per citizen for some counties:

        Country    Fatalities   Population   Fatalities per million citizens
        US:            37 461     325 mill                115
        UK:             1 792      66 mill                 26
        Germany:        3 214      83 mill                 39
        Sweden:           263      10 mill                 26
        France:         3 469      67 mill                 51

22. Sasha says:

Dutch filmmaker “shunned” for questioning climate change
https://www.youtube.com/watch?v=cu3PqSD9OB4
The real climate crisis is the West’s energy policy, and the complete lack of debate about it.

23. Sasha says:

24. Sasha says:

Why does this image keep disappearing? Why do YouTube videos keep disappearing?

25. Dr. Strangelove says:

26. Sasha says:

Who is deleting my posts? Why are most of my images deleted? Why are all of my YouTube videos being deleted? Who is doing all this?

    • Ric Werme says:

    I maintain this page; part of the task is to trim people’s tests when they are stale, and I’ve been greatly remiss about that this year. I’ve done a massive amount of cleanup in the last week or two, but I don’t think I did much to your posts before June 11. I’m catching up though! Next is to update the main post with current knowledge.

    • Ric Werme says:

    And why do you keep posting Jennifer+Love+Hewitt+04.jpg?

    • Sasha says:

    If I want to test posting an image, I will test an image worth posting. By the way, my questions were rhetorical. I was testing formatting code on images, text, and videos, all of which kept disappearing then reappearing, so I just typed out what I was thinking at that time.

27. Sasha says:

    • Sasha says:

    Once again, the picture has disappeared. Clicking on the link does not work (again).

    • Sasha says:

28. Red94ViperRT10 says:

“… the gasoline you buy might trace its heritage to carbon dioxide pulled straight out of the sky… engineers … have demonstrated a scalable and cost-effective way to make deep cuts in the carbon footprint of transportation…”

1. So their machine can recognize the difference between a CO2 molecule produced by anything transportation related, and all other CO2 molecules?

2. By “cost-effective” I assume they mean they have a product that some willing buyer someplace is willing to pay them an amount that will be greater than the cost it takes them to produce, market and deliver that product? ‘Cuz iffen they don’t, it ain’t “cost-effective”.

“…claim that realizing direct air capture on an impactful scale will cost roughly $94-$232 per ton of carbon dioxide captured…”

3. So what’s that in $/gal of gasoline? (See 2. above WRT “cost-effective”.) Will it have the same BTU/gal as gasoline?
So, yeah, other than that, Mrs. Lincoln, how did you like the play?

29. RicDre says:

[In walk the drones]

“Today we celebrate the first glorious anniversary of the Information Purification Directives.

[Apple’s hammer-thrower enters, pursued by storm troopers.]

We have created for the first time in all history a garden of pure ideology, where each worker may bloom, secure from the pests of any contradictory true thoughts.

Our Unification of Thoughts is more powerful a weapon than any fleet or army on earth.

We are one people, with one will, one resolve, one cause.

Our enemies shall talk themselves to death and we will bury them with their own confusion.

[Hammer is thrown at the screen]

We shall prevail!

[Boom!]

30. A little late on the discussion, but this is one of the worst articles written by Dr. Ball I have read in years.

This fits the Mauna Loa trend very nicely, but the measurements and instrumentation used there are patented and controlled by the Keeling family, first the father and now the son.

C.D. Keeling was the first to measure CO2 with an IR beam (NDIR), and smart enough to make himself an extremely accurate (gravimetric) device to calibrate any CO2 measuring device with highly accurate calibration mixtures.
The Scripps institute where Keeling worked later provided all calibration mixtures for all devices worldwide. Since 1995, calibration and intercalibration of CO2 mixtures and measurements worldwide are done by the central lab of the WMO.
Ralph Keeling works at Scripps and has no influence at all on the calibration work of the WMO, nor on the measurements at Mauna Loa, which are done by NOAA under Pieter Tans.

As Scripps lost its control position, they still take their own (flask) samples at Mauna Loa and still have their own calibration mixtures, independent of NOAA. Both Scripps and NOAA measurements are within +/- 0.2 ppmv for the same moment of sampling. If NOAA should manipulate the data, I am pretty sure Scripps/Keeling would get them…

Beyond that, there are about 70 “background” stations, managed by different organisations in different countries, measuring CO2 at places as uncontaminated as possible, from the South Pole to near the North Pole (Barrow). Besides seasonal changes, which are more pronounced in the NH, they all show the same trend: up at about half the rate of the yearly human injection, with the SH lagging the NH, which points to the main source of the increase being in the NH, where 90% of human emissions occur.

Thus Dr. Ball, if you want to accuse somebody of manipulation, first have your facts right.

Then:
Where is the reflection of CO2 increase due to the dramatic ocean warming and temperature increase caused by El Nino?

There is, if you look at the yearly rate of increase at Mauna Loa:

http://www.ferdinand-engelbeen.be/klimaat/klim_img/dco2_em6.jpg

The 1998 and 2015 El Niños give a clear increase in the yearly CO2 increase in the atmosphere. The 1992 Pinatubo explosion shows a huge dip in CO2 increase.

The reason is in part ocean temperature in the tropics, but the dominant factor is (tropical) vegetation: decay from (too) high temperatures and the drying out of the Amazon as rain patterns change with an El Niño, and increased photosynthesis after the Pinatubo injection of light-scattering aerosols into the stratosphere:

http://www.ferdinand-engelbeen.be/klimaat/klim_img/temp_dco2_d13C_mlo.jpg

It is pretty clear that changes in the temperature rate of change lead changes in the CO2 rate of change by about 6 months. The interesting point is that the δ13C (that is, the ratio between 13CO2 and 12CO2) rate of change moves in the opposite direction. That is the case if the increase/decrease in the CO2 rate of change is caused by decaying/growing vegetation. If the CO2 rate of change were caused by warming/cooling oceans, then the CO2 and δ13C rates of change would parallel each other.

Again Dr. Ball, a little more research would have shown that you were wrong in your accusation.

It is getting late here, more comment tomorrow…

31. Teerhuis says:

CO₂~- CO₂~-50°Cµm 50°Cµm

32. Sasha says:
33. Sasha says:

You are not logged in or you do not have permission to access this page. This could be due to one of several reasons

34. Sasha says:
35. Sasha says:
36. Sasha says:
37. Sasha says:
38. Eric Worrall says:
39. Eric Worrall says:

40. Eric Worrall says:

41. Oh, Canada! While confirming the rumor that snowfall is predicted for northern Quebec on June 21, I also discovered Labrador fishing lodges can’t open because they’re still under 6 ft of snow. Clearly, the folks at weathernetwork.com think these “extreme weather events” are man-caused, as the website features stories like this one:

How can kids handle climate change? By throwing a tantrum!

https://s1.twnmm.com/thumb?src=//smedia.twnmm.com/storage.filemobile.com/storage/32996630/1462&w=690&h=388&scale=1&crop=1&foo=bar.jpg

Buy the book “The TANTRUM that SAVED the WORLD” and let Michael Mann and Megan Herbert indoctrinate your child into bad behavior!

Also, don’t miss:

CANADA IN 2030: Future of our water and changing coastlines

Antarctica lost 3 trillion tonnes of ice in blink of an eye

Covering Greenland in a blanket is one way to fight climate

Racism and climate change denial: Study delves into the link

Links to those articles and other balderdash are at:

42. Sasha says:

Re. Flooding from sea level rise threatens over 300,000 US coastal homes – study
https://www.theguardian.com/environment/2018/jun/17/sea-level-rise-impact-us-coastal-homes-study-climate-change

As described by Kristina Dahl, a senior climate scientist at the Union of Concerned Scientists (UCS), who should know better than to publish this load of junk science hysterical alarmism.

“Sea level rise driven by climate change is set to pose an existential crisis to many US coastal communities…”
No it is not. Do you know what the word “existential” means? And there is no connection between climate change, CO2 and sea levels.

“…Under this scenario, where planet-warming emissions are barely constrained and the seas rise by around 6.5ft globally by the end of the century…”
Absolute rubbish. The maximum projected increase is six INCHES by 2100.

“…The oceans are rising by around 3mm a year due to the thermal expansion of seawater that’s warming because of the burning of fossil fuels by humans…”
Where is the proof that “the burning of fossil fuels by humans” causes ANY sea level rise?

To the Guardian: You do love publishing this rubbish, don’t you?

There is nothing we can do about rising sea levels except to build dikes and sea walls a little bit higher. Sea level rise does not depend on ocean temperature, and certainly not on CO2. We can expect the sea to continue rising at about the present rate for the foreseeable future. By 2100 the seas will rise another 6 inches or so.

Failed serial doomcaster James Hansen’s sea level predictions have been trashed by real scientists.
Hansen claimed that sea level rise has been accelerating, from 0.6mm/year from 1900 to 1930, to 1.4mm/year from 1930 to 1992, and 2.6mm/year from 1993 to 2015.
Hansen cherry-picked the 1900-1930 trend as his data to try to show acceleration … because if he had used 1930-1960 instead, there would not be any acceleration to show.

According to the data, the rate of sea level rise:
• decelerated from the start of the C&W record until 1930
• accelerated rapidly until 1960
• decelerated for the next ten years
• stayed about the same from 1970 to 2000
• then started accelerating again. Until that time, making any statement about sea level acceleration is premature. One thing is clear: There is no simple relationship between CO2 levels and the rate of sea level rise.

If we assume that the trend prior to 1950 was natural (we really did not emit much CO2 into the atmosphere before then) and that the following increase in the trend since 1950 was 100% due to humans, we get a human influence of only about 0.3 inches per decade, or 1 inch every 30 years.

If an anthropogenic signal cannot be conspicuously connected to sea level rise (as scientists have noted*), then the greatest perceived existential threat promulgated by advocates of dangerous man-made global warming will no longer be regarded as even worth considering (except by the Guardian).

*4 New Papers: Anthropogenic Signal Not Detectable in Sea Level Rise
http://notrickszone.com/2016/08/01/all-natural-four-new-scientific-publications-show-no-detectable-sea-level-rise-signal/

43. Sasha says:
44. RicDre says:

Test Test

45. Sasha says:

Tessa Jowell died last month.

46. Joe Born says:

I tried a link before and it didn’t work.

47. Chuck in Houston says:

Hi all. I wasn’t sure where else to post this question. On the old site, if I left a comment or reply, I was prompted as to whether or not I wanted email updates on new comments etc….

Since moving to this site, I see no option for this after leaving a reply or comment. What have I done wrong?

Thanks

Chuck

• Chuck in Houston says:

Well, nevermind. I saw the field up on the right-hand side to follow WUWT. I think I unsubscribed to the old site in wordpress. We’ll see how this goes.

48. Jim Ross says:
49. Ron says:

Hello Anthony,

This may be the first time I have disagreed with you since Watts Up With That started, but the “ship of fools” comment was rather dismissive of what could be useful data collection.

When I want to stimulate my brain, I go to your website. Often the responses to articles posted are more stimulating than the articles themselves. I attribute that to the scientific literacy of most of your audience.

Thanks for providing a commons for unfettered debate, rather than the propaganda of most sites

Ron

50. I have some time to update the main content here, there are a number of things to update. One thing we never figured out at WordPress is why I could underline text but nearly everyone else could not. So, we need to experiment.

If you’re curious, please reply to this and paste in these HTML lines:

This is <b>bold</b>ed.
This is <i>italic</i>ized.
This is <u>underline</u>ed.
This is <b><i><u>everything</b></i></u>.

If you’re one of the folks completely mystified over this puzzle, feel free to create a top-level comment (text entry box is before the first comment). We saw some things where top level comments and replies were handled differently, and I expect to see that at Pressable.

BTW, what I get:

This is bolded.
This is italicized.
This is underlineed.
This is everything.

While I’m here, strong should work like bold.

• Oops, meant to end the “everything” with the reverse of b, i, and u. Ah well, seemed to work anyway on my Firefox.

51. Hugs says:

Test1

52. John Garrett says:
53. eyesonu says:
• eyesonu says:

Which way is that bad boy traveling relative to the camera?

54. Red94ViperRT10 says:

Now here is where my memory gets fuzzy, I think I picked the opening post of a random thread (50% confidence level, it might have been a reply to another comment), in which, about half-way down the page the author proclaimed (this is from memory, so may not be exact),

“We know Global Warming is happening, all the models say so, but we’re not seeing it in the records. So clearly, the records must be wrong. (italics mine, if they show up). But we have somebody working on that.”!!! (Exclamations mine)

.

Imagine that for a second, he’s admitting there is no Global Warming in the data, so he has assigned people to set about changing the data!!!

55. Janice Moore says:
56. Janice Moore says:

57. Janice Moore says:
58. Janice Moore says:
• Janice Moore says:

Great. I can’t post youtube videos anymore.

59. Janice Moore says:

One more try, using an “Enter” after the end of the pasted youtube link:

60. Janice Moore says:
• Janice Moore says:

THIS is what happens (and it was the same with each of several variations of youtube Options vis a vis embed code (which is now located inside “Share” button of youtube) when I copied in the youtube embed code. 🙁

• Janice Moore says:
• Janice Moore says:

Another try with an “Enter” (i.e., a line break) at the end of the embed code (I only tried today an “Enter” at the end of the regular youtube link (which ALWAYS WORKED IN THE PAST with NO enter needed inside the comment box). Grrr.

61. Red94ViperRT10 says:

“…we can hardly afford to double the carbon footprint that the USA and the EU already generate.

“We hope that this model proves to be useful for those seeking to intervene in efforts to avoid producing Western levels of environmental degradation [affluence] in these countries,” the authors conclude.
Just in case any of you doubted Walter Sobchak’s interpretation of the article.

62. Janice Moore says:
63. Dr. Strangelove says:
64. David L. Hagen says:

Reality Check: “Conventional Crude” peaked in 2005
See: ExxonMobil World Energy Outlook 2018 A View to 2040
ExxonMobil clearly shows conventional crude oil peaked in 2005 and has declined since then. Adding in Deepwater and Oil sands still shows declining production. ExxonMobil has to appeal to Tight Oil to show liquids growth, with that combination flattening out by 2040.
Growth prospects for conventional through tight oil appear so poor that Shell Oil and TOTAL have strategically shifted their major effort out of oil into natural gas. See:
Liquids Supply ExxonMobil 2018

http://cdn.exxonmobil.com/~/media/global/charts/energy-outlook/2018/2018_supply_liquids-demand-by-sector.png?as=1

• Sasha says:
• Sasha says:
65. David L. Hagen says:
• Sasha says:
• Sasha says:
66. markopanama says:
67. markopanama says:
68. markopanama says:
• Sasha says:
• Sasha says:
• Sasha says:
• Sasha says:
• Sasha says:
69. Sasha says:

Once again, the insane image linking has returned.
Pictures are once again appearing then disappearing.
Will it link an image or not? Who knows?
The added lunacy includes all my images being displayed together on Refresh, yet no other post.
Then sometimes all my images load but later only some of them, seemingly chosen at random.
Can anyone explain any of this?

70. Sasha says:
71. William Ward says:

Bold commands do not seem to work on the new site – at least not like the old site worked.

Trying use of characters: BOLD

72. RicDre says:

test

73. Gunga Din says:

Test for putting a photo from my PC up using the “pre” formatting. (The photo will be “inserted” into Excel.)

• Gunga Din says:

So much for that…

74. Joe Born says:

Do equations work? $\tau$

$\frac{\tau + 2}{2}$

$\begin{eqnarray*} p(x)&=&P_\mathrm{out}(x)+P_\mathrm{in}(x)\\ &=&P_0e^{-x}+\frac{1}{2}\int_0^xp(\xi)e^{\xi-x}d\xi+\frac{1}{2}\int_x^\tau p(\xi)e^{x-\xi}d\xi, \end{eqnarray*}$

75. Sasha says:
• Sasha says:

• Sasha says:

• Sasha says:
• Sasha says:
76. Sasha says:

77. Sasha says:

78. Sasha says:

79. Sasha says:

80. Sasha says:
81. Sasha says:
82. Sasha says:
• Sasha says:
83. Sasha says:
• Sasha says:
• Sasha says:

.video-container {
position: relative;
padding-bottom: 56.25%; /* 16:9 aspect ratio */
height: 0;
overflow: hidden;
}
.video-container iframe, .video-container object, .video-container embed {
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
}

84. superscript test

⁰	0
¹	1
²	2
³	3
⁴	4
⁵	5
⁶	6
⁷	7
⁸	8
⁹	9
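The digit-to-superscript mapping can also be generated programmatically; a minimal Python sketch (the helper name is mine):

```python
# Map ASCII digits to their Unicode superscript equivalents.
# Note the quirk: ¹, ² and ³ live in Latin-1 (U+00B9, U+00B2, U+00B3),
# while ⁰ and ⁴–⁹ are in the Superscripts and Subscripts block
# (U+2070, U+2074–U+2079), which is why some fonts render the two
# groups inconsistently.
SUPERSCRIPTS = str.maketrans("0123456789", "⁰¹²³⁴⁵⁶⁷⁸⁹")

def superscript(text):
    """Return text with every ASCII digit replaced by its superscript form."""
    return text.translate(SUPERSCRIPTS)

print(superscript("E=mc2"))  # E=mc²
```

That split across two Unicode blocks may be why 1, 2 and 3 sometimes show up differently from the other superscript digits in a comment.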

85. eyesonu says:
• Kip Hansen says:
• Kip Hansen says:

eyesonu ==> The url you are trying to use has an extra “.” in it; it ends in “.img.jpg”, which is not the usual coding.

86. Kip Hansen says:

Block quote test.

87. Sasha says:

This is brilliant:

https://www.theguardian.com/commentisfree/2018/aug/02/bbc-climate-change-deniers-balance

“I won’t go on the BBC if it supplies climate change deniers as ‘balance’”

Here we have Rupert Read who teaches philosophy at the University of East Anglia and chairs the Green House think-tank, explaining why he refused an invitation to discuss climate change on the BBC because it was with a so-called “denier.”

The big joke here is that after refusing to go on air to put his point of view he is now making a formal complaint to the BBC “because the BBC cannot defend the practice of allowing a climate change denier to speak unopposed.”

This is the level of stupidity of the man-made climate hysterics.
This is their level of debating skills.
Still, what can we expect from the University of East Anglia?

88. dmacleo says:

img test

[img]http://i68.tinypic.com/2eedbgw.jpg[/img]

• Sasha says:
• Sasha says:
89. Eric Worrall says:
90. Eric Worrall says:
• Sasha says:
91. Sasha says:
• Sasha says:
92. Sasha says:
93. steve case says:
123456789
987654321 <pre> and </pre>
94. steve case says:
123456789
987654321 <>
95. steve case says:
123456789
987654321 <>

123456789
987654321
96. steve case says:
abcdefghijklmnopqrst
1234567890
97. steve case says:
ABCDEFGHIJKLMNOPQRSTUVWXYZ
abcdefghijklmnopqrstuvwxyz
123456789

ABCDEFGHIJKLMNOPQRSTUVWXYZ
abcdefghijklmnopqrstuvwxyz

98. steve case says:
ABCDEFGHIJKLMNOPQRSTUVWXYZ
abcdefghijklmnopqrstuvwxyz
99. Joe Born says:

As climatologist Roy Spencer has explained, the climate models used to arrive at alarming values of equilibrium climate sensitivity don’t do so in the way Lord Monckton describes.

100. Sasha says:

Google’s Empire of Censorship Marches On

More than 1,000 Google employees protest against plan for censored Chinese search engine.

The Google staff have signed a letter protesting against the company’s secretive plan to build a search engine that would comply with Chinese censorship. The letter’s contents were confirmed by a Google employee who helped organize it but wished to stay anonymous. It calls on executives to review the company’s ethics and transparency; says employees lack the information required “to make ethically informed decisions about our work”; and complains that most employees only found out about the project, nicknamed Dragonfly, through leaks and media reports. “We urgently need more transparency, a seat at the table and a commitment to clear and open processes: Google employees need to know what we’re building,” says the document.

Google engineers are working on software that would block certain search terms and leave out content blacklisted by the Chinese government, so the company can re-enter the Chinese market. Google’s chief executive Sundar Pichai told a company-wide meeting that providing more services in the world’s most populous country fits with Google’s global mission. (and I bet you did not know that Google even had a “Global Mission”)

This is the first time the project has been mentioned by any Google executive since details about it were leaked.

Three former employees told Reuters that current leadership might think that offering limited search results in China is better than providing no information at all. The same rationale led Google to enter China in 2006. It left in 2010 over an escalating dispute with regulators that was capped by what security researchers identified as state-sponsored cyber attacks against Google and other large US firms. One former employee said they doubt the Chinese government will welcome back Google.

The Chinese human rights community said Google’s acquiescence to China’s censorship would be a “dark day for internet freedom.”

101. Kip Hansen says:

This method requires MS Word. It results in a new document (automatically created) which contains a simple list of all the hypertext links from your essay.
(see the end of The Fight Against Global Greening — Part 4.

1. Open the Word document from which you want to copy the hyperlinks, and press Alt + F11 to open the Microsoft Visual Basic for Applications window.

2. Click Insert > Module, and copy the following VBA code into the Window.

Sub CopyHyperlinksToNewDoc()
'Updateby20140214
Dim docCurrent As Document 'current document
Dim docNew As Document 'new document that receives the links
Dim hLink As Hyperlink 'each hyperlink in the current document
Set docCurrent = ActiveDocument
Set docNew = Documents.Add
For Each hLink In docCurrent.Hyperlinks
hLink.Range.Copy
docNew.Activate
Selection.Paste
Selection.TypeParagraph
Next

Set docNew = Nothing
Set docCurrent = Nothing
End Sub

3. Click Run > Run Sub/UserForm to run the VBA code. All the hyperlinks are then copied to a new document, which you can save.

***************

Notes:
1. This VBA code only works on hyperlinks attached to text; if the hyperlinks are attached to pictures, it cannot copy them.
2. Using this will train you to make human-readable links: for instance, attaching the hyperlink to “the NY Times article” rather than “here”.
3. The links can be copied from the newly created word document into your Word copy of your essay. I place them at the end in a section called Quick Links. If any of the links don’t read right or communicate clearly, you can edit them in the Quick Links to be more readable, such as “The April 26th NY Times article”.

102. Kip Hansen says:

The new server system does not allow BLOCKQUOTES in comments. The following should appear as a blockquote but does not.

Epilogue:

• eyesonu says:

103. Dr. Strangelove says:
104. Dr. Strangelove says:
105. Dr. Strangelove says:
106. Dr. Strangelove says:
107. Dr. Strangelove says:
108. Dr. Strangelove says:
109. steve case says:
Year   Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Dec
1880   -30  -18  -11  -20  -12  -23  -21  -10  -16  -24  -20  -23
2018    78

1880   -29  -18  -11  -20  -12  -23  -21   -9  -16  -23  -20  -23
2018    78   78

1880   -29  -18  -12  -20  -12  -25  -21  -10  -17  -25  -20  -21
2018    77   79   89

1880   -28  -18  -12  -20  -12  -25  -22  -10  -18  -25  -20  -21
2018    75   80   88   86

1880   -29  -18  -11  -20  -12  -23  -21   -9  -16  -23  -20  -23
2018    77   80   90   85   82

1880   -30  -18  -11  -20  -12  -23  -21   -9  -16  -24  -20  -23
2018    78   81   91   87   83   77

1880   -29  -18  -11  -19  -11  -23  -20   -9  -15  -23  -20  -22
2018    77   81   91   87   82   76   78

Number of changes made in 2018
Year   Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Dec
1880     4    0    2    1    1    2    2    2    4    4    0    3
2018     5    3    3    2    1    1
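The change counts in the summary rows can be reproduced mechanically; a minimal Python sketch, using the first two 1880 snapshot rows above as input (values in hundredths of a degree):

```python
# Count how many monthly values changed between two successive snapshots
# of the same year's record. The two lists are the first two 1880 rows
# from the table above, in hundredths of a degree.
before = [-30, -18, -11, -20, -12, -23, -21, -10, -16, -24, -20, -23]
after  = [-29, -18, -11, -20, -12, -23, -21,  -9, -16, -23, -20, -23]

# Months (1 = Jan) whose value differs between the two snapshots.
changed = [m for m, (a, b) in enumerate(zip(before, after), start=1) if a != b]
print(len(changed), changed)  # 3 [1, 8, 10]
```

Running the same comparison across all successive snapshots and tallying per month gives the “Number of changes made in 2018” rows.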
110. Joe Born says:

Let’s try an array:

$\begin{array}{lclll} T_\mathrm{eq2}&=&&&(T_\mathrm{E}+\Delta T_\mathrm{ref1}+\Delta T_\mathrm{ref2})/(1-f)\\ T_\mathrm{eq1}&=&&&(T_\mathrm{E}+\Delta T_\mathrm{ref1}\phantom{+\Delta T_\mathrm{ref2}})/(1-f)\\ T_\mathrm{eq2}-T_\mathrm{eq1}&=&\Delta T_\mathrm{eq2}&=&\phantom{(T_\mathrm{E}+\Delta T_\mathrm{ref1}+}\Delta T_\mathrm{ref2}/(1-f) \end{array}$

111. Joe Born says:

???? ???

112. Greg F says:
113. Joe Born says:
114. Joe Born says:

Attempt monospacing:

                                               Total
Absorbed from:  Surface L.Atm  U.Atm  Space    Absorbed

Absorbed by:
Surface          0.0000 1.0500 0.1500 1.0000 || 2.2000
Lower Atmosphere 1.6500 0.0000 0.4500 0.0000 || 2.1000
Upper Atmosphere 0.4125 0.7875 0.0000 0.0000 || 1.2000
Space            0.1375 0.2625 0.6000 0.0000 || 1.0000
------------------------------------------------
Total Emitted:   2.2000 2.1000 1.2000 1.0000


Done

115. MarkW says:

“climate policy milestones millstones”

Corrected it for you

116. Joe Born says:


117. Sasha says:

Ex-IPCC chief Rajendra Pachauri will stand trial on sexual harassment charges.

A Delhi court decided there is enough evidence to charge Pachauri with harassing a female colleague.

There is prima facie evidence to charge Rajendra Pachauri, 78, with sexual harassment and two offences of intending to outrage the modesty of a woman. Pachauri, who was head of the UN Intergovernmental Panel on Climate Change (IPCC) when it was awarded the Nobel prize in 2007, denies any wrongdoing. Pachauri resigned from the IPCC in 2015 when the complaint against him was registered.

The woman told police Pachauri had flooded her with offensive messages, emails and texts and made several “carnal and perverted” advances over the 16 months they worked together at the Energy and Resources Institute (Teri), a Delhi-based energy and environment research centre Pachauri led for over 30 years.

An investigation into the complaints questioned more than 50 employees and concluded the woman’s claims were valid. Pachauri claimed text messages and emails submitted by the woman to police had been tampered with by unknown cyber criminals, but police last year found no evidence of tampering.

The complainant, who was 29 at the time of the alleged offences, said she was pleased the case would proceed to trial after so long.

Pachauri’s lawyer, Ashish Dixit, said the court had dropped four other charges, including stalking and intimidation: “The majority of the charges have been dropped by the court on its own, so it’s a big step forward,” Dixit said.

118. Sasha says:

The Truth About “An Inconvenient Truth”

• Sasha says:

For some reason, this video does not show up but the other one did.
No idea why.

119. Sasha says:

Ah! There it is!
(At last.)

120. Jaakko Kateenkorva says:

Testing

E=mc²

[More accurately, Delta Energy = Delta mass x c^2. 8<) .mod]

121. Jurgen says:

This link is more specific about “acrylic polymers” being used by artists such as Andy Warhol, David Hockney, and Mark Rothko. This leaves me the task of finding out more about this. I do remember an early batch of acrylic paint by Talens, also called “polymers”, that they stopped producing. So I guess there is not just one kind of “acrylic”. The problem being, of course, that the producers keep their secrets.

122. This is bold text

123. Test for Italics

124. Sasha says:

Ontario government to scrap Green Energy Act

The Green Energy Act, which aimed to bolster the province’s renewable energy industry, will be scrapped in spring 2019. The act resulted in an increase in electricity costs and saw the province overpay for power it did not need.

Infrastructure Minister Monte McNaughton said repealing the law will ensure that municipalities regain planning authority over renewable projects, something that was removed under the act. Future renewable energy projects must first demonstrate need for the electricity they generate before being granted approval.

125. manalive says:

A hat can be quite palatable with seasoning:

126. Sasha says:
• Sasha says:
127. Sasha says:
• Sasha says:
128. Sasha says:
• Sasha says:
129. Kip Hansen says:
130. Joe Born says:

Trying a table:

$\begin{array}{lcccccc} &&&&&&\mathrm{Total}\\ \mathrm{Absorbed\,from:}&\mathrm{Surface}&\mathrm{L.Atm}&\mathrm{U.Atm}&\mathrm{Space}&&\mathrm{Absorbed}\\ &&&&&&\\ \mathrm{Absorbed\,by:}&&&&&\\ \mathrm{Surface}&0.0000&1.0500&0.1500&1.0000&||&2.2000\\ \mathrm{Lower\,Atmosphere}&1.6500&0.0000&0.4500&0.0000&||&2.1000\\ \mathrm{Upper\,Atmosphere}&0.4125&0.7875&0.0000&0.0000&||&1.2000\\ \mathrm{Space}&0.1375&0.2625&0.6000&0.0000&||&1.0000\\ &&&&&&\\ \mathrm{Total\,Emitted:}&2.2000&2.1000&1.2000&1.0000 \end{array}$

Done

• Okay, that’s definitely not <pre>
How’d you do that, Joe?

131. Having read this I believed it was very enlightening.
I appreciate you taking the time and energy to put this short article together.

I once again find myself spending a lot of time both reading and leaving comments.
But so what, it was still worth it!

132. William Ward says:

https://i.imgur.com/Tcr5CNo.png

Y-axis is temperature degrees C
X-axis is days

Chart shows (Tmax+Tmin)/2 error as compared to mean calculated using signal sampled above Nyquist rate.

Data from NOAA USCRN. Calculations are done by NOAA.
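The sampling effect being tested can be sketched with a synthetic day of data; this is an illustration of the Nyquist argument, not the NOAA calculation (the diurnal signal below is made up):

```python
import math

# Synthetic diurnal cycle, 288 samples/day (5-minute readings), made
# deliberately asymmetric so the daily max and min do not straddle the
# true mean. This is illustrative data, not USCRN measurements.
samples = [10 + 8 * math.sin(2 * math.pi * i / 288)
              + 4 * math.cos(4 * math.pi * i / 288) for i in range(288)]

true_mean = sum(samples) / len(samples)          # mean of all 288 samples
minmax_mean = (max(samples) + min(samples)) / 2  # traditional (Tmax+Tmin)/2

def subsampled_mean(series, keep_every):
    """Mean of the series keeping every keep_every-th sample."""
    kept = series[::keep_every]
    return sum(kept) / len(kept)

# Even 12 samples/day recovers the mean of this band-limited signal,
# while (Tmax+Tmin)/2 is biased by the asymmetry of the cycle.
print(round(true_mean, 2), round(subsampled_mean(samples, 24), 2),
      round(minmax_mean, 2))
# → 10.0 10.0 7.0
```

The pattern matches the table below: the mean is stable down to a few samples per day, while the min/max estimate sits apart from the rest.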

133. William Ward says:

This is the data from 11/11/2017.

Samples/day:    288   72   36   24   12    6    4    2  (Tmax+Tmin)/2
Tmean (Deg C): -3.3 -3.2 -3.4 -3.4 -3.8 -4.1 -4.0 -4.0      -4.7

Test

134. William Ward says:

This is the data from 11/11/2017.

Samples/day Tmean (C)
288 -3.3
72 -3.2
36 -3.4
24 -3.4
12 -3.8
6 -4.1
4 -4.0
2 -4.0
(Tmax+Tmin)/2 -4.7

Test

135. William Ward says:

Try again. Testing how to format a table.

Samples/day Tmean (C)
288 -3.3
72 -3.2
36 -3.4
24 -3.4
12 -3.8
6 -4.1
4 -4.0
2 -4.0
(Tmax+Tmin)/2 -4.7

End of table

136. William Ward says:

I did this once before – lost the recipe.

Samples/day Tmean (C)
288 -3.3
72 -3.2
36 -3.4
24 -3.4
12 -3.8
6 -4.1
4 -4.0
2 -4.0
(Tmax+Tmin)/2 -4.7

End of the table.

137. He Who Wants To Make Tables says:

Samples/day Tmean (C)
288 -3.3
72 -3.2
36 -3.4
24 -3.4
12 -3.8
6 -4.1
4 -4.0
2 -4.0
(Tmax+Tmin)/2 -4.7

[Good to see you using the Test page. Try the “pre” “/ pre” unformatted, column-like text style for tables. .mod]

138. Jaakko Kateenkorva says:
139. Jaakko Kateenkorva says:
140. 0 45 90 135 180 225 270 315 range
6 16.3 16.2 16.2 16.3 16.4 16.4 16.3 16.3 0.1
4 16.1 16.1 16.2 16.4 16.5 16.5 16.4 16.2 0.4
2 15.3 15.4 16.1 16.7 17.0 17.1 16.8 16.2 1.8

141. _____0___45___90__135__180__225__270__315_range
6 16.3 16.2 16.2 16.3 16.4 16.4 16.3 16.3 0.1
4 16.1 16.1 16.2 16.4 16.5 16.5 16.4 16.2 0.4
2 15.3 15.4 16.1 16.7 17.0 17.1 16.8 16.2 1.8

[USE “pre” and “/pre” (within html brackets) to get text in proper column alignment. .mod]

_____0___45___90__135__180__225__270__315_range
6 16.3 16.2 16.2 16.3 16.4 16.4 16.3 16.3   0.1
4 16.1 16.1 16.2 16.4 16.5 16.5 16.4 16.2   0.4
2 15.3 15.4 16.1 16.7 17.0 17.1 16.8 16.2   1.8


142. Jaakko Kateenkorva says:
143. BruceC says:
144. Dr. Strangelove says:
145. steve case says:

Thanks for the oven /freezer joke pointing out the problem with using averages. Another favorite quote in that regard:

Beware of averages. The average person has one breast and one testicle.
Dixie Lee Ray

146. Yirgach says:

Trying to post an image:
You have to use HTML code.

147. Scot says:

On the CRU web site page – https://crudata.uea.ac.uk/cru/data/temperature/crutem4/landstations.htm – under the heading:

Land Stations used by the Climatic Research Unit within CRUTEM4

there is a link to the station files (crutem4_asof020611_stns_used_hdr.dat)

In it you will find Apto Uto (Station No. 800890)

The file gives the locations and names of the stations used at some time (i.e. in the gridding that is used to produce CRUTEM4) during the period from 1850 to 2010. All these stations have sufficient data to calculate 30-year averages for 1961-90 as defined in Jones et al. (2012). In the file there are five pieces of information

John McLean said in the paper:

When constructing the CRUTEM4 dataset the CRU adopts a threshold for outliers of five standard deviations from the mean temperature and although calculating the long-term average temperatures from data over the 30-year period from 1961 to 1990 the standard deviations used for CRUTEM4 are calculated over a minimum of 15 years of data over the 50-year period from 1941 to 1990.
And

The analysis used in this section differs from the approach used to create the CRUTEM4 dataset but as noted in the previous chapter, these monthly mean temperatures were included when both the long-term average temperatures and standard deviations were calculated…

The charge that bad stations such as Apto Uto are not in use is invalid in this context because, though the exclusion of outliers is explicitly suggested, the inclusion of this station is implicitly noted, and listed, in the calculation of the means!

Perhaps “suggested” is the word we are all struggling with; I’m certainly finding it hard to see past the doublespeak of the CRU!


151. Larry says:

Steven Mosher – works with BEST

“No open data. no open code. no science.”
https://wattsupwiththat.com/2018/10/07/bombshell-audit-of-global-warming-data-finds-it-riddled-with-errors/#comment-2483888

“i check his apto uto station. its not used.”
https://wattsupwiththat.com/2018/10/07/bombshell-audit-of-global-warming-data-finds-it-riddled-with-errors/#comment-2483949

Here is a longer and more complex rebuttal:
https://wattsupwiththat.com/2018/10/07/bombshell-audit-of-global-warming-data-finds-it-riddled-with-errors/#comment-2483908

“Poor guy. 1 check and his Phd is toast. now some of you will pay for this report. But I wont because he failed the simple requirement of posting his data and code., And more importantly he points to data. THAT CRU DOESNT USE!! For fucks sake skeptics.

“CRU requires data in the period of 1950-1980. that is HOW the calculate an anomaly. and look. in 30 seconds I checked ONE one his claims. None of you checked. you spent money to get something that FIT YOUR WORLD VIEW. you could have checked. but no. gullible gullible gullible.”

Nick Stokes (retired, was a Principal Research Scientist with CSIRO)

“OK. This is no BOMBSHELL. These are errors in the raw data files as supplied by the sources named. The MO publishes these unaltered, as they should. But they perform quality control before using them. You can find such a file of data as used here. I can’t find a more recent one, but this will do. It shows, for example
1. Data from Apto Uto was not used after 1970. So the 1978 errors don’t appear.
2. Paltinis, Romania, isn’t on that list, but seems to have been a more recently added station.
3. I can’t find Golden Rock, either in older or current station listings.”

Has it ever been used?

“Well, that seems to be a question that John McLean, PhD, did not bother to investigate, nor his supervisor (nor any of his supporters here). But this 2011 post-QC data listing shows the station had its data truncated after 1970. And then, as Steven says, for use in a global anomaly calculation as in CRUTEM 4, the entire station failed to qualify because of lack of data in the anomaly base period. That is not exactly a QC decision, but doubly disqualifies it from HADCRUT 4.”


153. Sasha says:
154. steve case says:

While I’m waiting, I’ll try html for an image

155. Red94ViperRT10 says:

This is the time to be honest with ourselves… Just a few days ago the oh-so-capable Kip Hansen wrote about those curious anomalies, https://wattsupwiththat.com/2018/09/25/the-trick-of-anomalous-temperature-anomalies/ A very good post, and very true. Now, let’s drag forward what we learned from there: A thermometer marked on every degree can be read only to that marking, anything in between those markings is not a significant digit, I don’t care what the recorder writes down. If I’m using a Fahrenheit thermometer in Phoenix Arizona, and I read 113°F that has three significant digits, but I argue that’s spurious, because if I’m using a Centigrade thermometer, that same reading is 45°C with only two significant digits.
There do exist liquid-in-glass (LIG) thermometers marked in tenths of a degree, but they are useful only for a relatively tight range of readings, which atmospheric temperature is not. I haven’t checked into it, but I would guestimate that a LIG thermometer marked in tenths, with enough range to read all of the possible atmospheric temperatures at a given site would be several feet long, probably taller than the average site observer. So we can state right now that any temperature recorded from a LIG thermometer is only accurate to two significant digits.
What do I mean by Significant Digits (several webpages call them Significant Figures; same thing)? The first reference page that popped up gives The Rules,
• Rule #1 – Non-zero digits are always significant
• Rule #2 – Any zeros between two significant digits are significant
• Rule #3 – A final zero or trailing zeros in the decimal portion ONLY are significant.
In Kip’s example he used 72. That has two significant digits.
Significant digits through operations: “When quantities are being added or subtracted, the number of decimal places (not significant digits) in the answer should be the same as the least number of decimal places in any of the numbers being added or subtracted.” So when adding together two temperatures, each with two significant digits and none to the right of the decimal place, even when the sum is >100, you retain no significant digits to the right of the decimal place; the last significant digit remains the number just to the left of the decimal.
“In a calculation involving multiplication, division, trigonometric functions, etc., the number of significant digits in an answer should equal the least number of significant digits in any one of the numbers being multiplied, divided etc.” Secondly, “Exact numbers, such as the number of people in a room, have an infinite number of significant figures.” So now I want to do an average of a whole stack of temperatures: add together all the temperatures, retaining the significant digit at the first number to the left of the decimal place, then divide by the exact number of temperatures, and the result will still have at most 2 significant digits, unless −10° < T < 10° (this is why I would prefer to record these things in Kelvin or Rankine: I would get the same number of significant digits for all my atmospheric temperature readings).
You know what this does to an anomaly, right? You can see this coming? Take your 30 year average baseline, with the last significant digit just to the left of the decimal point, and read the current year with the last significant digit just to the left of the decimal point, and subtract one from the other, what do you get? You get an integer. A number with no digits at all to the right of the decimal place.
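The resolution half of this argument can be checked with a quick sketch (the 113 °F reading is the example from the opening paragraph):

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

reading_f = 113  # read to the nearest whole degree Fahrenheit

# A whole-degree Fahrenheit reading is uncertain by +/-0.5 degF, which
# maps to under +/-0.3 degC, so quoting a Celsius anomaly to hundredths
# of a degree claims more precision than the instrument recorded.
print(round(f_to_c(reading_f)))                               # 45
print(round(f_to_c(reading_f + 0.5) - f_to_c(reading_f), 2))  # 0.28
```

An anomaly of 0.14 °C is half the size of that single-reading resolution, which is the point being made above.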
And yet, after all that, the opportunists want to chortle about THIS: https://wattsupwiththat.com/2018/10/03/uah-globally-the-coolest-september-in-the-last-10-years/ Well, let’s take a random sample (OK, this is the first thing that popped up on a Duck-Duck-Go search) we have a table of Coldest/Warmest Septembers for three (for our purposes random) locations which will make a good example.

Top 20 Coldest/Warmest Septembers in Southeast Lower Michigan

Rank   Detroit Area*           Flint Bishop**          Saginaw Area***
       Coldest     Warmest     Coldest     Warmest     Coldest     Warmest
       Temp Year   Temp Year   Temp Year   Temp Year   Temp Year   Temp Year
1 57.4 1918 72.2 1881 55.4 1924 69.2 1933 54.9 1918 69.0 1931
2 58.6 1879 69.8 1931 56.3 1993 68.1 1931 56.4 1924 68.0 1933
3 59.1 1975 69.5 1921 57.3 1975 68.0 2015 56.7 1993 67.6 2015
4 59.1 1876 69.3 2015 57.4 1966 66.6 2002 56.8 1949 66.8 1921
5 59.2 1883 68.9 2018 57.4 1949 66.6 1934 56.9 1956 66.2 1961
6 59.4 1924 68.9 2002 57.5 1956 66.2 1921 57.0 1943 66.2 1927
7 59.6 1896 68.8 1961 57.7 1981 66.1 1961 57.3 1975 65.9 2005
8 59.6 1974 68.6 1908 57.8 1962 65.9 1927 57.4 1981 65.9 1998
9 59.6 1949 68.5 1933 58.0 1967 65.6 1939 58.5 1991 65.6 2017
10 59.6 1890 68.4 2005 58.5 1995 65.4 1978 58.5 1962 65.6 2016
11 59.7 1899 68.4 1906 58.6 1928 65.3 1998 58.7 1935 65.5 1971
12 59.9 1875 68.2 2016 58.7 2006 65.1 2005 58.8 1917 65.3 1930
13 60.0 1888 68.0 1998 58.8 1963 65.1 1930 58.9 1951 65.2 1936
14 60.1 1887 67.9 1891 58.9 1974 65.1 1925 58.9 1950 65.1 2018
15 60.3 1967 67.9 1884 59.1 1957 65.0 1936 59.0 1938 64.7 1948
16 60.4 1956 67.5 1978 59.3 2001 64.8 2016 59.0 1928 64.6 2004
17 60.7 1928 67.5 1941 59.3 1937 64.8 2018 59.1 2006 64.5 2002
18 60.8 1981 67.5 1898 59.5 1943 64.8 1983 59.3 1984 64.4 1941
19 61.0 1993 67.4 2004 59.6 1991 64.8 1971 59.3 2000 64.3 1968
20 61.2 1984 67.2 1927 59.6 1989 64.5 1941 59.3 1992 64.0 2007
* Detroit Area temperature records date back to January 1874.

** Flint Bishop temperature records date back to January 1921.

*** Saginaw Area temperature records date back to January 1912.

I have copied/pasted the entire table because it was easiest that way. I can make my point, and save myself quite a few mouse-clicks, by taking just the Detroit Area readings and using the Excel nested functions ROUND(CONVERT()) in one swell foop to show the temperature data, converted to °C, with the appropriate number of significant digits.

Detroit Area*
Coldest Warmest
Temp (°C) Year Temp (°C) Year
14 1918 22 1881
15 1879 21 1931
15 1975 21 1921
15 1876 21 2015
15 1883 21 2018
15 1924 21 2002
15 1896 20 1961
15 1974 20 1908
15 1949 20 1933
15 1890 20 2005
15 1899 20 1906
16 1875 20 2016
16 1888 20 1998
16 1887 20 1891
16 1967 20 1884
16 1956 20 1978
16 1928 20 1941
16 1981 20 1898
16 1993 20 2004
16 1984 20 1927
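That ROUND(CONVERT()) step is easy to check outside Excel; here is a short Python equivalent (the function name is mine, not Excel's), applied to the first Detroit row of the original table:

```python
def fahrenheit_to_celsius(temp_f):
    # Same arithmetic as Excel's CONVERT(temp_f, "F", "C")
    return (temp_f - 32.0) * 5.0 / 9.0

# First Detroit row: coldest 57.4 °F (1918), warmest 72.2 °F (1881).
# Rounding to whole degrees keeps only the digits the record supports.
print(round(fahrenheit_to_celsius(57.4)))  # 14
print(round(fahrenheit_to_celsius(72.2)))  # 22
```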

The sub-heading on the linked article is “UAH Global Temperature Update for September, 2018: +0.14 deg. C”, without giving any absolute temperature. But if it were talking about a temperature reading taken in the Detroit Area, we could guess that anomaly is relative to something in the vicinity of 15°C, and then that 0.14°C disappears in the noise, indistinguishable from 10 others that also show up as 15°C when shown with the proper number of significant digits. In fact, given the significant digits discussed above, the temperature anomaly, calculated to the best accuracy available from the instrumentation, becomes 0. ZERO. Zilch. Nada. Nothing. Nothing to write home about. Nothing to make a post on ANY blog about!

Thus, you can see why I have sprained eyeball muscles: every time the “warmunists” call a press conference to declare Hottest Year EVAH!™, my eyes do a spontaneous eyeroll, so hard I believe I have incurred permanent damage. Their Hottest Year EVAH!™ is indistinguishable from at least 10 others just like it, as far as what the thermometers can really measure and what that database can really tell us.

156. Sasha says:

You might not be aware that Google has dumped its “Don’t be evil” slogan.
If you watch this video, you will understand why….
Google is not a search engine.
Google is not even an advertising company with a search box attached.
Google is watching every move you make in order to manipulate your view of the world and your actions within it.
Google’s search returns are serving an agenda, and you are unaware of what that agenda is.
Google hates competition, so its algorithm actively returns searches designed to attack its rivals, serve its commercial interests and further its political agenda — and all this while spying on its users.
Don’t believe me? Check this out:

157. Dr. Strangelove says:
158. Sheri says:

Comment test.

159. Sheri says:

Test comment

160. Sheri says:

test.

161. Sheri says:

So I can’t comment in Safari now. Okay, I didn’t need this site anyway.

162. Jim Ross says:
163. Jim Ross says:
164. steve case says:

Guess I have to wait (-:

165. steve case says:

OK only a minute but the image didn’t show.

166. Kip Hansen says:
167. Nitpicker says:

OHC
This Taillandier 2018 paper is about ”the metrological verification of a biogeochemical observing system based on a fleet of BGC-Argo floats”, but the authors also report temperature data gathered by the Argo floats and compare it with a ship-board temperature sensor (CTD) lowered to depth by a cable. “[L]ess than 1 year after the cruise” the ship-board sensor was checked and had drifted “0.00008 °C, which is 1 order of magnitude lower than the theoretical stability of the probe.” The standard deviation of the Argo ‘misfits’ is ≈0.02 °C (Table 2). The authors “ascribe misfits as instrumental calibration shifts rather than natural variability.”

Taillandier 2018: 2.2.1 ”The BGC-Argo floats were equipped with factory-calibrated CTD modules (SBE41CPs).”

2.2.2 ”During stations, seawater properties were sampled at 24 Hz with the [ship-board] CTD unit and transmitted on board through an electro-mechanical sea cable and slip-ring-equipped winch.”

2.2.3 ”There were no independent samples (such as salinity bottles) or double probes in the [ship-board] CTD unit that would have allowed the assessment of the temperature and conductivity sensors’ stability. Thus, the quality of [the ship-board] CTD data relies on frequent factory calibrations operated on the sensors: a pre-cruise bath was performed in April 2015 (less than 1 month before the cruise), and a post-cruise bath performed in March 2016 (less than 1 year after the cruise). The static drift of the temperature sensor [of the ship-board CTD] between baths was 0.00008 °C, which is 1 order of magnitude lower than the theoretical stability of the probe.”

2.2.3 ”Given the reproducibility of the processing method, the uncertainties of measurement provided by the [ship-board] CTD unit should have stayed within the accuracy of the sensors, which is 0.001 °C and 0.003 mS/cm out of lowered dynamic accuracy cases (such as in sharp temperature gradients).”

2.2.3 ”The data collection of temperature and practical salinity profiles at every station is thus used as reference to assess the two other sensing systems: the TSG [A SeaCAT thermosalinograph (SBE21, serial no. 3146)] and the BGC-Argo floats. Systematic comparisons between the profiles from the CTD unit and the neighboring data were made at every cast.”

2.2.3 ”Considering TSG data set, the median value of temperature and practical salinity over a time window of 1 h around the profile date was extracted from the 5 min resolution time series. The comparison with the surface value from profiles showed a spread distribution of misfits for temperature, with an average 0.009 °C, and a narrower distribution of misfits for practical salinity with an average of 0.007. Given the nominal accuracy expected by the TSG system and in absence of systematic marked shift in the comparison, no post-cruise adjustment was performed. The uncertainty of measurement in the TSG data set should have stayed under the 0.01 °C in temperature, and 0.01 in practical salinity.”

2.2.3 ”Considering BGC-Argo floats, the comparison with [ship-board] CTD profiles was performed over the 750–1000 dbar layer, where water mass characteristics remained stable enough to ascribe misfits as instrumental calibration shifts rather than natural variability. The misfits between temperature measurements and practical salinity measurements at geopotential horizons were computed and median values provided for every BGC-Argo float. The median offsets are reported in Table 2. Their amplitudes remained within 0.01 °C in temperature or 0.01 in practical salinity except in two cases. A large temperature offset occurred for WMO 6901769.”

The Oxygen concentration of seawater had to be calculated. 2.3.2 ”To process the results, the temperature measured from the [ship-board] CTD unit was preferred to the built-in temperature of the sensor.”

Taillandier, Vincent, et al. 2018 “Hydrography and biogeochemistry dedicated to the Mediterranean BGC-Argo network during a cruise with RV Tethys 2 in May 2015.” Earth System Science Data
https://www.earth-syst-sci-data.net/10/627/2018/essd-10-627-2018.pdf

168. Nitpicker says:


169. Sheri says:

• steve case says:

Someone really needs to step up to the plate and tell us chickens how to post images.

170. Sheri says:

Still cannot post….

171. Sheri says:

Testing.

172. Scott W Bennett says:

But fortunately, anomalies are much more homogeneous. If it is warmer than usual, it tends to be warm high and low. – Nick Stokes

This is the heart of the problem of attempting to calculate global temperature.

Essentially two important differences are conflated and then glossed over.

Spatial sampling is a three-dimensional problem; while anomalies may deal – nominally (per se!) – with altitude issues, they don’t deal with directionality, or more accurately the symmetry of the two-dimensional temperature distribution.

It is assumed that because anomalies are used, the temporal correlation between any two points will have the same spatial scale in any direction. However, spatial anisotropy in the coherence of climate variations has been well documented, and it is an established fact that the spatial scale of climate variables varies geographically and depends on the choice of directions (Chen, D. et al. 2016).

The point I am making here is completely uncontroversial and well known in the literature.

What Nick and all climate data apologists are glossing over is that despite the ubiquity of spatial averaging, its application – the way it is applied, particularly – is inappropriate because it assumes spatial coherence. But climate data has long been known to be incoherent across changing topography (Hendrick & Comer 1970).

In layman’s terms (although I am a layman!), station records are aggregated over a grid box assuming that the fall-off, or change in correlation, between different stations is constant. So conventionally, you would imagine a point on the map for your station and a circle (or square) area around it overlapping other stations or the grid box border. However, in reality this “areal” area is actually much more likely to be elongated, forming an ellipse or rectangle stretched in one direction – commonly and topographically north/south in Australia.

But it is actually worse than this in reality because unless the landscape is completely flat, coherence will not be uniform. And that is an understatement because to calculate correlation decay correctly, spatial variability actually has to be mapped in and from the real world.

Unfortunately, directionality would be a very useful factor in the accurate determination of UHI effects, due to the dominant north/south sprawl of urban settlement. Coincidentally, all weather moves from west to east and associated fronts with their troughs and ridges typically align roughly north/south.

The other consequence of areal averaging is that it is a case of the classical ecological fallacy, in that conclusions about individual sites are incorrectly assumed to have the same properties as the average of a group of sites. Simpson’s paradox – confusion between the group average and total average – is one of the four most common statistical ecological fallacies. If you have the patience, it is well worth making your own tiny dataset on paper and working through this paradox as it is mind blowing to apprehend!
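For anyone who would rather not work it on paper: here is a tiny made-up dataset (the counts are hypothetical, nothing to do with climate records) that reproduces Simpson's paradox in a few lines of Python:

```python
# (successes, trials) for two sites, split into two subgroups.
site_a = {"low": (1, 4), "high": (60, 100)}
site_b = {"low": (30, 100), "high": (3, 4)}

def rate(successes, trials):
    return successes / trials

# Site B beats site A inside EVERY subgroup...
for g in ("low", "high"):
    print(g, rate(*site_a[g]) < rate(*site_b[g]))  # True, True

# ...yet site A wins on the pooled totals.
a_pooled = rate(sum(s for s, _ in site_a.values()),
                sum(t for _, t in site_a.values()))
b_pooled = rate(sum(s for s, _ in site_b.values()),
                sum(t for _, t in site_b.values()))
print(a_pooled > b_pooled)  # True -- the aggregate reverses the subgroup result
```

The reversal happens because the group sizes are lopsided: pooling weights each site toward the subgroup where it has the most trials, which is exactly the group-average vs. total-average confusion described above.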

What I believe this all means is that the temperature record is dominated by smearing generally, and a latitudinal smearing, i.e. east/west, particularly. And this means, for Australia and probably the US as well, that the UHI effect of north/south coastal urban sprawl is tainting the record.

Either way, if real changes in climate are actually happening locally, then this local effect will be smeared into a global trend – by the current practice – despite or in lieu of any real global effect.

So, yes I do think the globe has warmed since the LIA or at least the last glaciation but I don’t believe it can be detected in any of the global climate data products.

Chen, D., et al., 2016: Satellite measurements reveal strong anisotropy in spatial coherence of climate variations over the Tibet Plateau. Sci. Rep., 6, 30304, doi:10.1038/srep30304.

Director, H., and L. Bornn, 2015: Connecting point-level and gridded moments in the analysis of climate data. J. Climate, 28, 3496–3510, doi:10.1175/JCLI-D-14-00571.1.

Hendrick, R. L., and G. H. Comer, 1970: Space variations of precipitation and implications for raingauge network designing. J. Hydrol., 10, 151–163.

Jones, P. D., T. J. Osborn, and K. R. Briffa, 1997a: Estimating sampling errors in large-scale temperature averages. J. Climate, 10, 2548–2568.

Robinson, W., 1950: Ecological correlations and the behaviour of individuals. Amer. Sociol. Rev., 15, 351–357, doi:10.2307/2087176.

173. Scott W Bennett says:


174. Lars P. says:
175. Michael Webb says:
176. Michael Webb says:

[img]https://i.imgur.com/4ggK2O9.jpg[/img]

177. Michael Webb says:
178. Michael Webb says:
179. Bryan A says:

(ₘ&#8342)

180. Bryan A says:

42 ₄₂

• charles the moderator says:

cool

181. Bryan A says:

₄₂

182. Bryan A says:

𔔪

183. Joe H says:

\sum_i=0^10 i^2

• Joe H says:

No joy!! didn’t work

184. Joe H says:

$latex \sum_i=0^10 i^2

185. Joe H says:

$latex \sum_x=0^10 x^2

186. Joe H says:

$\sum_x=1^10 x^2$

187. Joe H says:

$\sum_{i=0}^{10} i^2$

188. TonyL says:

Some test images

or

189. Beta Blocker says:

TRIAL TEST OF WORDPRESS LIMITATIONS ON THE TOTAL NUMBER OF CHARACTERS ALLOWED IN A COMMENT

—- This is the first draft of a comment on the UCS’ decision to embrace nuclear power —–
——– It will be posted if all characters still remain in the text without truncation. ———

Here in the US, including the options of nuclear, wind, solar, and hydro in the power generation mix is strictly a public policy decision. Left to its own devices, the power market in the US would swing decisively towards gas-fired generation given that among all the choices available for the next several decades, gas-fired generation has the least technical, environmental, and financial risks. It also has the highest profit making potential for private investors.

More than a decade ago, in about 2006, when the initial cost estimates for pursuing a 21st century nuclear renaissance were being done, the 6 billion dollar estimate for a pair of new technology AP1000’s was thought by many to be too low. Twenty-five years had passed without construction of a clean-sheet reactor design being initiated, and the US nuclear industrial base was in a deeply withered state. It was recognized that the steep learning curve for doing nuclear construction in the US had to be passed through for a second time, and that the cost estimates for initiating new projects had to include the costs of rebuilding the nuclear industrial base and of passing through the nuclear construction learning curve yet another time.

More realistic estimates for two AP1000’s were developed in 2009 and later in 2012 — 9 billion dollars and 12 billion dollars respectively. It cannot be emphasized enough here that the estimate of 12 billion dollars when onsite construction began in 2012 included the expected costs of full compliance with NRC regulations and of passing through the nuclear learning curve for a second time. These estimates also assumed that all the difficult lessons learned from the nuclear projects of the 1980’s would be diligently applied to the latest projects as they were being initiated and while they were in progress.

How did 2012’s estimate of 12 billion dollars for two AP1000’s grow to 2017’s estimate of 25 billion dollars in just five years?

The answer here is that all the lessons learned from the 1980’s were ignored. Thirty years ago, a raft of studies and reports were published which analyzed the cost growth problems and the severe quality assurance issues the nuclear construction industry was then experiencing, and made a series of recommendations as to how to solve these problems. Those studies had a number of common threads:

Complex, First of a Kind Projects: Any large project that is complicated, involves new and/or high technology, has several phases, involves a diversity of technical specialties, involves a number of organizational interfaces, and has significant cost and schedule pressures—any project which has these characteristics is a prime candidate for experiencing significant quality assurance issues, cost control issues, and schedule growth problems.

Strength of the Industrial Base: Nuclear power requires competent expertise in every facet of design, construction, testing, and operations. This kind of competent expertise existed in the early 1980’s but was not being effectively utilized in many of the power reactor construction projects, the ones that experienced the most serious cost and schedule growth issues.

A Changing Technical Environment: The large reactor projects, the 1300 megawatt plants, were being built for the first time. They were being built without a prototype, and they were substantially different from previous designs. Those big plants had many new and significantly revised systems inside them, systems that had to be designed, constructed, tested, and subsequently operated.

A Changing Regulatory Environment: In the late 1970’s and early 1980’s, there was a continual increase in the regulatory requirements being placed on power reactors. The Three Mile Island accident, the Brown’s Ferry fire, the Calvert Cliffs environmental decision, all of those events required the power utilities to change the way they were dealing with their projects in the middle of the game. Some power utilities were successful in making the necessary changes, others were not.

Project Management Effectiveness: Those nuclear projects which had a strong management team and strong management control systems at all levels of the project organization generally succeeded in delivering their projects on cost and on schedule. Those that didn’t were generally incapable of dealing with the changing technical and regulatory environment and became paralyzed in the face of the many QA issues, work productivity issues, and cost control issues they were experiencing.

Overconfidence Based on Past Project Success: Many of the power utilities which had a record of past success in building non-nuclear projects, and which were constructing nuclear plants for the first time, did not recognize that nuclear is different. These utilities did not take their regulatory commitments seriously, and they did not adequately assess whether the management systems and the project methods they had been using successfully for years were up to the task of managing a nuclear project.

Reliance on Contractor Expertise: The projects which succeeded had substantial nuclear expertise inside the power utility’s own shop. Those utilities who were successful in building nuclear plants were knowledgeable customers for the nuclear construction services they were buying. They paid close and constant attention to the work that was being done on the construction site, in the subcontractor fabrication shops, and in the contractor’s technical support organization. Emerging issues and problems were quickly and proactively identified, and quick action was taken to resolve those problems.

Management Control Systems: The nuclear projects which failed did not have effective management control systems for contractor and subcontractor design interface control; for configuration control and management of design documentation and associated systems and components; and for proper and up-to-date maintenance of contractor and inter-contractor cost and schedule progress information. Inadequate management control systems prevented an accurate assessment of where the project actually stood, and in many cases were themselves an important factor in producing substandard technical work.

Cost & Schedule Control Systems: For those projects which lacked a properly robust cost & schedule control system, many activities listed on their project schedules were seriously mis-estimated for time, cost, scope, and complexity. Other project activities covering significant portions of the total work scope were missing altogether, making it impossible to accurately assess where the project’s cost and schedule performance currently stood, and where it was headed in the future.

Quality Assurance: For those nuclear projects which lacked the necessary management commitment to meeting the NRC’s quality assurance expectations, the added cost of meeting new and existing regulatory requirements was multiplied several times over as QA deficiencies were discovered and as significant rework of safety-critical systems and components became necessary.

Construction Productivity & Progress: For those nuclear projects which lacked a strong management team; and which lacked effective project control systems and a strong management commitment to a ‘do-it-right the first time’ QA philosophy, the combined impacts of these deficiencies had severe impacts on worker productivity at the plant site, on supplier quality and productivity at offsite vendor facilities, and on the overall forward progress of the entire project taken as a whole.

Project Financing and Completion Schedule: As a result of these emerging QA and site productivity problems, many of the power utilities were forced to extend their construction schedules and to revise their cost estimates upward. Finding the additional money and the necessary project resources to complete these projects proved extremely difficult in the face of competition from other corporate spending priorities and from other revenue consuming activities.

A Change in Strategy by the Anti-nuclear Activists: In the late 1970’s and early 1980’s, the anti-nuclear activists were focusing their arguments on basic issues of nuclear safety. They got nowhere with those arguments. Then they changed their strategic focus and began challenging the nuclear projects on the basis of quality assurance issues, i.e., that many nuclear construction projects were not living up to the quality assurance commitments they had made to the public in their NRC license applications.

Regulatory Oversight Effectiveness: In the early 1980’s, the NRC was slow to react to emerging problems in the nuclear construction industry. In that period, the NRC was focusing its oversight efforts on the very last phases of the construction process when the plants were going for their operating licenses. Relatively little time and effort was being devoted to the earlier phases of these projects, when emerging QA problems and deficiencies were most easily identified and fixed. Quality assurance deficiencies that had been present for years were left unaddressed until the very last phases of the project, and so were much more difficult, time consuming, and expensive to resolve.

Working Relationships with Regulators: The successful nuclear projects from the 1970’s and 1980’s, the ones that stayed on cost and on schedule, did not view the NRC as an adversary. The successful projects viewed the NRC as a partner and a technical resource in determining how best to keep their project on track in the face of an increasingly more complex and demanding project environment. On the other hand, for those projects which had significant deficiencies in their QA programs, for those that did not take their QA commitments seriously, the anti-nuclear activists introduced those deficiencies into the NRC licensing process and were often successful in delaying and sometimes even killing a poorly managed nuclear project.

If it’s done with nuclear, it must be done with exceptional dedication to doing a professional job in all phases of project execution from beginning to end.

Once again, it cannot be emphasized enough here that the estimate of 12 billion dollars for two AP1000’s when onsite construction at VC Summer and at Vogtle 3 & 4 began in 2012 included the expected costs of full compliance with NRC regulations and of passing through the nuclear learning curve for a second time. These estimates also assumed that all the difficult lessons learned from the nuclear projects of the 1980’s, as I’ve described them above, would be diligently applied to the latest projects as they were being initiated and while they were in progress.

For those of us who went through the wrenching experiences of the 1980’s in learning how to do nuclear construction right the first time, what we’ve seen with VC Summer and Vogtle 3 & 4 has been deja vu all over again. The first indications of serious trouble came in 2011 when the power utilities chose contractor teams that did not have the depth of talent and experience needed to handle nuclear projects of this level of complexity and with this level of project risk. That the estimated cost eventually grew to 25 billion dollars in 2017 should be no surprise.

The project owners and managers ignored the hard lessons of the 1980’s. They did not do a professional job in managing their nuclear projects; and they did not meet their commitments to the public as these commitments are outlined in their regulatory permit applications. Just as happened in the 1980’s, the anti-nuclear activists and the government regulatory agencies are now holding these owners and managers to account for failures that were completely avoidable if sound management practices had been followed.

190. Joe H says:

test test

191. Joe H says:

test test

192. Joe h says:

yellow text.

193. I went into reading this with my hackles up. Then I calmed down and said to myself, “Shelly, you have to be open to new information and maybe they know something you don’t.” So I carefully and calmly read it in between deep yoga breaths.

This caught my attention:

“early warnings could be issued that include information on what people can do to protect themselves and to protect crops and ecosystems,” Ebi said.

We already do this every single day. It’s called the Weather Channel.

I agree heartily with Zigmaster :

In the scheme of climate cycles 1980- 2016 is not a long period. The last two years may not have been on trend and certainly in the 1930s and the 1890s there were extreme heat conditions which appear to have been worse than the period they looked at. Typical cherry picking by warmist extremists.

And Samuel was pretty coherent

What the above means is that Sheridan and his co-author simply used the “temperature data” they obtained from NOAA’s National Climatic Data Center (NCDC), …….. and everyone knows that just the “adjustments” introduced by NOAA proved that “every year is hotter than the previous year”.

But Richard M allowed me to stop with the deep breathing and go for the deep belly laughs:

It is time to start a climate comedy channel.

You know, my dog Moxie helps me predict the weather–or Climate–ah, well warming. If it’s warming she runs all over the yard and I have to holler for her to come back; if it’s cooling, she pees and runs back inside. She’s pretty good at it–

http://www.day-by-day.org/weatherpredictor.jpg

Although WordPress isn’t displaying images, it’s worth clicking to see Moxie predicting the weather for me.

194. Robert says:

This comment didn’t get approved:

I just wish to point out that the following:

π(U) ≥ 0.9: sea level rise up to 0.3 m; corroborated possibilities
0.5 > π(U) > 0.9: sea level rise exceeding 0.3 m and up to 0.63 m; verified possibilities contingent on DT, based on IPCC AR5 likely range (but excluding RCP8.5).
0.5 ≥ π(U) > 0.1: sea level rise exceeding 0.63 m and up to 1.6 m; unverified possibilities
0.1 ≥ π(U) > 0: sea level rise between 1.6 and 2.5 m; borderline impossible
π(U) = 0: sea level rise exceeding 2.5 m; impossible based upon background knowledge
π(U) = 0: negative values of sea level change; impossible based on background knowledge

Is mathematically incorrect. It should have been written:

π(U) ≥0.9: sea level rise up to 0.3 m; corroborated possibilities
0.5 < π(U) < 0.9: sea level rise exceeding 0.3 m and up to 0.63 m; verified possibilities contingent on DT, based on IPCC AR5 likely range (but excluding RCP8.5).
0.1 < π(U) ≤ 0.5: sea level rise exceeding 0.63 m and up to 1.6 m; unverified possibilities
0 < π(U) ≤ 0.1: sea level rise between 1.6 and 2.5 m; borderline impossible
π(U) = 0: sea level rise exceeding 2.5 m; impossible based upon background knowledge
π(U) = 0: negative values of sea level change; impossible based on background knowledge

Note the use of LESS THAN signs. This (e.g. 0.1 < π(U) ≤ 0.5) is verbalized as “0.1 is less than π(U), and π(U) is less than or equal to 0.5”, meaning π(U) is strictly greater than 0.1 and at most 0.5 (0.1 excluded, 0.5 included). Mathematicians might not be such sticklers, but we computer scientists are. Many an algorithm has failed for want of proper greater-than and less-than signs.
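A quick Python sketch of how those boundaries bin cleanly when each one belongs to exactly one interval (the function name and category strings are my own illustrative labels, not from the paper):

```python
# Illustrative sketch: map a plausibility value pi(U) to the categories above,
# using half-open intervals so every boundary value falls in exactly one bin.
def classify(pi: float) -> str:
    """Classify pi(U) in [0, 1] per the corrected bounds."""
    if not 0.0 <= pi <= 1.0:
        raise ValueError("pi(U) must lie in [0, 1]")
    if pi == 0.0:
        return "impossible"
    if pi <= 0.1:       # 0 < pi(U) <= 0.1
        return "borderline impossible"
    if pi <= 0.5:       # 0.1 < pi(U) <= 0.5
        return "unverified possibilities"
    if pi < 0.9:        # 0.5 < pi(U) < 0.9
        return "verified possibilities"
    return "corroborated possibilities"  # pi(U) >= 0.9
```

With proper strict/non-strict bounds, the boundary values 0.1 and 0.5 land in the lower bins and 0.9 in the top bin, with no gaps and no double-counting.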

Just saying
GoatGuy

195. D. J. Hawkins says:

ΔHf

• D. J. Hawkins says:

ΔHf

• D. J. Hawkins says:

ΔH┐ test again

196. D. J. Hawkins says:

ΔHf test

197. J.H. says:

Test…….

198. Scott W Bennett says:

Test 1

I’m not a climate scientist but I can read. So why does my climatology textbook* take the exact opposite position to that of Nick Stokes and Steven Mosher above? They both have this whole tail-wagging-the-dog thing going, with their odd notion that a global statistic derived from local measurements is somehow forcing those same measurements!

Here are its conclusions, the concluding paragraph in fact [emphasis added]:

199. Scott W Bennett says:

test 2

Statistical methods are used to summarize patterns within data; however, the most useful summary is often not a temporal or spatial average.

Climate variability, in the form of temporal fluctuations and spatial variations, is often more important than changes in average values.

Much of the global warming debate, for instance, focuses on changes in global average air temperature anomalies; however, there is always important interannual variability, not necessarily systematic change, in air temperature that has important implications for applied climatological research (Katz and Brown, 1992).

In addition, global and hemispheric averages disregard the spatial distribution of climatic changes and variability. There are often years with very similar global average air temperature or precipitation; however the spatial distributions of these variables (and their climatic impacts) can be vastly different.

When using statistical analysis in applied climatological research, therefore, one must consider not only the ’average’ conditions at a given location, but also the variability of important climatological variables over a wide range of temporal and spatial scales. – Scott M. Robeson, Statistical Considerations

200. Scott W Bennett says:

test 3

Hang on! So climate change could cause volcanic eruptions that could cause climate change? That could be a real problem if volcanoes could warm the earth but they could only do that in the past, now all they could do is cool the atmosphere; apparently!

201. jorgekafkazar says:

Test
It’s DendroDildoclimatology.

202. Mark,

That’s an important point! I kept this post focused narrowly on climate science, but the broad loss of confidence in America’s institutions is an important factor. And loss of confidence in government officials is the core of this. To see why, read The Big List of Lies by Government Officials.

Also see Gallup’s annual Confidence in Institutions surveys. Terrifying data to anyone interested in America’s future:

https://news.gallup.com/poll/1597/confidence-institutions.aspx

203. J.H. says:

Test..

204. J.H. says:

Testing again… comments don’t seem to be appearing since I cleared my cookies…

205. Thomas Edwardson says:
206. Thomas Edwardson says:

blah blah

blah

207. Thomas Edwardson says:
208. michael hart says:

test

209. michael hart says:

test test

210. jkneps73 says:

211. jkneps73 says:

Cute! You win!

212. Willis Eschenbach says:

this is text

$\frac{\sigma}{\sqrt{N}}$

more text

213. Andy May says:

Another test

214. Andy May says:

Another test with editor.

215. Andy May says:

Test using the same command, but not logged in:

• Sasha says:

So if you want to post a picture, you have to be logged in.

• Sasha says:

This is the one and only picture to make it onto this page.
Well done!
How did you do it?

216. S W Bennett says:

[CO₃²⁻], and thus Ω, is most likely incorrect.

Rather, as outlined above, it is most likely the decrease in seawater pH and associated problems of pH homeostasis within organisms that governs changes in calcification rates under OA conditions.

217. S W Bennett says:

“They can do. But they have to work at it. And if they are to retain CaCO₃ in a low carbonate solution, they have to work harder.”

218. Sasha says:

The Global Warming Policy Forum are inviting you to take part in a competition, with a chance to win some excellent prizes.

Tell them about what you think was the tallest green tale of 2018, and explain why it was so daft.

Nominations together with rebuttals should be emailed to harry.wilkinson@thegwpf.com
Prize: Two GWPF books (Group Think and Population Bombed) plus a bottle of House of Lords whisky.
The GWPF team will decide the winner of the competition early in the new year.
Good luck, and Merry Christmas!

219. Janice Moore says:

If I write something innocuous, will it be published on WUWT? IOW, is it me personally that is blocked?

[Printed and promptly published. .mod]

220. Janice Moore says:
221. Janice Moore says:

Great. Not only can I not post videos, but I can’t post photos on WUWT anymore.

222. “If you can chuckle at it, you can stay with it.”

– Erma Bombeck

Norman Cousins is frequently described as the man who laughed himself back to health.
According to his autobiography, Norman Cousins, a well-known political journalist, writer, professor, and
world peace advocate, was diagnosed with ankylosing spondylitis, a painful spinal affliction. He put himself
on large doses of vitamin C and humor, which involved watching a whole lot
of Marx Brothers movies. He says, “I made the joyous discovery that ten minutes of genuine belly laughter had an anesthetic effect and would give me at least two hours of pain-free sleep. When the pain-killing effect of the laughter wore off, we would switch on the motion picture projector again, and not infrequently, it would lead to another pain-free interval.”

We all know how good it feels to laugh.
Have you ever “laughed ’til it hurts”? Well, maybe
that is a sign that those laughing muscles are not used often enough.
Whenever possible and appropriate, laugh. Do not laugh at the expense of
someone else’s feelings. A healthy laugh calls for a healthy
frame of mind. A hearty laugh should
embrace the people around you, not alienate them.

I love to laugh. Whenever I’m feeling down, I just start smiling.
There is no way you can feel bad, sad, or depressed if you force
yourself to smile and laugh. Try it! Yeah, right now.
Doesn’t that feel good? Drs. Gael Crystal and Patrick Flanagan, authors of the article entitled Laughter: Still the Best Medicine
(1995), say, “Laughter is a form of internal jogging that exercises the body and stimulates the release of beneficial brain neurotransmitters and hormones. Positive outlook and laughter are actually good for our health!”

Try to see the humor in the everyday predicaments you may find yourself in. Do not be overly sensitive to what somebody says or to
another person’s point of view.

223. Sasha says:

This site needs updating to WordPress 5 and Gutenberg 5.0.2.
Then we will be able to properly edit our posts and ADD LINKED IMAGES.

224. Joe Born says:

“pre” test:

The following radiation quantities are consistent with those assumptions but show that the surface emits 2.2 W/m^2 for every 1 W/m^2 it absorbs from the sun. And only that 1 W/m^2 escapes back to space. Yet the emissions equal the absorptions: no energy is created or destroyed.

Total
Absorbed from: Surface L.Atm U.Atm Space Absorbed

Absorbed by:
Surface 0.0000 1.0500 0.1500 1.0000 || 2.2000
Lower Atmosphere 1.6500 0.0000 0.4500 0.0000 || 2.1000
Upper Atmosphere 0.4125 0.7875 0.0000 0.0000 || 1.2000
Space 0.1375 0.2625 0.6000 0.0000 || 1.0000
————————————————
Total Emitted: 2.2000 2.1000 1.2000 1.0000

225. Joe Born says:

Each atmosphere layer in this (no-convection, no-conduction, lumped-parameter) hypothetical absorbs ¾ of the radiation it receives, and it emits all the radiation it absorbs. Also, 1 W/m^2 comes from space and the same amount is returned to space, but the surface emits 2.2 W/m^2. If you go through the arithmetic you can confirm this. If you so change it that each atmosphere layer absorbs all the radiation it receives, then the surface will emit 3.0 W/m^2.

The point is that no energy is created or destroyed, yet the surface emits 2.2 times as much power as the system receives from space (the sun). Each atmospheric layer receives more, too.

$\begin{array}{lcccccc} &&&&&&\mathrm{Total}\\ \mathrm{Absorbed\,from:}&\mathrm{Surface}&\mathrm{L.Atm}&\mathrm{U.Atm}&\mathrm{Space}&&\mathrm{Absorbed}\\ &&&&&&\\ \mathrm{Absorbed\,by:}&&&&&\\ \mathrm{Surface}&0.0000&1.0500&0.1500&1.0000&||&2.2000\\ \mathrm{Lower\,Atmosphere}&1.6500&0.0000&0.4500&0.0000&||&2.1000\\ \mathrm{Upper\,Atmosphere}&0.4125&0.7875&0.0000&0.0000&||&1.2000\\ \mathrm{Space}&0.1375&0.2625&0.6000&0.0000&||&1.0000\\ &&&&&&\\ \mathrm{Total\,Emitted:}&2.2000&2.1000&1.2000&1.0000 \end{array}$
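The arithmetic can be checked mechanically. Here is a minimal Python sketch of the layer model (my own encoding, not the author’s code); it assumes, as the table implies though the text doesn’t state outright, that each layer emits half of its output upward and half downward, and that the atmosphere is transparent to the 1 W/m² of incoming sunlight:

```python
# Steady-state balance for the two-layer model. Assumptions (inferred from
# the table): each layer absorbs a = 3/4 of radiation passing through it,
# re-emits all it absorbs half up / half down, and sunlight (1 W/m^2)
# reaches the surface unattenuated.
a = 0.75  # layer absorptivity

# Balance equations (S, L, U = surface / lower-atm / upper-atm emission):
#   S = 1 + L/2 + (1 - a)*(U/2)     (U's downward half is filtered by the lower layer)
#   L = a*S + a*(U/2)
#   U = a*(1 - a)*S + a*(L/2)
# Solve by substitution: express U and L as multiples of S, then solve for S.
u_over_s = a * (1 - a / 2) / (1 - a * a / 4)   # U/S from the L and U equations
l_over_s = a + a * u_over_s / 2                # L/S
S = 1.0 / (1.0 - l_over_s / 2 - (1 - a) * u_over_s / 2)
L = l_over_s * S
U = u_over_s * S
print(S, L, U)  # approx. 2.2, 2.1, 1.2 W/m^2, matching the table

# Radiation escaping to space balances the 1 W/m^2 of solar input:
to_space = (1 - a) ** 2 * S + (1 - a) * (L / 2) + U / 2
```

No energy is created: the 1 W/m² returned to space equals the solar input, while recirculation between the layers sustains the larger internal fluxes.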

226. Joe Born says:

Don132

Sokath, his eyes opened!

Congratulations on your breakthrough. And congratulations to PJF for the comment responsible.
I withdraw my assessment of your limitations. Perhaps I need to reassess my ability to explain things; I had thought I’d made the same point. Indeed, I had been under the illusion that what I’d said here was pellucid: “Here’s the reason why Mr. Eschenbach is right that almost all of those arguments are irrelevant: there would be no average conduction between the earth’s surface and the atmosphere if the atmosphere were perfectly non-radiative.” Apparently not.

However that may be, now that you’ve made one breakthrough and recognized that the greenhouse effect is needed, I commend to your attention the Steve Goddard / Luboš Motl explanation of why its effect eventually becomes negligible in comparison to the integral of lapse rate with respect to altitude.

227. Richard Patton says:
228. Dr. Strangelove says:

Virtual particle – the magic of quantum mechanics
E t o (o + B + 3)/(o – B – 1)
Where: p is Rayleigh number = 28, o is Prandtl number = 10, B is a geometric factor = 8/3

229. Dr. Strangelove says:

Virtual particle – the magic of quantum mechanics
E t < h’/2
Where: E is energy, t is time, h’ is reduced Planck constant

230. Dr. Strangelove says:

Lorenz attractor – the butterfly of chaos
p > o (o + B + 3)/(o – B – 1)
Where: p is Rayleigh number = 28, o is Prandtl number = 10, B is a geometric factor = 8/3

231. Dr. Strangelove says:
232. Red94ViperRT10 says:

So how is that the fault of global warming Climate Change™?

233. Red94ViperRT10 says:

E≠mc¹
CO₂
I don’t know if I got these right.

234. Red94ViperRT10 says:

mc² take that!

235. Red94ViperRT10 says:

…oceans are &<;2,000 ft depth…

236. table {
border-collapse: collapse;
}
td {
border: 1px solid #000000;
}

Greenland

Location  ID No.  Elev.(m)  Lat.   Long.   Coldest Month  Yearly Avg  Hottest Month
Moriusaq  24597   25        76.8   -69.9   -30.8          -13         6.6

237. Greenland

Location  ID No.  Elev.(m)  Lat.   Long.   Coldest Month  Yearly Avg  Hottest Month
Moriusaq  24597   25        76.8   -69.9   -30.8          -13         6.6

238. Scott W Bennett says:

Moreover, the traditional method overestimates the daily average temperature at 134 stations (62.3%), underestimates it at 76 stations (35.4%), and shows no difference at only 5 stations (2.3%).

On average, the traditional method overestimates the daily average temperature compared to hourly averaging by approximately 0.16°F, though there is strong spatial variability.

The explanation for the long-term difference between the two methods is the underlying assumption for the twice-daily method that the diurnal curve of temperature is symmetrical.

In particular, the Yule–Kendall index is positive for all 215 CONUS stations, indicating that the daily temperature curve is right skewed; that is, more hourly observations occur near the bottom of the distribution of hourly temperatures (i.e., around Tmin) than near the top of the distribution (around Tmax). – Thorne et al. 2016*
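A toy example (my own synthetic numbers, not the paper’s data) shows how a right-skewed diurnal curve makes the twice-daily method read high:

```python
# Toy illustration: on a right-skewed diurnal curve, most hours sit near
# Tmin, so the twice-daily estimate (Tmax + Tmin)/2 exceeds the true
# hourly mean. Synthetic 24-hour temperatures (deg F), skewed toward Tmin.
hourly = [10, 10, 10, 10, 11, 11, 12, 13, 15, 18, 21, 24,
          26, 27, 26, 24, 21, 18, 15, 13, 12, 11, 11, 10]

traditional = (max(hourly) + min(hourly)) / 2   # twice-daily method
true_mean = sum(hourly) / len(hourly)           # hourly averaging

print(traditional, true_mean)  # traditional exceeds the hourly mean here
```

The direction of the bias matches the quoted finding; the size of the gap depends entirely on how skewed the diurnal curve is.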

239. rbabcock says:

test strike

240. DrT says:

Hello World

241. Taylor Davis says:

Testing?

242. James Schrumpf says:
243. Thomas Edwardson says:
244. Thomas Edwardson says:
245. Thomas Edwardson says:
246. HotScot says:

Test

247. HotScot says:

quoted text

248. Ashok Patel says:

HCO₃

249. This site was… how do I say it? Relevant!!
Finally I’ve found something which helped me. Many thanks!

250. The math comes from this. If you have a sinusoid of frequency f Hz (sin(2πft)) sampled at s Hz, the samples are sin(2πfn/s), n = 0, 1, 2, …
But this is indistinguishable from sin(2π(fn/s+m*n)) for any integer m (positive or negative), because you can add a multiple of 2π to the argument of sin without changing its value.

But sin(2π(fn/s+m*n)) = sin(2π(f+m*s)n/s)
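A quick numerical check of that identity (f, s, and m are arbitrary example values):

```python
# Aliasing check: sinusoids at f Hz and f + m*s Hz produce identical samples
# when sampled at s Hz, because their arguments differ by 2*pi*m*n, an
# integer multiple of 2*pi.
import math

f, s, m = 3.0, 10.0, 2   # arbitrary example: 3 Hz vs 23 Hz, sampled at 10 Hz
for n in range(50):
    a = math.sin(2 * math.pi * f * n / s)
    b = math.sin(2 * math.pi * (f + m * s) * n / s)
    assert abs(a - b) < 1e-8, (n, a, b)
print("f and f + m*s are indistinguishable at these sample points")
```

This is exactly why frequencies above s/2 fold back into the sampled band: the samples alone cannot tell the two sinusoids apart.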

251. C. Johansen says:

test quote

252. It’s remarkable for me to have
a web site which is useful in support of my know-how. Thanks, admin.

253. S W Bennett says:

Three paragraphs of nothing follow:

And though you don’t deserve for me to dignify any other response from you, I will address your specific technical comments for anyone else who may be reading. Thank you for sharing some detailed information about the kinds of lag that the Stevenson screen and thermometers introduce. Why didn’t you just make that point without the hostility?  Scott said: “At some point you have to call BS on the navel gazing of abstraction and return to the real world.” And: “So that’s my bottom line! And talk of the comparison of those records with higher sampling rates is pointless!”

Five paragraphs now and 300 words in and “you” are yet to say anything of substance.

Finally – six paragraphs in – you may have actually asked a question but only by restating the initial “argument” for FFS! Are you actually thinking about (Or computing) what I said? :

[1] My reply: Why would you say that sampling and sampling properly is not the real world? It is how it is done in every other application I can think of except climate science. Why is it pointless to compare the correct method to the method currently used that doesn’t give us accurate information? Maybe I don’t fully get your drift, but it sounds like you are saying the problem is the lag. [2] Well, why can’t there be more than 1 problem? Mercury-in-glass thermometers can be replaced with other faster instruments. Screens can be redesigned. (Is this what you are recommending?) The max/min method will still not give you what is correct. [3] From an engineering perspective, capture all of the content available and once sampled properly you are free to filter out what you don’t want or need. [4] When you start talking about exhausts and vehicular wakes then aren’t we now speaking about improperly sited stations? (Yet another problem with the record).

254. The central limit theorem states that under certain (fairly common) conditions, the sum of many random variables will have an approximately normal distribution. More specifically, where $X_{1},\ldots ,X_{n}$ are independent and identically distributed random variables with the same arbitrary distribution, zero mean, and variance $\sigma ^{2}$, and $Z$ is their mean scaled by $\sqrt{n}$:

$Z={\sqrt {n}}\left({\frac {1}{n}}\sum _{i=1}^{n}X_{i}\right)$

Then, as $n$ increases, the probability distribution of $Z$ will tend to the normal distribution with zero mean and variance $\sigma ^{2}$.
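A small simulation sketch of that statement (the uniform parent distribution and sample sizes are my own arbitrary choices):

```python
# CLT sketch: Z = sqrt(n) * mean(X_1..X_n) for zero-mean uniform X_i on
# [-1, 1] (variance 1/3) should be approximately N(0, 1/3) for large n,
# even though the parent distribution is nothing like a normal.
import math
import random
import statistics

random.seed(0)  # deterministic for reproducibility
n, trials = 400, 2000
zs = [math.sqrt(n) * statistics.fmean(random.uniform(-1.0, 1.0) for _ in range(n))
      for _ in range(trials)]

print(statistics.fmean(zs), statistics.pvariance(zs))
# mean near 0, variance near 1/3, as the theorem predicts
```

Swapping in any other zero-mean distribution with finite variance σ² gives the same limiting N(0, σ²), which is the whole point of the theorem.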

255. SLC Dave says:
256. rotor says:
257. rotor says:

Test Test Test

258. James Schrumpf says:

testing an html table

thisisa<table

259. James Schrumpf says:

USW00026510 924 JAN -16.5 11.3 0.37
USW00026510 839 FEB -11.0 9.3 0.32
USW00026510 920 MAR -4.0 6.8 0.22
USW00026510 894 APR 4.8 6.1 0.20
USW00026510 924 MAY 14.1 5.5 0.18
USW00026510 894 JUN 20.1 4.1 0.14
USW00026510 924 JUL 20.8 4.3 0.14
USW00026510 924 AUG 17.5 4.1 0.13
USW00026510 894 SEP 11.8 4.9 0.16
USW00026510 924 OCT 0.0 6.0 0.20
USW00026510 894 NOV 10.4 7.8 0.26
USW00026510 922 DEC -14.8 9.3 0.31

or blockquote

USW00026510 924 JAN -16.5 11.3 0.37
USW00026510 839 FEB -11.0 9.3 0.32
USW00026510 920 MAR -4.0 6.8 0.22
USW00026510 894 APR 4.8 6.1 0.20
USW00026510 924 MAY 14.1 5.5 0.18
USW00026510 894 JUN 20.1 4.1 0.14
USW00026510 924 JUL 20.8 4.3 0.14
USW00026510 924 AUG 17.5 4.1 0.13
USW00026510 894 SEP 11.8 4.9 0.16
USW00026510 924 OCT 0.0 6.0 0.20
USW00026510 894 NOV 10.4 7.8 0.26
USW00026510 922 DEC -14.8 9.3 0.31

260. James Schrumpf says:

ID No. Recs. MON AVG TMAX STD DEV Est Error in Mean
USW00026510 924 JAN -16.5 11.3 0.4
USW00026510 839 FEB -11.0 9.3 0.3
USW00026510 920 MAR -4.0 6.8 0.2
USW00026510 894 APR 4.8 6.1 0.2
USW00026510 924 MAY 14.1 5.5 0.2
USW00026510 894 JUN 20.1 4.1 0.1
USW00026510 924 JUL 20.8 4.3 0.1
USW00026510 924 AUG 17.5 4.1 0.1
USW00026510 894 SEP 11.8 4.9 0.2
USW00026510 924 OCT 0.0 6.0 0.2
USW00026510 894 NOV -10.4 7.8 0.3
USW00026510 922 DEC -14.8 9.3 0.3

Now let’s look at the data for 2013:
ID No. Recs MON AVG TMAX STD DEV Est Error in Mean Anomaly Error in anomaly
USW00026510 31 JAN -11.9 9.4 1.7 4.6 1.7
USW00026510 28 FEB -12.2 4.9 0.9 -1.2 1.0
USW00026510 31 MAR -4.5 6.1 1.1 -0.5 1.13
USW00026510 30 APR -0.1 6.4 1.2 -4.9 1.2
USW00026510 31 MAY 12.4 9.2 1.6 -1.7 1.7
USW00026510 30 JUN 23.7 5.2 1.0 3.6 1.0
USW00026510 31 JUL 21.0 5.5 1.0 0.2 1.0
USW00026510 31 AUG 18.0 3.8 0.7 0.5 0.7
USW00026510 30 SEP 9.1 4.4 0.8 -2.6 0.8
USW00026510 31 OCT 6.5 2.9 0.5 6.5 0.6
USW00026510 30 NOV -6.4 7.5 1.4 4.0 1.4
USW00026510 31 DEC -14.5 9.0 1.6 0.2 1.6

261. James Schrumpf says:

ID No. Recs. MON AVG TMAX STD DEV Est Error in Mean
USW00026510 924 JAN -16.5 11.3 0.4
USW00026510 839 FEB -11.0 9.3 0.3
USW00026510 920 MAR -4.0 6.8 0.2
USW00026510 894 APR 4.8 6.1 0.2
USW00026510 924 MAY 14.1 5.5 0.2
USW00026510 894 JUN 20.1 4.1 0.1
USW00026510 924 JUL 20.8 4.3 0.1
USW00026510 924 AUG 17.5 4.1 0.1
USW00026510 894 SEP 11.8 4.9 0.2
USW00026510 924 OCT 0.0 6.0 0.2
USW00026510 894 NOV -10.4 7.8 0.3
USW00026510 922 DEC -14.8 9.3 0.3

Now let’s look at the data for 2013:

ID No. Recs MON AVG TMAX STD DEV Est Error in Mean Anomaly Error in anomaly
USW00026510 31 JAN -11.9 9.4 1.7 4.6 1.7
USW00026510 28 FEB -12.2 4.9 0.9 -1.2 1.0
USW00026510 31 MAR -4.5 6.1 1.1 -0.5 1.13
USW00026510 30 APR -0.1 6.4 1.2 -4.9 1.2
USW00026510 31 MAY 12.4 9.2 1.6 -1.7 1.7
USW00026510 30 JUN 23.7 5.2 1.0 3.6 1.0
USW00026510 31 JUL 21.0 5.5 1.0 0.2 1.0
USW00026510 31 AUG 18.0 3.8 0.7 0.5 0.7
USW00026510 30 SEP 9.1 4.4 0.8 -2.6 0.8
USW00026510 31 OCT 6.5 2.9 0.5 6.5 0.6
USW00026510 30 NOV -6.4 7.5 1.4 4.0 1.4
USW00026510 31 DEC -14.5 9.0 1.6 0.2 1.6

262. James Schrumpf says:

Can you use styles in WordPress?

263. James Schrumpf says:

ID No. Recs. MON AVG TMAX STD DEV Est Error in Mean
USW00026510 924 JAN -16.5 11.3 0.4
USW00026510 839 FEB -11.0 9.3 0.3
USW00026510 920 MAR -4.0 6.8 0.2
USW00026510 894 APR 4.8 6.1 0.2
USW00026510 924 MAY 14.1 5.5 0.2
USW00026510 894 JUN 20.1 4.1 0.1
USW00026510 924 JUL 20.8 4.3 0.1
USW00026510 924 AUG 17.5 4.1 0.1
USW00026510 894 SEP 11.8 4.9 0.2
USW00026510 924 OCT 0.0 6.0 0.2
USW00026510 894 NOV -10.4 7.8 0.3
USW00026510 922 DEC -14.8 9.3 0.3

Now let’s look at the data for 2013:

ID No. Recs MON AVG TMAX STD DEV Est Error in Mean Anomaly Error in anomaly
USW00026510 31 JAN -11.9 9.4 1.7 4.6 1.7
USW00026510 28 FEB -12.2 4.9 0.9 -1.2 1.0
USW00026510 31 MAR -4.5 6.1 1.1 -0.5 1.13
USW00026510 30 APR -0.1 6.4 1.2 -4.9 1.2
USW00026510 31 MAY 12.4 9.2 1.6 -1.7 1.7
USW00026510 30 JUN 23.7 5.2 1.0 3.6 1.0
USW00026510 31 JUL 21.0 5.5 1.0 0.2 1.0
USW00026510 31 AUG 18.0 3.8 0.7 0.5 0.7
USW00026510 30 SEP 9.1 4.4 0.8 -2.6 0.8
USW00026510 31 OCT 6.5 2.9 0.5 6.5 0.6
USW00026510 30 NOV -6.4 7.5 1.4 4.0 1.4
USW00026510 31 DEC -14.5 9.0 1.6 0.2 1.6

264. James Schrumpf says:

LaTex test

$P = e\sigma AT^{4}$

265. James Schrumpf says:

more latex

\Delta \overline{X} _{est} = \frac{\overline{X}}{\sqrt{N}}

266. James Schrumpf says:

forgot the LaTex tag

$latex \Delta \overline{X} _{est} = \frac{\overline{X}}{\sqrt{N}}

267. James Schrumpf says:

again

$latex \delta \overline{X} _{est} = \frac{\overline{X}}{\sqrt{N}}

268. James Schrumpf says:

again

$latex \Delta \overline{X} _{est} = \frac{\overline{X}}{\sqrt{N}}

This worked fine at the online latex tester

269. James Schrumpf says:

$latex \delta

270. James Schrumpf says:

$latex delta

271. James Schrumpf says:

Try again.

$latex delta

272. James Schrumpf says:

Still trying

$\mathscr{L}\{f(t)\}=F(s)$

273. James Schrumpf says:

Arrrrgh.

$\mathscr{L}\{f(t)\}=F(s)$

\$latex \delta \overline{X} _{est} = \frac{\overline{X}}{\sqrt{N}}

274. James Schrumpf says:

I’ve been interested for some time in the calculations used to get the global anomaly, especially the error calculation and propagation throughout the process. I don’t know exactly which set of stations is used in the calculations, so I grabbed the file ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/ghcnd_all.tar.gz. From that I loaded the station metadata into my database and selected for the GSN stations using the GSN_FLAG column in the table.

I was surprised how sparse some of the station data was; out of all of the stations with the GSN flag set (991), only 139 had enough valid data to fill out the baseline of 30 years from 1981 to 2010. I ran the statistics against that set to see what I’d find.

The raw daily station data tags data as MISSING with the -9999 flag, but the flag wasn’t consistent; it appeared in the data set not only as -9999, but as 999, 99, and -99. I converted all of the varieties to -9999.

I think I understand the basics of creating an anomaly: a baseline data set of a station’s mean monthly temps is created from each station’s data over a 30-year period from 01-JAN-1981 to 31-DEC-2010, so that the result is an average of all the January temps from 1981 to 2010, all of the Feb temps, etc. The station’s data is then averaged over a month’s time, and the baseline mean for that month is subtracted from the current month’s mean, and that difference is the anomaly.

For my statistical rules, I used a website from the University of Toronto that I thought did a good job of explaining how to calculate standard error, how to propagate error through calculations, and how to determine the estimated error in the mean. The website’s url is https://faraday.physics.utoronto.ca/PVB/Harrison/ErrorAnalysis/Precision.html. There’s more to the site than where I linked to, but my link is where the equations I used were.

I picked a station at random that had a mostly full data set for the 30-year baseline, and also for the sample year of 2013. Both sets are a few days short, but six days or so out of 930 in a 30-year period shouldn’t make that much difference. The station name is AK MCGRATH AP and the 30-year baseline data for that station looked like this:

https://jaschrumpf.files.wordpress.com/2019/02/2013_station_usw00026510_data-1.png?w=450

The data for 2013:

https://jaschrumpf.files.wordpress.com/2019/02/2013_station_usw00026510_data-1.png?w=450

Looking at January in the 30-year baseline, you see the standard deviation is pretty large. Using this equation:
https://jaschrumpf.files.wordpress.com/2019/02/standard_error_formula.png?w=450
we get 11.3C/√924 = 11.3C/30.4 = 0.4C for the estimated error in the mean for the month of January.

Over at the annual calculations for 2013 for that station, we see that the standard deviation is 9.4C and the estimated error in the mean is 1.7C. I double-checked those numbers because they are pretty large — but they are correct.

So that leaves us to calculate the anomaly. The baseline for January subtracted from the mean for January 2013 is -11.9C – (-16.5C) = 4.6C, with an error calculated by adding the two errors in quadrature, sqrt(0.4² + 1.7²) C, which equals 1.7C. The final value for the anomaly should be reported as 4.6C+/-1.7C. If the same procedure is performed with the entire year of 2013, the result is a mean anomaly of 0.7C+/-0.9C.

I ran these calculations against the entire 139-station data set that had very good data, and the final mean anomaly and error for the year of 2013 was 0.5C +/- 0.15C.

It’s my understanding that nothing can be done statistically to a set of measurements that can improve its accuracy. The error in the mean can be reduced, but the mean itself can only have the same number of significant digits as the original measurements. In this case, that’s one decimal place.

If my calculations of the estimated error in the mean are correct, I can certainly understand why we don’t see the error published along with the “hottest year ever” claims. It would be ludicrous to claim a year was 0.07C warmer than before when the error could be an order of magnitude larger.

275. James Schrumpf says:

Am I ever going to see my test post?

276. James Schrumpf says:

Corrected the image urls:

I’ve been interested for some time in the calculations used to get the global anomaly, especially the error calculation and propagation throughout the process. I thought I’d have a try at seeing what the numbers look like, but as I don’t know exactly which set of stations is used in the calculations, I grabbed the file ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/ghcnd_all.tar.gz from the NOAA website. From that I loaded the station metadata into my database and selected for the GSN stations using the GSN_FLAG column in the table.

I was surprised how sparse some of the station data was; out of all of the stations with the GSN flag set (991), only 139 had enough valid data to fill out the baseline of 30 years from 1981 to 2010. I ran the statistics against that set to see what I’d find.

The raw daily station records tag missing data with the -9999 flag, but the flag wasn’t consistent; it appeared in the data set not only as -9999, but as 999, 99, and -99. I converted all of the varieties to -9999.

I think I understand the basics of creating an anomaly: a baseline data set of a station’s mean monthly temps is created from each station’s data over a 30-year period from 01-JAN-1981 to 31-DEC-2010, so that the result is an average of all the January temps from 1981 to 2010, all of the Feb temps, etc. The station’s data is then averaged over a month’s time, and the baseline mean for that month is subtracted from the current month’s mean, and that difference is the anomaly.
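That recipe can be sketched in a few lines of Python (toy numbers, stand-ins for the real 30 years of station data):

```python
# Minimal sketch of the anomaly recipe described above: build a per-month
# baseline from the 1981-2010 monthly means, then subtract the baseline from
# the current month's mean. The values here are toy stand-ins.
from statistics import fmean

# stand-in for 30 January means (deg C), one per baseline year
baseline_januaries = [-17.0, -16.0, -16.5, -16.5]
january_baseline = fmean(baseline_januaries)   # -16.5 C

january_2013_mean = -11.9                      # this month's mean
anomaly = january_2013_mean - january_baseline # 4.6 C
print(anomaly)
```

The same subtraction is repeated per month (and per station), which is why each station needs a reasonably complete 30-year record before its anomalies mean anything.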

For my statistical rules, I used a website from the University of Toronto that I thought did a good job of explaining how to calculate standard error, how to propagate error through calculations, and how to determine the estimated error in the mean. The website’s url is https://faraday.physics.utoronto.ca/PVB/Harrison/ErrorAnalysis/Precision.html. There’s more to the site than where I linked to, but that’s where the equations I used were.

I picked a station at random that had a mostly full data set for the 30-year baseline, and also for the sample year of 2013. Both sets are a few days short, but six days or so out of 930 in a 30-year period shouldn’t make that much difference. The station name is AK MCGRATH AP and the 30-year baseline data for that station looked like this:

https://jaschrumpf.files.wordpress.com/2019/02/2013_station_usw00026510_data-1.png

The data for 2013:

https://jaschrumpf.files.wordpress.com/2019/02/2013_station_usw00026510_data-1.png

Looking at January in the 30-year baseline, you see the standard deviation is pretty large. Using this equation:
https://jaschrumpf.files.wordpress.com/2019/02/standard_error_formula.png
we get 11.3C/√924 = 11.3C/30.4 = 0.4C for the estimated error in the mean for the month of January.

Over at the annual calculations for 2013 for that station, we see that the standard deviation is 9.4C and the estimated error in the mean is 1.7C. I double-checked those numbers because they are pretty large — but they are correct.

So that leaves us to calculate the anomaly. The baseline for January subtracted from the mean for January for 2013 is -11.9C – (-16.5) = 4.6C with an error calculated as:

which equals 1.7C. The final value for the anomaly should be reported as 4.6C+/-1.7C. If the same procedure is performed with the entire year of 2013, the result is a mean anomaly of 0.7C+/-0.9C.

I ran these calculations against the entire 139-station data set that had very good data, and the final mean anomaly and error for the year of 2013 was 0.5C +/- 0.15C.

It’s my understanding that nothing can be done statistically to a set of measurements can improve its accuracy. The error in the mean can be reduced, but the mean itself can only have the same number of significant digits as in the original measurements. In this case, that’s one decimal point. While I was looking at the NOAA site, I noticed the files of monthly temperature means did not include any error information. They took a month’s worth of errors in the data and made it go away, and those error calculations are significant.

It’s also apparent that using the anomaly removes the variance in the Earth’s temperatures. Rather than stating the temperature difference between Ecuador and Antarctica averaged 40C in the last year, the anomaly smooths it all out so that it can be said that Ecuador’s anomaly for 2018 was 0.1C less than that at the South Pole.

In any event, if my calculations of the estimated error in the mean are correct, I can certainly understand why we don’t see the error published along with the “hottest year ever” claims. It would be ludicrous to claim a year was 0.007C warmer than before, and then put an error bar on the number that could be an order of magnitude larger.

277. James Schrumpf says:

I’ve been interested for some time in the calculations used to get the global anomaly, especially the error calculation and propagation throughout the process. I thought I’d have a try at seeing what the numbers look like, but as I don’t know exactly which set of stations is used in the calculations, I grabbed the file ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/ghcnd_all.tar.gz from the NOAA website. From that I loaded the station metadata into my database and selected the GSN stations using the GSN_FLAG column in the table.

I was surprised how sparse some of the station data was; out of all of the stations with the GSN flag set (991), only 139 had enough valid data to fill out the baseline of 30 years from 1981 to 2010. I ran the statistics against that set to see what I’d find.

The raw daily station records tag missing data with the -9999 flag, but the flag wasn’t consistent: it appeared in the data set not only as -9999, but also as 999, 99, and -99. I converted all of the varieties to -9999.

I think I understand the basics of creating an anomaly: a baseline data set of a station’s mean monthly temps is created from the station’s data over the 30-year period from 01-JAN-1981 to 31-DEC-2010, so that the result is an average of all the January temps from 1981 to 2010, all the February temps, and so on. The station’s data is then averaged over a month, and the baseline mean for that month is subtracted from the current month’s mean; that difference is the anomaly.
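In code, that procedure might look something like this (a minimal sketch; the dictionary layout is hypothetical, not GHCN’s actual .dly file format):

```python
from statistics import mean

# Hypothetical daily records: {(year, month): [daily temps in C]}
def monthly_means(daily):
    """Average each station-month's daily temps into a monthly mean."""
    return {ym: mean(temps) for ym, temps in daily.items() if temps}

def baseline(monthly, start=1981, end=2010):
    """Average each calendar month over the 30-year baseline period."""
    base = {}
    for m in range(1, 13):
        vals = [t for (y, mo), t in monthly.items() if mo == m and start <= y <= end]
        if vals:
            base[m] = mean(vals)
    return base

def anomaly(monthly, base, year, month):
    """Current month's mean minus the baseline mean for that calendar month."""
    return monthly[(year, month)] - base[month]
```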

For my statistical rules, I used a website from the University of Toronto that I thought did a good job of explaining how to calculate standard error, how to propagate error through calculations, and how to determine the estimated error in the mean. The website’s url is https://faraday.physics.utoronto.ca/PVB/Harrison/ErrorAnalysis/Precision.html. There’s more to the site than where I linked to, but that’s where the equations I used were.

I picked a station at random that had a mostly full data set for the 30-year baseline, and also for the sample year of 2013. Both sets are a few days short, but six days or so out of the 930 that make up each month’s 30-year baseline shouldn’t make much difference. The station name is AK MCGRATH AP and the 30-year baseline data for that station looked like this:

https://jaschrumpf.files.wordpress.com/2019/02/2013_station_usw00026510_data-1.png

The data for 2013:

https://jaschrumpf.files.wordpress.com/2019/02/2013_station_usw00026510_data-1.png

Looking at January in the 30-year baseline, you see the standard deviation is pretty large. Using the standard-error-of-the-mean equation:
https://jaschrumpf.files.wordpress.com/2019/02/standard_error_formula.png
we get 11.3C / sqrt(930) = 11.3C / 30.5 = 0.4C for the estimated error in the mean for the month of January.
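That standard-error formula, sigma / sqrt(N), can be checked directly (the 11.3C standard deviation and N = 930 are the figures quoted above):

```python
import math

def std_error_of_mean(stdev, n):
    """Estimated error in the mean: sigma / sqrt(N)."""
    return stdev / math.sqrt(n)

jan_err = std_error_of_mean(11.3, 930)  # ~0.37 C, i.e. 0.4 C to one decimal
```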

Over at the annual calculations for 2013 for that station, we see that the standard deviation is 9.4C and the estimated error in the mean is 1.7C. I double-checked those numbers because they are pretty large — but they are correct.

So that leaves us to calculate the anomaly. The baseline for January subtracted from the mean for January 2013 is -11.9C - (-16.5C) = 4.6C, with the error propagated by adding the two errors in the mean in quadrature:

sqrt((1.7)^2 + (0.4)^2)

which equals 1.7C. The final value for the anomaly should be reported as 4.6C +/- 1.7C. If the same procedure is performed over the entire year of 2013, the result is a mean anomaly of 0.7C +/- 0.9C.
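The quadrature propagation for a difference (per the U of T page linked above) can be checked directly; the 1.7C and 0.4C inputs are the errors in the mean quoted above:

```python
import math

def diff_with_error(a, sa, b, sb):
    """Difference a - b, with the error propagated in quadrature."""
    return a - b, math.sqrt(sa**2 + sb**2)

anom, err = diff_with_error(-11.9, 1.7, -16.5, 0.4)
# anom = 4.6, err ~ 1.75, reported as 4.6 C +/- 1.7 C at one decimal
```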

I ran these calculations against the entire 139-station data set that had very good data, and the final mean anomaly and error for the year of 2013 was 0.5C +/- 0.15C.

It’s my understanding that nothing that can be done statistically to a set of measurements can improve its accuracy. The error in the mean can be reduced, but the mean itself can only have the same number of significant digits as the original measurements; in this case, that’s one decimal place. While I was looking at the NOAA site, I noticed the files of monthly temperature means did not include any error information. They took a month’s worth of error in the data and made it go away, and those error calculations are significant.

It’s also apparent that using the anomaly removes the variance in the Earth’s temperatures. Rather than stating that the temperature difference between Ecuador and Antarctica averaged 40C over the last year, the anomaly smooths it all out so that it can be said that Ecuador’s anomaly for 2018 was 0.1C less than that at the South Pole.

In any event, if my calculations of the estimated error in the mean are correct, I can certainly understand why we don’t see the error published along with the “hottest year ever” claims. It would be ludicrous to claim a year was 0.007C warmer than before, and then put an error bar on the number that could be an order of magnitude larger.

278. kalsel3294 says:

On a more regional note, a period of below-average rain is followed by a period of above-average rain, which looks as if it is happening now in Queensland, Australia, irrespective of man-made climate change.

279. I don’t even know how I ended up here, but I thought this post was good. I don’t know who you are, but you’re definitely going to be a well-known blogger if you aren’t already. Cheers!

280. ЯΞ√ΩLUT↑☼N says:
281. Trying to do what Louis did

Test #3 (https​:​/​/​youtu​.​be/gO17hN-YvBc):
https://youtu.be/gO17hN-YvBc

Test #5 (bracket format):

282. J.H. says:

Test…. cleared cookies and changed some settings. Don’t seem to be able to view my comments again.

Testing, testing, Testing

…and a formatting test for fun.

283. J.H. says:

…. and there it is. 🙂

284. eyesonu says:

“I would rather be governed by the first two thousand people in the Boston telephone directory than by the two thousand people on the faculty of Harvard University.”
— William Buckley on “Meet the Press”, 17 October 1965.

289. rotor says:

The telemetry of the voyage to the moon has been taped over

290. Kip Hansen says:
291. J.H. says:

test…..

292. Sasha says:

“How did you get here?”

Republican Wyoming Rep. Liz Cheney grilled environmental experts on their travel methods at a House hearing on climate change while discussing the Green New Deal’s call to phase out air travel.

After a moment of silence, one of the environmental experts on the panel chimed in to say she supports many of the recommendations outlined in the Green New Deal.

Cheney replied: “I would just say that it’s going to be crucially important for us to recognize and understand when we outlaw plane travel, we outlaw gasoline, we outlaw cars, I think actually probably the entire U.S. military, because of the Green New Deal, that we are able to explain to our constituents and to people all across this country what that really means. Even when it comes down to something like air travel … that means the government is going to be telling people where they can fly to and where they can’t. I would assume that means our colleagues from California are going to be riding their bicycles back home to their constituents.”

https://www.dailycaller.com/2019/02/12/liz-cheney-green-new-deal-question/

293. Menicholas says:
294. J.H. says:

295. J.H. says:

296. J.H. says:
297. Sasha says:

28 Jan 2019

“They Ruined YouTube” Mark Dice Shows How YouTube Rigged Their Search Algorithm to Suppress AltMedia
Chris Menahan
InformationLiberation

Google has begun changing search results in response to liberal journalists complaining to them that they don’t like the results they’re getting from their searches.

If websites such as WUWT have already been effectively banished from Google’s search results, they can look forward to worse happening to their YouTube videos. YouTube has decided to “disappear” so-called “conspiracy” videos and what they call “alternative media” from their search results and the sidebar. From now on, only “whitelisted” corporate videos will be visible.

http://www.informationliberation.com/?id=59731

298. Jurgen says:

sci-hub.tw/10.1080/02626667.2019.1567925

So I guess WordPress is the culprit here.

299. Jurgen says:

http://www.sci-hub.tw/10.1080/02626667.2019.1567925

So I guess WordPress is the culprit here.

300. Generally I don’t read articles on blogs, but I wish to say that this write-up very much forced me to try to do so! Your writing style has amazed me. Thank you, very great article.

301. Joe Born says:

M <- 100                      # trials per sample size
for (j in 2:4) {
  m <- numeric(M)
  N <- 10 ^ j                 # sample size: 100, 1000, 10000
  for (i in 1:M) {
    x <- 10 * rnorm(N)        # "true" values
    d <- rnorm(N)             # measurement noise
    y <- round(x + d)         # noisy values rounded to the nearest integer
    m[i] <- mean(y) - mean(x) # difference between rounded and true means
  }
  hist(m)                     # spread shrinks roughly as 1/sqrt(N)
}
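For anyone without R handy, here is a sketch of the same experiment in Python (same logic as the snippet above: does rounding each noisy measurement bias the mean of a large sample?):

```python
import random
from statistics import mean

random.seed(0)  # reproducible

M = 100  # trials per sample size
for j in range(2, 5):
    N = 10 ** j  # sample size: 100, 1000, 10000
    diffs = []
    for _ in range(M):
        x = [10 * random.gauss(0, 1) for _ in range(N)]   # "true" values
        y = [round(xi + random.gauss(0, 1)) for xi in x]  # noisy, rounded
        diffs.append(mean(y) - mean(x))
    # The spread of the mean differences shrinks roughly as 1/sqrt(N).
    print(N, max(abs(d) for d in diffs))
```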

302. Beta Blocker says:

Near the end of one of Scott Adams’ daily Periscope podcasts, he mentions his desire to moderate a debate between climate scientists and their skeptics concerning the existence and dangers of climate change.

As Scott Adams would manage it, the debaters would not be facing each other in the same venue. Rather, they would be asked in a Periscope interview to present their five most persuasive arguments for their position.

As I myself view Adams’ proposal, his podcasted debate might serve as a dry run for a larger public debate over today’s mainstream climate science, one which might go critical mass if America’s voters are ever asked to make serious personal and economic sacrifices in the name of fighting climate change.

Among his other pursuits, Scott Adams is an expert in the art and science of persuasion. He is looking for a short list of arguments from each side of the AGW question that would be persuasive to those people who are not climate scientists themselves but who have an interest in hearing summaries of the competing arguments.

It is clear from listening to Adams’ thoughts on his proposed debate that he does not have a grasp of the most basic fundamentals of each side of the question. Nor does he understand how those basic fundamentals influence the content and rhetoric of the science debate. Anyone who participated in this debate would have to educate Scott Adams on the basics in a way that is comprehensible to the layman.

The other problem for those representing the skeptical position is that Adams views the question as having only two very distinct sides. He does not understand that a middle position exists which covers the many uncertainties of today’s climate science.

In his look at how the scientific debate is being pursued by both sides, Scott Adams frames the science question in stark terms. Is the earth warming, or is it not warming? If it is warming, is CO2 the cause or is it not? Is the warming dangerous or is it not? If it is dangerous, then how dangerous is it?

Judith Curry’s name was mentioned as a climate scientist who might be a good representative for the skeptic side of the debate.

Presumably, each representative would be asked at some point to refute the five most persuasive arguments offered by the opposition. I would suggest that these arguments might cover some or all of these topics:

— The fundamental basis of today’s mainstream climate science including the postulated water vapor feedback mechanism.

— Ocean warming versus atmospheric warming as the true measure of the presence and the rate of increase of climate change.

— The accuracy, validity, and uncertainties of the modern temperature record and of the paleoclimate temperature record.

— The accuracy, validity, and uncertainties of the general circulation models, the sea level rise projections, and the projections of AGW-related environmental, human, and economic impacts.

— The costs and benefits of alternative public policy responses to climate change, including the practicality of relying on wind and solar for our energy needs and the role of nuclear power.

— The costs and benefits of massive government spending on the Green New Deal versus the use of government-mandated carbon pricing mechanisms combined with an aggressive application of the Clean Air Act.

If Scott Adams goes forward with his podcasted debate, will anyone show up to defend the mainstream climate science side of the question?

If no one does, then someone from the skeptic side must present the mainstream’s side in a way that is true to the mainstream position but also drastically condenses the raw science into something the layman can understand.

Here is an example of just how condensed a basic description of today’s mainstream climate science might have to be in order to be comprehensible to the laymen — and also to Scott Adams himself — as a description of today’s mainstream theory:

—————–

Mainstream Climate Science Theory: CO2 as the Earth’s Temperature Control Knob

Over time periods covering the last 10,000 years of the earth’s temperature history, carbon dioxide has been the earth’s primary temperature control knob.

Although water vapor is the earth’s primary greenhouse gas, adding carbon dioxide further warms the atmosphere, allowing it to hold more water vapor than it otherwise could. The additional water vapor amplifies the total warming effect of both greenhouse gases, CO2 and water vapor, through a feedback mechanism operating between CO2’s warming effects and water vapor’s warming effects.

For example, if carbon dioxide’s pre-industrial concentration of 280 ppm is doubled to 560 ppm by adding more CO2 to the atmosphere, CO2’s basic effect of a 1C to 1.5C warming per CO2 doubling is amplified by the water vapor feedback mechanism into a much larger 2.5C to 4C range of total warming.

Atmospheric and ocean circulation mechanisms affect the rate and extent of atmospheric and ocean warming. These mechanisms transport heat within the atmosphere and the oceans, and move heat between the atmosphere and the oceans. The circulation mechanisms also affect how much of the additional trapped heat is being stored in the oceans, how much is being stored in the atmosphere, and how much is being lost to outer space.

Uncertainties in our basic knowledge of atmospheric and ocean circulation mechanisms make it difficult to predict with exact precision how much warming will occur if CO2 concentration is doubled from 280 ppm to 560 ppm.

These uncertainties also limit our ability to predict exactly how fast the warming will occur, and to predict with exact certainty where and how much of the additional trapped heat will be stored in the oceans versus the atmosphere. Thus a range of warming predictions is to be expected, and the question must be studied further.

Climate modeling exercises now indicate that a range of 2.5C to 4C of total global warming over and above pre-industrial temperatures is likely to occur, depending upon which assumptions are made concerning atmospheric and ocean circulation mechanisms and concerning how much CO2 will be added to the atmosphere over the next 100 years.

(End of Summary)

—————–

As a layman in trying to understand climate science topics, you have to crawl before you can walk.

If the skeptics’ arguments are to be persuasive to the non-scientist layman, then explaining the basics of today’s mainstream climate science is a necessary step prior to explaining the uncertainties of the science and its predictions. It’s a necessary prior step even if one completely rejects the basic tenets of today’s mainstream climate science. An informed debate has to start somewhere.

As someone who is not a climate scientist myself, the description I’ve written above is my own highly condensed summary of what I understand to be the mainstream climate scientist’s basic theory. The description is presented in terms that might be understandable to the non-scientist layman while also being true to the basic tenets of the mainstream climate science narrative.

Is my example of a highly summarized description actually understandable to the non-scientist layman? Is it actually true to the scientific position mainstream climate scientists now hold? Is it useful as a starting point for understanding the overall context of the debate?

Here is a most important point concerning what Scott Adams is trying to accomplish.

Adams is not asking the opposing sides to prove scientifically that their side of the climate change question is the scientific truth. He is asking them to offer a defense of their side of the question that is understandable to the non-scientist and is persuasive as debating arguments go.

Logically, any level of warming, regardless of its rate of increase, could become dangerous if the warming continues indefinitely into the future. A 0.2C-per-decade rate of warming will produce a 2C increase in a hundred years’ time, and 4C in two hundred years. If we continue adding CO2 to the atmosphere, and if CO2 will indeed be the earth’s temperature control knob for the next several thousand years, then when will the warming stop?

What is left out of the current debate over climate change is the question of certainty versus uncertainty.

If America’s voting public is ever asked to make serious personal and economic sacrifices in the name of fighting climate change, and if the debate over mainstream climate science then goes critical mass, the question of certainty versus uncertainty will become a deciding factor as to who wins or loses that debate.

303. Thomas.Edwardson says:

I see no account of why this is supposed to be caused by a momentary pressure drop rather than nucleation by exhaust pollutants. If they are ready to stretch causation to 6 miles, then they cannot differentiate between wing pressure drop and exhaust trails.

The authors quote “Woodley et al. (1991) have shown that aircraft exhaust is of negligible importance in aircraft-produced ice particle formation”, which made that case quite nicely. Exhaust pollutants are excluded by stipulation via Woodley, and Woodley et al were pretty thorough in their investigations.

The authors showed how the ice particles from the aircraft wake were measurably different from the surrounding precipitation, matched them with individual aircraft, and showed that they could take between 20 and 40 minutes to fully develop into precipitation. During that time the LIP tracked with the advection movement of the weather system from where the aircraft had been, so the up-to-6-mile displacement in space is really just a displacement in time.

Finally, engine contrails are not the result of pollutant caused nucleation, but instead are caused by direct injection of excess moisture into the atmosphere as a byproduct of combustion.

304. James A Schrumpf says:

Station ID USW00093729, Cape Hatteras AP

                         TEMPS 2013                          TEMPS 1981-2010
month   N    std dev  avg temp  err in mean    N      std dev  avg temp  err in mean   anomaly  err in mean
JAN     62   6.1       9.60      0.8          1848    6.8       7.8       0.16           1.8      0.8
FEB     56   5.1       8.84      0.7          1676    6.3       8.5       0.15           0.4      0.7
MAR     62   4.7       9.46      0.6          1846    6.3      11.2       0.15          -1.7      0.6
APR     60   5.3      15.10      0.7          1788    5.9      15.6       0.14          -0.5      0.7
MAY     62   5.1      20.48      0.6          1848    5.3      19.8       0.12           0.7      0.7
JUN     60   3.6      25.59      0.5          1786    4.5      24.2       0.11           1.4      0.5
JUL     62   3.3      27.38      0.4          1838    3.9      26.4       0.09           1.0      0.4
AUG     62   3.7      26.61      0.5          1846    4.0      26.1       0.09           0.5      0.5
SEP     60   4.9      23.52      0.6          1788    4.3      23.8       0.10          -0.3      0.6
OCT     62   5.2      19.70      0.7          1848    5.4      19.1       0.13           0.6      0.7
NOV     60   6.3      13.69      0.8          1784    6.0      14.5       0.14          -0.9      0.8
DEC     62   6.5      11.42      0.8          1844    6.5       9.9       0.15           1.5      0.8

ann anomaly 0.38   stdev anomaly 1.00   error in mean 0.29   final +0.38 +/- 0.29 C
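As a quick check of the annual figures, assuming they are simply the mean, (population) standard deviation, and standard error of the twelve monthly anomalies:

```python
import math
from statistics import mean, pstdev

# Monthly anomalies for USW00093729 (Cape Hatteras AP), 2013, from the table above
monthly_anoms = [1.8, 0.4, -1.7, -0.5, 0.7, 1.4, 1.0, 0.5, -0.3, 0.6, -0.9, 1.5]

ann = mean(monthly_anoms)                 # ~0.38
sd = pstdev(monthly_anoms)                # ~1.00
err = sd / math.sqrt(len(monthly_anoms))  # ~0.29
```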

306. James A Schrumpf says:

non breaking spaces EVERYWHERE

Station ID USW00093729, Cape Hatteras AP

                         TEMPS 2013                          TEMPS 1981-2010
month   N    std dev  avg temp  err in mean    N      std dev  avg temp  err in mean   anomaly  err in mean
JAN     62   6.1       9.60      0.8          1848    6.8       7.8       0.16           1.8      0.8
FEB     56   5.1       8.84      0.7          1676    6.3       8.5       0.15           0.4      0.7
MAR     62   4.7       9.46      0.6          1846    6.3      11.2       0.15          -1.7      0.6
APR     60   5.3      15.10      0.7          1788    5.9      15.6       0.14          -0.5      0.7
MAY     62   5.1      20.48      0.6          1848    5.3      19.8       0.12           0.7      0.7
JUN     60   3.6      25.59      0.5          1786    4.5      24.2       0.11           1.4      0.5
JUL     62   3.3      27.38      0.4          1838    3.9      26.4       0.09           1.0      0.4
AUG     62   3.7      26.61      0.5          1846    4.0      26.1       0.09           0.5      0.5
SEP     60   4.9      23.52      0.6          1788    4.3      23.8       0.10          -0.3      0.6
OCT     62   5.2      19.70      0.7          1848    5.4      19.1       0.13           0.6      0.7
NOV     60   6.3      13.69      0.8          1784    6.0      14.5       0.14          -0.9      0.8
DEC     62   6.5      11.42      0.8          1844    6.5       9.9       0.15           1.5      0.8

ann anomaly 0.38   stdev anomaly 1.00   error in mean 0.29   final +0.38 +/- 0.29 C