Test

SMPTE color bars – Click for your own test pattern kit

This page is for posters to test comments prior to submitting them to WUWT. Your tests will be deleted in a while, though especially interesting tests, examples, hints, and cool stuff will remain for quite a while longer.

Some things that don’t seem to work any more, or perhaps never did, are kept in Ric Werme’s Guide to WUWT.

Formatting in comments

WordPress does not provide much documentation for the HTML formatting permitted in comments. There are only a few commands that are useful, and a few more that are pretty much useless.

A typical HTML formatting command has the general form of <name>text to be formatted</name>. A common mistake is to forget the end command. Until WordPress gets a preview function, we have to live with it.

N.B. WordPress handles some formatting very differently than web browsers do. A post of mine shows these and less useful commands in action at WUWT.

N.B. You may notice that the underline command, <u>, is missing. WordPress seems to suppress it for almost all users, so I’m not including it here. Feel free to try it, but don’t expect it to work.

Name Sample Result
b (bold) This is <b>bold</b> text This is bold text
Command strong also does bolding.
i (italics) This is <i>italicized</i> text This is italicized text
Command em (emphasize) also does italics.
a (anchor) See <a href=http://wermenh.com>My home page</a> See My home page
A URL by itself (with a space on either side) is often adequate in WordPress. It will make a link to that URL and display the URL, e.g. See http://wermenh.com.

Some sources on the web present anchor commands with other parameters beyond href, e.g. rel=nofollow. In general, use just href=url and don’t forget the text to display to the reader.

blockquote (indent text)

Sample:

My text
<blockquote>quoted text</blockquote>
More of my text

Result:

My text

    quoted text

More of my text

Quoted text can be many paragraphs long.

WordPress italicizes quoted text (and the <i> command enters normal text).

strike This is <strike>text with strike</strike> This is text with strike
pre (“preformatted” – use for monospace display)

Sample:

<pre>These lines are bracketed<br>with &lt;pre> and &lt;/pre></pre>

Result:

These lines are bracketed
with <pre> and </pre>

Preformatted text, generally done right. Use it when you have a table or something else that will look best in monospace. Each space is displayed, something that <code> (next) doesn’t do.
code (use for monospace display) <code>Wordpress handles this very differently</code> WordPress handles this very differently
See http://wattsupwiththat.com/resources/#comment-65319 to see what this really does.
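
Going back to <pre> for a moment: wrapping a small table in pre tags keeps the columns lined up. A minimal sketch (made-up numbers, just to show that the spacing survives):

<pre>
Year   Jan    Feb
2017   10.2   11.4
2018   10.6   11.1
</pre>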

Youtube videos

Using the URL for a YouTube video creates a link like any other URL. However, WordPress accepts the HTML for “embedded” videos. From the YouTube page after the video finishes, click on the “embed” button and it will suggest HTML like:

<iframe width="560" height="315"
        src="http://www.youtube.com/embed/yaBNjTtCxd4"
        frameborder="0" allowfullscreen>
</iframe>

WordPress will convert this into an internal square bracket command, changing the URL and ignoring the dimensions. You can use this command yourself, and use its options to set the dimensions. WordPress converts the above into something like:

[youtube https://www.youtube.com/watch?v=yaBNjTtCxd4&w=640&h=480]

Use this form and change the w and h options to suit your interests.
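
For instance, to get a smaller player you might try the same video with smaller dimensions (my guess at sensible values):

[youtube https://www.youtube.com/watch?v=yaBNjTtCxd4&w=400&h=225]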

Images in comments

If WordPress thinks a URL refers to an image, it will display the image instead of creating a link to it. The following rules may be a bit excessive, but they should work:

  1. The URL must end with .jpg, .gif, or .png. (Maybe others.)
  2. The URL must be the only thing on the line.
  3. This means you don’t use <img>, which WordPress ignores and displays nothing.
  4. This means WordPress controls the image size.
  5. <iframe> doesn’t work either, it just displays a link to the image.

If you have an image whose URL doesn’t end with the right kind of extension, there may be two options if the URL includes attributes, i.e. if it has a question mark followed by attribute=value pairs separated by ampersands.

Often the attributes just provide information to the server about the source of the URL. In that case, you may be able to just delete everything from the question mark to the end.

For some URLs, e.g. many from FaceBook, the attributes provide lookup information to the server and can’t be deleted. Most servers don’t bother to check for unfamiliar attributes, so try appending “&xxx=foo.jpg”. This will give you a URL with one of the extensions WordPress will accept.
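
For example, with made-up URLs to illustrate the two cases:

http://example.com/gallery/sunset.jpg?utm_source=share&utm_medium=web
becomes http://example.com/gallery/sunset.jpg (tracking attributes deleted)

http://example.com/photo?id=98765&key=abc
becomes http://example.com/photo?id=98765&key=abc&xxx=foo.jpg (dummy attribute appended so the URL ends in .jpg)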

WordPress will usually scale images to fit the horizontal space available for text. One place it doesn’t is inside blockquoted text; there it seems to display images full size, and large images overwrite the right-side nav bar text.

Special characters in comments

Those of us who remember acceptance of ASCII-68 (a specification released in 1968) are often not clever enough to figure out all the nuances of today’s international character sets. Besides, most keyboards lack the keys for those characters, and that’s the real problem. Even if you use a non-ASCII but useful character like ° (as in 23°C) some optical character recognition software or cut and paste operation is likely to change it to 23oC or worse, 230C.

Nevertheless, there are very useful characters that are most reliably entered as HTML character entities:

Type this                              To get       Notes
&amp;                                  &            Ampersand
&lt;                                   <            Less than sign, left angle bracket
&bull;                                 •            Bullet
&deg;                                  °            Degree (Use with C and F, but not K (kelvins))
&#8304; &#185; &#178; &#179; &#8308;   ⁰ ¹ ² ³ ⁴    Superscripts (use 8304, 185, 178-179, 8308-8313 for superscript digits 0-9)
&#8320; &#8321; &#8322; &#8323;        ₀ ₁ ₂ ₃      Subscripts (use 8320-8329 for subscript digits 0-9)
&pound;                                £            British pound
&ntilde;                               ñ            For La Niña & El Niño
&micro;                                µ            Mu, micro
&plusmn;                               ±            Plus or minus
&times;                                ×            Times
&divide;                               ÷            Divide
&ne;                                   ≠            Not equals
&nbsp;                                              Like a space, with no special processing (i.e. word wrapping or multiple space discarding)
&gt;                                   >            Greater than sign, right angle bracket (generally not needed)

Various operating systems and applications have mechanisms to let you directly enter character codes. For example, on Microsoft Windows, holding down ALT and typing 248 on the numeric keypad may generate the degree symbol. I may extend the table above to include these some day, but the character entity names are easier to remember, so I recommend them.
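
A few usage examples, assuming WordPress renders the entities the way browsers normally do:

Type 23&deg;C to get 23°C
Type W/m&#178; to get W/m²
Type CO&#8322; to get CO₂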

LaTeX markup

WordPress supports LaTeX. To use it, do something like:

$latex P = e\sigma AT^{4}$     (Stefan-Boltzmann's law)

$latex \mathscr{L}\{f(t)\}=F(s)$

to produce

P = e\sigma AT^{4}     (Stefan-Boltzmann’s law)

\mathscr{L}\{f(t)\}=F(s)

Linking to past comments

Each comment has a URL that links to the start of that comment. This is usually the best way to refer to a comment in a different post. The URL is “hidden” under the timestamp for that comment. While details vary with operating system and browser, the best way to copy it is to right click on the time stamp near the start of the comment, choose “Copy link location” from the pop-up menu, and paste it into the comment you’re writing. You should see something like http://wattsupwiththat.com/2013/07/15/central-park-in-ushcnv2-5-october-2012-magically-becomes-cooler-in-july-in-the-dust-bowl-years/#comment-1364445.

The “#<label>” at the end of the URL tells a browser where to start the page view. It reads the page from the Web, searches for the label, and starts the page view there. As noted above, WordPress will create a link for you; you don’t need to add an <a> command around it.

One way to avoid the moderation queue.

Several keywords doom your comment to the moderation queue. One word, “Anthony,” is caught so that comments from people trying to send a note to Anthony are intercepted, and Anthony should see the message pretty quickly.

If you enter Anthony as An<u>th</u>ony, it appears not to be caught, so apparently the comparison uses the name with the HTML within it and sees a mismatch.
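
A commenter below reports that an empty command inside the name also appears to get past the trap, e.g. Ant<b></b>hony, so that may be worth trying as well.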

320 thoughts on “Test”

  1. I just had another thought about underlines.
    I think I discovered that I could get around the automatic spam trap by writing Anthony with an empty HTML command inside, e.g. Ant<b></b>hony.
    What happens when I try that with underline?
    Apologies in advance to the long-suffering mods, at least one of these comments may get caught by the spam trap.

    • I remember seeing this test pattern on TV late at night after the National Anthem and before the local station broadcast came on early in the morning while the biscuits, bacon and oatmeal were still cooking. The first show after a weather report was “Dialing For Dollars” and you had better know the count when your phone rang…. 1 up and 3 down… to get the cash.

      • Reply to Ric W ==> Thanks — I was fielding comments on an essay using an unfamiliar tablet, and wasn’t sure which and/or both were part of HTML5. I usually use the old ClimateAudit comment Greasemonkey tool, even though its formatting is funky these days, for the tags. Don’t suppose you could update that add-in?

  2. Hey, what happened to the old smiley face?? When I tried to post it, this appeared:

    I wonder if WordPress changed any others?
     ☹ ☻
    The old smiley was more subtle; less in-your-face. The new one is way too garish.
    If WP keeps that up, I’ll just have to use this lame replacement:
    🙂
    Or even worse:
    😉

  3. Source          Energy (J)          Normalized (E)
    Atmosphere:     1.45x10^22 J              1 J
    Ice:            1.36x10^25 J            935 J
    Oceans:         1.68x10^25 J          1,157 J
  4. In my previous post I use the example of the following over the next 100 years: 3 units of new energy goes to the oceans and 1 unit to the atmosphere – with all 4 units being equal in Joules. 1 unit raises the average temperature of the atmosphere by 4C or the average temperature of the oceans by 0.0003C. In this example the atmosphere warms by 4C and the oceans warm by 4 x 0.0003C or 0.0012C. It is exactly the higher heat capacity you mention that allows the heat energy to be absorbed with less movement of temperature. At the detail level maybe the top 2 inches of water gets much hotter and this will then support the physics of the more complex mechanisms you mention. But the beauty of this approach (I think – and hope) is that it doesn’t really matter how the energy gets distributed in the water with its corresponding temperature effect. Determine the mass of the ocean water you want to see affected in this model and apply the energy to it to get the temperature you would expect.

  5. Is anyone using CA Assistant? I was using it before the migration — doesn’t work in current version and I can’t figure out why.

    • IIRC, I think it was written for Climate Audit and only accidentally worked here. It may be broken for good. The ItsAllText add-on may also be broken in newer Firefoxes.

      • Ric ==> CAsst had code that allowed it to function on Climate Etc, Climate Audit, WUWT and several others. The code was editable by the end-user to add additional sites using the standard WP format.
        Still works on Judith’s site.
        It is the shift to the new WP structure that has broken it.
        Any hot coders out there? CA Asst is editable in Firefox with GreaseMonkey.

  6. How come this FAQ doesn’t work for me?

    Subject: Linking to past comments

    Each comment has a URL that links to the start of that comment. ….. the best way to copy it is to right click on the time stamp near the start of the comment, choose “Copy link location” from the pop-up menu, and paste it into the comment

    Is it because the “time stamp” is located at the end of the comment (at lower right-hand corner)?

    Sam C

    • Things have changed, click on the link icon way to the right of your name to see the URL.

      I’ll update the main post in a bit.

  7. Testing “pre” in the new comment system

    									
    7/10/2012					4/18/2012				
    	High	TieH	Low	TieL		High	TieH	Low	TieL
    1998	7		0		1998	7		0	
    1999	3		0		1999	3		0	
    2000	3		0		2000	3		2	
    2001	4		1		2001	4		1	
    2002	3		2		2002	4		2	
    2003	1		0		2003	2		0	
    2004	0		1		2004	0		1	
    2005	2		1		2005	2		1	
    2006	0		0		2006	0		0	
    2007	8	2	0		2007	10		0	
    2008	4		0		2008	3		0	
    2009	2		0		2009	0		0	
    2010	8		0		2010	1		0	
    2011	2		0		2011	0		0	
    2012	1		0		2012	0		0	
    									
    	48	2	5	0		39	0	7	0
    
    • It worked.
      PS Those are the number of Columbus Ohio record highs and lows set for each of the years according to the list from 4/18/2012 compared with the list from 7/10/2012, a bit less than 3 months later.
      Notice how, somehow, 7 additional record highs were set in 2010.

    • Car deaths per citizen for some countries:

      Country    Fatalities     Population     Fatalities per million citizens
      US:            37 461         325 mill           115
      UK:             1 792           66 mill               26
      Germany: 3 214            83 mill              39
      Sweden:       263            10 mill             26
      France:     3 469          67 mill          51
      
    • Car deaths per citizen for some countries:

      Country    	Fatalities     	Population     	Fatalities per million citizens
      US:         	37 461       	325 mill         	115
      UK:          	1 792         	66 mill           	26
      Germany:   	3 214          	83 mill            	39
      Sweden:     	263          	10 mill           	26
      France:     	3 469        	67 mill          	51
      
  8. Who is deleting my posts?
    Why are most of my images deleted?
    Why are all of my YouTube videos being deleted?
    Who is doing all this?

    • I maintain this page; part of the task is to trim people’s tests when they are stale, and I’ve been greatly remiss about that this year. I’ve done a massive amount of cleanup in the last week or two, but I don’t think I did much to your posts before June 11.

      I’m catching up though! Next is to update the main post with current knowledge.

        • If I want to test posting an image,
          I will test an image worth posting.

          By the way, my questions were rhetorical. I was testing formatting code on images, text, and videos all of which kept disappearing then reappearing so I just typed out what I was thinking at that time.

  9. “… the gasoline you buy might trace its heritage to carbon dioxide pulled straight out of the sky… engineers … have demonstrated a scalable and cost-effective way to make deep cuts in the carbon footprint of transportation…”

    1. So their machine can recognize the difference between a CO2 molecule produced by anything transportation related, and all other CO2 molecules?
    2. By “cost-effective” I assume they mean they have a product that some willing buyer someplace is willing to pay them an amount that will be greater than the cost it takes them to produce, market and deliver that product? ‘Cuz iffen they don’t, it ain’t “cost-effective”.

    “…claim that realizing direct air capture on an impactful scale will cost roughly $94-$232 per ton of carbon dioxide captured…”

    3. So what’s that in $/gal of gasoline? (See 2. above WRT “cost-effective”.) Will it have the same BTU/gal as gasoline?
    So, yeah, other than that, Mrs. Lincoln, how did you like the play?

  10. [In walk the drones]

    “Today we celebrate the first glorious anniversary of the Information Purification Directives.

    [Apple’s hammer-thrower enters, pursued by storm troopers.]

    We have created for the first time in all history a garden of pure ideology, where each worker may bloom, secure from the pests of any contradictory true thoughts.

    Our Unification of Thoughts is more powerful a weapon than any fleet or army on earth.

    We are one people, with one will, one resolve, one cause.

    Our enemies shall talk themselves to death and we will bury them with their own confusion.

    [Hammer is thrown at the screen]

    We shall prevail!

    [Boom!]

  11. A little late on the discussion, but this is one of the worst articles written by Dr. Ball I have read in years.

    First let us start with where I take a huge exception:

    This fits the Mauna Loa trend very nicely, but the measurements and instrumentation used there are patented and controlled by the Keeling family, first the father and now the son.

    C.D. Keeling was the first to measure CO2 with an IR beam (NDIR), and smart enough to build himself an extremely accurate (gravimetric) device to calibrate any CO2 measuring device with extremely accurate calibration mixtures.
    The Scripps institute where Keeling worked later provided all calibration mixtures for all devices worldwide. Since 1995, calibration and intercalibration of CO2 mixtures and measurements worldwide are done by the central lab of the WMO.
    Ralph Keeling works at Scripps and has no influence at all on the calibration work of the WMO, nor on the measurements at Mauna Loa, which are done by NOAA under Pieter Tans.

    Although Scripps lost its control position, they still take their own (flask) samples at Mauna Loa and still have their own calibration mixtures, independent of NOAA. Both Scripps and NOAA measurements are within +/- 0.2 ppmv for the same moment of sampling. If NOAA should manipulate the data, I am pretty sure Scripps/Keeling would get them…

    Beyond that, there are about 70 “background” stations, managed by different organisations in different countries, measuring CO2 at places as uncontaminated as possible, from the South Pole to near the North Pole (Barrow). Besides seasonal changes, which are more pronounced in the NH, they all show the same trend: up at about half the rate of the yearly human injection, with the SH lagging, which points to the main source of the increase being in the NH, where 90% of human emissions occur.

    Thus Dr. Ball, if you want to accuse somebody of manipulation, first have your facts right.

    Then:
    Where is the reflection of CO2 increase due to the dramatic ocean warming and temperature increase caused by El Nino?

    There is, if you look at the yearly rate of increase at Mauna Loa:

    http://www.ferdinand-engelbeen.be/klimaat/klim_img/dco2_em6.jpg

    The 1998 and 2015 El Niños give a clear increase in yearly CO2 increase in the atmosphere. The 1992 Pinatubo explosion shows a huge dip in CO2 increase.

    The reason is in part the ocean temperature in the tropics, but the dominant factor is (tropical) vegetation: dying back due to (too) high temperatures and the drying out of the Amazon as the rain patterns change with an El Niño, and increased photosynthesis after the Pinatubo injection of light-scattering aerosols into the stratosphere:

    http://www.ferdinand-engelbeen.be/klimaat/klim_img/temp_dco2_d13C_mlo.jpg

    It is pretty clear that changes in the temperature rate of change lead changes in the CO2 rate of change by about 6 months. The interesting point is that the δ13C (that is the ratio between 13CO2 and 12CO2) rate of change changes in the opposite direction. That is the case if the increase/decrease in CO2 rate of change is caused by decaying/growing vegetation. If the CO2 rate of change were caused by warming/cooling oceans, then the CO2 and δ13C rate of change changes would parallel each other.

    Again Dr. Ball, a little more research would have shown that you were wrong in your accusation.

    It is getting late here, more comment tomorrow…

  12. You are not logged in or you do not have permission to access this page. This could be due to one of several reasons

  13. Oh, Canada! While confirming the rumor that snowfall is predicted for northern Quebec on June 21, I also discovered Labrador fishing lodges can’t open because they’re still under 6 ft of snow. Clearly, the folks at weathernetwork.com think these “extreme weather events” are man-caused, as the website features stories like this one:

    How can kids handle climate change? By throwing a tantrum!

    https://s1.twnmm.com/thumb?src=//smedia.twnmm.com/storage.filemobile.com/storage/32996630/1462&w=690&h=388&scale=1&crop=1&foo=bar.jpg

    Buy the book “The TANTRUM that SAVED the WORLD” and let Michael Mann and Megan Herbert indoctrinate your child into bad behavior!

    Also, don’t miss:

    CANADA IN 2030: Future of our water and changing coastlines

    Antarctica lost 3 trillion tonnes of ice in blink of an eye

    Covering Greenland in a blanket is one way to fight climate

    Racism and climate change denial: Study delves into the link

    Links to those articles and other balderdash are at:

    https://www.theweathernetwork.com/news/articles/kids-picture-book-tantrum-that-saved-the-world-delivers-empowering-message-about-climate-change-action/104689/

  14. Re. Flooding from sea level rise threatens over 300,000 US coastal homes – study
    https://www.theguardian.com/environment/2018/jun/17/sea-level-rise-impact-us-coastal-homes-study-climate-change

    As described by Kristina Dahl, a senior climate scientist at the Union of Concerned Scientists (UCS), who should know better than to publish this load of junk science hysterical alarmism.

    “Sea level rise driven by climate change is set to pose an existential crisis to many US coastal communities…”
    No it is not. Do you know what the word “existential” means? And there is no connection between climate change, CO2 and sea levels.

    “…Under this scenario, where planet-warming emissions are barely constrained and the seas rise by around 6.5ft globally by the end of the century…”
    Absolute rubbish. The maximum projected increase is six INCHES by 2100.

    “…The oceans are rising by around 3mm a year due to the thermal expansion of seawater that’s warming because of the burning of fossil fuels by humans…”
    Where is the proof that “the burning of fossil fuels by humans” causes ANY sea level rise?

    To the Guardian: You do love publishing this rubbish, don’t you?

    There is nothing we can do about rising sea levels except to build dikes and sea walls a little bit higher. Sea level rise does not depend on ocean temperature, and certainly not on CO2. We can expect the sea to continue rising at about the present rate for the foreseeable future. By 2100 the seas will rise another 6 inches or so.

    Failed serial doomcaster James Hansen’s sea level predictions have been trashed by real scientists.
    Hansen claimed that sea level rise has been accelerating, from 0.6mm/year from 1900 to 1930, to 1.4mm/year from 1930 to 1992, and 2.6mm/year from 1993 to 2015.
    Hansen cherry-picked the 1900-1930 trend as his data to try to show acceleration … because if he had used 1930-1960 instead, there would not be any acceleration to show.

    According to the data, the rate of sea level rise:
    • decelerated from the start of the C&W record until 1930
    • accelerated rapidly until 1960
    • decelerated for the next ten years
    • stayed about the same from 1970 to 2000
    • then started accelerating again. Until that time, making any statement about sea level acceleration is premature. One thing is clear: There is no simple relationship between CO2 levels and the rate of sea level rise.

    If we assume that the trend prior to 1950 was natural (we really did not emit much CO2 into the atmosphere before then) and that the following increase in the trend since 1950 was 100% due to humans, we get a human influence of only about 0.3 inches per decade, or 1 inch every 30 years.

    If an anthropogenic signal cannot be conspicuously connected to sea level rise (as scientists have noted*), then the greatest perceived existential threat promulgated by advocates of dangerous man-made global warming will no longer be regarded as even worth considering (except by the Guardian).

    *4 New Papers: Anthropogenic Signal Not Detectable in Sea Level Rise
    http://notrickszone.com/2016/08/01/all-natural-four-new-scientific-publications-show-no-detectable-sea-level-rise-signal/

  15. Hi all. I wasn’t sure where else to post this question. On the old site, if I left a comment or reply, I was prompted as to whether or not I wanted email updates on new comments etc….

    Since moving to this site, I see no option for this after leaving a reply or comment. What have I done wrong?

    Thanks

    Chuck

  16. Hello Anthony,

    This may be the first time I have disagreed with you since Watts Up With That started, but the “ship of fools” comment was rather dismissive of what could be useful data collection.

    When I want to stimulate my brain, I go to your website. Often the responses to articles posted are more stimulating than the articles themselves. I attribute that to the scientific literacy of most of your audience.

    Thanks for providing a commons for unfettered debate, rather than the propaganda of most sites

    Ron

  17. I have some time to update the main content here; there are a number of things to update. One thing we never figured out at WordPress is why I could underline text but nearly everyone else could not. So, we need to experiment.

    If you’re curious, please reply to this and paste in these HTML lines:

    This is <b>bold</b>ed.
    This is <i>italic</i>ized.
    This is <u>underline</u>ed.
    This is <b><i><u>everything</b></i></u>.

    If you’re one of the folks completely mystified over this puzzle, feel free to create a top-level comment (text entry box is before the first comment). We saw some things where top level comments and replies were handled differently, and I expect to see that at Pressable.

    BTW, what I get:

    This is bolded.
    This is italicized.
    This is underlineed.
    This is everything.

    While I’m here, strong should work like bold.

  18. Now here is where my memory gets fuzzy, I think I picked the opening post of a random thread (50% confidence level, it might have been a reply to another comment), in which, about half-way down the page the author proclaimed (this is from memory, so may not be exact),

    “We know Global Warming is happening, all the models say so, but we’re not seeing it in the records. So clearly, the records must be wrong. (italics mine, if they show up). But we have somebody working on that.”!!! (Exclamations mine)

    .

    Imagine that for a second, he’s admitting there is no Global Warming in the data, so he has assigned people to set about changing the data!!!

  19. “…we can hardly afford to double the carbon footprint that the USA and the EU already generate.

    “We hope that this model proves to be useful for those seeking to intervene in efforts to avoid producing Western levels of environmental degradation [affluence] in these countries,” the authors conclude.
    Just in case any of you doubted Walter Sobchak’s interpretation of the article.

  20. Reality Check: “Conventional Crude” peaked in 2005
    See: ExxonMobil World Energy Outlook 2018 A View to 2040
    ExxonMobil clearly shows conventional crude oil peaked in 2005 and has declined since then. Adding in Deepwater and Oil sands still shows declining production. ExxonMobil has to appeal to Tight Oil to show liquids growth, with that combination flattening out by 2040.
    Growth prospects for conventional through tight oil appear so poor that Shell Oil and TOTAL have strategically shifted their major effort out of oil into natural gas. See:
    Liquids Supply ExxonMobil 2018
    http://instituteforenergyresearch.org/wp-content/uploads/2018/03/MARY5.png

    http://cdn.exxonmobil.com/~/media/global/charts/energy-outlook/2018/2018_supply_liquids-demand-by-sector.png?as=1

  21. Once again, the insane image linking has returned.
    Pictures are once again appearing then disappearing.
    Will it link an image or not? Who knows?
    The added lunacy includes all my images being displayed together on Refresh, yet no other post.
    Then sometimes all my images load but later only some of them, seemingly chosen at random.
    Can anyone explain any of this?

  22. Bold commands do not seem to work on the new site – at least not like the old site worked.

    Trying use of characters: BOLD

  23. Test for putting a photo from my PC up using the “pre” formatting. (The photo will be “inserted” into Excel.)

    
    
  24. src=”https://www.youtube.com/embed/jxtUBJgk_vY? version=3&rel=1&fs=1&autohide=2&showsearch=0&showinfo=1&iv_load_policy=1&wmode=transparent”

  25. This is brilliant:

    https://www.theguardian.com/commentisfree/2018/aug/02/bbc-climate-change-deniers-balance

    “I won’t go on the BBC if it supplies climate change deniers as ‘balance’”
    by Rupert Read

    Here we have Rupert Read, who teaches philosophy at the University of East Anglia and chairs the Green House think-tank, explaining why he refused an invitation to discuss climate change on the BBC because it was with a so-called “denier.”

    The big joke here is that after refusing to go on air to put his point of view he is now making a formal complaint to the BBC “because the BBC cannot defend the practice of allowing a climate change denier to speak unopposed.”

    This is the level of stupidity of the man-made climate hysterics.
    This is their level of debating skills.
    Still, what can we expect from the University of East Anglia?

  26. ABCDEFGHIJKLMNOPQRSTUVWXYZ
    abcdefghijklmnopqrstuvwxyz
    123456789

    ABCDEFGHIJKLMNOPQRSTUVWXYZ
    abcdefghijklmnopqrstuvwxyz

  27. As climatologist Roy Spencer has explained, the climate models used to arrive at alarming values of equilibrium climate sensitivity don’t do so in the way Lord Monckton describes.

  28. Google’s Empire of Censorship Marches On

    More than 1,000 Google employees protest against plan for censored Chinese search engine.

    The Google staff have signed a letter calling on executives to review ethics and transparency and protesting against the company’s secretive plan to build a search engine that would comply with Chinese censorship. The letter’s contents were confirmed by a Google employee who helped organize it but wished to stay anonymous. It calls on executives to review the company’s ethics and transparency; says employees lack the information required “to make ethically informed decisions about our work”; and complains that most employees only found out through leaks and media reports about the project, nicknamed Dragonfly. “We urgently need more transparency, a seat at the table and a commitment to clear and open processes: Google employees need to know what we’re building,” says the document.

    Google engineers are working on software that would block certain search terms and leave out content blacklisted by the Chinese government, so the company can re-enter the Chinese market. Google’s chief executive Sundar Pichai told a company-wide meeting that providing more services in the world’s most populous country fits with Google’s global mission. (and I bet you did not know that Google even had a “Global Mission”)

    This is the first time the project has been mentioned by any Google executive since details about it were leaked.

    Three former employees told Reuters that current leadership might think that offering limited search results in China is better than providing no information at all. The same rationale led Google to enter China in 2006. It left in 2010 over an escalating dispute with regulators that was capped by what security researchers identified as state-sponsored cyber attacks against Google and other large US firms. One former employee said they doubt the Chinese government will welcome back Google.

    The Chinese human rights community said Google’s acquiescence to China’s censorship would be a “dark day for internet freedom.”

  29. How to make Quick Links for your Essay in MS Word:

    This method requires MS Word. It results in a new document (automatically created) which contains a simple list of all the hypertext links from your essay.
    (see the end of The Fight Against Global Greening — Part 4.)

    1. Open the Word document from which you want to copy the hyperlinks, and press Alt + F11 to open the Microsoft Visual Basic for Applications window.

    2. Click Insert > Module, and copy the following VBA code into the Window.

    Sub HyperlinksExtract()
    'Updateby20140214
    Dim oLink As Hyperlink
    Dim docCurrent As Document 'current document (the essay)
    Dim docNew As Document 'new document that will hold the list of links
    Set docCurrent = ActiveDocument
    Set docNew = Documents.Add
    'Copy each hyperlink, with its display text, into the new document, one per paragraph
    For Each oLink In docCurrent.Hyperlinks
        oLink.Range.Copy
        docNew.Activate
        Selection.Paste
        Selection.TypeParagraph
    Next

    Set docNew = Nothing
    Set docCurrent = Nothing
    End Sub

    3. Click Run > Run Sub/UserForm (or press F5) to run the VBA code. All the hyperlinks are then copied to a new document, which you can save later.

    ***************

    Notes:
    1. This VBA code only works when the hyperlinks are attached to text in Word; if there are pictures with hyperlinks, this VBA code cannot handle them.
    2. Using this will train you to make human-readable links: for instance, attaching the hyperlink to “the NY Times article” rather than “here”.
    3. The links can be copied from the newly created word document into your Word copy of your essay. I place them at the end in a section called Quick Links. If any of the links don’t read right or communicate clearly, you can edit them in the Quick Links to be more readable, such as “The April 26th NY Times article”.

  30. The new server system does not allow BLOCKQUOTES in comments. The following should appear as a blockquote but does not.

    Epilogue:

    Thanks for reading.

  31. Year   Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Dec
    1880   -30  -18  -11  -20  -12  -23  -21  -10  -16  -24  -20  -23
    2018    78 
    
    1880   -29  -18  -11  -20  -12  -23  -21   -9  -16  -23  -20  -23 
    2018    78   78
    
    1880   -29  -18  -12  -20  -12  -25  -21  -10  -17  -25  -20  -21
    2018    77   79   89
    
    1880   -28  -18  -12  -20  -12  -25  -22  -10  -18  -25  -20  -21
    2018    75   80   88   86
     
    1880   -29  -18  -11  -20  -12  -23  -21   -9  -16  -23  -20  -23
    2018    77   80   90   85   82
     
    1880   -30  -18  -11  -20  -12  -23  -21   -9  -16  -24  -20  -23
    2018    78   81   91   87   83   77 
    
    1880   -29  -18  -11  -19  -11  -23  -20   -9  -15  -23  -20  -22
    2018    77   81   91   87   82   76   78
    
                     Number of changes made in 2018
    Year   Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Dec
    1880     4    0    2    1    1    2    2    2    4    4    0    3
    2018     5    3    3    2    1    1    
  32. Attempt monospacing:

                                                   Total
    Absorbed from:  Surface L.Atm  U.Atm  Space    Absorbed
    
    Absorbed by:
    Surface          0.0000 1.0500 0.1500 1.0000 || 2.2000
    Lower Atmosphere 1.6500 0.0000 0.4500 0.0000 || 2.1000
    Upper Atmosphere 0.4125 0.7875 0.0000 0.0000 || 1.2000
    Space            0.1375 0.2625 0.6000 0.0000 || 1.0000
    ------------------------------------------------
    Total Emitted:   2.2000 2.1000 1.2000 1.0000
    

    Done

  33. [base64-encoded image data pasted as text; the attempted inline image does not display]

  34. Ex-IPCC chief Rajendra Pachauri will stand trial on sexual harassment charges.

    A Delhi court decided there is enough evidence to charge Pachauri with harassing a female colleague.

    There is prima facie evidence to charge Rajendra Pachauri, 78, with sexual harassment and two offences of intending to outrage the modesty of a woman. Pachauri, who was head of the UN Intergovernmental Panel on Climate Change (IPCC) when it was awarded the Nobel prize in 2007, denies any wrongdoing. Pachauri resigned from the IPCC in 2015 when the complaint against him was registered.

    The woman told police Pachauri had flooded her with offensive messages, emails and texts and made several “carnal and perverted” advances over the 16 months they worked together at the Energy and Resources Institute (Teri), a Delhi-based energy and environment research centre Pachauri led for over 30 years.

    An investigation into the complaints questioned more than 50 employees and concluded the woman’s claims were valid. Pachauri claimed text messages and emails submitted by the woman to police had been tampered with by unknown cyber criminals, but police last year found no evidence of tampering.

    The complainant, who was 29 at the time of the alleged offences, said she was pleased the case would proceed to trial after so long.

    Pachauri’s lawyer, Ashish Dixit, said the court had dropped four other charges, including stalking and intimidation: “The majority of the charges have been dropped by the court on its own, so it’s a big step forward,” Dixit said.

  35. This link is more specific about “acrylic polymers” being used by artists such as Andy Warhol, David Hockney, and Mark Rothko. This leaves me the task of finding out more about this. I do remember an early batch of acrylic paint by Talens, also called “polymers”, that they stopped producing. So I guess there is not just one kind of “acrylic”. The problem being, of course, that the producers keep their secrets.

  36. Ontario government to scrap Green Energy Act

    The Green Energy Act aimed to bolster the province’s renewable energy industry. It will be scrapped in spring 2019. The Green Energy Act resulted in an increase in electricity costs and saw the province overpay for power it did not need.

    Infrastructure Minister Monte McNaughton said repealing the law will ensure that municipalities regain planning authority over renewable projects, something that was removed under the act. Future renewable energy projects must first demonstrate need for the electricity they generate before being granted approval.

  37. Having read this I believed it was very enlightening.
    I appreciate you taking the time and energy to put this short article together.

    I once again find myself spending a lot of time both reading and leaving comments.
    But so what, it was still worth it!

  38. This is the data from 11/11/2017.

    Samples/day: 288 72 36 24 12 6 4 2 (Tmax+Tmin)/2
    Tmean ( Deg C) -3.3 -3.2 -3.4 -3.4 -3.8 -4.1 -4.0 -4.0 -4.7

    Test

  39. This is the data from 11/11/2017.

    Samples/day Tmean (C)
    288 -3.3
    72 -3.2
    36 -3.4
    24 -3.4
    12 -3.8
    6 -4.1
    4 -4.0
    2 -4.0
    Tmax+Tmin)/2 -4.7

    Test

  40. Try again. Testing how to format a table.

    Samples/day Tmean (C)
    288 -3.3
    72 -3.2
    36 -3.4
    24 -3.4
    12 -3.8
    6 -4.1
    4 -4.0
    2 -4.0
    Tmax+Tmin)/2 -4.7

    End of table

  41. I did this once before – lost the recipe.

    Samples/day Tmean (C)
    288 -3.3
    72 -3.2
    36 -3.4
    24 -3.4
    12 -3.8
    6 -4.1
    4 -4.0
    2 -4.0
    Tmax+Tmin)/2 -4.7

    End of the table.

  42. Samples/day Tmean (C)
    288 -3.3
    72 -3.2
    36 -3.4
    24 -3.4
    12 -3.8
    6 -4.1
    4 -4.0
    2 -4.0
    (Tmax+Tmin)/2 -4.7

    [Good to see you using the Test page. Try the “pre” “/ pre” unformatted, column-like text style for tables. .mod]

  43. 0 45 90 135 180 225 270 315 range
    6 16.3 16.2 16.2 16.3 16.4 16.4 16.3 16.3 0.1
    4 16.1 16.1 16.2 16.4 16.5 16.5 16.4 16.2 0.4
    2 15.3 15.4 16.1 16.7 17.0 17.1 16.8 16.2 1.8

  44. _____0___45___90__135__180__225__270__315_range
    6 16.3 16.2 16.2 16.3 16.4 16.4 16.3 16.3 0.1
    4 16.1 16.1 16.2 16.4 16.5 16.5 16.4 16.2 0.4
    2 15.3 15.4 16.1 16.7 17.0 17.1 16.8 16.2 1.8

    [USE “pre” and “/pre” (within html brackets) to get text in proper column alignment. .mod]

    _____0___45___90__135__180__225__270__315_range
    6 16.3 16.2 16.2 16.3 16.4 16.4 16.3 16.3   0.1
    4 16.1 16.1 16.2 16.4 16.5 16.5 16.4 16.2   0.4
    2 15.3 15.4 16.1 16.7 17.0 17.1 16.8 16.2   1.8
    
    
  45. Thanks for the oven/freezer joke pointing out the problem with using averages. Another favorite quote in that regard:

    Beware of averages. The average person has one breast and one testicle.
                                                                                                            Dixie Lee Ray

  46. On the CRU web site page – https://crudata.uea.ac.uk/cru/data/temperature/crutem4/landstations.htm – under the heading:

    Land Stations used by the Climatic Research Unit within CRUTEM4

    there is a link to the station files (crutem4_asof020611_stns_used_hdr.dat)

    In it you will find Apto Uto (Station No. 800890)

    Below the link it says:

    The file gives the locations and names of the stations used at some time (i.e. in the gridding that is used to produce CRUTEM4) during the period from 1850 to 2010. All these stations have sufficient data to calculate 30-year averages for 1961-90 as defined in Jones et al. (2012). In the file there are five pieces of information

    John McLean said in the paper:

    When constructing the CRUTEM4 dataset the CRU adopts a threshold for outliers of five standard deviations from the mean temperature and although calculating the long-term average temperatures from data over the 30-year period from 1961 to 1990 the standard deviations used for CRUTEM4 are calculated over a minimum of 15 years of data over the 50-year period from 1941 to 1990.
    And

    The analysis used in this section differs from the approach used to create the CRUTEM4 dataset but as noted in the previous chapter, these monthly mean temperatures were included when both the long-term average temperatures and standard deviations were calculated…

    The charge that bad stations such as Apto Uto are not in use is invalid in this context because, though the exclusion of outliers is explicitly suggested, the inclusion of this station is implicitly noted – and listed – in the calculation of the means!

    Perhaps “suggested” is the word we are all struggling with; I’m certainly finding it hard to see past the doublespeak of the CRU!

  48. On the CRU web site page – https://crudata.uea.ac.uk/cru/data/temperature/crutem4/landstations.htm – under the heading:

    Land Stations used by the Climatic Research Unit within CRUTEM4

    there is a link to the station files (crutem4_asof020611_stns_used_hdr.dat)

    In it you will find Apto Uto (Station No. 800890)

    Below the link it says:

    The file gives the locations and names of the stations used at some time (i.e. in the gridding that is used to produce CRUTEM4) during the period from 1850 to 2010. All these stations have sufficient data to calculate 30-year averages for 1961-90 as defined in Jones et al. (2012). In the file there are five pieces of information

  49. John McLean said in the paper:

    When constructing the CRUTEM4 dataset the CRU adopts a threshold for outliers of five standard deviations from the mean temperature and although calculating the long-term average temperatures from data over the 30-year period from 1961 to 1990 the standard deviations used for CRUTEM4 are calculated over a minimum of 15 years of data over the 50-year period from 1941 to 1990.
    And

    The analysis used in this section differs from the approach used to create the CRUTEM4 dataset but as noted in the previous chapter, these monthly mean temperatures were included when both the long-term average temperatures and standard deviations were calculated…

    The charge that bad stations such as Apto Uto are not in use is invalid in this context because, though the exclusion of outliers is explicitly suggested, the inclusion of this station is implicitly noted – and listed – in the calculation of the means!

    Perhaps “suggested” is the word we are all struggling with; I’m certainly finding it hard to see past the doublespeak of the CRU!

  50. Steven Mosher – works with BEST

    “No open data. no open code. no science.”
    https://wattsupwiththat.com/2018/10/07/bombshell-audit-of-global-warming-data-finds-it-riddled-with-errors/#comment-2483888

    “i check his apto uto station. its not used.”
    https://wattsupwiththat.com/2018/10/07/bombshell-audit-of-global-warming-data-finds-it-riddled-with-errors/#comment-2483949

    Here is a longer and more complex rebuttal:
    https://wattsupwiththat.com/2018/10/07/bombshell-audit-of-global-warming-data-finds-it-riddled-with-errors/#comment-2483908

    Another comment:
    https://wattsupwiththat.com/2018/10/07/bombshell-audit-of-global-warming-data-finds-it-riddled-with-errors/#comment-2483923

    “Poor guy. 1 check and his Phd is toast. now some of you will pay for this report. But I wont because he failed the simple requirement of posting his data and code., And more importantly he points to data. THAT CRU DOESNT USE!! For fucks sake skeptics.

    “CRU requires data in the period of 1950-1980. that is HOW the calculate an anomaly. and look. in 30 seconds I checked ONE one his claims. None of you checked. you spent money to get something that FIT YOUR WORLD VIEW. you could have checked. but no. gullible gullible gullible.”

    Nick Stokes (retired, was a Principal Research Scientist with CSIRO)

    First comment:
    https://wattsupwiththat.com/2018/10/07/bombshell-audit-of-global-warming-data-finds-it-riddled-with-errors/#comment-2482061

    “OK. This is no BOMBSHELL. These are errors in the raw data files as supplied by the sources named. The MO publishes these unaltered, as they should. But they perform quality control before using them. You can find such a file of data as used here. I can’t find a more recent one, but this will do. It shows, for example
    1. Data from Apto Uto was not used after 1970. So the 1978 errors don’t appear.
    2. Paltinis, Romania, isn’t on that list, but seems to have been a more recently added station.
    3. I can’t find Golden Rock, either in older or current station listings.”

    Another comment:
    https://wattsupwiththat.com/2018/10/07/bombshell-audit-of-global-warming-data-finds-it-riddled-with-errors/#comment-2484961

    Has it ever been used?

    “Well, that seems to be a question that John McLean, PhD, did not bother to investigate, nor his supervisor (nor any of his supporters here). But this 2011 post-QC data listing shows the station had its data truncated after 1970. And then, as Steven says, for use in a global anomaly calculation as in CRUTEM 4, the entire station failed to qualify because of lack of data in the anomaly base period. That is not exactly a QC decision, but doubly disqualifies it from HADCRUT 4.”

  51. Steven Mosher – works with BEST

    “No open data. no open code. no science.”

    “i check his apto uto station. its not used.”

    Here is a longer and more complex rebuttal:
    https://wattsupwiththat.com/2018/10/07/bombshell-audit-of-global-warming-data-finds-it-riddled-with-errors/#comment-2483908

    Another comment:

    “Poor guy. 1 check and his Phd is toast. now some of you will pay for this report. But I wont because he failed the simple requirement of posting his data and code., And more importantly he points to data. THAT CRU DOESNT USE!! For fucks sake skeptics.

    “CRU requires data in the period of 1950-1980. that is HOW the calculate an anomaly. and look. in 30 seconds I checked ONE one his claims. None of you checked. you spent money to get something that FIT YOUR WORLD VIEW. you could have checked. but no. gullible gullible gullible.”

    Nick Stokes (retired, was a Principal Research Scientist with CSIRO)

    First comment:

    “OK. This is no BOMBSHELL. These are errors in the raw data files as supplied by the sources named. The MO publishes these unaltered, as they should. But they perform quality control before using them. You can find such a file of data as used here. I can’t find a more recent one, but this will do. It shows, for example
    1. Data from Apto Uto was not used after 1970. So the 1978 errors don’t appear.
    2. Paltinis, Romania, isn’t on that list, but seems to have been a more recently added station.
    3. I can’t find Golden Rock, either in older or current station listings.”

    Another comment:

    Has it ever been used?

    “Well, that seems to be a question that John McLean, PhD, did not bother to investigate, nor his supervisor (nor any of his supporters here). But this 2011 post-QC data listing shows the station had its data truncated after 1970. And then, as Steven says, for use in a global anomaly calculation as in CRUTEM 4, the entire station failed to qualify because of lack of data in the anomaly base period. That is not exactly a QC decision, but doubly disqualifies it from HADCRUT 4.”

  52. This is the time to be honest with ourselves… Just a few days ago the oh-so-capable Kip Hansen wrote about those curious anomalies, https://wattsupwiththat.com/2018/09/25/the-trick-of-anomalous-temperature-anomalies/ A very good post, and very true. Now, let’s drag forward what we learned from there: A thermometer marked on every degree can be read only to that marking, anything in between those markings is not a significant digit, I don’t care what the recorder writes down. If I’m using a Fahrenheit thermometer in Phoenix Arizona, and I read 113°F that has three significant digits, but I argue that’s spurious, because if I’m using a Centigrade thermometer, that same reading is 45°C with only two significant digits.
    There do exist liquid-in-glass (LIG) thermometers marked in tenths of a degree, but they are useful only for a relatively tight range of readings, which atmospheric temperature is not. I haven’t checked into it, but I would guestimate that a LIG thermometer marked in tenths, with enough range to read all of the possible atmospheric temperatures at a given site would be several feet long, probably taller than the average site observer. So we can state right now that any temperature recorded from a LIG thermometer is only accurate to two significant digits.
    What do I mean by Significant Digits (several webpages called them Significant Factors, same thing)? The first reference page that popped up gives The Rules,
    • Rule #1 – Non-zero digits are always significant
    • Rule #2 – Any zeros between two significant digits are significant
    • Rule #3 – A final zero or trailing zeros in the decimal portion ONLY are significant.
    In Kip’s example he used 72. That has two significant digits.
    Significant digits through operations: “When quantities are being added or subtracted, the number of decimal places (not significant digits) in the answer should be the same as the least number of decimal places in any of the numbers being added or subtracted.”, so when adding together two temperatures, each with two significant digits, none to the right of the decimal place, and the sum is >100, you retain no significant digits to the right of the decimal place, but the last significant digit remains the number just to the left of the decimal.
    “In a calculation involving multiplication, division, trigonometric functions, etc., the number of significant digits in an answer should equal the least number of significant digits in any one of the numbers being multiplied, divided etc.” Secondly, “Exact numbers, such as the number of people in a room, have an infinite number of significant figures.” So now I want to do an average of a whole stack of temperatures, add together all the temperatures, retaining the significant digit at the first number to the left of the decimal place, then divide by the exact number of temperatures, and the result will still have at most 2 significant digits, unless I’m in the range -10° < T < 10° (this is why I would prefer to record these things in Kelvin or Rankine, I would get the same number of significant digits for all my atmospheric temperature readings).
    What about numbers of mixed origin? What if I am averaging a temperature taken each day of the year, and I start with a LIG thermometer, and half-way through I switch to an electronic hygrometer with a digital readout, what am I really measuring, what am I reading, and what do I really know (thank you, Dr Tim Ball)? Let’s take the first one (actually the first 3 handheld hygrometers) I find on-line, displays the temperature with a decimal point, one digit to the right of the decimal point, again, what am I reading? Go the specification datasheet, however, and it declares the accuracy to be ±0.5°C. You have no more significant digits than your LIG thermometer, because it may well read 22.2°C, but that temperature could be anywhere from 21.7°C to 22.7°C. You may want to record that last *.2, but it is not a significant digit. “The accepted convention is that only one uncertain digit is to be reported for a measurement.” The correct way to record the reading from that thermometer, even though the display says 22.2, is to write 22 ± 0.5. Your gee whiz high tech handheld hygrometer has not improved your accuracy any. (In fact, after nearly an hour of searching, I could not find any digital thermometer suitable for measuring ambient air with an accuracy finer than ±0.3°C.) So, for significant digits when taking an average, see above. Bottom line, there are NO significant digits to the right of the decimal place in weather! (I started this paragraph to make the point that even if the newer instrument could record temperatures to four figures to the right of the decimal, when you averaged it with a number from a LIG thermometer you still would have only the same number of significant digits as the least accurate of your numbers; i.e., two significant digits of the LIG thermometer. But, no need, the newer instruments, in reality are, no more accurate than the old.)
    You know what this does to an anomaly, right? You can see this coming? Take your 30 year average baseline, with the last significant digit just to the left of the decimal point, and read the current year with the last significant digit just to the left of the decimal point, and subtract one from the other, what do you get? You get an integer. A number with no digits at all to the right of the decimal place.
    And yet, after all that, the opportunists want to chortle about THIS: https://wattsupwiththat.com/2018/10/03/uah-globally-the-coolest-september-in-the-last-10-years/ Well, let’s take a random sample (OK, this is the first thing that popped up on a Duck-Duck-Go search): a table of Coldest/Warmest Septembers for three (for our purposes, random) locations, which will make a good example.

    Top 20 Coldest/Warmest Septembers in Southeast Lower Michigan

    Rank  Detroit Area*           Flint Bishop**          Saginaw Area***
          Coldest     Warmest     Coldest     Warmest     Coldest     Warmest
          Temp  Year  Temp  Year  Temp  Year  Temp  Year  Temp  Year  Temp  Year   (Temp in °F)
       1  57.4  1918  72.2  1881  55.4  1924  69.2  1933  54.9  1918  69.0  1931
       2  58.6  1879  69.8  1931  56.3  1993  68.1  1931  56.4  1924  68.0  1933
       3  59.1  1975  69.5  1921  57.3  1975  68.0  2015  56.7  1993  67.6  2015
       4  59.1  1876  69.3  2015  57.4  1966  66.6  2002  56.8  1949  66.8  1921
       5  59.2  1883  68.9  2018  57.4  1949  66.6  1934  56.9  1956  66.2  1961
       6  59.4  1924  68.9  2002  57.5  1956  66.2  1921  57.0  1943  66.2  1927
       7  59.6  1896  68.8  1961  57.7  1981  66.1  1961  57.3  1975  65.9  2005
       8  59.6  1974  68.6  1908  57.8  1962  65.9  1927  57.4  1981  65.9  1998
       9  59.6  1949  68.5  1933  58.0  1967  65.6  1939  58.5  1991  65.6  2017
      10  59.6  1890  68.4  2005  58.5  1995  65.4  1978  58.5  1962  65.6  2016
      11  59.7  1899  68.4  1906  58.6  1928  65.3  1998  58.7  1935  65.5  1971
      12  59.9  1875  68.2  2016  58.7  2006  65.1  2005  58.8  1917  65.3  1930
      13  60.0  1888  68.0  1998  58.8  1963  65.1  1930  58.9  1951  65.2  1936
      14  60.1  1887  67.9  1891  58.9  1974  65.1  1925  58.9  1950  65.1  2018
      15  60.3  1967  67.9  1884  59.1  1957  65.0  1936  59.0  1938  64.7  1948
      16  60.4  1956  67.5  1978  59.3  2001  64.8  2016  59.0  1928  64.6  2004
      17  60.7  1928  67.5  1941  59.3  1937  64.8  2018  59.1  2006  64.5  2002
      18  60.8  1981  67.5  1898  59.5  1943  64.8  1983  59.3  1984  64.4  1941
      19  61.0  1993  67.4  2004  59.6  1991  64.8  1971  59.3  2000  64.3  1968
      20  61.2  1984  67.2  1927  59.6  1989  64.5  1941  59.3  1992  64.0  2007
    * Detroit Area temperature records date back to January 1874.

    ** Flint Bishop temperature records date back to January 1921.

    *** Saginaw Area temperature records date back to January 1912.

    I have copied/pasted the entire table because it was easiest that way. I can make my point, and save myself quite a few mouse-clicks, by taking just the Detroit Area readings and using Excel’s nested ROUND(CONVERT()) functions, in one swell foop, to show the temperature data with the appropriate significant digits (a script equivalent of that conversion follows the table below).

    Detroit Area*
    Coldest      Warmest
    Temp  Year   Temp  Year   (Temp in °C)
      14  1918     22  1881
      15  1879     21  1931
      15  1975     21  1921
      15  1876     21  2015
      15  1883     21  2018
      15  1924     21  2002
      15  1896     20  1961
      15  1974     20  1908
      15  1949     20  1933
      15  1890     20  2005
      15  1899     20  1906
      16  1875     20  2016
      16  1888     20  1998
      16  1887     20  1891
      16  1967     20  1884
      16  1956     20  1978
      16  1928     20  1941
      16  1981     20  1898
      16  1993     20  2004
      16  1984     20  1927
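    For anyone working outside Excel, here is a minimal Python sketch of the same ROUND(CONVERT(temp, "F", "C"), 0) step used to build the Detroit column above; the sample values are just the first five “Coldest” rows of the published table, not new data:

        # Convert °F to °C and keep no digits to the right of the decimal,
        # mirroring Excel's ROUND(CONVERT(temp, "F", "C"), 0).
        # (Python's round() sends exact halves to the nearest even digit,
        # unlike Excel's ROUND; none of these sample values hit that case.)
        def f_to_c_whole_degrees(temp_f: float) -> int:
            return round((temp_f - 32.0) * 5.0 / 9.0)

        detroit_coldest = [57.4, 58.6, 59.1, 59.1, 59.2]   # first five "Coldest" rows above
        print([f_to_c_whole_degrees(t) for t in detroit_coldest])   # [14, 15, 15, 15, 15]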

    The sub-heading on the linked article is “UAH Global Temperature Update for September, 2018: +0.14 deg. C”, without giving any absolute temperature, but if it were talking about a temperature reading taken in the Detroit Area, we could guess that the anomaly is relative to something in the vicinity of 15°C, and then that 0.14°C disappears in the noise and is indistinguishable from ten other Septembers that also show up as 15°C when shown with the proper number of significant digits. In fact, given the significant digits discussed above, the temperature anomaly, calculated to the best accuracy available from the instrumentation, becomes 0. ZERO. Zilch. Nada. Nothing. Nothing to write home about. Nothing to make a post on ANY blog about!
    Thus, you can see why I have sprained eyeball muscles: every time the “warmunists” call a press conference to declare Hottest Year EVAH!™, my eyes do a spontaneous eyeroll, so hard I believe I have incurred permanent damage, and it’s uncontrollable, it’s so bad! Their Hottest Year EVAH!™ is indistinguishable from at least ten others just like it, as far as what the thermometers can really measure and what that database can really tell us.

  53. You might not be aware that Google has dumped its “Don’t be evil” slogan.
    If you watch this video, you will understand why….
    Google is not a search engine.
    Google is not even an advertising company with a search box attached.
    Google is watching every move you make in order to manipulate your view of the world and your actions within it.
    Google’s search returns are serving an agenda, and you are unaware of what that agenda is.
    Google hates competition, so its algorithm actively returns searches designed to attack its rivals, serve its commercial interests and further its political agenda — and all this while spying on its users.
    Don’t believe me? Check this out:

  54. OHC
    This Taillandier 2018 paper is about “the metrological verification of a biogeochemical observing system based on a fleet of BGC-Argo floats”, but the authors also gather and report on temperature data from the Argo floats and compare them with a ship-board temperature sensor (CTD) lowered to depth by a cable. “[L]ess than 1 year after the cruise” the ship-board sensor was checked and had drifted “0.00008 °C, which is 1 order of magnitude lower than the theoretical stability of the probe.” The standard deviation of the Argo ‘misfits’ is ≈0.02 °C (Table 2). The authors “ascribe misfits as instrumental calibration shifts rather than natural variability.”

    Taillandier 2018: 2.2.1 ”The BGC-Argo floats were equipped with factory-calibrated CTD modules (SBE41CPs).”

    2.2.2 ”During stations, seawater properties were sampled at 24 Hz with the [ship-board] CTD unit and transmitted on board through an electro-mechanical sea cable and slip-ring-equipped winch.”

    2.2.3 ”There were no independent samples (such as salinity bottles) or double probes in the [ship-board] CTD unit that would have allowed the assessment of the temperature and conductivity sensors’ stability. Thus, the quality of [the ship-board] CTD data relies on frequent factory calibrations operated on the sensors: a pre-cruise bath was performed in April 2015 (less than 1 month before the cruise), and a post-cruise bath performed in March 2016 (less than 1 year after the cruise). The static drift of the temperature sensor [of the ship-board CTD] between baths was 0.00008 °C, which is 1 order of magnitude lower than the theoretical stability of the probe.”

    2.2.3 ”Given the reproducibility of the processing method, the uncertainties of measurement provided by the [ship-board] CTD unit should have stayed within the accuracy of the sensors, which is 0.001 °C and 0.003 mS/cm out of lowered dynamic accuracy cases (such as in sharp temperature gradients).”

    2.2.3 ”The data collection of temperature and practical salinity profiles at every station is thus used as reference to assess the two other sensing systems: the TSG [A SeaCAT thermosalinograph (SBE21, serial no. 3146)] and the BGC-Argo floats. Systematic comparisons between the profiles from the CTD unit and the neighboring data were made at every cast.”

    2.2.3 ”Considering TSG data set, the median value of temperature and practical salinity over a time window of 1 h around the profile date was extracted from the 5 min resolution time series. The comparison with the surface value from profiles showed a spread distribution of misfits for temperature, with an average 0.009 °C, and a narrower distribution of misfits for practical salinity with an average of 0.007. Given the nominal accuracy expected by the TSG system and in absence of systematic marked shift in the comparison, no post-cruise adjustment was performed. The uncertainty of measurement in the TSG data set should have stayed under the 0.01 °C in temperature, and 0.01 in practical salinity.”

    2.2.3 ”Considering BGC-Argo floats, the comparison with [ship-board] CTD profiles was performed over the 750–1000 dbar layer, where water mass characteristics remained stable enough to ascribe misfits as instrumental calibration shifts rather than natural variability. The misfits between temperature measurements and practical salinity measurements at geopotential horizons were computed and median values provided for every BGC-Argo float. The median offsets are reported in Table 2. Their amplitudes remained within 0.01 °C in temperature or 0.01 in practical salinity except in two cases. A large temperature offset occurred for WMO 6901769.”

    The Oxygen concentration of seawater had to be calculated. 2.3.2 ”To process the results, the temperature measured from the [ship-board] CTD unit was preferred to the built-in temperature of the sensor.”

    https://uploads.disquscdn.com/images/efa801329b85b88a6b9f212f64865c02e8cffb159dbdfc6aae1771cbfa4eb1d7.jpg

    Taillandier, Vincent, et al. 2018 “Hydrography and biogeochemistry dedicated to the Mediterranean BGC-Argo network during a cruise with RV Tethys 2 in May 2015.” Earth System Science Data
    https://www.earth-syst-sci-data.net/10/627/2018/essd-10-627-2018.pdf

  56. But fortunately, anomalies are much more homogeneous. If it is warmer than usual, it tends to be warm high and low. – Nick Stokes

    This is the heart of the problem of attempting to calculate global temperature.

    Essentially two important differences are conflated and then glossed over.

    Spatial sampling is a three-dimensional problem; while anomalies may deal – nominally (per se!) – with altitude issues, they don’t deal with directionality, or more accurately with the symmetry of the two-dimensional temperature distribution.

    It is assumed that because anomalies are used, the temporal correlation between any two points will have the same spatial scale in any direction. However, spatial anisotropy in the coherence of climate variations has been well documented, and it is an established fact that the spatial scale of climate variables varies geographically and depends on the choice of directions (Chen et al. 2016).

    The point I am making here is completely uncontroversial and well known in the literature.

    What Nick and all climate data apologists are glossing over is that, despite the ubiquity of spatial averaging, the particular way it is applied is inappropriate because it assumes spatial coherence. But climate data has long been known to be incoherent across changing topography (Hendrick & Comer 1970).

    In layman’s terms (although I am a layman!), station records are aggregated over a grid box assuming that the fall-off, or change in correlation, between different stations is the same in every direction. So conventionally you would imagine a point on the map for your station and a circular (or square) area around it overlapping other stations or the grid-box border. In reality, however, this “areal” area is much more likely to be elongated, forming an ellipse or rectangle stretched in one direction – commonly, and topographically, north/south in Australia.
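    A toy Python sketch of that idea, with made-up decorrelation lengths (300 km east-west, 800 km north-south) chosen purely for illustration:

        import numpy as np

        # Toy anisotropic correlation model: correlation between two stations falls
        # off faster east-west than north-south, so the "area of influence" around a
        # station is an ellipse, not a circle. The length scales are hypothetical.
        def correlation(dx_km: float, dy_km: float,
                        L_east_west: float = 300.0, L_north_south: float = 800.0) -> float:
            scaled_distance = np.hypot(dx_km / L_east_west, dy_km / L_north_south)
            return float(np.exp(-scaled_distance))

        # Same 400 km separation, very different correlation depending on direction:
        print(correlation(400.0, 0.0))   # east-west neighbour:   ~0.26
        print(correlation(0.0, 400.0))   # north-south neighbour: ~0.61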

    But it is actually worse than this, because unless the landscape is completely flat, coherence will not be uniform. And that is an understatement, because to calculate correlation decay correctly, spatial variability actually has to be mapped in and from the real world.

    Unfortunately, directionality would be a very useful factor in the accurate determination of UHI effects, due to the dominant north/south sprawl of urban settlement. Coincidentally, all weather moves from west to east and associated fronts with their troughs and ridges typically align roughly north/south.

    The other consequence of areal averaging is that it is a case of the classical ecological fallacy, in that conclusions about individual sites are incorrectly assumed to have the same properties as the average of a group of sites. Simpson’s paradox – confusion between the group average and total average – is one of the four most common statistical ecological fallacies. If you have the patience, it is well worth making your own tiny dataset on paper and working through this paradox as it is mind blowing to apprehend!
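    If you would rather let a script do the paper work, here is a tiny made-up example of that aggregation reversal (hypothetical numbers, nothing to do with any real stations):

        import statistics

        # Each site warms by 1 degree between the two periods, yet the pooled average
        # appears to cool, because the mix of sites contributing readings changes.
        period_1 = {"coastal": [16, 16, 16, 16], "inland": [10]}
        period_2 = {"coastal": [17],             "inland": [11, 11, 11, 11]}

        for site in ("coastal", "inland"):
            print(site, statistics.mean(period_1[site]), "->", statistics.mean(period_2[site]))

        pooled_1 = statistics.mean([t for temps in period_1.values() for t in temps])
        pooled_2 = statistics.mean([t for temps in period_2.values() for t in temps])
        print("pooled:", pooled_1, "->", pooled_2)   # 14.8 -> 12.2, despite every site warming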

    What I believe this all means is that the temperature record is dominated by smearing generally, and by latitudinal (i.e., east/west) smearing particularly. And this means, for Australia and probably the US as well, that the UHI effect of north/south coastal urban sprawl is tainting the record.

    Either way, if real changes in climate are actually happening locally, then this local effect will be smeared into a global trend – by the current practice – despite or in lieu of any real global effect.

    So, yes I do think the globe has warmed since the LIA or at least the last glaciation but I don’t believe it can be detected in any of the global climate data products.

    Chen, D., et al., 2016: Satellite measurements reveal strong anisotropy in spatial coherence of climate variations over the Tibet Plateau. Sci. Rep., 6, 30304, doi:10.1038/srep30304.

    Director, H., and L. Bornn, 2015: Connecting point-level and gridded moments in the analysis of climate data. J. Climate, 28, 3496–3510, doi:10.1175/JCLI-D-14-00571.1.

    Hendrick, R. L., and G. H. Comer, 1970: Space variations of precipitation and implications for raingauge network design. J. Hydrol., 10, 151–163.

    Jones, P. D., T. J. Osborn, and K. R. Briffa, 1997a: Estimating sampling errors in large-scale temperature averages. J. Climate, 10, 2548–2568.

    Robinson, W., 1950: Ecological correlations and the behaviour of individuals. Amer. Sociol. Rev., 15, 351–357, doi:10.2307/2087176.
