Test

SMPTE color bars – Click for your own test pattern kit

This page is for posters to test comments prior to submitting them to WUWT. Your tests will be deleted in a while, though especially interesting tests, examples, hints, and cool stuff will remain for quite a while longer.

Some things that don’t seem to work any more, or perhaps never did, are kept in Ric Werme’s Guide to WUWT.

Formatting in comments

WordPress does not provide much documentation for the HTML formatting permitted in comments. There are only a few commands that are useful, and a few more that are pretty much useless.

A typical HTML formatting command has the general form of <name>text to be formatted</name>. A common mistake is to forget the end command. Until WordPress gets a preview function, we have to live with it.
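
For example, if you forget the end command, the formatting runs on to the end of your comment:

This is <b>bold text that never ends

Everything after the <b> will display in bold, because nothing ever turns it off.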

N.B. WordPress handles some formatting very differently than web browsers do. A post of mine shows these and less useful commands in action at WUWT.

N.B. You may notice that the underline command, <u>, is missing. WordPress seems to suppress it for almost all users, so I’m not including it here. Feel free to try it, but don’t expect it to work.

Name, sample, and result for each useful command:

Name: b (bold)
Sample: This is <b>bold</b> text
Result: This is bold text
The strong command also does bolding.

Name: i (italics)
Sample: This is <i>italicized</i> text
Result: This is italicized text
The em (emphasize) command also does italics.

Name: a (anchor)
Sample: See <a href=http://wermenh.com>My home page</a>
Result: See My home page
A URL by itself (with a space on either side) is often adequate in WordPress. It will make a link to that URL and display the URL, e.g. See http://wermenh.com.

Some sources on the web present anchor commands with other parameters beyond href, e.g. rel=nofollow. In general, use just href=url, and don’t forget the text to display to the reader.
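
You can also quote the URL, which is the more careful HTML form and appears to work just as well in WordPress comments:

See <a href="http://wermenh.com">My home page</a>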

Name: blockquote (indent text)
Sample:
My text
<blockquote>quoted text</blockquote>
More of my text
Result:
My text

    quoted text

More of my text

Quoted text can be many paragraphs long.

WordPress italicizes quoted text (and inside a blockquote, the <i> command switches back to normal text).

Name: strike
Sample: This is <strike>text with strike</strike>
Result: This is text with strike

Name: pre (“preformatted” – use for monospace display)
Sample: <pre>These lines are bracketed<br>with &lt;pre> and &lt;/pre></pre>
Result:
These lines are bracketed
with <pre> and </pre>
WordPress generally handles preformatted text correctly. Use it when you have a table or something else that will look best in monospace. Each space is displayed, something that <code> (next) doesn’t do.

Name: code (use for monospace display)
Sample: <code>WordPress handles this very differently</code>
Result: WordPress handles this very differently
See http://wattsupwiththat.com/resources/#comment-65319 to see what this really does.

YouTube videos

Using the URL for a YouTube video creates a link like any other URL. However, WordPress accepts the HTML for “embedded” videos. From the YouTube page after the video finishes, click on the “embed” button and it will suggest HTML like:

<iframe width="560" height="315"

        src="http://www.youtube.com/embed/yaBNjTtCxd4"

        frameborder="0" allowfullscreen>

</iframe>

WordPress will convert this into an internal square-bracket command, changing the URL and ignoring the dimensions. You can use this command yourself, and use its options to set the dimensions. WordPress converts the above into something like:

[youtube https://www.youtube.com/watch?v=yaBNjTtCxd4&w=640&h=480]

Use this form and change the w and h options to suit your interests.
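
For example, to embed the same video at a hypothetical 480 by 360 size:

[youtube https://www.youtube.com/watch?v=yaBNjTtCxd4&w=480&h=360]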

Images in comments

If WordPress thinks a URL refers to an image, it will display the image instead of creating a link to it. The following rules may be a bit excessive, but they should work:

  1. The URL must end with .jpg, .gif, or .png. (Maybe others.)
  2. The URL must be the only thing on the line.
  3. This means you don’t use <img>, which WordPress ignores and displays nothing.
  4. This means WordPress controls the image size.
  5. <iframe> doesn’t work either; it just displays a link to the image.
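
So a comment containing a line like this (the URL is hypothetical, just to show the shape):

http://example.com/photos/sunset.jpg

with nothing else on the line, should display the image itself.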

If you have an image whose URL doesn’t end with the right kind of suffix, there may be two options if the URL includes attributes, i.e. if it has a question mark followed by attribute=value pairs separated by ampersands.

Often the attributes just provide information to the server about the source of the URL. In that case, you may be able to just delete everything from the question mark to the end.

For some URLs, e.g. many from Facebook, the attributes provide lookup information to the server and can’t be deleted. Most servers don’t bother to check for unfamiliar attributes, so try appending “&xxx=foo.jpg”. This will give you a URL with one of the extensions WordPress will accept.
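
For example (a made-up URL, just to show the pattern):

https://example.com/image/lookup?id=12345&xxx=foo.jpg

The server should ignore the unfamiliar xxx attribute, while WordPress sees a URL ending in .jpg and displays the image.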

WordPress will usually scale images to fit the horizontal space available for text. One place it doesn’t is blockquoted text, where images seem to display full size and large images overwrite the right-side nav bar text.

Special characters in comments

Those of us who remember acceptance of ASCII-68 (a specification released in 1968) are often not clever enough to figure out all the nuances of today’s international character sets. Besides, most keyboards lack the keys for those characters, and that’s the real problem. Even if you use a non-ASCII but useful character like ° (as in 23°C) some optical character recognition software or cut and paste operation is likely to change it to 23oC or worse, 230C.

Nevertheless, there are very useful characters that are most reliably entered as HTML character entities:

Type this                              To get      Notes
&amp;                                  &           Ampersand
&lt;                                   <           Less than sign / left angle bracket
&bull;                                 •           Bullet
&deg;                                  °           Degree (use with C and F, but not K (kelvins))
&#8304; &#185; &#178; &#179; &#8308;   ⁰ ¹ ² ³ ⁴   Superscripts (use 8304, 185, 178-179, 8308-8313 for superscript digits 0-9)
&#8320; &#8321; &#8322; &#8323;        ₀ ₁ ₂ ₃     Subscripts (use 8320-8329 for subscript digits 0-9)
&pound;                                £           British pound
&ntilde;                               ñ           For La Niña & El Niño
&micro;                                µ           Mu, micro
&plusmn;                               ±           Plus or minus
&times;                                ×           Times
&divide;                               ÷           Divide
&ne;                                   ≠           Not equals
&nbsp;                                 (blank)     Like a space, with no special processing (i.e. no word wrapping or multiple-space discarding)
&gt;                                   >           Greater than sign / right angle bracket (generally not needed)
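
For example, typing

23&plusmn;0.5&deg;C, CO&#8322; at 400 ppm, 2 km&#178;

should display as: 23±0.5°C, CO₂ at 400 ppm, 2 km².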

Various operating systems and applications have mechanisms to let you directly enter character codes. For example, on Microsoft Windows, holding down ALT and typing 248 on the numeric keypad may generate the degree symbol. I may extend the table above to include these some day, but the character entity names are easier to remember, so I recommend them.

LaTeX markup

WordPress supports LaTeX. To use it, do something like:

$latex P = e\sigma AT^{4}$     (the Stefan-Boltzmann law)

$latex \mathscr{L}\{f(t)\}=F(s)$

to produce

P = e\sigma AT^{4}     (the Stefan-Boltzmann law)

\mathscr{L}\{f(t)\}=F(s)
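
WordPress’s LaTeX support also accepts options appended to the expression; as best I recall (test before relying on it), &s= sets the size, e.g.:

$latex E = mc^{2}&s=2$

should render the formula at a larger size.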

Linking to past comments

Each comment has a URL that links to the start of that comment. This is usually the best way to refer to a comment in a different post. The URL is “hidden” under the timestamp for that comment. While details vary with operating system and browser, the best way to copy it is to right click on the time stamp near the start of the comment, choose “Copy link location” from the pop-up menu, and paste it into the comment you’re writing. You should see something like http://wattsupwiththat.com/2013/07/15/central-park-in-ushcnv2-5-october-2012-magically-becomes-cooler-in-july-in-the-dust-bowl-years/#comment-1364445.

The “#<label>” at the end of the URL tells a browser where to start the page view. It reads the page from the Web, searches for the label, and starts the page view there. As noted above, WordPress will create a link for you; you don’t need to add an <a> command around it.
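
If you’d rather display friendlier text than the raw URL, the anchor command from the table above works here too:

<a href="http://wattsupwiththat.com/2013/07/15/central-park-in-ushcnv2-5-october-2012-magically-becomes-cooler-in-july-in-the-dust-bowl-years/#comment-1364445">this Central Park comment</a>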

One way to avoid the moderation queue

Several keywords doom your comment to the moderation queue. One word, “Anthony,” is caught so that people trying to send a note to Anthony will be intercepted and Anthony should see the message pretty quickly.

If you enter Anthony as An<u>th</u>ony, it appears to not be caught, so apparently the comparison uses the name with the HTML within it and sees a mismatch.
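
A related trick reported in the comments below is to break up the word with an empty command pair, e.g.:

Ant<b></b>hony

which displays as “Anthony” but apparently doesn’t match the keyword filter either.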


689 thoughts on “Test”

  1. I just had another thought about underlines.
    I think I discovered that I could get around the automatic spam trap by writing Anthony with an empty HTML command inside, e.g. Ant<b></b>hony .
    What happens when I try that with underline?
    Apologies in advance to the long-suffering mods, at least one of these comments may get caught by the spam trap.

    • I remember seeing this test pattern on TV late at night after the National Anthem and before the local station broadcast came on early in the morning while the biscuits, bacon and oatmeal were still cooking. The first show after a weather report was “Dialing For Dollars” and you had better know the count when your phone rang…. 1 up and 3 down… to get the cash.

      • Reply to Ric W ==> Thanks — I was fielding comments on an essay using an unfamiliar tablet, and wasn’t sure which and/or both were part of HTML5. I usually use the old ClimateAudit comment Greasemonkey tool, even though its formatting is funky these days, for the tags. Don’t suppose you could update that add-in?

  2. Hey, what happened to the old smiley face?? When I tried to post it, this appeared:

    I wonder if WordPress changed any others?
     ☹ ☻
    The old smiley was more subtle; less in-your-face. The new one is way too garish.
    If WP keeps that up, I’ll just have to use this lame replacement:
    🙂
    Or even worse:
    😉

  3. Source          Energy (J)          Normalized (E)
    Atmosphere:     1.45x10^22 J              1 J
    Ice:            1.36x10^25 J            935 J
    Oceans:         1.68x10^25 J          1,157 J
  4. In my previous post I used the example of the following over the next 100 years: 3 units of new energy goes to the oceans and 1 unit to the atmosphere – with all 4 units being equal in Joules. 1 unit raises the average temperature of the atmosphere by 4C or the average temperature of the oceans by 0.0003C. In this example the atmosphere warms by 4C and the oceans warm by 3 x 0.0003C, or 0.0009C. It is exactly the higher heat capacity you mention that allows the heat energy to be absorbed with less movement of temperature. At the detail level maybe the top 2 inches of water gets much hotter and this will then support the physics of the more complex mechanisms you mention. But the beauty of this approach (I think – and hope) is that it doesn’t really matter how the energy gets distributed in the water with its corresponding temperature effect. Determine the mass of the ocean water you want to see affected in this model and apply the energy to it to get the temperature you would expect.

  5. Is anyone using CA Assistant? I was using it before the migration — doesn’t work in current version and I can’t figure out why.

    • IIRC, I think it was written for Climate Audit and only accidentally worked here. It may be broken for good. The ItsAllText add-on may also be broken in newer Firefoxes.

      • Ric ==> CAsst had code that allowed it to function on Climate Etc, Climate Audit, WUWT and several others. The code was editable by the end-user to add additional sites using the standard WP format.
        Still works on Judith’s site.
        It is the shift to the new WP structure that has broken it.
        Any hot coders out there? CA Asst is editable in Firefox with GreaseMonkey.

  6. How come this FAQ doesn’t work for me?

    Subject: Linking to past comments

    Each comment has a URL that links to the start of that comment. ….. the best way to copy it is to right click on the time stamp near the start of the comment, choose “Copy link location” from the pop-up menu, and paste it into the comment

    Is it because the “time stamp” is located at the end of the comment (at lower right-hand corner)?

    Sam C

    • Things have changed; click on the link icon to the far right of your name to see the URL.

      I’ll update the main post in a bit.

  7. Testing “pre” in the new comment system

    									
    7/10/2012					4/18/2012				
    	High	TieH	Low	TieL		High	TieH	Low	TieL
    1998	7		0		1998	7		0	
    1999	3		0		1999	3		0	
    2000	3		0		2000	3		2	
    2001	4		1		2001	4		1	
    2002	3		2		2002	4		2	
    2003	1		0		2003	2		0	
    2004	0		1		2004	0		1	
    2005	2		1		2005	2		1	
    2006	0		0		2006	0		0	
    2007	8	2	0		2007	10		0	
    2008	4		0		2008	3		0	
    2009	2		0		2009	0		0	
    2010	8		0		2010	1		0	
    2011	2		0		2011	0		0	
    2012	1		0		2012	0		0	
    									
    	48	2	5	0		39	0	7	0
    
    • It worked.
      PS Those are the number of Columbus Ohio record highs and lows set for each of the years according to the list from 4/18/2012 compared with the list from 7/10/2012, a bit less than 3 months later.
      Notice how, somehow, 7 additional record highs were set in 2010.

    • Car deaths per citizen for some countries:

      Country    Fatalities     Population     Fatalities per million citizens
      US:            37 461         325 mill           115
      UK:             1 792           66 mill               26
      Germany: 3 214            83 mill              39
      Sweden:       263            10 mill             26
      France:     3 469          67 mill          51
      
    • Car deaths per citizen for some countries:

      Country    	Fatalities     	Population     	Fatalities per million citizens
      US:         	37 461       	325 mill         	115
      UK:          	1 792         	66 mill           	26
      Germany:   	3 214          	83 mill            	39
      Sweden:     	263          	10 mill           	26
      France:     	3 469        	67 mill          	51
      
  8. Who is deleting my posts?
    Why are most of my images deleted?
    Why are all of my YouTube videos being deleted?
    Who is doing all this?

    • I maintain this page; part of the task is to trim people’s tests when they are stale, and I’ve been greatly remiss about that this year. I’ve done a massive amount of cleanup in the last week or two, but I don’t think I did much to your posts before June 11.

      I’m catching up though! Next is to update the main post with current knowledge.

        • If I want to test posting an image,
          I will test an image worth posting.

          By the way, my questions were rhetorical. I was testing formatting code on images, text, and videos all of which kept disappearing then reappearing so I just typed out what I was thinking at that time.

  9. “… the gasoline you buy might trace its heritage to carbon dioxide pulled straight out of the sky… engineers … have demonstrated a scalable and cost-effective way to make deep cuts in the carbon footprint of transportation…”

    1. So their machine can recognize the difference between a CO2 molecule produced by anything transportation related, and all other CO2 molecules?
    2. By “cost-effective” I assume they mean they have a product that some willing buyer someplace is willing to pay them an amount that will be greater than the cost it takes them to produce, market and deliver that product? ‘Cuz iffen they don’t, it ain’t “cost-effective”.

    “…claim that realizing direct air capture on an impactful scale will cost roughly $94-$232 per ton of carbon dioxide captured…”

    3. So what’s that in $/gal of gasoline? (See 2. above WRT “cost-effective”.) Will it have the same BTU/gal as gasoline?
    So, yeah, other than that, Mrs. Lincoln, how did you like the play?

  10. [In walk the drones]

    “Today we celebrate the first glorious anniversary of the Information Purification Directives.

    [Apple’s hammer-thrower enters, pursued by storm troopers.]

    We have created for the first time in all history a garden of pure ideology, where each worker may bloom, secure from the pests of any contradictory true thoughts.

    Our Unification of Thoughts is more powerful a weapon than any fleet or army on earth.

    We are one people, with one will, one resolve, one cause.

    Our enemies shall talk themselves to death and we will bury them with their own confusion.

    [Hammer is thrown at the screen]

    We shall prevail!

    [Boom!]

  11. A little late on the discussion, but this is one of the worst articles written by Dr. Ball I have read in years.

    First let us start with where I take a huge exception:

    This fits the Mauna Loa trend very nicely, but the measurements and instrumentation used there are patented and controlled by the Keeling family, first the father and now the son.

    C.D. Keeling was the first to measure CO2 with an IR beam (NDIR), and he was smart enough to make himself an extremely accurate (gravimetric) device to calibrate any CO2 measuring device with extremely accurate calibration mixtures.
    The Scripps institute where Keeling worked later provided all calibration mixtures for all devices worldwide. Since 1995, calibration and intercalibration of CO2 mixtures and measurements worldwide are done by the central lab of the WMO.
    Ralph Keeling works at Scripps and has no influence at all on the calibration work of the WMO, nor on the measurements at Mauna Loa, which are done by NOAA under Pieter Tans.

    Although Scripps lost its control position, they still take their own (flask) samples at Mauna Loa and still have their own calibration mixtures, independent of NOAA. Both Scripps and NOAA measurements are within +/- 0.2 ppmv for the same moment of sampling. If NOAA were to manipulate the data, I am pretty sure Scripps/Keeling would catch them…

    Beyond that, there are about 70 “background” stations, managed by different organisations in different countries, measuring CO2 at places as uncontaminated as possible, from the South Pole to near the North Pole (Barrow). Besides seasonal changes, which are more pronounced in the NH, they all show the same trend: up at about half the rate of the yearly human injection, with the SH lagging the NH, which points to the main source of the increase being in the NH, where 90% of human emissions occur.

    Thus Dr. Ball, if you want to accuse somebody of manipulation, first have your facts right.

    Then:
    Where is the reflection of CO2 increase due to the dramatic ocean warming and temperature increase caused by El Nino?

    There is, if you look at the yearly rate of increase at Mauna Loa:

    http://www.ferdinand-engelbeen.be/klimaat/klim_img/dco2_em6.jpg

    The 1998 and 2015 El Niño’s give a clear increase in yearly CO2 increase in the atmosphere. The 1992 Pinatubo explosion shows a huge dip in CO2 increase.

    The reason is in part the ocean temperature in the tropics, but the dominant factor is (tropical) vegetation: decaying vegetation due to (too) high temperatures and the drying out of the Amazon as rain patterns change with an El Niño, and increased photosynthesis after the Pinatubo injection of light-scattering aerosols into the stratosphere:

    http://www.ferdinand-engelbeen.be/klimaat/klim_img/temp_dco2_d13C_mlo.jpg

    It is pretty clear that changes in the temperature rate of change lead changes in the CO2 rate of change by about 6 months. The interesting point is that the δ13C (that is, the ratio between 13CO2 and 12CO2) rate of change moves in the opposite direction. That is the case if the increase/decrease in the CO2 rate of change is caused by decaying/growing vegetation. If the CO2 rate of change were caused by warming/cooling oceans, then the CO2 and δ13C rates of change would parallel each other.

    Again Dr. Ball, a little more research would have shown that you were wrong in your accusation.

    It is getting late here, more comment tomorrow…

  12. You are not logged in or you do not have permission to access this page. This could be due to one of several reasons

  13. Oh, Canada! While confirming the rumor that snowfall is predicted for northern Quebec on June 21, I also discovered Labrador fishing lodges can’t open because they’re still under 6 ft of snow. Clearly, the folks at weathernetwork.com think these “extreme weather events” are man-caused, as the website features stories like this one:

    How can kids handle climate change? By throwing a tantrum!

    https://s1.twnmm.com/thumb?src=//smedia.twnmm.com/storage.filemobile.com/storage/32996630/1462&w=690&h=388&scale=1&crop=1&foo=bar.jpg

    Buy the book “The TANTRUM that SAVED the WORLD” and let Michael Mann and Megan Herbert indoctrinate your child into bad behavior!

    Also, don’t miss:

    CANADA IN 2030: Future of our water and changing coastlines

    Antarctica lost 3 trillion tonnes of ice in blink of an eye

    Covering Greenland in a blanket is one way to fight climate

    Racism and climate change denial: Study delves into the link

    Links to those articles and other balderdash are at:

    https://www.theweathernetwork.com/news/articles/kids-picture-book-tantrum-that-saved-the-world-delivers-empowering-message-about-climate-change-action/104689/

  14. Re. Flooding from sea level rise threatens over 300,000 US coastal homes – study
    https://www.theguardian.com/environment/2018/jun/17/sea-level-rise-impact-us-coastal-homes-study-climate-change

    The study is described by Kristina Dahl, a senior climate scientist at the Union of Concerned Scientists (UCS), who should know better than to publish this load of junk science hysterical alarmism.

    “Sea level rise driven by climate change is set to pose an existential crisis to many US coastal communities…”
    No it is not. Do you know what the word “existential” means? And there is no connection between climate change, CO2 and sea levels.

    “…Under this scenario, where planet-warming emissions are barely constrained and the seas rise by around 6.5ft globally by the end of the century…”
    Absolute rubbish. The maximum projected increase is six INCHES by 2100.

    “…The oceans are rising by around 3mm a year due to the thermal expansion of seawater that’s warming because of the burning of fossil fuels by humans…”
    Where is the proof that “the burning of fossil fuels by humans” causes ANY sea level rise?

    To the Guardian: You do love publishing this rubbish, don’t you?

    There is nothing we can do about rising sea levels except build dikes and sea walls a little bit higher. Sea level rise does not depend on ocean temperature, and certainly not on CO2. We can expect the sea to continue rising at about the present rate for the foreseeable future. By 2100 the seas will rise another 6 inches or so.

    Failed serial doomcaster James Hansen’s sea level predictions have been trashed by real scientists.
    Hansen claimed that sea level rise has been accelerating, from 0.6mm/year from 1900 to 1930, to 1.4mm/year from 1930 to 1992, and 2.6mm/year from 1993 to 2015.
    Hansen cherry-picked the 1900-1930 trend as his data to try to show acceleration … because if he had used 1930-1960 instead, there would not be any acceleration to show.

    According to the data, the rate of sea level rise:
    • decelerated from the start of the C&W record until 1930
    • accelerated rapidly until 1960
    • decelerated for the next ten years
    • stayed about the same from 1970 to 2000
    • then started accelerating again. Until that time, making any statement about sea level acceleration is premature. One thing is clear: There is no simple relationship between CO2 levels and the rate of sea level rise.

    If we assume that the trend prior to 1950 was natural (we really did not emit much CO2 into the atmosphere before then) and that the following increase in the trend since 1950 was 100% due to humans, we get a human influence of only about 0.3 inches per decade, or 1 inch every 30 years.

    If an anthropogenic signal cannot be conspicuously connected to sea level rise (as scientists have noted*), then the greatest perceived existential threat promulgated by advocates of dangerous man-made global warming will no longer be regarded as even worth considering (except by the Guardian).

    *4 New Papers: Anthropogenic Signal Not Detectable in Sea Level Rise
    http://notrickszone.com/2016/08/01/all-natural-four-new-scientific-publications-show-no-detectable-sea-level-rise-signal/

  15. Hi all. I wasn’t sure where else to post this question. On the old site, if I left a comment or reply, I was prompted as to whether or not I wanted email updates on new comments etc….

    Since moving to this site, I see no option for this after leaving a reply or comment. What have I done wrong?

    Thanks

    Chuck

  16. Hello Anthony,

    This may be the first time I have disagreed with you since Watts Up With That started, but the “ship of fools” comment was rather dismissive of what could be useful data collection.

    When I want to stimulate my brain, I go to your website. Often the responses to articles posted are more stimulating than the articles themselves. I attribute that to the scientific literacy of most of your audience.

    Thanks for providing a commons for unfettered debate, rather than the propaganda of most sites

    Ron

  17. I have some time to update the main content here, there are a number of things to update. One thing we never figured out at WordPress is why I could underline text but nearly everyone else could not. So, we need to experiment.

    If you’re curious, please reply to this and paste in these HTML lines:

    This is <b>bold</b>ed.
    This is <i>italic</i>ized.
    This is <u>underline</u>ed.
    This is <b><i><u>everything</b></i></u>.

    If you’re one of the folks completely mystified over this puzzle, feel free to create a top-level comment (text entry box is before the first comment). We saw some things where top level comments and replies were handled differently, and I expect to see that at Pressable.

    BTW, what I get:

    This is bolded.
    This is italicized.
    This is underlineed.
    This is everything.

    While I’m here, strong should work like bold.

  18. Now here is where my memory gets fuzzy. I think I picked the opening post of a random thread (50% confidence level; it might have been a reply to another comment), in which, about half-way down the page, the author proclaimed (this is from memory, so may not be exact),

    “We know Global Warming is happening, all the models say so, but we’re not seeing it in the records. So clearly, the records must be wrong. (italics mine, if they show up). But we have somebody working on that.”!!! (Exclamations mine)

    .

    Imagine that for a second: he’s admitting there is no Global Warming in the data, so he has assigned people to set about changing the data!!!

  19. “…we can hardly afford to double the carbon footprint that the USA and the EU already generate.

    “We hope that this model proves to be useful for those seeking to intervene in efforts to avoid producing Western levels of environmental degradation [affluence] in these countries,” the authors conclude.
    Just in case any of you doubted Walter Sobchak’s interpretation of the article.

  20. Reality Check: “Conventional Crude” peaked in 2005
    See: ExxonMobil World Energy Outlook 2018 A View to 2040
    ExxonMobil clearly shows conventional crude oil peaked in 2005 and has declined since then. Adding in deepwater and oil sands still shows declining production. ExxonMobil has to appeal to tight oil to show liquids growth, with that combination flattening out by 2040.
    Growth prospects for conventional through tight oil appear so poor that Shell Oil and TOTAL have strategically shifted their major effort out of oil into natural gas. See:
    Liquids Supply ExxonMobil 2018
    http://instituteforenergyresearch.org/wp-content/uploads/2018/03/MARY5.png

    http://cdn.exxonmobil.com/~/media/global/charts/energy-outlook/2018/2018_supply_liquids-demand-by-sector.png?as=1

  21. Once again, the insane image linking has returned.
    Pictures are once again appearing then disappearing.
    Will it link an image or not? Who knows?
    The added lunacy includes all my images being displayed together on Refresh, yet no other post.
    Then sometimes all my images load but later only some of them, seemingly chosen at random.
    Can anyone explain any of this?

  22. Bold commands do not seem to work on the new site – at least not like the old site worked.

    Trying use of characters: BOLD

  23. Test for putting a photo from my PC up using the “pre” formatting. (The photo will be “inserted” into Excel.)

    
    
  24. src=”https://www.youtube.com/embed/jxtUBJgk_vY? version=3&rel=1&fs=1&autohide=2&showsearch=0&showinfo=1&iv_load_policy=1&wmode=transparent”

  25. This is brilliant:

    https://www.theguardian.com/commentisfree/2018/aug/02/bbc-climate-change-deniers-balance

    “I won’t go on the BBC if it supplies climate change deniers as ‘balance’”
    by Rupert Read

    Here we have Rupert Read, who teaches philosophy at the University of East Anglia and chairs the Green House think-tank, explaining why he refused an invitation to discuss climate change on the BBC because it was with a so-called “denier.”

    The big joke here is that after refusing to go on air to put his point of view he is now making a formal complaint to the BBC “because the BBC cannot defend the practice of allowing a climate change denier to speak unopposed.”

    This is the level of stupidity of the man-made climate hysterics.
    This is their level of debating skills.
    Still, what can we expect from the University of East Anglia?

  26. ABCDEFGHIJKLMNOPQRSTUVWXYZ
    abcdefghijklmnopqrstuvwxyz
    123456789

    ABCDEFGHIJKLMNOPQRSTUVWXYZ
    abcdefghijklmnopqrstuvwxyz

  27. As climatologist Roy Spencer has explained, the climate models used to arrive at alarming values of equilibrium climate sensitivity don’t do so in the way Lord Monckton describes.

  28. Google’s Empire of Censorship Marches On

    More than 1,000 Google employees protest against plan for censored Chinese search engine.

    The Google staff have signed a letter calling on executives to review ethics and transparency and protesting against the company’s secretive plan to build a search engine that would comply with Chinese censorship. The letter’s contents were confirmed by a Google employee who helped organize it but wished to stay anonymous. It calls on executives to review the company’s ethics and transparency; says employees lack the information required “to make ethically informed decisions about our work”; and complains that most employees only found out through leaks and media reports about the project, nicknamed Dragonfly. “We urgently need more transparency, a seat at the table and a commitment to clear and open processes: Google employees need to know what we’re building,” says the document.

    Google engineers are working on software that would block certain search terms and leave out content blacklisted by the Chinese government, so the company can re-enter the Chinese market. Google’s chief executive Sundar Pichai told a company-wide meeting that providing more services in the world’s most populous country fits with Google’s global mission. (and I bet you did not know that Google even had a “Global Mission”)

    This is the first time the project has been mentioned by any Google executive since details about it were leaked.

    Three former employees told Reuters that current leadership might think that offering limited search results in China is better than providing no information at all. The same rationale led Google to enter China in 2006. It left in 2010 over an escalating dispute with regulators that was capped by what security researchers identified as state-sponsored cyber attacks against Google and other large US firms. One former employee said they doubt the Chinese government will welcome Google back.

    The Chinese human rights community said Google’s acquiescence to China’s censorship would be a “dark day for internet freedom.”

  29. How to make Quick Links for your Essay in MS Word:

    This method requires MS Word. It results in a new document (automatically created) which contains a simple list of all the hypertext links from your essay.
    (See the end of The Fight Against Global Greening — Part 4.)

    1. Open the Word document from which you want to copy the hyperlinks, and press Alt + F11 to open the Microsoft Visual Basic for Applications window.

    2. Click Insert > Module, and copy the following VBA code into the Window.

    Sub HyperlinksExtract()
    'Updateby20140214
    Dim oLink As Hyperlink
    Dim docCurrent As Document 'current document
    Dim docNew As Document 'new document
    Set docCurrent = ActiveDocument
    Set docNew = Documents.Add
    For Each oLink In docCurrent.Hyperlinks
        oLink.Range.Copy
        docNew.Activate
        Selection.Paste
        Selection.TypeParagraph
    Next

    Set docNew = Nothing
    Set docCurrent = Nothing
    End Sub

    3. Click the Run button, then Run Sub/UserForm, to run the VBA code. All the hyperlinks are then copied to a new document. You can save the new document later.

    ***************

    Notes:
    1. This VBA code only works when the hyperlinks are attached to text; if there are pictures with hyperlinks, it cannot handle them.
    2. Using this will train you to make human-readable links: for instance, attaching the hyperlink to “the NY Times article” rather than “here”.
    3. The links can be copied from the newly created word document into your Word copy of your essay. I place them at the end in a section called Quick Links. If any of the links don’t read right or communicate clearly, you can edit them in the Quick Links to be more readable, such as “The April 26th NY Times article”.

  30. The new server system does not allow BLOCKQUOTES in comments. The following should appear as a blockquote but does not.

    Epilogue:

    Thanks for reading.

  31. Year   Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Dec
    1880   -30  -18  -11  -20  -12  -23  -21  -10  -16  -24  -20  -23
    2018    78 
    
    1880   -29  -18  -11  -20  -12  -23  -21   -9  -16  -23  -20  -23 
    2018    78   78
    
    1880   -29  -18  -12  -20  -12  -25  -21  -10  -17  -25  -20  -21
    2018    77   79   89
    
    1880   -28  -18  -12  -20  -12  -25  -22  -10  -18  -25  -20  -21
    2018    75   80   88   86
     
    1880   -29  -18  -11  -20  -12  -23  -21   -9  -16  -23  -20  -23
    2018    77   80   90   85   82
     
    1880   -30  -18  -11  -20  -12  -23  -21   -9  -16  -24  -20  -23
    2018    78   81   91   87   83   77 
    
    1880   -29  -18  -11  -19  -11  -23  -20   -9  -15  -23  -20  -22
    2018    77   81   91   87   82   76   78
    
                     Number of changes made in 2018
    Year   Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Dec
    1880     4    0    2    1    1    2    2    2    4    4    0    3
    2018     5    3    3    2    1    1    
  32. Attempt monospacing:

                                                   Total
    Absorbed from:  Surface L.Atm  U.Atm  Space    Absorbed
    
    Absorbed by:
    Surface          0.0000 1.0500 0.1500 1.0000 || 2.2000
    Lower Atmosphere 1.6500 0.0000 0.4500 0.0000 || 2.1000
    Upper Atmosphere 0.4125 0.7875 0.0000 0.0000 || 1.2000
    Space            0.1375 0.2625 0.6000 0.0000 || 1.0000
    ------------------------------------------------
    Total Emitted:   2.2000 2.1000 1.2000 1.0000
    

    Done

  34. Ex-IPCC chief Rajendra Pachauri will stand trial on sexual harassment charges.

    A Delhi court decided there is enough evidence to charge Pachauri with harassing a female colleague.

    There is prima facie evidence to charge Rajendra Pachauri, 78, with sexual harassment and two offences of intending to outrage the modesty of a woman. Pachauri, who was head of the UN Intergovernmental Panel on Climate Change (IPCC) when it was awarded the Nobel prize in 2007, denies any wrongdoing. Pachauri resigned from the IPCC in 2015 when the complaint against him was registered.

    The woman told police Pachauri had flooded her with offensive messages, emails and texts and made several “carnal and perverted” advances over the 16 months they worked together at the Energy and Resources Institute (Teri), a Delhi-based energy and environment research centre Pachauri led for over 30 years.

    An investigation into the complaints questioned more than 50 employees and concluded the woman’s claims were valid. Pachauri claimed text messages and emails submitted by the woman to police had been tampered with by unknown cyber criminals, but police last year found no evidence of tampering.

    The complainant, who was 29 at the time of the alleged offences, said she was pleased the case would proceed to trial after so long.

    Pachauri’s lawyer, Ashish Dixit, said the court had dropped four other charges, including stalking and intimidation: “The majority of the charges have been dropped by the court on its own, so it’s a big step forward,” Dixit said.

  35. This link is more specific about “acrylic polymers” being used by artists such as Andy Warhol, David Hockney, and Mark Rothko. This leaves me the task of finding out more about this. I do remember an early batch of acrylic paint by Talens, also called “polymers,” that they stopped producing. So I guess there is not just one kind of “acrylic”. The problem, of course, is that the producers keep their secrets.

  36. Ontario government to scrap Green Energy Act

    The Green Energy Act aimed to bolster the province’s renewable energy industry. It will be scrapped in spring 2019. The Green Energy Act resulted in an increase in electricity costs and saw the province overpay for power it did not need.

    Infrastructure Minister Monte McNaughton said repealing the law will ensure that municipalities regain planning authority over renewable projects, something that was removed under the act. Future renewable energy projects must first demonstrate need for the electricity they generate before being granted approval.

  37. Having read this I believed it was very enlightening.
    I appreciate you taking the time and energy to put this short article together.

    I once again find myself spending a lot of time both reading and leaving comments.
    But so what, it was still worth it!

  38. This is the data from 11/11/2017.

    Samples/day: 288 72 36 24 12 6 4 2 (Tmax+Tmin)/2
    Tmean ( Deg C) -3.3 -3.2 -3.4 -3.4 -3.8 -4.1 -4.0 -4.0 -4.7

    Test

  39. This is the data from 11/11/2017.

    Samples/day Tmean (C)
    288 -3.3
    72 -3.2
    36 -3.4
    24 -3.4
    12 -3.8
    6 -4.1
    4 -4.0
    2 -4.0
    (Tmax+Tmin)/2 -4.7

    Test

  40. Try again. Testing how to format a table.

    Samples/day Tmean (C)
    288 -3.3
    72 -3.2
    36 -3.4
    24 -3.4
    12 -3.8
    6 -4.1
    4 -4.0
    2 -4.0
    (Tmax+Tmin)/2 -4.7

    End of table

  41. I did this once before – lost the recipe.

    Samples/day Tmean (C)
    288 -3.3
    72 -3.2
    36 -3.4
    24 -3.4
    12 -3.8
    6 -4.1
    4 -4.0
    2 -4.0
    (Tmax+Tmin)/2 -4.7

    End of the table.

  42. Samples/day Tmean (C)
    288 -3.3
    72 -3.2
    36 -3.4
    24 -3.4
    12 -3.8
    6 -4.1
    4 -4.0
    2 -4.0
    (Tmax+Tmin)/2 -4.7

    [Good to see you using the Test page. Try the “pre” “/ pre” unformatted, column-like text style for tables. .mod]

  43. 0 45 90 135 180 225 270 315 range
    6 16.3 16.2 16.2 16.3 16.4 16.4 16.3 16.3 0.1
    4 16.1 16.1 16.2 16.4 16.5 16.5 16.4 16.2 0.4
    2 15.3 15.4 16.1 16.7 17.0 17.1 16.8 16.2 1.8

  44. _____0___45___90__135__180__225__270__315_range
    6 16.3 16.2 16.2 16.3 16.4 16.4 16.3 16.3 0.1
    4 16.1 16.1 16.2 16.4 16.5 16.5 16.4 16.2 0.4
    2 15.3 15.4 16.1 16.7 17.0 17.1 16.8 16.2 1.8

    [USE “pre” and “/pre” (within html brackets) to get text in proper column alignment. .mod]

    _____0___45___90__135__180__225__270__315_range
    6 16.3 16.2 16.2 16.3 16.4 16.4 16.3 16.3   0.1
    4 16.1 16.1 16.2 16.4 16.5 16.5 16.4 16.2   0.4
    2 15.3 15.4 16.1 16.7 17.0 17.1 16.8 16.2   1.8
    
    
  45. Thanks for the oven/freezer joke pointing out the problem with using averages. Another favorite quote in that regard:

    Beware of averages. The average person has one breast and one testicle.
        – Dixy Lee Ray

  46. On the CRU web site page – https://crudata.uea.ac.uk/cru/data/temperature/crutem4/landstations.htm – under the heading:

    Land Stations used by the Climatic Research Unit within CRUTEM4

    there is a link to the station files (crutem4_asof020611_stns_used_hdr.dat)

    In it you will find Apto Uto (Station No. 800890)

    Below the link it says:

    The file gives the locations and names of the stations used at some time (i.e. in the gridding that is used to produce CRUTEM4) during the period from 1850 to 2010. All these stations have sufficient data to calculate 30-year averages for 1961-90 as defined in Jones et al. (2012). In the file there are five pieces of information

    John McLean said in the paper:

    When constructing the CRUTEM4 dataset the CRU adopts a threshold for outliers of five standard deviations from the mean temperature and although calculating the long-term average temperatures from data over the 30-year period from 1961 to 1990 the standard deviations used for CRUTEM4 are calculated over a minimum of 15 years of data over the 50-year period from 1941 to 1990.
    And

    The analysis used in this section differs from the approach used to create the CRUTEM4 dataset but as noted in the previous chapter, these monthly mean temperatures were included when both the long-term average temperatures and standard deviations were calculated…

    The charge that bad stations such as Apto Uto are not in use is invalid in this context because, though the exclusion of outliers is explicitly suggested, the inclusion of this station is implicitly noted – and listed – in the calculation of the means!

    Perhaps “suggested” is the word we are all struggling with; I’m certainly finding it hard to see past the doublespeak of the CRU!


  50. Steven Mosher – works with BEST

    “No open data. no open code. no science.”
    https://wattsupwiththat.com/2018/10/07/bombshell-audit-of-global-warming-data-finds-it-riddled-with-errors/#comment-2483888

    “i check his apto uto station. its not used.”
    https://wattsupwiththat.com/2018/10/07/bombshell-audit-of-global-warming-data-finds-it-riddled-with-errors/#comment-2483949

    Here is a longer and more complex rebuttal:
    https://wattsupwiththat.com/2018/10/07/bombshell-audit-of-global-warming-data-finds-it-riddled-with-errors/#comment-2483908

    Another comment:
    https://wattsupwiththat.com/2018/10/07/bombshell-audit-of-global-warming-data-finds-it-riddled-with-errors/#comment-2483923

    “Poor guy. 1 check and his Phd is toast. now some of you will pay for this report. But I wont because he failed the simple requirement of posting his data and code., And more importantly he points to data. THAT CRU DOESNT USE!! For fucks sake skeptics.

    “CRU requires data in the period of 1950-1980. that is HOW the calculate an anomaly. and look. in 30 seconds I checked ONE one his claims. None of you checked. you spent money to get something that FIT YOUR WORLD VIEW. you could have checked. but no. gullible gullible gullible.”

    Nick Stokes (retired, was a Principal Research Scientist with CSIRO)

    First comment:
    https://wattsupwiththat.com/2018/10/07/bombshell-audit-of-global-warming-data-finds-it-riddled-with-errors/#comment-2482061

    “OK. This is no BOMBSHELL. These are errors in the raw data files as supplied by the sources named. The MO publishes these unaltered, as they should. But they perform quality control before using them. You can find such a file of data as used here. I can’t find a more recent one, but this will do. It shows, for example
    1. Data from Apto Uto was not used after 1970. So the 1978 errors don’t appear.
    2. Paltinis, Romania, isn’t on that list, but seems to have been a more recently added station.
    3. I can’t find Golden Rock, either in older or current station listings.”

    Another comment:
    https://wattsupwiththat.com/2018/10/07/bombshell-audit-of-global-warming-data-finds-it-riddled-with-errors/#comment-2484961

    Has it ever been used?

    “Well, that seems to be a question that John McLean, PhD, did not bother to investigate, nor his supervisor (nor any of his supporters here). But this 2011 post-QC data listing shows the station had its data truncated after 1970. And then, as Steven says, for use in a global anomaly calculation as in CRUTEM 4, the entire station failed to qualify because of lack of data in the anomaly base period. That is not exactly a QC decision, but doubly disqualifies it from HADCRUT 4.”

  51. Steven Mosher – works with BEST

    “No open data. no open code. no science.”

    “i check his apto uto station. its not used.”

    Here is a longer and more complex rebuttal:
    https://wattsupwiththat.com/2018/10/07/bombshell-audit-of-global-warming-data-finds-it-riddled-with-errors/#comment-2483908

    Another comment:

    “Poor guy. 1 check and his Phd is toast. now some of you will pay for this report. But I wont because he failed the simple requirement of posting his data and code., And more importantly he points to data. THAT CRU DOESNT USE!! For fucks sake skeptics.

    “CRU requires data in the period of 1950-1980. that is HOW the calculate an anomaly. and look. in 30 seconds I checked ONE one his claims. None of you checked. you spent money to get something that FIT YOUR WORLD VIEW. you could have checked. but no. gullible gullible gullible.”

    Nick Stokes (retired, was a Principal Research Scientist with CSIRO)

    First comment:

    “OK. This is no BOMBSHELL. These are errors in the raw data files as supplied by the sources named. The MO publishes these unaltered, as they should. But they perform quality control before using them. You can find such a file of data as used here. I can’t find a more recent one, but this will do. It shows, for example
    1. Data from Apto Uto was not used after 1970. So the 1978 errors don’t appear.
    2. Paltinis, Romania, isn’t on that list, but seems to have been a more recently added station.
    3. I can’t find Golden Rock, either in older or current station listings.”

    Another comment:

    Has it ever been used?

    “Well, that seems to be a question that John McLean, PhD, did not bother to investigate, nor his supervisor (nor any of his supporters here). But this 2011 post-QC data listing shows the station had its data truncated after 1970. And then, as Steven says, for use in a global anomaly calculation as in CRUTEM 4, the entire station failed to qualify because of lack of data in the anomaly base period. That is not exactly a QC decision, but doubly disqualifies it from HADCRUT 4.”

  52. This is the time to be honest with ourselves… Just a few days ago the oh-so-capable Kip Hansen wrote about those curious anomalies, https://wattsupwiththat.com/2018/09/25/the-trick-of-anomalous-temperature-anomalies/ A very good post, and very true. Now, let’s drag forward what we learned from there: A thermometer marked on every degree can be read only to that marking; anything in between those markings is not a significant digit, I don’t care what the recorder writes down. If I’m using a Fahrenheit thermometer in Phoenix, Arizona, and I read 113°F, that has three significant digits, but I argue that’s spurious, because if I’m using a Centigrade thermometer, that same reading is 45°C with only two significant digits.
    There do exist liquid-in-glass (LIG) thermometers marked in tenths of a degree, but they are useful only for a relatively tight range of readings, which atmospheric temperature is not. I haven’t checked into it, but I would guestimate that a LIG thermometer marked in tenths, with enough range to read all of the possible atmospheric temperatures at a given site would be several feet long, probably taller than the average site observer. So we can state right now that any temperature recorded from a LIG thermometer is only accurate to two significant digits.
    What do I mean by Significant Digits (several webpages call them Significant Figures, same thing)? The first reference page that popped up gives The Rules,
    • Rule #1 – Non-zero digits are always significant
    • Rule #2 – Any zeros between two significant digits are significant
    • Rule #3 – A final zero or trailing zeros in the decimal portion ONLY are significant.
    In Kip’s example he used 72. That has two significant digits.
    Significant digits through operations: “When quantities are being added or subtracted, the number of decimal places (not significant digits) in the answer should be the same as the least number of decimal places in any of the numbers being added or subtracted.”, so when adding together two temperatures, each with two significant digits, none to the right of the decimal place, and the sum is >100, you retain no significant digits to the right of the decimal place, but the last significant digit remains the number just to the left of the decimal.
    “In a calculation involving multiplication, division, trigonometric functions, etc., the number of significant digits in an answer should equal the least number of significant digits in any one of the numbers being multiplied, divided etc.” Secondly, “Exact numbers, such as the number of people in a room, have an infinite number of significant figures.” So now I want to do an average of a whole stack of temperatures, add together all the temperatures, retaining the significant digit at the first number to the left of the decimal place, then divide by the exact number of temperatures, and the result will still have at most 2 significant digits, unless -10° < T < 10° (this is why I would prefer to record these things in Kelvin or Rankine, I would get the same number of significant digits for all my atmospheric temperature readings).
    What about numbers of mixed origin? What if I am averaging a temperature taken each day of the year, and I start with a LIG thermometer, and half-way through I switch to an electronic hygrometer with a digital readout, what am I really measuring, what am I reading, and what do I really know (thank you, Dr Tim Ball)? Let’s take the first one (actually the first 3 handheld hygrometers) I find on-line; it displays the temperature with a decimal point, one digit to the right of the decimal point, again, what am I reading? Go to the specification datasheet, however, and it declares the accuracy to be ±0.5°C. You have no more significant digits than your LIG thermometer, because it may well read 22.2°C, but that temperature could be anywhere from 21.7°C to 22.7°C. You may want to record that last *.2, but it is not a significant digit. “The accepted convention is that only one uncertain digit is to be reported for a measurement.” The correct way to record the reading from that thermometer, even though the display says 22.2, is to write 22 ± 0.5. Your gee whiz high tech handheld hygrometer has not improved your accuracy any. (In fact, after nearly an hour of searching, I could not find any digital thermometer suitable for measuring ambient air with an accuracy finer than ±0.3°C.) So, for significant digits when taking an average, see above. Bottom line, there are NO significant digits to the right of the decimal place in weather! (I started this paragraph to make the point that even if the newer instrument could record temperatures to four figures to the right of the decimal, when you averaged it with a number from a LIG thermometer you still would have only the same number of significant digits as the least accurate of your numbers; i.e., two significant digits of the LIG thermometer. But, no need, the newer instruments, in reality, are no more accurate than the old.)
    You know what this does to an anomaly, right? You can see this coming? Take your 30 year average baseline, with the last significant digit just to the left of the decimal point, and read the current year with the last significant digit just to the left of the decimal point, and subtract one from the other, what do you get? You get an integer. A number with no digits at all to the right of the decimal place.
    And yet, after all that, the opportunists want to chortle about THIS: https://wattsupwiththat.com/2018/10/03/uah-globally-the-coolest-september-in-the-last-10-years/ Well, let’s take a random sample (OK, this is the first thing that popped up on a Duck-Duck-Go search): a table of Coldest/Warmest Septembers for three (for our purposes, random) locations, which will make a good example.

    Top 20 Coldest/Warmest Septembers in Southeast Lower Michigan

    Rank Detroit Area* Flint Bishop** Saginaw Area***
    Coldest Warmest Coldest Warmest Coldest Warmest
    Temp Year Temp Year Temp Year Temp Year Temp Year Temp Year
    1 57.4 1918 72.2 1881 55.4 1924 69.2 1933 54.9 1918 69.0 1931
    2 58.6 1879 69.8 1931 56.3 1993 68.1 1931 56.4 1924 68.0 1933
    3 59.1 1975 69.5 1921 57.3 1975 68.0 2015 56.7 1993 67.6 2015
    4 59.1 1876 69.3 2015 57.4 1966 66.6 2002 56.8 1949 66.8 1921
    5 59.2 1883 68.9 2018 57.4 1949 66.6 1934 56.9 1956 66.2 1961
    6 59.4 1924 68.9 2002 57.5 1956 66.2 1921 57.0 1943 66.2 1927
    7 59.6 1896 68.8 1961 57.7 1981 66.1 1961 57.3 1975 65.9 2005
    8 59.6 1974 68.6 1908 57.8 1962 65.9 1927 57.4 1981 65.9 1998
    9 59.6 1949 68.5 1933 58.0 1967 65.6 1939 58.5 1991 65.6 2017
    10 59.6 1890 68.4 2005 58.5 1995 65.4 1978 58.5 1962 65.6 2016
    11 59.7 1899 68.4 1906 58.6 1928 65.3 1998 58.7 1935 65.5 1971
    12 59.9 1875 68.2 2016 58.7 2006 65.1 2005 58.8 1917 65.3 1930
    13 60.0 1888 68.0 1998 58.8 1963 65.1 1930 58.9 1951 65.2 1936
    14 60.1 1887 67.9 1891 58.9 1974 65.1 1925 58.9 1950 65.1 2018
    15 60.3 1967 67.9 1884 59.1 1957 65.0 1936 59.0 1938 64.7 1948
    16 60.4 1956 67.5 1978 59.3 2001 64.8 2016 59.0 1928 64.6 2004
    17 60.7 1928 67.5 1941 59.3 1937 64.8 2018 59.1 2006 64.5 2002
    18 60.8 1981 67.5 1898 59.5 1943 64.8 1983 59.3 1984 64.4 1941
    19 61.0 1993 67.4 2004 59.6 1991 64.8 1971 59.3 2000 64.3 1968
    20 61.2 1984 67.2 1927 59.6 1989 64.5 1941 59.3 1992 64.0 2007
    * Detroit Area temperature records date back to January 1874.

    ** Flint Bishop temperature records date back to January 1921.

    *** Saginaw Area temperature records date back to January 1912.

    I have copied/pasted the entire table because it was easiest that way. I can make my point, and save myself quite a few mouse-clicks, by taking just the Detroit Area readings, using the Excel nested functions of ROUND(CONVERT()) in one swell foop to show the temperature data with appropriate significant digits.

    Detroit Area*
    Coldest Warmest
    Temp Year Temp Year
    14 1918 22 1881
    15 1879 21 1931
    15 1975 21 1921
    15 1876 21 2015
    15 1883 21 2018
    15 1924 21 2002
    15 1896 20 1961
    15 1974 20 1908
    15 1949 20 1933
    15 1890 20 2005
    15 1899 20 1906
    16 1875 20 2016
    16 1888 20 1998
    16 1887 20 1891
    16 1967 20 1884
    16 1956 20 1978
    16 1928 20 1941
    16 1981 20 1898
    16 1993 20 2004
    16 1984 20 1927
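
    For anyone wanting to reproduce that conversion outside Excel, here is a minimal Python equivalent of the ROUND(CONVERT()) step (the helper name is mine):

    # Convert °F to °C, then keep no digits past the decimal point,
    # matching the two significant digits the instruments actually support.
    def f_to_c_rounded(temp_f):
        return round((temp_f - 32.0) * 5.0 / 9.0)

    for temp_f in (57.4, 72.2, 69.3):
        print(temp_f, "F ->", f_to_c_rounded(temp_f), "C")
    # 57.4 F -> 14 C, 72.2 F -> 22 C, 69.3 F -> 21 C, as in the table above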

    The sub-heading on the linked article is “UAH Global Temperature Update for September, 2018: +0.14 deg. C”, without giving any absolute temperature. If it were talking about a temperature reading taken in the Detroit Area, we could guess that the anomaly is relative to something in the vicinity of 15°C, and then that 0.14°C disappears in the noise, indistinguishable from 10 others that also show up as 15°C when shown with the proper number of significant digits. In fact, given the significant digits discussed above, the temperature anomaly, calculated to the best accuracy available from the instrumentation, becomes 0. ZERO. Zilch. Nada. Nothing. Nothing to write home about. Nothing to make a post on ANY blog about! Thus you can see why I have sprained eyeball muscles: every time the “warmunists” call a press conference to declare Hottest Year EVAH!™, my eyes do a spontaneous eyeroll so hard I believe I have incurred permanent damage. Their Hottest Year EVAH!™ is indistinguishable from at least 10 others just like it, as far as what the thermometers can really measure, and what that database can really tell us.

  53. You might not be aware that Google has dumped its “Don’t be evil” slogan.
    If you watch this video, you will understand why….
    Google is not a search engine.
    Google is not even an advertising company with a search box attached.
    Google is watching every move you make in order to manipulate your view of the world and your actions within it.
    Google’s search returns are serving an agenda, and you are unaware of what that agenda is.
    Google hates competition, so its algorithm actively returns searches designed to attack its rivals, serve its commercial interests and further its political agenda — and all this while spying on its users.
    Don’t believe me? Check this out:

  54. OHC
    This Taillandier 2018 paper is about ”the metrological verification of a biogeochemical observing system based on a fleet of BGC-Argo floats”, but the authors gather and report on temperature data from the Argo floats and compare them with a ship-board temperature sensor (CTD) lowered to depth by a cable. “[L]ess than 1 year after the cruise” the ship-board sensor was checked, and had drifted “0.00008 °C, which is 1 order of magnitude lower than the theoretical stability of the probe.” The standard deviation of the Argo ‘misfits’ is ≈0.02 °C (Table 2). The authors “ascribe misfits as instrumental calibration shifts rather than natural variability.”

    Taillandier 2018: 2.2.1 ”The BGC-Argo floats were equipped with factory-calibrated CTD modules (SBE41CPs).”

    2.2.2 ”During stations, seawater properties were sampled at 24 Hz with the [ship-board] CTD unit and transmitted on board through an electro-mechanical sea cable and slip-ring-equipped winch.”

    2.2.3 ”There were no independent samples (such as salinity bottles) or double probes in the [ship-board] CTD unit that would have allowed the assessment of the temperature and conductivity sensors’ stability. Thus, the quality of [the ship-board] CTD data relies on frequent factory calibrations operated on the sensors: a pre-cruise bath was performed in April 2015 (less than 1 month before the cruise), and a post-cruise bath performed in March 2016 (less than 1 year after the cruise). The static drift of the temperature sensor [of the ship-board CTD] between baths was 0.00008 °C, which is 1 order of magnitude lower than the theoretical stability of the probe.”

    2.2.3 ”Given the reproducibility of the processing method, the uncertainties of measurement provided by the [ship-board] CTD unit should have stayed within the accuracy of the sensors, which is 0.001 °C and 0.003 mS/cm out of lowered dynamic accuracy cases (such as in sharp temperature gradients).”

    2.2.3 ”The data collection of temperature and practical salinity profiles at every station is thus used as reference to assess the two other sensing systems: the TSG [A SeaCAT thermosalinograph (SBE21, serial no. 3146)] and the BGC-Argo floats. Systematic comparisons between the profiles from the CTD unit and the neighboring data were made at every cast.”

    2.2.3 ”Considering TSG data set, the median value of temperature and practical salinity over a time window of 1 h around the profile date was extracted from the 5 min resolution time series. The comparison with the surface value from profiles showed a spread distribution of misfits for temperature, with an average 0.009 °C, and a narrower distribution of misfits for practical salinity with an average of 0.007. Given the nominal accuracy expected by the TSG system and in absence of systematic marked shift in the comparison, no post-cruise adjustment was performed. The uncertainty of measurement in the TSG data set should have stayed under the 0.01 °C in temperature, and 0.01 in practical salinity.”

    2.2.3 ”Considering BGC-Argo floats, the comparison with [ship-board] CTD profiles was performed over the 750–1000 dbar layer, where water mass characteristics remained stable enough to ascribe misfits as instrumental calibration shifts rather than natural variability. The misfits between temperature measurements and practical salinity measurements at geopotential horizons were computed and median values provided for every BGC-Argo float. The median offsets are reported in Table 2. Their amplitudes remained within 0.01 °C in temperature or 0.01 in practical salinity except in two cases. A large temperature offset occurred for WMO 6901769.”

    The Oxygen concentration of seawater had to be calculated. 2.3.2 ”To process the results, the temperature measured from the [ship-board] CTD unit was preferred to the built-in temperature of the sensor.”

    https://uploads.disquscdn.com/images/efa801329b85b88a6b9f212f64865c02e8cffb159dbdfc6aae1771cbfa4eb1d7.jpg

    Taillandier, Vincent, et al. 2018 “Hydrography and biogeochemistry dedicated to the Mediterranean BGC-Argo network during a cruise with RV Tethys 2 in May 2015.” Earth System Science Data
    https://www.earth-syst-sci-data.net/10/627/2018/essd-10-627-2018.pdf

  55. small formatting errors corrected
    OHC
    This Taillandier 2018 paper is about ”the metrological verification of a biogeochemical observing system based on a fleet of BGC-Argo floats”, but the authors gather and report on temperature data from the Argo floats and compare them with a ship-board temperature sensor (CTD) lowered to depth by a cable. “[L]ess than 1 year after the cruise” the ship-board sensor was checked, and had drifted “0.00008 °C, which is 1 order of magnitude lower than the theoretical stability of the probe.” The standard deviation of the Argo ‘misfits’ is ≈0.02 °C (Table 2). The authors “ascribe misfits as instrumental calibration shifts rather than natural variability.”

    Taillandier 2018: 2.2.1 ”The BGC-Argo floats were equipped with factory-calibrated CTD modules (SBE41CPs).”

    2.2.2 ”During stations, seawater properties were sampled at 24 Hz with the [ship-board] CTD unit and transmitted on board through an electro-mechanical sea cable and slip-ring-equipped winch.”

    2.2.3 ”There were no independent samples (such as salinity bottles) or double probes in the [ship-board] CTD unit that would have allowed the assessment of the temperature and conductivity sensors’ stability. Thus, the quality of [the ship-board] CTD data relies on frequent factory calibrations operated on the sensors: a pre-cruise bath was performed in April 2015 (less than 1 month before the cruise), and a post-cruise bath performed in March 2016 (less than 1 year after the cruise). The static drift of the temperature sensor [of the ship-board CTD] between baths was 0.00008 °C, which is 1 order of magnitude lower than the theoretical stability of the probe.”

    2.2.3 ”Given the reproducibility of the processing method, the uncertainties of measurement provided by the [ship-board] CTD unit should have stayed within the accuracy of the sensors, which is 0.001 °C and 0.003 mS/cm out of lowered dynamic accuracy cases (such as in sharp temperature gradients).”

    2.2.3 ”The data collection of temperature and practical salinity profiles at every station is thus used as reference to assess the two other sensing systems: the TSG [A SeaCAT thermosalinograph (SBE21, serial no. 3146)] and the BGC-Argo floats. Systematic comparisons between the profiles from the CTD unit and the neighboring data were made at every cast.”

    2.2.3 ”Considering TSG data set, the median value of temperature and practical salinity over a time window of 1 h around the profile date was extracted from the 5 min resolution time series. The comparison with the surface value from profiles showed a spread distribution of misfits for temperature, with an average 0.009 °C, and a narrower distribution of misfits for practical salinity with an average of 0.007. Given the nominal accuracy expected by the TSG system and in absence of systematic marked shift in the comparison, no post-cruise adjustment was performed. The uncertainty of measurement in the TSG data set should have stayed under the 0.01 °C in temperature, and 0.01 in practical salinity.”

    2.2.3 ”Considering BGC-Argo floats, the comparison with [ship-board] CTD profiles was performed over the 750–1000 dbar layer, where water mass characteristics remained stable enough to ascribe misfits as instrumental calibration shifts rather than natural variability. The misfits between temperature measurements and practical salinity measurements at geopotential horizons were computed and median values provided for every BGC-Argo float. The median offsets are reported in Table 2. Their amplitudes remained within 0.01 °C in temperature or 0.01 in practical salinity except in two cases. A large temperature offset occurred for WMO 6901769.”

    The Oxygen concentration of seawater had to be calculated. 2.3.2 ”To process the results, the temperature measured from the [ship-board] CTD unit was preferred to the built-in temperature of the sensor.”

    https://uploads.disquscdn.com/images/efa801329b85b88a6b9f212f64865c02e8cffb159dbdfc6aae1771cbfa4eb1d7.jpg

    Taillandier, Vincent, et al. 2018 “Hydrography and biogeochemistry dedicated to the Mediterranean BGC-Argo network during a cruise with RV Tethys 2 in May 2015.” Earth System Science Data
    https://www.earth-syst-sci-data.net/10/627/2018/essd-10-627-2018.pdf

  56. But fortunately, anomalies are much more homogeneous. If it is warmer than usual, it tends to be warm high and low. – Nick Stokes

    This is the heart of the problem of attempting to calculate global temperature.

    Essentially two important differences are conflated and then glossed over.

    Spatial sampling is a three-dimensional problem; while anomalies may deal – nominally (per se!) – with altitude issues, they don’t deal with directionality, or more accurately the symmetry of the two-dimensional temperature distribution.

    It is assumed that because anomalies are used, the temporal correlation between any two points will have the same spatial scale in any direction. However, spatial anisotropy in the coherence of climate variations has been well documented, and it is an established fact that the spatial scale of climate variables varies geographically and depends on the choice of directions (Chen et al. 2016).

    The point I am making here is completely uncontroversial and well known in the literature.

    What Nick and all climate data apologists are glossing over is that despite the ubiquity of spatial averaging, its application – the way it is applied particularly – is inappropriate because it assumes spatial coherence. But climate data has long been known to be incoherent across changing topography (Hendrick & Comer 1970).

    In layman’s terms (although I am a layman!), station records are aggregated over a grid box assuming that the fall-off or change in correlation between different stations is constant. So conventionally, you would imagine a point on the map for your station and a circular (or square) area around it overlapping other stations or the grid box border. However, in reality this “areal” area is much more likely to be elongated, forming an ellipse or rectangle stretched in one direction – commonly and topographically north/south in Australia.

    But it is actually worse than this in reality because unless the landscape is completely flat, coherence will not be uniform. And that is an understatement because to calculate correlation decay correctly, spatial variability actually has to be mapped in and from the real world.

    Unfortunately, directionality would be a very useful factor in the accurate determination of UHI effects, due to the dominant north/south sprawl of urban settlement. Coincidentally, all weather moves from west to east and associated fronts with their troughs and ridges typically align roughly north/south.

    The other consequence of areal averaging is that it is a case of the classical ecological fallacy, in that conclusions about individual sites are incorrectly assumed to have the same properties as the average of a group of sites. Simpson’s paradox – confusion between the group average and total average – is one of the four most common statistical ecological fallacies. If you have the patience, it is well worth making your own tiny dataset on paper and working through this paradox, as it is mind-blowing to apprehend!

    What I believe this all means is that the temperature record is dominated by smearing generally, and by latitudinal (i.e., east/west) smearing particularly. And this means, for Australia and probably the US as well, that the UHI effect of north/south coastal urban sprawl is tainting the record.

    Either way, if real changes in climate are actually happening locally, then this local effect will be smeared into a global trend – by the current practice – despite or in lieu of any real global effect.

    So, yes I do think the globe has warmed since the LIA or at least the last glaciation but I don’t believe it can be detected in any of the global climate data products.

    Chen, D. et al. Satellite measurements reveal strong anisotropy in spatial coherence of
    climate variations over the Tibet Plateau. Sci. Rep. 6, 30304; doi: 10.1038/srep30304
    (2016).

    Director, H., and L. Bornn, 2015: Connecting point-level and gridded moments in the
    analysis of climate data. J. Climate, 28, 3496–3510, doi:10.1175/JCLI-D-14-00571.1.
    Hendrick, R. L. & Comer, G. H. Space variations of precipitation and implications for raingauge
    network designing. J Hydrol 10,
    151–163 (1970).

    Jones, P. D., T. J. Osborn, and K. R. Briffa, Estimating sampling
    errors in large-scale temperature averages. J. Climate,
    10, 2548–2568, 1997a.

    Robinson, W., 1950: Ecological correlations and the behaviour of
    individuals. Amer. Sociol. Rev., 15, 351–357, doi:10.2307/2087176.

  57. test again, take 2:

    But fortunately, anomalies are much more homogeneous. If it is warmer than usual, it tends to be warm high and low. – Nick Stokes

    This is the heart of the problem of attempting to calculate global temperature.

    Essentially two important differences are conflated and then glossed over.

    Spatial sampling is a three-dimensional problem; while anomalies may deal – nominally (per se!) – with altitude issues, they don’t deal with directionality, or more accurately, the symmetry of the two-dimensional temperature distribution.

    It is assumed that because anomalies are used, the temporal correlation between any two points will have the same spatial scale in any direction. However, spatial anisotropy in the coherence of climate variations has been well documented, and it is an established fact that the spatial scale of climate variables varies geographically and depends on the choice of directions (Chen et al. 2016).

    The point I am making here is completely uncontroversial and well known in the literature.

    What Nick and all climate data apologists are glossing over is that despite the ubiquity of spatial averaging, its application – the way it is applied particularly – is inappropriate because it assumes spatial coherence. But climate data has long been known to be incoherent across changing topography (Hendrick & Comer 1970).

    In layman’s terms (although I am a layman!), station records are aggregated over a grid box assuming that the fall-off or change in correlation between different stations is constant. So conventionally, you would imagine a point on the map for your station and a circular (or square) area around it overlapping other stations or the grid box border. However, in reality this “areal” area is much more likely to be elongated, forming an ellipse or rectangle stretched in one direction – commonly and topographically north/south in Australia.
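
    A toy sketch of the difference (entirely my own construction, not any gridding product’s actual code), contrasting the isotropic decay conventionally assumed with an elliptical, direction-dependent one:

    import math

    def iso_weight(dx_km, dy_km, scale_km=500.0):
        # Conventional assumption: correlation falls off the same in every direction.
        return math.exp(-math.hypot(dx_km, dy_km) / scale_km)

    def aniso_weight(dx_km, dy_km, scale_ew_km=250.0, scale_ns_km=750.0):
        # Elliptical decay: coherence stretched north/south, shortened east/west.
        return math.exp(-math.hypot(dx_km / scale_ew_km, dy_km / scale_ns_km))

    # A station 400 km due east vs. one 400 km due north of a grid point:
    print(iso_weight(400, 0), iso_weight(0, 400))      # identical by construction
    print(aniso_weight(400, 0), aniso_weight(0, 400))  # the east/west neighbour is downweighted

    The scale lengths are invented; the point is only that assuming the first form when the second one holds smears east/west information into the grid-box average.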

    But it is actually worse than this in reality because unless the landscape is completely flat, coherence will not be uniform. And that is an understatement because to calculate correlation decay correctly, spatial variability actually has to be mapped in and from the real world.

    Unfortunately, directionality would be a very useful factor in the accurate determination of UHI effects, due to the dominant north/south sprawl of urban settlement. Coincidentally, all weather moves from west to east and associated fronts with their troughs and ridges typically align roughly north/south.

    The other consequence of areal averaging is that it is a case of the classical ecological fallacy, in that conclusions about individual sites are incorrectly assumed to have the same properties as the average of a group of sites. Simpson’s paradox – confusion between the group average and total average – is one of the four most common statistical ecological fallacies. If you have the patience, it is well worth making your own tiny dataset on paper and working through this paradox, as it is mind-blowing to apprehend!

    What I believe this all means is that the temperature record is dominated by smearing generally, and by latitudinal (i.e., east/west) smearing particularly. And this means, for Australia and probably the US as well, that the UHI effect of north/south coastal urban sprawl is tainting the record.

    Either way, if real changes in climate are actually happening locally, then this local effect will be smeared into a global trend – by the current practice – despite or in lieu of any real global effect.

    So, yes I do think the globe has warmed since the LIA or at least the last glaciation but I don’t believe it is or can be detected in any of the global climate data products.

    Chen, D. et al. Satellite measurements reveal strong anisotropy in spatial coherence of
    climate variations over the Tibet Plateau. Sci. Rep. 6, 30304; doi: 10.1038/srep30304
    (2016).

    Director, H., and L. Bornn, 2015: Connecting point-level and gridded moments in the
    analysis of climate data. J. Climate, 28, 3496–3510, doi:10.1175/JCLI-D-14-00571.1.
    Hendrick, R. L. & Comer, G. H. Space variations of precipitation and implications for raingauge
    network designing. J Hydrol 10,
    151–163 (1970).

    Jones, P. D., T. J. Osborn, and K. R. Briffa, Estimating sampling
    errors in large-scale temperature averages. J. Climate,
    10, 2548–2568, 1997a.

    Robinson, W., 1950: Ecological correlations and the behaviour of
    individuals. Amer. Sociol. Rev., 15, 351–357, doi:10.2307/2087176.

  58. TRIAL TEST OF WORDPRESS LIMITATIONS ON THE TOTAL NUMBER OF CHARACTERS ALLOWED IN A COMMENT

    —- This is the first draft of a comment on the UCS’ decision to embrace nuclear power —–
    ——– It will be posted if all characters still remain in the text without truncation. ———

    Here in the US, including the options of nuclear, wind, solar, and hydro in the power generation mix is strictly a public policy decision. Left to its own devices, the power market in the US would swing decisively towards gas-fired generation given that among all the choices available for the next several decades, gas-fired generation has the least technical, environmental, and financial risks. It also has the highest profit making potential for private investors.

    More than a decade ago, in about 2006, when the initial cost estimates for pursuing a 21st-century nuclear renaissance were being done, the 6 billion dollar estimate for a pair of new-technology AP1000’s was thought by many to be too low. Twenty-five years had passed without construction of a clean-sheet reactor design being initiated, and the US nuclear industrial base was in a deeply withered state. It was recognized that the steep learning curve for doing nuclear construction in the US had to be passed through for a second time, and that the cost estimates for initiating new projects had to include the costs of rebuilding the nuclear industrial base and of passing through the nuclear construction learning curve yet another time.

    More realistic estimates for two AP1000’s were developed in 2009 and later in 2012 — 9 billion dollars and 12 billion dollars respectively. It cannot be emphasized enough here that the estimate of 12 billion dollars when onsite construction began in 2012 included the expected costs of full compliance with NRC regulations and of passing through the nuclear learning curve for a second time. These estimates also assumed that all the difficult lessons learned from the nuclear projects of the 1980’s would be diligently applied to the latest projects as they were being initiated and while they were in progress.

    How did 2012’s estimate of 12 billion dollars for two AP1000’s grow to 2017’s estimate of 25 billion dollars in just five years?

    The answer here is that all the lessons learned from the 1980’s were ignored. Thirty years ago, a raft of studies and reports were published which analyzed the cost growth problems and the severe quality assurance issues the nuclear construction industry was then experiencing, and made a series of recommendations as to how to solve these problems. Those studies had a number of common threads:

    Complex, First of a Kind Projects: Any large project that is complicated, involves new and/or high technology, has several phases, involves a diversity of technical specialties, involves a number of organizational interfaces, and has significant cost and schedule pressures—any project which has these characteristics is a prime candidate for experiencing significant quality assurance issues, cost control issues, and schedule growth problems.

    Strength of the Industrial Base: Nuclear power requires competent expertise in every facet of design, construction, testing, and operations. This kind of competent expertise existed in the early 1980’s but was not being effectively utilized in many of the power reactor construction projects, the ones that experienced the most serious cost and schedule growth issues.

    A Changing Technical Environment: The large reactor projects, the 1300 megawatt plants, were being built for the first time. They were being built without a prototype, and they were substantially different from previous designs. Those big plants had many new and significantly revised systems inside them, systems that had to be designed, constructed, tested, and subsequently operated.

    A Changing Regulatory Environment: In the late 1970’s and early 1980’s, there was a continual increase in the regulatory requirements being placed on power reactors. The Three Mile Island accident, the Brown’s Ferry fire, the Calvert Cliffs environmental decision, all of those events required the power utilities to change the way they were dealing with their projects in the middle of the game. Some power utilities were successful in making the necessary changes, others were not.

    Project Management Effectiveness: Those nuclear projects which had a strong management team and strong management control systems at all levels of the project organization generally succeeded in delivering their projects on cost and on schedule. Those that didn’t were generally incapable of dealing with the changing technical and regulatory environment and became paralyzed in the face of the many QA issues, work productivity issues, and cost control issues they were experiencing.

    Overconfidence Based on Past Project Success: Many of the power utilities which had a record of past success in building non-nuclear projects, and which were constructing nuclear plants for the first time, did not recognize that nuclear is different. Such utilities did not take their regulatory commitments seriously, and did not do an adequate job of assessing whether or not the management systems and the project methods they had been using successfully for years were up to the task of managing a nuclear project.

    Reliance on Contractor Expertise: The projects which succeeded had substantial nuclear expertise inside the power utility’s own shop. Those utilities who were successful in building nuclear plants were knowledgeable customers for the nuclear construction services they were buying. They paid close and constant attention to the work that was being done on the construction site, in the subcontractor fabrication shops, and in the contractor’s technical support organization. Emerging issues and problems were quickly and proactively identified, and quick action was taken to resolve those problems.

    Management Control Systems: The nuclear projects which failed did not have effective management control systems for contractor and subcontractor design interface control; for configuration control and management of design documentation and associated systems and components; and for proper and up-to-date maintenance of contractor and inter-contractor cost and schedule progress information. Inadequate management control systems prevented an accurate assessment of where the project actually stood, and in many cases were themselves an important factor in producing substandard technical work.

    Cost & Schedule Control Systems: For those projects which lacked a properly robust cost & schedule control system, many activities listed on their project schedules were seriously mis-estimated for time, cost, scope, and complexity. Other project activities covering significant portions of the total work scope were missing altogether, making it impossible to accurately assess where the project’s cost and schedule performance currently stood, and where it was headed in the future.

    Quality Assurance: For those nuclear projects which lacked the necessary management commitment to meeting the NRC’s quality assurance expectations, the added cost of meeting new and existing regulatory requirements was multiplied several times over as QA deficiencies were discovered and as significant rework of safety-critical systems and components became necessary.

    Construction Productivity & Progress: For those nuclear projects which lacked a strong management team; and which lacked effective project control systems and a strong management commitment to a ‘do-it-right the first time’ QA philosophy, the combined impacts of these deficiencies had severe impacts on worker productivity at the plant site, on supplier quality and productivity at offsite vendor facilities, and on the overall forward progress of the entire project taken as a whole.

    Project Financing and Completion Schedule: As a result of these emerging QA and site productivity problems, many of the power utilities were forced to extend their construction schedules and to revise their cost estimates upward. Finding the additional money and the necessary project resources to complete these projects proved extremely difficult in the face of competition from other corporate spending priorities and from other revenue consuming activities.

    A Change in Strategy by the Anti-nuclear Activists: In the late 1970’s and early 1980’s, the anti-nuclear activists were focusing their arguments on basic issues of nuclear safety. They got nowhere with those arguments. Then they changed their strategic focus and began challenging the nuclear projects on the basis of quality assurance issues, i.e., that many nuclear construction projects were not living up to the quality assurance commitments they had made to the public in their NRC license applications.

    Regulatory Oversight Effectiveness: In the early 1980’s, the NRC was slow to react to emerging problems in the nuclear construction industry. In that period, the NRC was focusing its oversight efforts on the very last phases of the construction process when the plants were going for their operating licenses. Relatively little time and effort was being devoted to the earlier phases of these projects, when emerging QA problems and deficiencies were most easily identified and fixed. Quality assurance deficiencies that had been present for years were left unaddressed until the very last phases of the project, and so were much more difficult, time consuming, and expensive to resolve.

    Working Relationships with Regulators: The successful nuclear projects from the 1970’s and 1980’s, the ones that stayed on cost and on schedule, did not view the NRC as an adversary. The successful projects viewed the NRC as a partner and a technical resource in determining how best to keep their project on track in the face of an increasingly more complex and demanding project environment. On the other hand, for those projects which had significant deficiencies in their QA programs, for those that did not take their QA commitments seriously, the anti-nuclear activists introduced those deficiencies into the NRC licensing process and were often successful in delaying and sometimes even killing a poorly managed nuclear project.

    If it’s done with nuclear, it must be done with exceptional dedication to doing a professional job in all phases of project execution from beginning to end.

    Once again, it cannot be emphasized enough here that the estimate of 12 billion dollars for two AP1000’s when onsite construction at VC Summer and at Vogtle 3 & 4 began in 2012 included the expected costs of full compliance with NRC regulations and of passing through the nuclear learning curve for a second time. These estimates also assumed that all the difficult lessons learned from the nuclear projects of the 1980’s, as I’ve described them above, would be diligently applied to the latest projects as they were being initiated and while they were in progress.

    For those of us who went through the wrenching experiences of the 1980’s in learning how to do nuclear construction right the first time, what we’ve seen with VC Summer and Vogtle 3 & 4 has been deja vu all over again. The first indications of serious trouble came in 2011 when the power utilities chose contractor teams that did not have the depth of talent and experience needed to handle nuclear projects of this level of complexity and with this level of project risk. That the estimated cost eventually grew to 25 billion dollars in 2017 should be no surprise.

    The project owners and managers ignored the hard lessons of the 1980’s. They did not do a professional job in managing their nuclear projects; and they did not meet their commitments to the public as these commitments are outlined in their regulatory permit applications. Just as happened in the 1980’s, the anti-nuclear activists and the government regulatory agencies are now holding these owners and managers to account for failures that were completely avoidable if sound management practices had been followed.

  59. I went into reading this with my hackles up. Then I calmed down and said to myself, “Shelly, you have to be open to new information and maybe they know something you don’t.” So I carefully and calmly read it in between deep yoga breaths.

    This caught my attention:

    “[E]arly warnings could be issued that include information on what people can do to protect themselves and to protect crops and ecosystems,” Ebi said.

    We already do this every single day. It’s called the weather channel.

    I agree heartily with Zigmaster :

    In the scheme of climate cycles 1980- 2016 is not a long period. The last two years may not have been on trend and certainly in the 1930s and the 1890s there were extreme heat conditions which appear to have been worse than the period they looked at. Typical cherry picking by warmist extremists.

    And Samuel was pretty coherent

    What the above means is that Sheridan and his co-author simply used the “temperature data” they obtained from NOAA’s National Climatic Data Center (NCDC), …….. and everyone knows that just the “adjustments” introduced by NOAA proved that “every year is hotter than the previous year”.

    But Richard M allowed me to stop with the deep breathing and go for the deep belly laughs:

    It is time to start a climate comedy channel.

    You know, my dog Moxie helps me predict the weather – or climate – ah, well, warming. If it’s warming she runs all over the yard and I have to holler for her to come back; if it’s cooling, she pees and runs back inside. She’s pretty good at it –

    http://www.day-by-day.org/weatherpredictor.jpg

    Although WordPress isn’t displaying images, it’s worth clicking to see Moxie predicting the weather for me.

  60. This comment didn’t get approved:

    I just wish to point out that the following:

    π(U) ≥ 0.9: sea level rise up to 0.3 m; corroborated possibilities
    0.5 > π(U) > 0.9: sea level rise exceeding 0.3 m and up to 0.63 m; verified possibilities contingent on DT, based on IPCC AR5 likely range (but excluding RCP8.5).
    0.5 ≥ π(U) > 0.1: sea level rise exceeding 0.63 m and up to 1.6 m; unverified possibilities
    0.1 ≥ π(U) > 0: sea level rise between 1.6 and 2.5 m; borderline impossible
    π(U) = 0: sea level rise exceeding 2.5 m; impossible based upon background knowledge
    π(U) = 0: negative values of sea level change; impossible based on background knowledge

    Is mathematically incorrect. It should have been written:

    π(U) ≥ 0.9: sea level rise up to 0.3 m; corroborated possibilities
    0.5 < π(U) < 0.9: sea level rise exceeding 0.3 m and up to 0.63 m; verified possibilities contingent on DT, based on IPCC AR5 likely range (but excluding RCP8.5).
    0.1 < π(U) ≤ 0.5: sea level rise exceeding 0.63 m and up to 1.6 m; unverified possibilities
    0 < π(U) ≤ 0.1: sea level rise between 1.6 and 2.5 m; borderline impossible
    π(U) = 0: sea level rise exceeding 2.5 m; impossible based upon background knowledge
    π(U) = 0: negative values of sea level change; impossible based on background knowledge

    Note the use of LESS THAN signs. This (e.g. 0.1 < π(U) ≤ 0.5) is verbalized as “0.1 is less than π(U), and π(U) is less than or equal to 0.5”, meaning π(U) is strictly greater than 0.1 and at most 0.5. Mathematicians might not be such sticklers, but we computer scientists are. Many an algorithm has failed for want of proper greater-than and less-than signs.
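
    A small Python sketch of why those boundary conventions matter – classify() below simply transcribes the corrected table, and every value of π(U) lands in exactly one band:

    def classify(p):
        # Strict (<) versus inclusive (<=) comparisons: no gaps, no overlaps.
        if p == 0:
            return "impossible (exceeding 2.5 m, or negative change)"
        if p <= 0.1:
            return "borderline impossible (1.6 to 2.5 m)"
        if p <= 0.5:
            return "unverified possibilities (0.63 to 1.6 m)"
        if p < 0.9:
            return "verified possibilities (0.3 to 0.63 m)"
        return "corroborated possibilities (up to 0.3 m)"

    for p in (0.0, 0.1, 0.5, 0.9):   # the boundary values are the whole point
        print(p, "->", classify(p))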

    Just saying
    GoatGuy

  61. Test 1

    I’m not a climate scientist but I can read. So why does my climatology textbook* take the exact opposite position to that of Nick Stokes and Steven Mosher above? They both have this whole tail-wagging-the-dog thing going with their odd notion that a global statistic derived from local measurements is somehow forcing those same measurements!

    Here are its conclusions, the concluding paragraph in fact [emphasis added]:

  62. test 2

    Statistical methods are used to summarize patterns within data; however, the most useful summary is often not a temporal or spatial average.

    Climate variability, in the form of temporal fluctuations and spatial variations, is often more important than changes in average values.

    Much of the global warming debate, for instance, focuses on changes in global average air temperature anomalies; however, there is always important interannual variability, not necessarily systematic change, in air temperature that has important implications for applied climatological research (Katz and Brown, 1992).

    In addition, global and hemispheric averages disregard the spatial distribution of climatic changes and variability. There are often years with very similar global average air temperature or precipitation; however the spatial distributions of these variables (and their climatic impacts) can be vastly different.

    When using statistical analysis in applied climatological research, therefore, one must consider not only the ’average’ conditions at a given location, but also the variability of important climatological variables over a wide range of temporal and spatial scales. – Scott M. Robeson, Statistical Considerations

  63. test 3

    Hang on! So climate change could cause volcanic eruptions that could cause climate change? That could be a real problem if volcanoes could warm the earth but they could only do that in the past, now all they could do is cool the atmosphere; apparently!

  64. Mark,

    That’s an important point! I kept this post focused narrowly on climate science, but the broad loss of confidence in America’s institutions is an important factor. And loss of confidence in government officials is the core of this. To see why, read The Big List of Lies by Government Officials.

    Also see Gallup’s annual Confidence in Institutions surveys. Terrifying data to anyone interested in America’s future:

    https://news.gallup.com/poll/1597/confidence-institutions.aspx

  65. [CO₃²⁻], and thus Ω, is most likely incorrect.

    Rather, as outlined above, it is most likely the decrease in seawater pH and associated problems of pH homeostasis within organisms that governs changes in calcification rates under OA conditions.

  66. “They can do. But they have to work at it. And if they are to retain CaCO₃ in a low carbonate solution, they have to work harder.”

  67. The Global Warming Policy Forum are inviting you to take part in a competition, with a chance to win some excellent prizes.

    Tell them about what you think was the tallest green tale of 2018, and explain why it was so daft.

    Nominations together with rebuttals should be emailed to harry.wilkinson@thegwpf.com
    Deadline: 31 December 2018.
    Prize: Two GWPF books (Group Think and Population Bombed) plus a bottle of House of Lords whisky.
    The GWPF team will decide the winner of the competition early in the new year.
    Good luck, and Merry Christmas!

  68. If I write something innocuous, will it be published on WUWT? IOW, is it me personally that is blocked?

    [Printed and promptly published. .mod]

  69. “If you can chuckle at it, you can stay with it.”

    – Erma Bombeck

    Norman Cousins is frequently described as the man who laughed himself back to health.
    According to his autobiography, Norman Cousins – a well-known political journalist, author, professor, and
    world peace advocate – was diagnosed with ankylosing spondylitis, a painful spinal condition. He put himself
    on high doses of vitamin C and humor – which involved watching a whole lot
    of Marx Brothers movies. He says, “I made the joyous discovery that ten minutes of genuine belly laughter had an anesthetic effect and would give me at least two hours of pain-free sleep. When the pain-killing effect of the laughter wore off, we would switch on the motion picture projector again, and not infrequently, it would lead to another pain-free interval.”

    We all know how good it feels to laugh.
    Have you ever “laughed ’til it hurts”? Well, perhaps
    that is a sign that those laughing muscles are not used often enough.
    Whenever possible and appropriate, laugh. Do not laugh at the expense of
    someone else’s feelings. A healthy laugh calls for a healthy
    frame of mind. A hearty laugh should
    embrace the people around you, not alienate them.

    I love to laugh. Whenever I’m feeling down, I just start smiling.
    There is no way you can feel bad, sad, or depressed if you force
    yourself to smile and laugh. Try it! Yeah, right now.
    Doesn’t that feel good? Drs. Gael Crystal and Patrick Flanagan, authors of the article entitled Laughter – Still the Best Medicine
    (1995), say, “Laughter is a form of internal jogging that exercises the body and stimulates the release of beneficial brain neurotransmitters and hormones. Positive outlook and laughter is actually good for our health!”

    Try to see the humor in everyday predicaments you may find yourself in. Do not be overly sensitive to what somebody says or to
    another person’s point of view.

  70. This site needs updating to WordPress 5 and Gutenberg 5.0.2.
    Then we will be able to properly edit our posts and ADD LINKED IMAGES.

  71. “pre” test:

    The following radiation quantities are consistent with those assumptions but show that the surface emits 2.2 W/m^2 for every 1 W/m^2 it absorbs from the sun. And only that 1 W/m^2 escapes back to space. Yet the emissions equal the absorptions: no energy is created or destroyed.

                                                           Total
    Absorbed from:      Surface    L.Atm     U.Atm     Space     Absorbed

    Absorbed by:
    Surface              0.0000    1.0500    0.1500    1.0000  ||  2.2000
    Lower Atmosphere     1.6500    0.0000    0.4500    0.0000  ||  2.1000
    Upper Atmosphere     0.4125    0.7875    0.0000    0.0000  ||  1.2000
    Space                0.1375    0.2625    0.6000    0.0000  ||  1.0000
    --------------------------------------------------------------------
    Total Emitted:       2.2000    2.1000    1.2000    1.0000

  72. Each atmospheric layer in this (no-convection, no-conduction, lumped-parameter) hypothetical absorbs ¾ of the radiation it receives, and it emits all the radiation it absorbs. Also, 1 W/m^2 comes from space and the same amount is returned to space, but the surface emits 2.2 W/m^2. If you go through the arithmetic you can confirm this. If you change it so that each atmospheric layer absorbs all the radiation it receives, then the surface will emit 3.0 W/m^2.

    The point is that no energy is created or destroyed, yet the surface emits 2.2 times as much power as the system receives from space (the sun). Each atmospheric layer receives more, too.

    \begin{array}{lcccccc}  &&&&&&\mathrm{Total}\\  \mathrm{Absorbed\,from:}&\mathrm{Surface}&\mathrm{L.Atm}&\mathrm{U.Atm}&\mathrm{Space}&&\mathrm{Absorbed}\\  &&&&&&\\  \mathrm{Absorbed\,by:}&&&&&\\  \mathrm{Surface}&0.0000&1.0500&0.1500&1.0000&||&2.2000\\  \mathrm{Lower\,Atmosphere}&1.6500&0.0000&0.4500&0.0000&||&2.1000\\  \mathrm{Upper\,Atmosphere}&0.4125&0.7875&0.0000&0.0000&||&1.2000\\  \mathrm{Space}&0.1375&0.2625&0.6000&0.0000&||&1.0000\\  &&&&&&\\  \mathrm{Total\,Emitted:}&2.2000&2.1000&1.2000&1.0000  \end{array}
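
    A numpy sketch (my own restatement of the balance equations implied above, not the original spreadsheet) that recovers those numbers, with a = ¾ the per-layer absorptivity:

    import numpy as np

    a = 0.75   # fraction of incident radiation each layer absorbs
    # Unknown total emissions: S (surface), L (lower layer), U (upper layer).
    # Surface: S = 1 (solar) + L/2 (down) + (U/2)(1 - a) (down, through the lower layer)
    # Lower:   L = a*S + a*(U/2)
    # Upper:   U = a*(1 - a)*S + a*(L/2)
    A = np.array([[1.0,          -0.5,   -(1 - a) / 2],
                  [-a,            1.0,   -a / 2      ],
                  [-a * (1 - a), -a / 2,  1.0        ]])
    b = np.array([1.0, 0.0, 0.0])
    S, L, U = np.linalg.solve(A, b)
    print(S, L, U)                                     # 2.2, 2.1, 1.2
    print((1 - a)**2 * S + (1 - a) * L / 2 + U / 2)    # 1.0 escapes to space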

  73. Don132

    Sokath, his eyes opened!

    Congratulations on your breakthrough. And congratulations to PJF for the comment responsible.
    I withdraw my assessment of your limitations. Perhaps I need to reassess my ability to explain things; I had thought I’d made the same point. Indeed, I had been under the illusion that what I’d said here was pellucid: “Here’s the reason why Mr. Eschenbach is right that almost all of those arguments are irrelevant: there would be no average conduction between the earth’s surface and the atmosphere if the atmosphere were perfectly non-radiative.” Apparently not.

    However that may be, now that you’ve made one breakthrough and recognized that the greenhouse effect is needed, I commend to your attention the Steve Goddard / Luboš Motl explanation of why its effect eventually becomes negligible in comparison to the integral of lapse rate with respect to altitude.

  75. Virtual particle – the magic of quantum mechanics
    E t < ħ/2
    Where: E is energy, t is time, ħ is the reduced Planck constant

  76. Lorenz attractor – the butterfly of chaos
    p > o (o + B + 3)/(o – B – 1)
    Where: p is the Rayleigh number = 28, o is the Prandtl number = 10, B is a geometric factor = 8/3
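
    Since p = 28 exceeds o(o + B + 3)/(o – B – 1) ≈ 24.74 for these values, the system is chaotic. A minimal Python integration (my own sketch; forward Euler for brevity):

    sigma, beta, rho = 10.0, 8.0 / 3.0, 28.0   # Prandtl, geometric factor, Rayleigh
    x, y, z = 1.0, 1.0, 1.0
    dt = 0.001
    for _ in range(50000):                      # integrate 50 time units
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
    print(x, y, z)   # the trajectory wanders the butterfly-shaped attractor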

  77. table {
    border-collapse: collapse;
    }
    td {
    border: 1px solid #000000;
    }

    Greenland

    Location   ID No.   Elev. (m)   Lat.   Long.   Coldest Month   Yearly Avg   Hottest Month
    Moriusaq   24597    25          76.8   -69.9   -30.8           -13          6.6

  78. Moreover, the traditional method overestimates the daily average temperature at 134 stations (62.3%), underestimates it at 76 stations (35.4%), and shows no difference at only 5 stations (2.3%).

    On average, the traditional method overestimates the daily average temperature compared to hourly averaging by approximately 0.16°F, though there is strong spatial variability.

    The explanation for the long-term difference between the two methods is the underlying assumption for the twice-daily method that the diurnal curve of temperature is symmetrical.

    In particular, the Yule–Kendall index is positive for all 215 CONUS stations, indicating that the daily temperature curve is right skewed; that is, more hourly observations occur near the bottom of the distribution of hourly temperatures (i.e., around Tmin) than near the top of the distribution (around Tmax). – Thorne et al. 2016*
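
    A toy demonstration of that skew (the diurnal curve is synthetic, my own construction): with a long cool night and a brief afternoon spike, (Tmax + Tmin)/2 overshoots the true hourly mean:

    import math

    # Right-skewed diurnal cycle: most hours sit near Tmin, with a short peak at 3 pm.
    temps = [10.0 + 12.0 * math.exp(-(h - 15) ** 2 / 8.0) for h in range(24)]

    hourly_mean = sum(temps) / len(temps)            # ~12.5
    twice_daily = (max(temps) + min(temps)) / 2.0    # 16.0, the traditional method
    print(round(hourly_mean, 2), round(twice_daily, 2))

    The sign of the bias matches the quote; its size here is exaggerated by the synthetic curve.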

  79. The 131 reuses number is only for cotton bags. For the non-woven polypropylene bags that dominate the market the number is only 11 reuses to equal a single use HDPE bag. I have been using some of my PP Safeway bags a couple of times a week for years so that kinda blows single use HDPE out of the water. Here’s the UK study that produced all of the numbers: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/291023/scho0711buan-e-e.pdf

  80. The math comes from this. If you have a sinusoid of frequency f Hz (sin(2πft)) sampled at s Hz, the samples are sin(2πfn/s), n = 0, 1, 2, …
    But this is indistinguishable from sin(2π(fn/s + m*n)) for any integer m (positive or negative), because you can add a multiple of 2π to the argument of sin without changing its value.

    But sin(2π(fn/s+m*n)) = sin(2π(f+m*s)n/s)
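
    A quick numerical check of that identity – a tone at f and one at f + s, sampled at s Hz, produce exactly the same sample sequence:

    import math

    f, s = 3.0, 24.0        # a 3 Hz sinusoid sampled at 24 Hz...
    alias = f + s           # ...is indistinguishable from a 27 Hz one (m = 1)
    for n in range(6):
        print(round(math.sin(2 * math.pi * f * n / s), 12),
              round(math.sin(2 * math.pi * alias * n / s), 12))
    # each pair prints identical values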

  81. I saw your last hostile comments and I see this one. Why do you conclude that I’m prevaricating? Why isn’t it a reasonable conclusion that I didn’t understand your comment or question and therefore missed your point? I took the time to write a longer post with the attempt to be helpful. If it wasn’t helpful or insightful, well OK, that is something I can handle. But it sounds like your actual goal was to set a “honey trap.” Is it really necessary to bring that kind of hostility to the conversation? Constructive feedback is something I will consider. Hostility is just dismissed. Your hostility speaks to your character – or lack thereof – not to mine. I’m not going to engage you in some adolescent spat on a forum. I have not seen any sincerity from you and therefore your attacks have no significance to me. I don’t think your hostility shows anything good about you. If something I said or did got under your skin, and you tell me about it constructively, I’ll try to address it. If my personality or style is not to your liking then just stay away from me, because it is not going to change for someone who has no significance to me and comes at me with hostility.

    Three paragraphs of nothing follow:

    And though you don’t deserve for me to dignify any other response from you, I will address your specific technical comments for anyone else who may be reading. Thank you for sharing some detailed information about the kinds of lag that the Stevenson screen and thermometers introduce. Why didn’t you just make that point without the hostility?  Scott said: “At some point you have to call BS on the navel gazing of abstraction and return to the real world.” And: “So that’s my bottom line! And talk of the comparison of those records with higher sampling rates is pointless!”

    Five paragraphs now and 300 words in and “you” are yet to say anything of substance. 

    Finally – six paragraphs in – you may have actually asked a question but only by restating the initial “argument” for FFS! Are you actually thinking about (Or computing) what I said? :

    [1] My reply: Why would you say that sampling and sampling properly is not the real world? It is how it is done in every other application I can think of except climate science. Why is it pointless to compare the correct method to the method currently used that doesn’t give us accurate information? Maybe I don’t fully get your drift, but it sounds like you are saying the problem is the lag. [2]Well why can’t there be more than 1 problem? Mercury in glass thermometers can be replaced with other faster instruments. Screens can be redesigned. (Is this what you are recommending?) The max/min method will still not give you what is correct. [3] From an engineering perspective, capture all of the content available and once sampled properly you are free to filter out what you don’t want or need. [4] When you start talking about exhausts and vehicular wakes then aren’t we now speaking about improperly sited stations? (Yet another problem with the record).

  82. The central limit theorem states that under certain (fairly common) conditions, the sum of many random variables will have an approximately normal distribution. More specifically, where X1, …, Xn are independent and identically distributed random variables with the same arbitrary distribution, zero mean, and variance σ², and Z is their mean scaled by √n:

    Z = √n × ( (1/n) Σ_{i=1}^{n} X_i )

    Then, as n increases, the probability distribution of Z will tend to the normal distribution with zero mean and variance σ².
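
    A quick numerical illustration (uniform variables, my arbitrary choice) that the scaled mean really does approach N(0, σ²):

    import math
    import random
    import statistics

    # X_i uniform on [-1, 1]: zero mean, variance sigma^2 = 1/3.
    n, trials = 500, 5000
    zs = [math.sqrt(n) * (sum(random.uniform(-1.0, 1.0) for _ in range(n)) / n)
          for _ in range(trials)]
    print(statistics.mean(zs))       # ~0
    print(statistics.variance(zs))   # ~0.333, i.e. sigma^2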

  83. how about a code tag?

    USW00026510 924 JAN -16.5 11.3 0.37
    USW00026510 839 FEB -11.0 9.3 0.32
    USW00026510 920 MAR -4.0 6.8 0.22
    USW00026510 894 APR 4.8 6.1 0.20
    USW00026510 924 MAY 14.1 5.5 0.18
    USW00026510 894 JUN 20.1 4.1 0.14
    USW00026510 924 JUL 20.8 4.3 0.14
    USW00026510 924 AUG 17.5 4.1 0.13
    USW00026510 894 SEP 11.8 4.9 0.16
    USW00026510 924 OCT 0.0 6.0 0.20
    USW00026510 894 NOV -10.4 7.8 0.26
    USW00026510 922 DEC -14.8 9.3 0.31

    or blockquote

    USW00026510 924 JAN -16.5 11.3 0.37
    USW00026510 839 FEB -11.0 9.3 0.32
    USW00026510 920 MAR -4.0 6.8 0.22
    USW00026510 894 APR 4.8 6.1 0.20
    USW00026510 924 MAY 14.1 5.5 0.18
    USW00026510 894 JUN 20.1 4.1 0.14
    USW00026510 924 JUL 20.8 4.3 0.14
    USW00026510 924 AUG 17.5 4.1 0.13
    USW00026510 894 SEP 11.8 4.9 0.16
    USW00026510 924 OCT 0.0 6.0 0.20
    USW00026510 894 NOV -10.4 7.8 0.26
    USW00026510 922 DEC -14.8 9.3 0.31


  84. ID No. Recs. MON AVG TMAX STD DEV Est Error in Mean
    USW00026510 924 JAN -16.5 11.3 0.4
    USW00026510 839 FEB -11.0 9.3 0.3
    USW00026510 920 MAR -4.0 6.8 0.2
    USW00026510 894 APR 4.8 6.1 0.2
    USW00026510 924 MAY 14.1 5.5 0.2
    USW00026510 894 JUN 20.1 4.1 0.1
    USW00026510 924 JUL 20.8 4.3 0.1
    USW00026510 924 AUG 17.5 4.1 0.1
    USW00026510 894 SEP 11.8 4.9 0.2
    USW00026510 924 OCT 0.0 6.0 0.2
    USW00026510 894 NOV -10.4 7.8 0.3
    USW00026510 922 DEC -14.8 9.3 0.3

    Now let’s look at the data for 2013:

    ID No. Rrecs MON AVG TMAX STD DEV Est Error in Mean Anomaly Error in anomaly
    USW00026510 31 JAN -11.9 9.4 1.7 4.6 1.7
    USW00026510 28 FEB -12.2 4.9 0.9 -1.2 1.0
    USW00026510 31 MAR -4.5 6.1 1.1 -0.5 1.1
    USW00026510 30 APR -0.1 6.4 1.2 -4.9 1.2
    USW00026510 31 MAY 12.4 9.2 1.6 -1.7 1.7
    USW00026510 30 JUN 23.7 5.2 1.0 3.6 1.0
    USW00026510 31 JUL 21.0 5.5 1.0 0.2 1.0
    USW00026510 31 AUG 18.0 3.8 0.7 0.5 0.7
    USW00026510 30 SEP 9.1 4.4 0.8 -2.6 0.8
    USW00026510 31 OCT 6.5 2.9 0.5 6.5 0.6
    USW00026510 30 NOV -6.4 7.5 1.4 4.0 1.4
    USW00026510 31 DEC -14.5 9.0 1.6 0.2 1.6

  85. ID No. Recs. MON AVG TMAX STD DEV Est Error in Mean
    USW00026510 924 JAN -16.5 11.3 0.4
    USW00026510 839 FEB -11.0 9.3 0.3
    USW00026510 920 MAR -4.0 6.8 0.2
    USW00026510 894 APR 4.8 6.1 0.2
    USW00026510 924 MAY 14.1 5.5 0.2
    USW00026510 894 JUN 20.1 4.1 0.1
    USW00026510 924 JUL 20.8 4.3 0.1
    USW00026510 924 AUG 17.5 4.1 0.1
    USW00026510 894 SEP 11.8 4.9 0.2
    USW00026510 924 OCT 0.0 6.0 0.2
    USW00026510 894 NOV -10.4 7.8 0.3
    USW00026510 922 DEC -14.8 9.3 0.3

    Now let’s look at the data for 2013:

    ID No. Rrecs MON AVG TMAX STD DEV Est Error in Mean Anomaly Error in anomaly
    USW00026510 31 JAN -11.9 9.4 1.7 4.6 1.7
    USW00026510 28 FEB -12.2 4.9 0.9 -1.2 1.0
    USW00026510 31 MAR -4.5 6.1 1.1 -0.5 1.1
    USW00026510 30 APR -0.1 6.4 1.2 -4.9 1.2
    USW00026510 31 MAY 12.4 9.2 1.6 -1.7 1.7
    USW00026510 30 JUN 23.7 5.2 1.0 3.6 1.0
    USW00026510 31 JUL 21.0 5.5 1.0 0.2 1.0
    USW00026510 31 AUG 18.0 3.8 0.7 0.5 0.7
    USW00026510 30 SEP 9.1 4.4 0.8 -2.6 0.8
    USW00026510 31 OCT 6.5 2.9 0.5 6.5 0.6
    USW00026510 30 NOV -6.4 7.5 1.4 4.0 1.4
    USW00026510 31 DEC -14.5 9.0 1.6 0.2 1.6

  86. ID No. Recs. MON AVG TMAX STD DEV Est Error in Mean
    USW00026510 924 JAN -16.5 11.3 0.4
    USW00026510 839 FEB -11.0 9.3 0.3
    USW00026510 920 MAR -4.0 6.8 0.2
    USW00026510 894 APR 4.8 6.1 0.2
    USW00026510 924 MAY 14.1 5.5 0.2
    USW00026510 894 JUN 20.1 4.1 0.1
    USW00026510 924 JUL 20.8 4.3 0.1
    USW00026510 924 AUG 17.5 4.1 0.1
    USW00026510 894 SEP 11.8 4.9 0.2
    USW00026510 924 OCT 0.0 6.0 0.2
    USW00026510 894 NOV – 10.4 7.8 0.3
    USW00026510 922 DEC -14.8 9.3 0.3

    Now let’s look at the data for 2013:

    ID No. Rrecs MON AVG TMAX STD DEV Est Error in Mean Anomaly Error in anomaly
    USW00026510 31 JAN -11.9 9.4 1.7 4.6 1.7
    USW00026510 28 FEB -12.2 4.9 0.9 -1.2 1.0
    USW00026510 31 MAR -4.5 6.1 1.1 -0.5 1.13
    USW00026510 30 APR -0.1 6.4 1.2 -4.9 1.2
    USW00026510 31 MAY 12.4 9.2 1.6 -1.7 1.7
    USW00026510 30 JUN 23.7 5.2 1.0 3.6 1.0
    USW00026510 31 JUL 21.0 5.5 1.0 0.2 1.0
    USW00026510 31 AUG 18.0 3.8 0.7 0.5 0.7
    USW00026510 30 SEP 9.1 4.4 0.8 -2.6 0.8
    USW00026510 31 OCT 6.5 2.9 0.5 6.5 0.6
    USW00026510 30 NOV -6.4 7.5 1.4 4.0 1.4
    USW00026510 31 DEC -14.5 9.0 1.6 0.2 1.6

  87. again

    $latex \Delta \overline{X}_{est} = \frac{\sigma}{\sqrt{N}}$

    This worked fine at the online latex tester

  88. I’ve been interested for some time in the calculations used to get the global anomaly, especially the error calculation and propagation throughout the process. I don’t know exactly which set of stations is used in the calculations, so I grabbed the file ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/ghcnd_all.tar.gz. From that I loaded the station metadata into my database and selected the GSN stations using the GSN_FLAG column in the table.

    I was surprised how sparse some of the station data was; out of all of the stations with the GSN flag set (991), only 139 had enough valid data to fill out the baseline of 30 years from 1981 to 2010. I ran the statistics against that set to see what I’d find.

    The raw daily station data tags missing values with the -9999 flag, but the flag wasn’t consistent; it appeared in the data set not only as -9999, but also as 999, 99, and -99. I converted all of the varieties to -9999.

    I think I understand the basics of creating an anomaly: a baseline data set of a station’s mean monthly temps is created from each station’s data over a 30-year period from 01-JAN-1981 to 31-DEC-2010, so that the result is an average of all the January temps from 1981 to 2010, all of the Feb temps, etc. The station’s data is then averaged over a month’s time, and the baseline mean for that month is subtracted from the current month’s mean, and that difference is the anomaly.

    For my statistical rules, I used a website from the University of Toronto that I thought did a good job of explaining how to calculate standard error, how to propagate error through calculations, and how to determine the estimated error in the mean. The website’s url is https://faraday.physics.utoronto.ca/PVB/Harrison/ErrorAnalysis/Precision.html. There’s more to the site than where I linked to, but my link is where the equations I used were.

    I picked a station at random that had a mostly full data set for the 30-year baseline, and also for the sample year of 2013. Both sets are a few days short, but six days or so out of 930 in a 30-year period shouldn’t make that much difference. The station name is AK MCGRATH AP and the 30-year baseline data for that station looked like this:

    https://jaschrumpf.files.wordpress.com/2019/02/2013_station_usw00026510_data-1.png?w=450

    The data for 2013:

    https://jaschrumpf.files.wordpress.com/2019/02/2013_station_usw00026510_data-1.png?w=450

    Looking at January in the 30-year baseline, you see the standard deviation is pretty large. Using this equation:
    https://jaschrumpf.files.wordpress.com/2019/02/standard_error_formula.png?w=450
    we get 11.3C/√924 = 11.3C/30.4 ≈ 0.4C for the estimated error in the mean for the month of January.

    Over at the annual calculations for 2013 for that station, we see that the standard deviation is 9.4C and the estimated error in the mean is 1.7C. I double-checked those numbers because they are pretty large — but they are correct.

    So that leaves us to calculate the anomaly. The baseline for January subtracted from the mean for January for 2013 is -11.9C - (-16.5C) = 4.6C with an error calculated with:
    https://jaschrumpf.files.wordpress.com/2019/02/error_propagation_addition.png?w=450

    which equals 1.7C. The final value for the anomaly should be reported as 4.6C+/-1.7C. If the same procedure is performed with the entire year of 2013, the result is a mean anomaly of 0.7C+/-0.9C.
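
    A minimal R sketch of those two formulas as I read them from the Toronto page (my own illustration, not the code behind the tables above):

    sem      <- function(x) sd(x) / sqrt(length(x))   # estimated error in the mean
    err_diff <- function(ea, eb) sqrt(ea^2 + eb^2)    # error of a difference a - b

    e_base <- 11.3 / sqrt(924)          # ~0.37C, rounds to 0.4C (JAN baseline)
    e_2013 <- 9.4 / sqrt(31)            # ~1.7C (JAN 2013)
    anom   <- -11.9 - (-16.5)           # 4.6C
    e_anom <- err_diff(e_base, e_2013)  # ~1.7C, so 4.6C +/- 1.7C as reported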

    I ran these calculations against the entire 139-station data set that had very good data, and the final mean anomaly and error for the year of 2013 was 0.5C +/- 0.15C.

    It’s my understanding that nothing done statistically to a set of measurements can improve their accuracy. The error in the mean can be reduced, but the mean itself can only have the same number of significant digits as the original measurements. In this case, that’s one decimal place.

    If my calculations of the estimated error in the mean are correct, I can certainly understand why we don’t see the error published along with the “hottest year ever” claims. It would be ludicrous to claim a year was 0.07C warmer than before when the error could be an order of magnitude larger.

  89. Corrected the image urls:

    I’ve been interested for some time in the calculations used to get the global anomaly, especially the error calculation and propagation throughout the process. I thought I’d have a try at seeing what the numbers look like, but as I don’t know exactly which set of stations is used in the calculations, I grabbed the file ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/ghcnd_all.tar.gz from the NOAA website. From that I loaded the station metadata into my database and selected the GSN stations using the GSN_FLAG column in the table.

    I was surprised how sparse some of the station data was; out of all of the stations with the GSN flag set (991), only 139 had enough valid data to fill out the baseline of 30 years from 1981 to 2010. I ran the statistics against that set to see what I’d find.

    The raw daily station records tag missing data with the -9999 flag, but the flag wasn’t consistent; it appeared in the data set not only as -9999, but also as 999, 99, and -99. I converted all of the varieties to -9999.

    I think I understand the basics of creating an anomaly: a baseline data set of a station’s mean monthly temps is created from each station’s data over a 30-year period from 01-JAN-1981 to 31-DEC-2010, so that the result is an average of all the January temps from 1981 to 2010, all of the Feb temps, etc. The station’s data is then averaged over a month’s time, and the baseline mean for that month is subtracted from the current month’s mean, and that difference is the anomaly.

    For my statistical rules, I used a website from the University of Toronto that I thought did a good job of explaining how to calculate standard error, how to propagate error through calculations, and how to determine the estimated error in the mean. The website’s url is https://faraday.physics.utoronto.ca/PVB/Harrison/ErrorAnalysis/Precision.html. There’s more to the site than where I linked to, but that’s where the equations I used were.

    I picked a station at random that had a mostly full data set for the 30-year baseline, and also for the sample year of 2013. Both sets are a few days short, but six days or so out of 930 in a 30-year period shouldn’t make that much difference. The station name is AK MCGRATH AP and the 30-year baseline data for that station looked like this:

    https://jaschrumpf.files.wordpress.com/2019/02/2013_station_usw00026510_data-1.png

    The data for 2013:

    https://jaschrumpf.files.wordpress.com/2019/02/2013_station_usw00026510_data-1.png

    Looking at January in the 30-year baseline, you see the standard deviation is pretty large. Using this equation:
    https://jaschrumpf.files.wordpress.com/2019/02/standard_error_formula.png
    we get 11.3C/√924 = 11.3C/30.4 ≈ 0.4C for the estimated error in the mean for the month of January.

    Over at the annual calculations for 2013 for that station, we see that the standard deviation is 9.4C and the estimated error in the mean is 1.7C. I double-checked those numbers because they are pretty large — but they are correct.

    So that leaves us to calculate the anomaly. The baseline for January subtracted from the mean for January for 2013 is -11.9C - (-16.5C) = 4.6C with an error calculated as:
    https://jaschrumpf.files.wordpress.com/2019/02/error_propagation_addition.png

    which equals 1.7C. The final value for the anomaly should be reported as 4.6C+/-1.7C. If the same procedure is performed with the entire year of 2013, the result is a mean anomaly of 0.7C+/-0.9C.

    I ran these calculations against the entire 139-station data set that had very good data, and the final mean anomaly and error for the year of 2013 was 0.5C +/- 0.15C.

    It’s my understanding that nothing done statistically to a set of measurements can improve their accuracy. The error in the mean can be reduced, but the mean itself can only have the same number of significant digits as the original measurements. In this case, that’s one decimal place. While I was looking at the NOAA site, I noticed the files of monthly temperature means did not include any error information. They took a month’s worth of errors in the data and made them go away, and those error calculations are significant.

    It’s also apparent that using the anomaly removes the variance in the Earth’s temperatures. Rather than stating that the temperature difference between Ecuador and Antarctica averaged 40C over the last year, the anomaly smooths it all out so that it can be said that Ecuador’s anomaly for 2018 was 0.1C less than that at the South Pole.

    In any event, if my calculations of the estimated error in the mean are correct, I can certainly understand why we don’t see the error published along with the “hottest year ever” claims. It would be ludicrous to claim a year was 0.007C warmer than before, and then put an error bar on the number that could be an order of magnitude larger.


  91. On a more regional note, a period of below-average rain is followed by a period of above-average rain, which looks as if it is happening now in Queensland, Australia, irrespective of man-made climate change.

  92. I do not even know how I ended up right here, however I believed this post was
    good. I don’t know who you are, but definitely
    you’re going to be a well-known blogger if you aren’t already.
    Cheers!

  93. Trying to do what Louis did

    Test #1 (https​:​/​/​www​.​youtube​.​com​/​watch?v=gO17hN-YvBc):
    https://www.youtube.com/watch?v=gO17hN-YvBc
     

    Test #2 (http​:​/​/​www​.​youtube​.​com/watch?v=gO17hN-YvBc):
    http://www.youtube.com/watch?v=gO17hN-YvBc
     

    Test #3 (https​:​/​/​youtu​.​be/gO17hN-YvBc):
    https://youtu.be/gO17hN-YvBc
     

    Test #4 (https​:​/​/​www​.​youtube​.​com​/​embed​/​gO17hN-YvBc):
    https://www.youtube.com/embed/gO17hN-YvBc

    Test #5 (bracket format):
    [youtube https://www.youtube.com/watch?v=gO17hN-YvBc&w=640&h=480%5D

  94. Test…. cleared cookies and changed some settings. Don’t seem to be able to view my comments again.

    Testing, testing, Testing

    …and a formatting test for fun.

  95. “I would rather be governed by the first two thousand people in the Boston telephone directory than by the two thousand people on the faculty of Harvard University.”
    — William Buckley on “Meet the Press”, 17 October 1965.


  97. Test lead

    “I would rather be governed by the first two thousand people in the Boston telephone directory than by the two thousand people on the faculty of Harvard University.”
    — William Buckley on “Meet the Press”, 17 October 1965.

    Follow up commenting

  98. Test lead

    “I would rather be governed by the first two thousand people in the Boston telephone directory than by the two thousand people on the faculty of Harvard University.”
    — William Buckley on “Meet the Press”, 17 October 1965.

    Follow up comment

    • “I would rather be governed by the first two thousand people in the Boston telephone directory than by the two thousand people on the faculty of Harvard University.”
      — William Buckley on “Meet the Press”, 17 October 1965.

      Followup comment

  99. “How did you get here?”

    Republican Wyoming Rep. Liz Cheney grilled environmental experts on their travel methods at a House hearing on climate change while discussing the Green New Deal’s call to phase out air travel.

    After a moment of silence, one of the environmental experts on the panel chimed in to say she supports many of the recommendations outlined in the Green New Deal.

    Cheney replied: “I would just say that it’s going to be crucially important for us to recognize and understand when we outlaw plane travel, we outlaw gasoline, we outlaw cars, I think actually probably the entire U.S. military, because of the Green New Deal, that we are able to explain to our constituents and to people all across this country what that really means. Even when it comes down to something like air travel … that means the government is going to be telling people where they can fly to and where they can’t. I would assume that means our colleagues from California are going to be riding their bicycles back home to their constituents.”

    https://www.dailycaller.com/2019/02/12/liz-cheney-green-new-deal-question/

  100. 28 Jan 2019

    “They Ruined YouTube” Mark Dice Shows How YouTube Rigged Their Search Algorithm to Suppress AltMedia
    Chris Menahan
    InformationLiberation

    Google has begun changing search results in response to liberal journalists complaining to them that they don’t like the results they’re getting from their searches.

    If websites such as WUWT have already been effectively banished from Google’s search results, they can look forward to worse happening to their YouTube videos. YouTube has decided to “disappear” so-called “conspiracy” videos and what they call “alternative media” from their search results and the sidebar. From now on, only “whitelisted” corporate videos will be visible.

    http://www.informationliberation.com/?id=59731

  101. In your post the link is appended to a Watts Up With That link (right-click “copy link location” or look at the source code). When I go to your link proper it works:

    sci-hub.tw/10.1080/02626667.2019.1567925

    So I guess WordPress is the culprit here.

  102. Generally I don’t read articles on blogs, however I wish to say that this write-up very much forced me
    to try and do so! Your writing style has amazed
    me. Thank you, very great article.

  103. # Does rounding noisy measurements bias the sample mean? For several
    # sample sizes N, generate data, add unit noise, round, and compare means.
    M <- 100                       # trials per sample size
    for (j in 2:4) {
      m <- numeric(M)
      N <- 10^j                    # N = 100, 1000, 10000
      for (i in 1:M) {
        x <- 10 * rnorm(N)         # underlying values
        d <- rnorm(N)              # measurement noise
        y <- round(x + d)          # recorded, rounded measurements
        m[i] <- mean(y) - mean(x)  # error introduced in the mean
      }
      hist(m)                      # centered on 0, narrowing as N grows
    }

  104. Near the end of one of Scott Adams’ daily Periscope podcasts, he mentions his desire to moderate a debate between climate scientists and their skeptics concerning the existence and dangers of climate change.

    As Scott Adams would manage it, the debaters would not be facing each other in the same venue. Rather, they would be asked in a Periscope interview to present their five most persuasive arguments for their position.

    As I myself view Adams’ proposal, his podcasted debate might serve as a dry run for a larger public debate over today’s mainstream climate science, one which might go critical mass if America’s voters are ever asked to make serious personal and economic sacrifices in the name of fighting climate change.

    Among his other pursuits, Scott Adams is an expert in the art and science of persuasion. He is looking for a short list of arguments from each side of the AGW question that would be persuasive to those people who are not climate scientists themselves but who have an interest in hearing summaries of the competing arguments.

    It is clear from listening to Adams’ thoughts on his proposed debate that he does not have a grasp of the most basic fundamentals of each side of the question. Nor does he understand how those basic fundamentals influence the content and rhetoric of the science debate. Anyone who participated in this debate would have to educate Scott Adams on the basics in a way that is comprehensible to the layman.

    The other problem for those representing the skeptical position is that Adams views the question as having only two very distinct sides. He does not understand that a middle position exists which covers the many uncertainties of today’s climate science.

    In his look at how the scientific debate is being pursued by both sides, Scott Adams frames the science question in stark terms. Is the earth warming, or is it not warming? If it is warming, is CO2 the cause or is it not? Is the warming dangerous or is it not? If it is dangerous, then how dangerous is it?

    Judith Curry’s name was mentioned as a climate scientist who might be a good representative for the skeptic side of the debate.

    Presumably, each representative would be asked at some point to refute the five most persuasive arguments offered by the opposition. I would suggest that these arguments might cover some or all of these topics:

    — The fundamental basis of today’s mainstream climate science including the postulated water vapor feedback mechanism.

    — Ocean warming versus atmospheric warming as the true measure of the presence and the rate of increase of climate change.

    — The accuracy, validity, and uncertainties of the modern temperature record and of the paleoclimate temperature record.

    — The accuracy, validity, and uncertainties of the general circulation models, the sea level rise projections, and the projections of AGW-related environmental, human, and economic impacts.

    — The costs and benefits of alternative public policy responses to climate change including the practicality of relying on wind and solar for our energy needs and the role of nuclear power.

    — The costs and benefits of massive government spending on the Green New Deal versus the use of government-mandated carbon pricing mechanisms combined with an aggressive application of the Clean Air Act.

    If Scott Adams goes forward with his podcasted debate, will anyone show up to defend the mainstream climate science side of the question?

    If no one does, then someone from the skeptic side must present the mainstream’s side in a way that is both true to the mainstream position but which also drastically condenses the raw science into something the layman can understand.

    Here is an example of just how condensed a basic description of today’s mainstream climate science might have to be in order to be comprehensible to the laymen — and also to Scott Adams himself — as a description of today’s mainstream theory:

    —————–

    Mainstream Climate Science Theory: CO2 as the Earth’s Temperature Control Knob

    Over time periods covering the last 10,000 years of the earth’s temperature history, carbon dioxide has been the earth’s primary temperature control knob.

    Although water vapor is the earth’s primary greenhouse gas, adding carbon dioxide further warms the atmosphere, thus allowing it to hold more water vapor than it otherwise could. The additional carbon dioxide amplifies the total warming effect of both greenhouse gases, CO2 and water vapor, through a feedback mechanism operating between CO2’s warming effects and water vapor’s warming effects.

    For example, if carbon dioxide’s pre-industrial concentration of 280 ppm is doubled to 560 ppm by adding more CO2 to the atmosphere, CO2’s basic effect of a 1C to 1.5C warming per CO2 doubling is amplified by the water vapor feedback mechanism into a much larger 2.5C to 4C range of total warming.

    Atmospheric and ocean circulation mechanisms affect the rate and extent of atmospheric and ocean warming. These mechanisms transport heat within the atmosphere and the oceans, and move heat between the atmosphere and the oceans. The circulation mechanisms also affect how much of the additional trapped heat is being stored in the oceans, how much is being stored in the atmosphere, and how much is being lost to outer space.

    Uncertainties in our basic knowledge of atmospheric and ocean circulation mechanisms make it difficult to predict with exact precision how much warming will occur if CO2 concentration is doubled from 280 ppm to 560 ppm.

    These uncertainties also limit our ability to predict exactly how fast the warming will occur and to predict with exact certainty where and how much of the additional trapped heat will be stored in the oceans versus in the atmosphere. Thus a range of warming predictions can be expected and must be studied further.

    Climate modeling exercises now indicate that a range of from 2.5C to 4C of total global warming over and above pre-industrial temperatures is likely to occur, depending upon which assumptions are being made concerning atmospheric and ocean circulation mechanisms and concerning how much CO2 will be added to the atmosphere over the next 100 years.

    (End of Summary)

    —————–
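
    As a back-of-envelope check on the summary’s “1C to 1.5C per doubling” figure, here is my own R sketch. It uses the common Myhre et al. logarithmic forcing approximation and an assumed no-feedback sensitivity; neither number comes from the summary itself.

    dF     <- 5.35 * log(560 / 280)  # forcing from a CO2 doubling, ~3.7 W/m^2
    lambda <- 0.3                    # assumed no-feedback sensitivity, K per (W/m^2)
    dT     <- lambda * dF            # ~1.1C before any water vapor feedback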

    As a layman in trying to understand climate science topics, you have to crawl before you can walk.

    If the skeptics’ arguments are to be persuasive to the non-scientist layman, then explaining the basics of today’s mainstream climate science is a necessary step prior to explaining the uncertainties of the science and its predictions. It’s even a necessary prior step if one completely rejects the basic tenets of today’s mainstream climate science. An informed debate has to start somewhere.

    As someone who is not a climate scientist myself, the description I’ve written above is my own highly condensed summary of what I understand to be the mainstream climate scientist’s basic theory. The description is presented in terms that might be understandable to the non-scientist layman while also being true to the basic tenets of the mainstream climate science narrative.

    Is my example of a highly summarized description actually understandable to the non-scientist layman? Is it actually true to the scientific position mainstream climate scientists now hold? Is it useful as a starting point for understanding the overall context of the debate?

    Here is a most important point concerning what Scott Adams is trying to accomplish.

    Adams is not asking the opposing sides to prove scientifically that their side of the climate change question is the scientific truth. He is asking them to offer a defense of their side of the question that is understandable to the non-scientist and is persuasive as debating arguments go.

    In his look at how the scientific debate is being pursued by both sides, Scott Adams frames the science question in stark terms. Is the earth warming, or is it not warming? If it is warming, is the cause CO2, or is it not CO2? Is the warming dangerous, or is it not dangerous? If it is dangerous, how dangerous is it?

    Logically, any level of warming regardless of its rate of increase could become dangerous if the warming continues indefinitely into the future. A 0.2C per decade rate of warming will produce a 2C increase in a hundred years time, 4C in two-hundred years time. If we continue adding CO2 to the atmosphere, and if CO2 will indeed be the earth’s temperature control knob for the next several thousand years, then when will the warming stop?

    What is left out of the current debate over climate change is the question of certainty versus uncertainty.

    If America’s voting public is ever asked to make serious personal and economic sacrifices in the name of fighting climate change, and if the debate over mainstream climate science then goes critical mass, the question of certainty versus uncertainty will become a deciding factor as to who wins or loses that debate.

  105. I see no account of why this is supposed to be caused by a momentary pressure drop rather than nucleation by exhaust pollutants. If they are ready to stretch causation to 6 miles, they are not able to differentiate between wing depression and exhaust trails.

    The authors quote “Woodley et al. (1991) have shown that aircraft exhaust is of negligible importance in aircraft-produced ice particle formation”, which made that case quite nicely. Exhaust pollutants are excluded by stipulation via Woodley, and Woodley et al were pretty thorough in their investigations.

    The authors showed how the ice particles from the aircraft wake were measurably different than the surrounding precipitation, matched with individual aircraft, and could take between 20 and 40 minutes to fully develop into precipitation. During that time the LIP tracked with the advection movement of the weather system from where the aircraft had been. So, the up to 6 mile displacement in space is really just a displacement in time.

    Finally, engine contrails are not the result of pollutant caused nucleation, but instead are caused by direct injection of excess moisture into the atmosphere as a byproduct of combustion.


  106. Station ID USW00093729 Cape Hatteras ap
    TEMPS 2013 TEMPS 1981-2010
    month N std dev avg temp err in mean N std dev avg temp err in mean anomaly error in mean
    JAN 62 6.1 9.6 0.8 1848 6.8 7.8 0.16 1.8 0.8
    FEB 56 5.1 8.84 0.7 1676 6.3 8.5 0.15 0.4 0.7
    MAR 62 4.7 9.46 0.6 1846 6.3 11.2 0.15 -1.7 0.6
    APR 60 5.3 15.1 0.7 1788 5.9 15.6 0.14 -0.5 0.7
    MAY 62 5.1 20.48 0.6 1848 5.3 19.8 0.12 0.7 0.7
    JUN 60 3.6 25.59 0.5 1786 4.5 24.2 0.11 1.4 0.5
    JUL 62 3.3 27.38 0.4 1838 3.9 26.4 0.09 1.0 0.4
    AUG 62 3.7 26.61 0.5 1846 4.0 26.1 0.09 0.5 0.5
    SEP 60 4.9 23.52 0.6 1788 4.3 23.8 0.10 -0.3 0.6
    OCT 62 5.2 19.7 0.7 1848 5.4 19.1 0.13 0.6 0.7
    NOV 60 6.3 13.69 0.8 1784 6.0 14.5 0.14 -0.9 0.8
    DEC 62 6.5 11.42 0.8 1844 6.5 9.9 0.15 1.5 0.8

    ann anomaly 0.38
    stdev anomaly 1.00
    error in mean 0.29
    final +0.38+/-0.29C
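
    A quick R check of those annual figures (my sketch, reading the monthly anomaly column off the table above):

    anom <- c(1.8, 0.4, -1.7, -0.5, 0.7, 1.4, 1.0, 0.5, -0.3, 0.6, -0.9, 1.5)
    mean(anom)           # ~0.38, the annual anomaly
    sd(anom) / sqrt(12)  # ~0.30; close to the quoted 0.29 (rounding and sd convention)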



  108. non breaking spaces EVERYWHERE

    Station ID USW00093729 Cape Hatteras ap 
    TEMPS 2013                                              TEMPS 1981-2010
    month   N      std dev     avg temp    err in mean   N         std dev     avg temp    err in mean     anomaly  error in mean
    JAN     62     6.1         9.6         0.8           1848       6.8          7.8     0.16             1.8        0.8
    FEB     56     5.1         8.84        0.7           1676       6.3          8.5     0.15             0.4        0.7
    MAR     62     4.7         9.46        0.6           1846       6.3          11.2     0.15             -1.7          0.6
    APR     60     5.3         15.1        0.7           1788       5.9          15.6     0.14             -0.5          0.7
    MAY     62     5.1         20.48       0.6           1848       5.3          19.8     0.12             0.7        0.7
    JUN     60     3.6         25.59       0.5           1786       4.5          24.2     0.11             1.4        0.5
    JUL     62     3.3         27.38       0.4           1838       3.9          26.4     0.09             1.0        0.4
    AUG     62     3.7         26.61       0.5           1846       4.0          26.1     0.09             0.5        0.5
    SEP     60     4.9         23.52       0.6           1788       4.3          23.8     0.10             -0.3          0.6
    OCT     62     5.2         19.7        0.7           1848       5.4          19.1     0.13             0.6        0.7
    NOV     60     6.3         13.69       0.8           1784       6.0          14.5     0.14             -0.9          0.8
    DEC     62     6.5         11.42       0.8           1844       6.5          9.9     0.15             1.5        0.8
                                                                
             ann anomaly 0.38
             stdev anomaly 1.00
             error in mean 0.29
             final +0.38+/-0.29C

  109. 26 Feb 2019

    Global Warming? Los Angeles Has Coldest February in 60 Years

    Los Angeles is officially experiencing the coldest February in nearly 60 years, according to the National Weather Service, as the city has endured a series of storms and is bracing for more later this week.

    The Los Angeles Times reported Monday evening:
    https://www.latimes.com/local/lanow/la-me-ln-cold-february-20190225-story.html
    “This month is the coldest February in downtown Los Angeles in nearly 60 years, with the average high temperature at 60.6 degrees as of Sunday. That’s a full 8 degrees below the normal average temperature, the National Weather Service said in a news release announcing the record lows. It hasn’t been this cold since 1962, when the average high temperature for the month in downtown L.A. was 59.8 degrees.”

    The state is experiencing even more storms and cold weather, as a new “atmospheric river”[1] — a front of moisture from the Pacific — is expected to dump rain on Northern California through mid-week. According to CBS San Francisco[2], rainfall totals were expected to reach 6 to 12 inches in the mountains, threatening mudslides in areas affected by last year’s wildfires.

    1. https://www.noaa.gov/stories/what-are-atmospheric-rivers
    2. https://sanfrancisco.cbslocal.com/2019/02/25/san-francisco-weather-potent-atmospheric-river-brings-torrential-rains-threat-of-mudslides/

    Los Angeles is also expecting more rain, albeit with warmer temperatures than it is currently experiencing, before the end of the month.

    Last week saw a rare snowfall within the urban parts of the city, including West Hollywood.
    https://www.breitbart.com/local/2019/02/21/global-warming-snow-in-los-angeles/

    Currently, California’s snowpack is already at 119% of its April 1 average.
    https://postimg.cc/7bs4qsWz

  110. Grrr… hotel Wi-Fi… DL2185 BOS-MSP as it flies over my head in Mississauga, Ontario at 3:16 PM EST, Tuesday 26 February 2019
    Snowfall warning in effect for:

    Burlington - Oakville
    Halton Hills - Milton
    Mississauga - Brampton

    Snowfall with total amounts of about 15 cm is expected.

    Significant snowfall of near 15 cm on tap Wednesday.

    Snow is expected to reach the Golden Horseshoe early Wednesday morning and will continue through the day before ending in the evening.

    Many areas will receive near 15 centimetres of snow by evening. Locally higher amounts up to 20 cm are possible at a locale or two near Western Lake Ontario.

    It appears that the heaviest snow will fall in the afternoon, resulting in a significant impact on the commute home late in the day. Motorists should allow extra time to reach their destination.

    This snow event will be from yet another in a series of low pressure systems which have formed over the Southern Plains States. This latest low will pass by just to the south of Lakes Erie and Ontario on Wednesday.

    Rapidly accumulating snow could make travel difficult over some locations. Visibility may be suddenly reduced at times in heavy snow. There may be a significant impact on rush hour traffic in urban areas.

    Please continue to monitor alerts and forecasts issued by Environment Canada. To report severe weather, send an email to ONstorm@canada.ca or tweet reports using #ONStorm.

    https://www.flightradar24.com/DAL2185/1fa185b3

  111. This is a heads-up for anyone who has been censored by Facebook, YouTube, Twitter, Patreon or any of the rest of the Silicon Valley technocracy.

    You can express your “unacceptable” views on this website: https://usa.life

    They will allow you to say things that will get you banned by the Technocracy. For example, there is this article….

    The Greatest Scam in History
    https://usa.life/read-blog/4_the-greatest-scam-in-history.html

    “Catastrophic anthropogenic climate change is the greatest scam in world history. With the current influx of wildly irresponsible politicians the fanatics in this twisted cult are renewing their horrific ‘solutions’ to this imaginary problem. Get involved and stop them before it…”

    ….which would not be published on Facebook, YouTube, Twitter, etc. without some sort of penalty, demonetization, shadow-banning or outright suspension of your account.

    Check it out. It is a real breath of fresh air!

  112. @Robert Kernodle Leftist talking points get SO tiresome. An impoverished-person IQ test looks like this: https://en.wikipedia.org/wiki/Raven's_Progressive_Matrices. The same as for rich people. Explainable by pointing and grunting, if need be. No reading, no comprehending of developed language and developed concepts of comparatively affluent people involved.

    RPM (Raven’s Progressive Matrices) are even leftist-Progressive, it says so right in the name!

    Technically, RPM measure fluid intelligence, not crystallized intelligence. Since fluid intelligence is how you get crystallized intelligence, fluid intelligence and crystallized intelligence are well-correlated with each other and with “g” . g is sometimes called Generalized Cognitive Ability. The SAT, GRE, & etc. measure crystallized intelligence.

    I had a bad experience with statins. It was strange. My fluid intelligence went way down while my crystallized intelligence stayed the same. It

    • @Robert Kernodle Leftist talking points get SO tiresome. An ” impoverished-person IQ test” looks like this: https://en.wikipedia.org/wiki/Raven's_Progressive_Matrices. The same as for rich people. Explainable by pointing and grunting, if need be. Usable with children who haven’t learned to read yet. No reading, no “comprehending of developed language” and “developed concepts of comparatively affluent people” involved.

      RPM (Raven’s Progressive Matrices) are even leftist/Progressive, it says so right in the name! 😉

      Technically, RPM measure fluid intelligence, not crystallized intelligence. Since fluid intelligence is how you get crystallized intelligence, fluid intelligence and crystallized intelligence are well-correlated with each other and with “g” . g is sometimes called Generalized Cognitive Ability. The SAT, GRE, & etc. measure crystallized intelligence and are also well-correlated with g.

      I had a bad experience with statins. It was strange. My fluid intelligence went way down while my crystallized intelligence stayed the same. It was eye-opening and frightening to get a taste of how the other half lives.


  114. This is a test to see if I can paste a file from my PC.
    /Users/cynthiamaher/Documents/Science/Climate/HypothesesOfCause/arc-continent_collision_sutures_to_ice_ages.png

  115. Wonder why the comments show up immediately here and take half a day over at the “real” comment sections. Interesting. 🙂

  116. Testing why moderation now loves me or hates me, not sure which.

    Always for killing people with substandard labor in countries we don’t live in, aren’t we? Hatred for humanity and children runs rampant with this. One supposes dead kids and adults are no big deal as long as they aren’t your relatives. Interesting, too, that a job that kills is so okay in some other country. Hey, if it’s good there, let’s fix poverty HERE by having kids break up streets with jack hammers and haul away concrete, toil 6 or 8 hours a day cleaning trash and feces from our streets, etc. They don’t need school. Just money. No more welfare. Put em out there in dangerous jobs. If they die, they save us all money. What’s great for the Congo is perfect for the USA.

  117. My comments are now disappearing entirely. Oh well, ever since the crash, commenting has been a nightmare anyway. Have a nice day. I give up.

  118. For those interested in climate (or ‘Climate Change™’), here is a link to an old book on the latest climate research as of 1905. Called Climatic Changes: Their Nature and Causes, by Huntington and Visher, it is freely available at http://www.gutenberg.org/ebooks/37855 . And yes, CO2, terrestrial variations, solar variations, and stellar effects are discussed. Remarkably modern considering its age.

  119. Some comments by one of Gavin Schmidt’s grad students.

    When I (Duane Thresher) was at NASA GISS I pointed this out to Dr. Gavin Schmidt, current head of NASA GISS (anointed by former head Dr. James Hansen, the father of global warming) and leading climate change spokesperson. His response was, “We just have to hope they are on the same attractor”, literally using the word “hope”. They are almost certainly not so a climate model can’t predict nature’s climate.

    Similarly, some climate modelers study whether climate systems have multiple equilibria — different possible steady-states. If there are multiple equilibria then you can’t predict which equilibrium will occur and thus you can’t predict climate.

    Real Climatologists

  120. Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.
    –Andrew Tanenbaum, 1981

    Entertainingly described at …
    https://what-if.xkcd.com/31/

    But seriously, we do it all the time. We have Internet 2 connections (largest capacity network feeds, and no porn to clog up the network … unless you count some of the climate data that is surely crossing that network) and we still routinely use Snowballs to move petabyte-sized data sets between data centers.

    https://aws.amazon.com/snowball/

    Clever packaging … LCD shipping labels, permanently attached power cords, network cable with GBICs attached for plug compatibility with any modern network switch, all in a self storing molded impact resistant case.

    [youtube https://www.youtube.com/watch?v=yl25W7LZAMU&w=560&h=315%5D

  121. Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.
    –Andrew Tanenbaum, 1981

    Entertainingly described at …
    https://what-if.xkcd.com/31/

    But seriously, we do it all the time. We have Internet 2 connections (largest capacity network feeds, and no porn to clog up the network … unless you count some of the climate data that is surely crossing that network) and we still routinely use Snowballs to move petabyte-sized data sets between data centers.

    https://aws.amazon.com/snowball/

    Clever packaging … LCD shipping labels, permanently attached power cords, network cable with GBICs attached for plug compatibility with any modern network switch, all in a self storing molded impact resistant case.

    https://www.youtube.com/embed/yl25W7LZAMU

  122. Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.
    –Andrew Tanenbaum, 1981

    Entertainingly described at …
    https://what-if.xkcd.com/31/

    But seriously, we do it all the time. We have Internet 2 connections (largest capacity network feeds, and no porn to clog up the network … unless you count some of the climate data that is surely crossing that network) and we still routinely use devices like Snowballs to move petabyte-sized data sets between data centers.

    https://aws.amazon.com/snowball/

    Clever packaging … LCD shipping labels, permanently attached power cords, network cable with GBICs attached for plug compatibility with any modern network switch, all in a self storing molded impact resistant case.

    [youtube https://www.youtube.com/embed/yl25W7LZAMU&w=640&h=480%5D

  123. Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.
    –Andrew Tanenbaum, 1981

    Entertainingly described at …

    https://what-if.xkcd.com/31/

    But seriously, we do it all the time. We have Internet 2 connections (largest capacity network feeds, and no porn to clog up the network … unless you count some of the climate data that is surely crossing that network) and we still routinely use devices like Snowballs to move petabyte-sized data sets between data centers.

    https://aws.amazon.com/snowball/

    Clever packaging … LCD shipping labels, permanently attached power cords, network cable with GBIC attached for plug compatibility with any modern network switch, all in a self storing molded impact resistant case.

    [youtube https://www.youtube.com/embed/yl25W7LZAMU%5D

    Do videos work any more?
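
    The arithmetic behind the Tanenbaum quote, as a rough R sketch (illustrative numbers of my own choosing, not from the comment):

    pb   <- 1e15                     # one petabyte, in bytes
    link <- 10e9 / 8                 # a 10 Gb/s link, in bytes per second
    pb / link / 86400                # ~9.3 days of fully saturated transfer
    # an appliance that ships in 2 days sustains an effective rate of
    (pb * 8) / (2 * 86400) / 1e9     # ~46 Gb/s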

  124. Looks like the A HREF is there, but the text that should be inside the link is not inside the link. Another TEST with the word TEST in the link. And “quotes” around the URL.

  125. A test without quotes. Looks like the A HREF is there, but the text that should be inside the link is not inside the link. Another A href=https://wattsupwiththat.com/2019/04/18/the-left-has-made-themselves-vulnerable-on-climate-change/>BTESTB with the word TEST in the link. And NO “quotes” around the URL.

  126. Flubbed the last one. A test without quotes. Looks like the A HREF is there, but the text that should be inside the link is not inside the link. Another CTESTC with the word TEST in the link. And NO “quotes” around the URL.

  127. try again, fixing booboos

    Okay, I think I am zeroing in on the ‘bug’ now; it seems to affect URLs with trailing slashes. My hypothesis is that an UN-QUOTED URL in an A HREF= tag with a trailing slash is WRONGLY interpreted as an ’empty’ element. The empty-element syntax, for tags with no closing element such as <BR>, should only apply as <… /> where there is a SPACE before the slash. But WUWT is _incorrectly_ translating
    <A HREF=urlwithtrailingslash/>LINKTEXT</A> blah blah
    to this,
    <A HREF=urlwithtrailingslash/></A>LINKTEXT blah blah

    If you view source, the link is there but does not appear because the translator (incorrectly) closed out the A so LINKTEXT becomes bare text, and there is no visible link. There has been confusion about HTML vs XHTML syntax.

    If my assumption is correct, only #3 will not work.

    1. A test of LINKTEXT unquoted URL with no trailing slash
    2. A test of LINKTEXT quoted URL with no trailing slash
    3. A test of LINKTEXT unquoted URL with trailing slash
    4. A test of LINKTEXT quoted URL with trailing slash

    The takeaway: If your URL ends in slash you MUST put it in “quotes”.

  129. Stop the presses on the multi-century forecasts! The science isn’t settled.

    Both of these are surprising news, contrary to the doomster narrative. I doubt that either will be mentioned in the mainstream press or the liberal websites (that report every doomster paper as gospel). Red emphasis added.

    “Non-uniform contribution of internal variability to recent Arctic sea ice loss” by Mark England in Journal of Climate, in press.

    “Over the last half century, the Arctic sea ice cover has declined dramatically. Current estimates suggest that, for the Arctic as a whole, nearly half of the observed loss of summer sea ice cover is not due to anthropogenic forcing, but due to internal variability. …”

    “A new 200‐year spatial reconstruction of West Antarctic surface mass balance” by Yetang Wang et al. in JGR Atmospheres, in press.

    “High‐spatial resolution surface mass balance (SMB) over the West Antarctic Ice Sheet (WAIS) spanning 1800‐2010 is reconstructed by means of ice core records combined with the outputs of the European Centre for Medium‐range Weather Forecasts ‘Interim’ reanalysis (ERA‐Interim) and the latest polar version of the Regional Atmospheric Climate Model (RACMO2.3p2). The reconstruction reveals a significant negative trend (‐1.9 ± 2.2 Gt yr‐1 decade‐1) in the SMB over the entire WAIS during the 19th century, but a statistically significant positive trend of 5.4 ± 2.9 Gt yr‐1 decade‐1 between 1900 and 2010, in contrast to insignificant WAIS SMB changes during the 20th century reported earlier. …”

  130. New York Times Climate Reporter Admits Paper “Categorically Excludes Deniers”

    Reporter Somini Sengupta boiled down scientific dissent on the subject of climate change to the insulting and loaded phrase “three climate deniers.”

    “What we don’t do is engage in a false debate. We don’t turn to the three climate science deniers on every story. And no one has ever asked me as a reporter to do that. We don’t categorically do that,” she said.

    Somini Sengupta continued: “We try to make personal connections. We try to move people. Into identifying with someone on the other side of the world or the other side of the country, feeling the human toll of climate change. And that’s principally what I do.”

    Although Sengupta claimed the paper covers “real debates that happen in the science and policy,” her admission made it clear the New York Times censors dissent on the issue as it deems fit. Promoting journalistic activism in this way (as the entire conference was designed to do) is a far cry from the unbiased, fair-minded and objective reporting it should be.

    Conference speakers also included representatives from The Washington Post, MSNBC and activists like anti-capitalist Naomi Klein.

    https://www.newsbusters.org/blogs/business/julia-seymour/2019/04/30/ny-times-climate-reporter-admits-paper-categorically

  131. testing 1,2,3…

    Local v micro is probably true as far as it goes, but UHI clearly has an effect well beyond the local scale, particularly at coastal locations. The sea breeze where I grew up is a strong onshore flow created by the land–sea heating differential; it can extend to 60 km and beyond out to sea, depending on the angle of the coastline. It is known that the mountains of the Great Dividing Range limit this circulation inland by various amounts on the east coast of Australia, while in Western Australia there are no topographical barriers and it is felt further inland and starts further out. Beyond the effect of synoptic winds, it is well documented that the coastal urban sprawl has an effect on the strength, duration and extent of this “local” circulation pattern.

    In the almost 50 years since my father built his house in undeveloped bush, the surroundings have become an urban sprawl completely filling the 20km gap between it and the next town along the coast. How then, is it possible to ignore the effect of UHI at the larger scale of its surroundings, be they rural or sea surface temperatures?

    To be very clear, it seems completely arbitrary to make the distinction between UHI and rural when clearly there is a demonstrated continuum in between that might very well turn out to be impossible to separate out!


  133. If we could only post pictures on WUWT, those pictures could end up on Google images, where they could be seen by everybody. Then we could beat those alarmist b*st*rds! The average person doesn’t read WUWT.
    What a shame.

  134. Move to Hudson Yards Proves CNN Knows Global Warming is a Hoax

    CNN regularly and relentlessly abuses its broadcast megaphone to spread fear about Global Warming, to demand we all change our lifestyles, give up our freedoms, vote for Democrats, pay higher taxes, turn our lives over to central planners, and publicly testify to our belief in global warming lest we be denounced as “deniers.” Meanwhile, CNN commits the ultimate (and expensive) act of climate denialism by moving a large part of its multi-million-dollar headquarters to the very area we are told will soon be ground zero of Global Warming flooding.

    CNN has moved to the water’s edge of Manhattan, the very same Manhattan that will be underwater as soon as 2015. Oops, sorry, that was an old scientific prediction. Obviously, 2015 has passed without Manhattan flooding. But Manhattan will be underwater as soon as 2018, which can only mean that– Oh. Sorry again, that was another prediction our global warming scientific experts got wrong. But soon, very soon Manhattan will be underwater because the scientists CNN takes very, very seriously say so.

    As recently as seven months ago, and without a hint of skepticism, CNN warned that if nothing is done by 2030, in 11 short years, “the planet will reach the crucial threshold of 1.5 degrees Celsius (2.7 degrees Fahrenheit) above pre-industrial levels by as early as 2030, precipitating the risk of extreme drought, wildfires, floods and food shortages for hundreds of millions of people.”

    CNN’s move to the Manhattan waterline is the ultimate act of faith that Global Warming is a hoax, is the ultimate proof CNN knows it is a hoax, even as it spends billions and billions of corporate dollars to spread this hoax, to scare the rest of us into voting a certain way.

    No one who believes in Global Warming spends piles of money to move its operations into a danger zone.

    FLASHBACK: ABC’s ’08 Prediction: NYC Under Water from Climate Change By June 2015
    https://www.newsbusters.org/blogs/scott-whitlock/2015/06/12/flashback-abcs-08-prediction-nyc-under-water-climate-change-june

    John Kerry, who has logged over one million air miles in a gas-guzzling jet while at State, announced that the Big Apple could make it to the end of the century
    http://granitegrok.com/blog/2016/04/manhattan-underwater-by-2018-not-so-much

    CNN: Top climate change scientists’ letter to policy influencers
    https://edition.cnn.com/2013/11/03/world/nuclear-energy-climate-change-scientists-letter/index.html
    Climate hysterics supplied by:
    Dr. Ken Caldeira, Senior Scientist, Department of Global Ecology, Carnegie Institution
    Dr. Kerry Emanuel, Atmospheric Scientist, Massachusetts Institute of Technology
    Dr. James Hansen, Climate Scientist, Columbia University Earth Institute
    Dr. Tom Wigley, Climate Scientist, University of Adelaide and the National Center for Atmospheric Research

  135. Trump says John Kerry should be prosecuted under Logan Act

    President Trump said Thursday that former Secretary of State John Kerry should be prosecuted under the Logan Act for speaking with Iranian officials and criticizing Trump’s policies in Iran. Trump told reporters at the White House that he would not rule out the possibility of military action in Iran amid escalating tensions before laying into Kerry for his involvement: “What I’d like to see with Iran, I’d like to see them call me. John Kerry speaks to them a lot, and John Kerry tells them not to call. That’s a violation of the Logan Act, and frankly he should be prosecuted on that.”

    Tensions between the U.S. and Iran have ratcheted up in recent weeks after several actions taken by the Trump administration.

  136. Alexandria Ocasio-Cortez Has A Sense Of Humor

    During a live stream in April, [https://www.realclearpolitics.com/video/2019/04/04/ocasio-cortez_to_critics_of_green_new_deal_i_pity_you_for_your_role_in_history_right_now.html] Ocasio-Cortez mocked critics who blew off the claim (of The World Ending In 12 Years) as a joke, making it clear that she was serious when she said, “We have 12 years left to cut emissions by at least 50%, if not more, and for everyone who wants to make a joke about that, you may laugh, but your grandkids will not.” She compared those who ignore climate change with those who sat on the sidelines during the Civil Rights movement. “So just know that while a lot of people can hide that their grandparents did that in the civil rights movement, you should also know that the internet documents everything,” Ocasio-Cortez warned. “And your grandchildren will not be able to hide the fact that you fought against acknowledging and taking bold action against climate change.”

    She now says The World Ending In 12 Years Was “Dry Humor,” and “you’d have to have the social intelligence of a sea sponge to think it’s literal.”

    • Same happened to me, Janice…. Did a clean-out of my cookies and web history, tidied all my favorites…. and now I can’t post on WUWT without doing “Test”…. Sometimes it makes my post visible in the threads, sometimes it doesn’t.

      Can’t figure out if I changed a security setting or it simply didn’t like all the cookies being deleted and redone… Dunno.

  137. In case someone reads this who can do anything… I can’t post on WUWT today. Same e-mail address. Same laptop. I’ve tried on a regular thread and on “Test.” Janice 🙁

    Have I been personally banned??? Aaaaack! Could I at least know why? Please?

  138. Apple News Bans Site Questioning Global Warming Junk Science

    Natural News was banned from Apple News after publishing a report challenging the global warming hoax in April 2019. Apple wrote to Natural News to inform them that the site was being banned for claims that are “rejected by the scientific community.” Natural News was quoting data that had been confirmed by NASA.

    https://www.naturalnews.com/2019-04-26-nasa-declares-carbon-dioxide-is-greening-the-earth.html

    “Natural News dares to state the simple scientific fact — now confirmed by NASA — that carbon dioxide boosts the growth of green plants all across the Earth. This is not allowed by the techno-fascists of the Left who run Apple and insist that CO2 is somehow a poison that’s responsible for ‘climate change.’ … Even though NASA has also admitted that CO2 is greening the Earth, Apple says that Natural News stories are diverging from the ‘scientific community,’ somehow justifying Apple’s censorship of all news from Natural News, including the hundreds of articles each month that cite scientific journals.”

    https://www.naturalnews.com/2019-05-13-techno-dictatorship-apple-bans-natural-news-scientific-community.html#

  139. New York Faces Natural Gas Shortages

    Cuomo wants New York to adopt his version of the Green New Deal to cut greenhouse gas emissions and end fossil fuel use. Now millions face natural gas shortages.

    Cuomo’s administration rejected several major pipeline projects in recent years. New York’s opposition to pipelines is a major contributor to natural gas shortages in the Northeast, particularly during winter.
    Environmentalists cheered the state’s blocking of the Northeast Supply Enhancement pipeline.

    Natural Resources Defense Council senior attorney Kim Ong told Politico: “The state has made it clear that dangerous gas pipelines have no place in New York. This is a victory for clean water, marine life, communities and people’s health across the state. Along with our allies, we will continue to ensure this reckless project is shelved forever.”

    Now natural gas shortages aren’t just relegated to winters anymore. Consolidated Edison put a moratorium on new natural gas hookups across parts of Westchester County. The moratorium is sure to hurt the economy and raise energy prices.

  140. Professor Stephan Lewandowsky, an Australian cognitive scientist (and lolcow), told ThinkProgress in an email: “There is little doubt that his government will do precisely that.”

    Let’s hope he is right, though history suggests otherwise.

  141. Any action plan offered by climate activists which greatly reduces America’s carbon emissions between now and 2050 must rely heavily upon the authorities granted by Congress to the Executive Branch under the Clean Air Act and under existing national security legislation.

    Congress might well enact a massive new spending program for achieving a green energy future, but that program cannot produce the quick reductions in America’s carbon emissions climate activists say are necessary. Quick reductions can only come about if the federal government intervenes in the energy marketplace to directly raise the price of all carbon fuels and to directly limit their production and distribution.

    Congress will not put a price on carbon, nor will it adopt a carbon trading scheme. It will most certainly not act to place direct caps on the production and availability of carbon fuels. As a practical matter, if quick action is to be taken against America’s carbon emissions, it must be done by the President using authorities already granted by the Congress to the Executive Branch.

    Historically, the EPA has been the agency of government charged with managing and reducing emissions of substances identified as pollutants. It can be no different simply because the pollutant in question is carbon. If climate activists ever get truly serious about reducing America’s carbon emissions, the EPA’s current and future endangerment findings will play a central role in justifying full application of Clean Air Act provisions to our carbon emissions.

    Donald Trump will not always be president. Sooner or later, a climate activist Democrat will replace him in the Oval Office and will use Executive Branch authorities to quickly undo everything Donald Trump did while he was president, including all of his environmental and climate change policies.

    Over the next several years, the Green New Deal’s goal posts will shift towards President Obama’s original target of an 80% reduction in America’s carbon emissions by 2050. Even with a target date of 2050, it is impossible to reach an 80% reduction without imposing considerable sacrifice on the American people in the form of strictly enforced energy conservation measures combined with steep increases in the cost of all forms of fossil fuel energy.

    Furthermore, it is impossible to reach 80% by 2050 without a massive commitment to nuclear power, and without managing the energy market transition in a way which leaves no choice but to sacrifice the benefits of market competition with natural gas in keeping a lid on nuclear’s costs.

    If a climate activist Democrat takes office in 2021, the Democrats will then be forced to put real meat on their Green New Deal energy policies. Will the Democrats’ 2021 plan be something new and different, or will it be a rehash of Barack Obama’s plan from a decade earlier, updated with Green New Deal rhetoric but containing little else of real substance?

    If a quick reduction in America’s carbon emissions is the actual goal, then everything which climate activists want done in greatly reducing America’s carbon emissions can be done by the President and the Executive Branch. As an example of how such a plan might be written and eventually enabled, I am outlining a plan to reduce America’s GHG emissions 80% by 2050 using the Clean Air Act augmented by existing national security legislation.

    This plan is similar to the one that was being pushed a decade ago in 2009 by 350.org and by other environmental activist groups. More recently, activist groups allied with 350.org have filed a petition with the EPA to have carbon emissions declared as Hazardous Air Pollutants (HAPs) under Section 112 of the Clean Air Act.

    In this new version of 350.org’s 2009 action plan, one geared to a Democratic Party takeover of the Executive Branch in January 2021, the original 350.org plan is augmented by a system of carbon pollution fines which is the functional equivalent of a legislated tax on carbon. Moreover, if carbon pricing combined with massive new spending on green energy projects doesn’t prove to be fully effective, the updated plan adds a provisional system for imposing direct government control over the production and distribution of all carbon fuels.

    Here are the major phases of the revised plan. The start and end dates listed for each major phase assume a climate activist Democrat is elected president in 2020.

    Phase I: Establish a legal basis for regulating carbon dioxide and other carbon GHG’s as pollutants. (2007-2012)

    Phase II: Expand and extend EPA regulation of carbon GHG’s to all major sources of America’s carbon emissions. (2021-2022)

    Phase III: Establish a fully comprehensive EPA-managed regulatory framework for carbon. (2023-2025)

    Phase IV: Implement the EPA’s carbon pollution regulatory framework. (2026-2050)

    Phase V: Implement the provisional system for direct carbon fuel rationing. (Start and End dates contingent upon Phase IV progress.)

    Phase VI: Declare success in reducing America’s carbon emissions 80% by 2050. (If complete by 2050 or earlier.)

    These are the details of the plan, organized by phase:

    Phase I: Establish a legal basis for regulating carbon dioxide and other carbon GHG’s as pollutants. (2007-2012. Now complete.)

    — File and win lawsuits to allow regulation of CO2 and other carbon GHG’s as pollutants under the Clean Air Act.
    — Publish a CAA Section 202 Endangerment Finding as a prototype test case for regulation of carbon GHG’s.
    — Defend the Section 202 Endangerment Finding in the courts.

    Phase II: Expand and extend EPA regulation of carbon GHG’s to all major sources of America’s carbon emissions. (2021-2022)

    — Issue a presidential executive order declaring a carbon pollution emergency.
    — Publish a CAA Section 108 Endangerment Finding which complements 2009’s Section 202 finding.
    — Declare carbon emissions as Hazardous Air Pollutants (HAPs) under CAA Section 112.
    — Establish a National Ambient Air Quality Standard (NAAQS) for carbon pollution.
    — Use the NAAQS for carbon pollution as America’s tie-in to international climate change agreements.
    — Defend the Section 108 Endangerment Finding, the NAAQS, and the Section 112 HAP Declaration in the courts.

    Phase III: Establish a fully comprehensive EPA-managed regulatory framework for carbon. (2023-2025)

    — Publish a regulatory framework for carbon pollution under Clean Air Act sections 108, 111, 112, 202, and other CAA sections as applicable.
    — Establish cooperative agreements with the states to enforce the EPA’s anti-carbon regulations.
    — Establish a system of carbon pollution fines which is the functional equivalent of a legislated tax on carbon.
    — Establish the legal basis for assigning all revenues collected from these carbon pollution fines to the states.
    — Research and publish a provisional system of direct carbon fuel rationing as a backup to the carbon fine system.
    — Defend the EPA’s comprehensive system of carbon pollution regulations in the courts.

    Phase IV: Implement the EPA’s carbon pollution regulatory framework. (2026-2050)

    — Commence operation of prior agreements with the states for enforcement of the EPA’s anti-carbon regulations.
    — Commence the collection of carbon pollution fines and the distribution of fine revenues to the states.
    — Monitor the effectiveness of the EPA’s carbon regulatory framework in reducing America’s GHG emissions.
    — Monitor the effectiveness of renewable energy projects in reducing America’s GHG emissions.
    — Monitor the effectiveness of energy conservation programs in reducing America’s GHG emissions.
    — Adjust the schedule of carbon pollution fines upward if progress in reducing America’s GHG emissions lags.
    — Assess the possible need for invoking the provisional system of direct carbon fuel rationing.
    — Defend the EPA’s system of carbon pollution regulations against emerging lawsuits.

    Phase V: Implement the provisional system for direct carbon fuel rationing. (Start and End dates contingent upon Phase IV progress.)

    — Issue a presidential proclamation declaring that Phase IV anti-carbon measures cannot meet the 80% by 2050 target.
    — Initiate the provisionally established system for imposing direct government control over production and distribution of all carbon fuels.
    — Apply the Phase IV system of carbon pollution fines in escalating steps as needed to incentivize Phase V compliance.
    — Defend the government-mandated carbon fuel rationing program in the courts.

    Phase VI: Declare success in reducing America’s carbon emissions 80% by 2050. (If complete by 2050 or earlier.)

    — Assess the need for continuing the EPA’s anti-carbon regulations and the US Government’s mandatory fuel rationing program beyond 2050.
    — Defend the government’s anti-carbon measures against emerging lawsuits if these measures continue beyond 2050.

    Remarks:

    Some history concerning 350.org’s previous efforts at pursuing climate action lawsuits through the courts is in order.

    Phase I of the plan outlined above, establishment of a legal basis for regulating carbon dioxide as a pollutant, was complete in 2012. The legal foundation needed to impose aggressive across-the-board regulation of all major sources of America’s carbon emissions remains in place, awaiting the appearance of a president willing to use it.

    When Barack Obama was Chief Executive, his Clean Power Plan and his other anti-carbon measures might have achieved possibly one-third of his Year 2050 GHG reduction goals. But the remainder depended upon a highly uncertain combination of accelerated technological advancements and raw unvarnished hope.

    And yet, when President Obama had the opportunity and the means to move forward with the 350.org plan, he refused to go through with it. Nor were 350.org itself and the other climate activist groups willing to push hard for adoption of their 2009 plan after their initial victories in the courts.

    From 2012 onward, climate activists could have worked closely with the EPA using the ‘sue and settle’ process to put their 2009 plan into effect. If the dangers of climate change are as severe as they claim, then why didn’t the activists go forward with it while they were still in control of the federal government?

    As previously noted, activist groups allied with 350.org have now filed a petition with the EPA to have carbon emissions declared as Hazardous Air Pollutants (HAPs) under Section 112 of the Clean Air Act.

    Would climate activists make full use of a HAP declaration if they had a sympathetic president in the oval office in 2021? Or is their 2019 petition nothing more than a public relations gimmick intended to build support among environmentally concerned voters?

  142. Climate change politics is anti-white racism

    Red Pill Response – Soft-Eyed Leftist is TERRIFIED of ICE MELTING!

    • Thank you so much YouTube for deleting this whole channel for no good reason at all.

      The technocracy (Google) strikes again at popular ideas they don’t like.

  143. Top Climatologist Declares: “Our Models Can’t Be Trusted”

    Professor John Christy of the University of Alabama in Huntsville told MPs and peers the climate models are way off in their predictions of rapid warming at high altitudes in the tropics. “They all have rapid warming above 30,000 feet in the tropics – it’s effectively a diagnostic signal of greenhouse warming. But in reality it’s just not happening. It’s warming up there, but at only about one third of the rate predicted by the models.”

    A similar discrepancy between empirical measurements and computer predictions has been confirmed at the global level: “The global warming trend for the last 40 years, starting in 1979 when satellite measurements began, is +0.13°C per decade or about half of what climate models predicted.”

    And Dr Christy says that lessons are not being learned: “An early look at some of the latest generation of climate models reveals they are predicting even faster warming. This is simply not credible.”

    https://www.thegwpf.org/content/uploads/2019/05/JohnChristy-Parliament.pdf
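
    For scale, here is the arithmetic the quoted figures imply (the model-trend number below is simply what “about half” means; it is not a figure taken from the testimony itself):

    Observed: 0.13°C/decade × 4 decades ≈ 0.52°C since 1979.
    Models (implied): roughly 2 × 0.13 = 0.26°C/decade, i.e. about 1.04°C over the same period.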

  144. How long will it take for one of my comments to appear? (over 3 hours, so far, for one today — sure looks like someone is slow-walking WUWT….. why? If annoying delay in publishing is not intentional, not fixing the issue appears to be intentional ….. Is the goal to make it so frustrating that all of us truth-tellers will quit? aaarrrrrrgh!!!!! — has M0shr taken over? The lukewarm swamp editorial bent of the past 3 years or so is bad enough around here, but, now we can’t even readily counter it. What IS UP WITH THIS??)

    Attempted to post: 1754, May 20, 2019

  145. An example of activists communicating danger to the public.

    Holthaus is a meteorologist.

    The Greenland ice sheet is currently going through a major melting this week, covering almost half its surface — unprecedented in its extent for this early in the year.

    This has not happened before. pic.twitter.com/vvh3scodLy
    — Eric Holthaus (@EricHolthaus) June 13, 2019

    First, how does a meteorologist present something merely unseen in the ~40-year satellite record as something that has “not happened before”? That’s a material error, one often made by climate activists.

    Second, there are thousands – tens of thousands – of such weather metrics taken daily. Looked at as a whole, a ~3 standard deviation spike somewhere is commonplace. Pointing to one in isolation is cherry-picking – perhaps the most common fallacy (on both sides) in the public climate debates. But I seldom see this mentioned.
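
    The “commonplace” point is easy to quantify. Here is a minimal sketch, assuming the metrics are independent and roughly normal (real weather metrics are correlated, so treat this as an order-of-magnitude illustration only):

    <pre>
    # Chance of at least one +3-sigma reading among N independent metrics.
    from math import erf, sqrt

    def p_exceed(sigma):
        """One-tailed probability of exceeding +sigma standard deviations."""
        return 0.5 * (1.0 - erf(sigma / sqrt(2.0)))

    p = p_exceed(3.0)                  # ~0.00135 per metric per day
    for n in (100, 1_000, 10_000):
        p_any = 1.0 - (1.0 - p) ** n   # at least one exceedance among n
        print(f"{n:>6} metrics: P(>=1 three-sigma spike) = {p_any:.3f}")
    </pre>

    With ten thousand daily metrics the probability is effectively 1: a ~3 standard deviation spike somewhere is expected every single day.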

  146. It simply does not matter what might have happened at 0 K or 100 K or 200 K or some other arbitrarily-chosen value <255 K. What we are concerned with are the feedback processes that obtained from 1850 onward, in the industrial era.

    If it doesn’t matter how feedback processes respond below 255K, then what’s the point of your 287.5 / 265 formula?
    You are claiming you can predict how feedbacks will respond to post-1850 warming by taking an average over all temperature responses from 0K to 265K, and assuming the same average response will hold over the next few degrees of warming.
    If you accept that feedback processes might not have caused a proportionate response at 100K, then why would you assume an increase from 265K to 266K will match that average?

    Next, it is necessary to address the question whether the system-gain factor is likely to prove invariant over the next two or three degrees of global warming. The answer is Yes…

    And as I said at the start, even if the system-gain factor is invariant over the next two or three degrees, it does not follow that it is the same as the factor from 0K to 265K.

    … because official climatology finds the climate-sensitivity parameter to be near-invariant (see IPCC 2007 ch. 6 passim…

    I think you mean IPCC 2001, but no matter. As I keep saying, I doubt the IPCC means that the parameter is near-constant for all radiative forcings, rather than for the range of forcings the earth will actually experience.
    They also explain that the value is only used for a first-order estimate.
    It seems strange to insist that λ must be invariant for all times and temperatures while insisting it doesn’t matter what its real value would be below 255K.
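
    To put the disagreement in symbols, using only numbers already quoted in this exchange: the 287.5/265 ratio is a mean system-gain factor over the whole interval,

    $A_{mean} = \frac{287.5}{265} \approx 1.085$,

    while the response to the next degree of warming is governed by the local slope $A_{marg} = dT_{eq}/dR$ near today’s temperature. The two coincide only if $T_{eq}(R)$ is linear through the origin; if feedbacks such as water vapour operate vigorously only above some threshold temperature, the marginal gain at 265-266K can differ greatly from the mean, which is precisely the objection being made here.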

  147. If 269 W/m^2 is what is required to achieve conservation of energy at the top of the atmosphere, then no matter what you do to the system internally (more GHGs or whatever), when equilibrium is achieved again the upwelling LW is going to be 269 W/m^2. MODTRAN cannot calculate conservation of energy within the atmosphere or at the ground because it is not a heat-transfer code. However, use the U.S. Standard Atmosphere 1976, a ground surface temperature of 288.2 K, and a CO2 concentration of 280 ppm, and you get 269 W/m^2. That’s it. You can postulate all you want that this is too much irradiance, or that the surface is too warm, or whatever. But then you are dealing with different problems than the one I considered.
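
    As a quick sanity check on the quoted numbers, plain Stefan–Boltzmann arithmetic is enough (this is not a MODTRAN run; the 269 W/m^2 and 288.2 K figures are the ones quoted above):

    <pre>
    # Stefan-Boltzmann consistency check on the quoted MODTRAN output.
    SIGMA = 5.670374419e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

    olr = 269.0                     # W/m^2, TOA upwelling LW (quoted above)
    t_surface = 288.2               # K, surface temperature (quoted above)

    t_eff = (olr / SIGMA) ** 0.25            # ~262 K effective emission temp
    surface_flux = SIGMA * t_surface ** 4    # ~391 W/m^2 surface emission

    print(f"effective emission temperature: {t_eff:.1f} K")
    print(f"surface blackbody flux:         {surface_flux:.0f} W/m^2")
    print(f"LW retained by the atmosphere:  {surface_flux - olr:.0f} W/m^2")
    </pre>

    The ~122 W/m^2 gap between surface emission and top-of-atmosphere outflow is the greenhouse effect that the comment treats as a fixed boundary condition.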

  148. $A_r = 1$ (incoming particle), $A_l = r$ (reflection), $C_l = 0$
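
    For readers wondering what these symbols are: they look like the boundary conditions of the standard one-dimensional scattering ansatz (this context is inferred, not part of the original paste), in which

    $\psi_L(x) = A_r e^{ikx} + A_l e^{-ikx}$ and $\psi_R(x) = C_r e^{ik'x} + C_l e^{-ik'x}$,

    with $A_r = 1$ normalising the incoming wave, $A_l = r$ the reflection amplitude, and $C_l = 0$ because nothing comes in from the right.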

  149. $A_r = 1$ (incoming particle), $A_l = r$ (reflection), $C_l = 0$

    [You are required to login with a valid email address to write, including test comments, on this site. .mod]

  150. David Attenborough says price of plane tickets should go UP “to save the environment”

    He admitted: “I’m afraid such a hike would hit poorer families the hardest.”

    I wonder how David Attenborough, his crew, and the massive weight of all their equipment get to all the countries where he makes his programmes, and then travel all over them. Does he walk? Swim? Row? Sail? Bicycle?

    I’ll bet he has the carbon footprint of a small country, the hypocrite.

    https://www.mirror.co.uk/news/politics/david-attenborough-suggests-air-fares-17712866

  151. Just Another Global Warming Hypocrite

    Barbra Streisand, global warming hysteric, flew her dogs around the world to watch her sing at London’s British Summer Time concert.

    Per Page Six: “She took her three Coton de Tuléar pooches — Miss Scarlet, Miss Violet and Fanny — on a 10,000-mile roundtrip flight to watch her perform at London’s British Summer Time concert on Sunday. There was no expense spared for her three beloved pets in the backstage area,” The Sun reported. “They had free roam of her dressing room and a dedicated member of staff to push them in the buggy or walk them round at their free will.”

    “Bringing some visitors to the show…” Barbra Streisand wrote in an Instagram post, which included a video of the pups (two of the Coton de Tuléars are clones) and her entourage traveling in London.

    https://www.instagram.com/p/BzoAlkrHYJM/?utm_source=ig_embed&utm_campaign=embed_video_watch_again

    In June, Streisand went after President Trump for daring to doubt the dubious science behind anthropogenic climate change. “Among the worst of Trump’s legacies will be the denial of climate change and the increase in pollution,” Streisand said. “The blonde buffoon who can’t put one sentence together denies climate change! Greed and self-interest take precedence as mussels are cooking themselves on the shores of California!” Streisand declared that all climate change deniers need to be voted out of office: “Last week it was 114 in Paris and Guadalajara was buried in 3 feet of ice from a hailstorm. Climate change is here now and it is time for voters to remove the climate deniers from office. Starting with Trump,” she tweeted.

    After a few days in London, Streisand and her dogs hopped in a private jet and flew another 800 miles to Saint-Tropez, France.

    https://pagesix.com/2019/07/10/barbra-streisands-dogs-fly-from-london-to-st-tropez/

    https://www.thesun.co.uk/tvandshowbiz/9464716/barbra-streisand-dogs-british-summer-time-hyde-park/

  152. Delingpole: “Climategate Was Fake News,” Lies the BBC…
    https://www.breitbart.com/europe/2019/07/11/bbc-tries-airbrush-climategate-out-history/

    The BBC’s Newsnight program was a sad, desperate, and unconvincing attempt to debunk the Climategate scandal.

    The BBC’s blurb:
    https://www.bbc.co.uk/news/av/science-environment-48925015/climategate-10-years-on-what-s-changed
    ‘Climategate’: 10 years on, what’s changed?
    It’s almost 10 years since hackers stole thousands of emails and other documents from the University of East Anglia’s Climatic Research Unit.
    The scandal, known as “Climategate”, rocked the scientific world.
    Sceptics picked up on a small number of emails that seemed to suggest scientists had been deliberately manipulating data to exaggerate evidence of climate change.
    It wasn’t true, but what followed is thought to have shaped public opinion and could possibly have influenced the UN climate agreement that year.
    Kayleen Devlin reports for Newsnight on how battle lines and public opinions have changed since then.

    See it on YT:
    How climate sceptics tricked the public – BBC Newsnight

  153. text

    What do you want to be when you grow up? Pick up to three.

    CHINA USA UK

    56% Astronaut 30% Vlogger/YouTuber 29% Vlogger/YouTuber
    52% Teacher 25% Teacher 26% Teacher
    47% Musician 21% Athlete 23% Athlete
    37% Athlete 18% Musician 19% Musician
    18% Vlogger/YouTuber 11% Astronaut 11% Astronaut

    text

  154. text

    What do you want to be when you grow up? Pick up to three.

    CHINA                  USA                    UK

    56% Astronaut          30% Vlogger/YouTuber   29% Vlogger/YouTuber
    52% Teacher            25% Teacher            26% Teacher
    47% Musician           21% Athlete            23% Athlete
    37% Athlete            18% Musician           19% Musician
    18% Vlogger/YouTuber   11% Astronaut          11% Astronaut

    text

  155. Bellman should work through the problem I have set for the reader at the end of the head posting.

    OK, let me try. (Apologies in advance for any $\LaTeX$ issues.)

    $R_0 = 255$ is the global temperature sans Greenhouse gases and feedbacks, and $B_0 = 5$ the imagined temperature increase at 255K caused by feedbacks.

    $\frac{B_0}{R_0} = \frac{5}{255} \approx 0.02$

    This 0.02 is the value to which you attach great importance, but what does it represent? Unless you assume the feedback fraction is a linear function of $R$, it’s just one number divided by another: the 5K of feedback response divided by the entire range from 0K to 255K.

    You then contrast this with the local rise,

    $\frac{\Delta B_0}{\Delta R_0} = \frac{28}{10} = 2.8$.

    This value has more meaning. It represents the average unit feedback response to a change in temperature, over a short range of values where feedbacks might be expected to be operating most vigorously.

    You seem astonished, and claim it is impossible that the second value is bigger than the first; I find it puzzling that you cannot see why the difference is entirely possible. But you were asking about $X$ and its relationship with $a_0$, where

    $a_0 = \frac{\Delta R_0 + \Delta B_0}{\Delta R_0} = 1 + \frac{\Delta B_0}{\Delta R_0}$

    So,

    $X = \frac{\Delta B_0 / \Delta R_0}{B_0 / R_0} = \frac{a_0 - 1}{B_0 / R_0}$

    I’m not sure what that is meant to prove, other than that if $B_0 / R_0$ is small, $X$ will be large.

    Imagine a car is stuck in a traffic jam and only moves 1km in 5 hours. Its average speed is 0.2km per hour. Then it gets out of its jam and drives 50km in the next hour, at an average speed of 50km per hour. Would you compare the ratio of these two values, $50 / 0.2 = 250$, and conclude that this ratio was impossible?

    Then he will come to understand why there is a limit to the growth in feedback response over time.

    That doesn’t follow at all. Say $B_0$ tended to zero. Your $X$ would tend to $\infty$, but there’s nothing unreasonable in assuming zero feedbacks at certain values. All this illustrates is that $X$ has no real world use.

    TBC.
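
    Plugging the quoted numbers in makes the scale of $X$ explicit:

    $X = \frac{\Delta B_0 / \Delta R_0}{B_0 / R_0} = \frac{2.8}{5/255} \approx 143$,

    which, like the traffic-jam ratio of $50 / 0.2 = 250$, is large simply because the denominator is a small average taken over a long interval, not because anything physical is being violated.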

  156. Bellman should work through the problem I have set for the reader at the end of the head posting.

    OK, let me try. (Apologies in advance for any \LaTeX issues.)

    R_0 = 255 is the global temperature sans Greenhouse gases and feedbacks, and B_0 = 5 the imagined temperature increase at 255K caused by feedbacks.

    \frac{B_0}{R_0} = \frac{5}{255} \approx 0.02

    This 0.02 is the value to which you attach great importance, but what does it represent? Unless you assume the feedback fraction is a linear function of R, it’s just one number divided by another: the 5K of feedback response divided by the entire range from 0K to 255K.

    You then contrast this with the local rise,

    \frac{\Delta B_0}{\Delta R_0} = \frac{28}{10} = 2.8.

    This value has more meaning. It represents the average unit feedback response to a change in temperature, over a short range of values where feedbacks might be expected to be operating most vigorously.

    You seem astonished, and claim it is impossible that the second value is bigger than the first; I find it puzzling that you cannot see why the difference is entirely possible. But you were asking about X and its relationship with a_0, where

    a_0 = \frac{\Delta R_0 + \Delta B_0}{\Delta R_0} = 1 + \frac{\Delta B_0}{\Delta R_0}

    So,

    X = \frac{\Delta B_0 / \Delta R_0}{B_0 / R_0} = \frac{a_0 - 1}{B_0 / R_0}

    I’m not sure what that is meant to prove, other than that if B_0 / R_0 is small, X will be large.

    Imagine a car is stuck in a traffic jam and only moves 1km in 5 hours. Its average speed is 0.2km per hour. Then it gets out of its jam and drives 50km in the next hour, at an average speed of 50km per hour. Would you compare the ratio of these two values, 50 / 0.2 = 250, and conclude that this was impossible?

  157. French MPs Insult and Boycott Teenage Climate Activist Greta Thunberg

    “We do not need gurus of the apocalypse,” said one politician

    Politicians in France mocked the 16-year-old climate change campaigner Greta Thunberg as a “guru of the apocalypse” before boycotting her appearance in the French parliament. Legislators from the conservative Les Republicains and far-right Rassemblement National parties refused to attend the session and hurled a series of insults at the teenager – calling her the “Joan of Arc of climate change”, the “Justin Bieber of ecology” and a “prophetess in shorts.”

    French right-wingers made clear they were unimpressed by her achievements after she arrived in Paris. Guillaume Larrive, who is running to be leader of Les Republicains, said on Twitter: “I call on my colleagues to boycott Greta Thunberg. We do not need gurus of the apocalypse.” Julien Aubert, another candidate to lead the conservatives, tweeted: “Do not count on me to go and applaud a prophetess in shorts, Nobel prize of fear.”

    Jordan Bardella, a member of Marine Le Pen’s Rassemblement National party in the European Parliament, said: “Using children to hawk a fatalist message about the world going up in flames, and skipping school and going on strike, that is a deeply defeatist approach,” before referring to Thunberg as “the Joan of Arc of climate change.”

  158. Month      NOAA      USCRN
    2018-10   -0.44°F   -0.18°F
    2018-11   -2.71°F   -2.56°F
    2018-12   +2.31°F   +2.39°F
    2019-01   +0.74°F   +0.63°F
    2019-02   -3.58°F   -3.15°F
    2019-03   -2.86°F   -2.81°F
    2019-04   +1.08°F   +1.55°F
    2019-05   -1.30°F   -1.13°F
    2019-06   -0.31°F   -0.14°F
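
    If the point of the table is the systematic offset between the two series, a few lines of arithmetic make it explicit (a minimal sketch; the inputs are simply the rows above):

    <pre>
    # Mean offset between the NOAA and USCRN monthly anomalies listed above.
    noaa  = [-0.44, -2.71, 2.31, 0.74, -3.58, -2.86, 1.08, -1.30, -0.31]
    uscrn = [-0.18, -2.56, 2.39, 0.63, -3.15, -2.81, 1.55, -1.13, -0.14]

    diffs = [n - u for n, u in zip(noaa, uscrn)]
    print("monthly NOAA-USCRN (deg F):", [f"{d:+.2f}" for d in diffs])
    print(f"mean offset: {sum(diffs)/len(diffs):+.2f} deg F")  # about -0.19
    </pre>

    Over these nine months NOAA runs about 0.19°F below USCRN on average, with eight of the nine monthly differences negative.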

  159. Bill Gates Pays Harvard Scientists To Begin Spraying Particles Into The Sky To Dim The Sun

    Gates is no stranger to funding controversial experiments. He has funded many of them, including one that would implant devices into babies to automatically give them vaccines.

    While the Harvard team’s experiment may sound like something out of a dystopian science-fiction movie, the reality is that it has long been on the table of governments and think tanks around the world. In November 2018, a study published in Environmental Research Letters discussed doing exactly the same thing: geoengineering by having planes spray particulates into the atmosphere to curb global warming.

    Harvard has formed an advisory board to begin moving forward with its plan to spray particles into the stratosphere to test the geoengineering method of dimming the sun. Harvard scientists will attempt to replicate the climate-cooling effect of volcanic eruptions. The university announced that it has created an external advisory panel to examine the potential ethical, environmental and geopolitical impacts of this geoengineering project, which has been developed by the university’s researchers.

    According to Nature magazine, Louise Bedsworth, executive director of the California Strategic Growth Council, a state agency that promotes sustainability and economic prosperity, will lead the Harvard advisory panel, the university said on 29 July. The other seven members include Earth-science researchers and specialists in environmental and climate law and policy.

    https://www.nature.com/articles/d41586-019-02331-y

    Known as the Stratospheric Controlled Perturbation Experiment (SCoPEx), the experiment will spray calcium carbonate particles high above the earth to mimic the effects of volcanic ash blocking out the sun to produce a cooling effect.

    The experiment was announced in Nature magazine last year, which was one of the few outlets to look into this unprecedented step toward geoengineering the planet. “If all goes as planned, the Harvard team will be the first in the world to move solar geoengineering out of the lab and into the stratosphere, with a project called the Stratospheric Controlled Perturbation Experiment (SCoPEx). The first phase — a US$3-million test involving two flights of a steerable balloon 20 kilometres above the southwest United States — could launch as early as the first half of 2019. Once in place, the experiment would release small plumes of calcium carbonate, each of around 100 grams, roughly equivalent to the amount found in an average bottle of off-the-shelf antacid. The balloon would then turn around to observe how the particles disperse.”

    Naturally, the experiment is concerning to many people, including environmental groups, who, according to Nature, say such efforts are a dangerous distraction from addressing the only permanent solution to climate change: reducing greenhouse-gas emissions. The idea of injecting particles into the atmosphere to cool the earth also seems outright futile considering what scientists are trying to mimic: volcanic eruptions. If we look at the second-largest eruption of the 20th century, Mount Pinatubo, which erupted in the Philippines in 1991, it injected 20 million tons of sulfur dioxide aerosols into the stratosphere. Scientists from the USGS estimated that those 20 million tons lowered the temperature of the planet by only about 1°F (0.5°C), and the effect lasted only a year because the particles eventually fell back to Earth.
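
    To see why the Pinatubo comparison makes the test flight itself climatically negligible, a naive linear scaling of the quoted figures is enough. (Aerosol forcing does not really scale linearly, and calcium carbonate is not sulfur dioxide, so this is strictly an order-of-magnitude illustration.)

    <pre>
    # Naive linear scaling of the quoted Pinatubo figures down to SCoPEx.
    pinatubo_grams = 20e6 * 1e6   # 20 million tonnes of SO2, in grams
    pinatubo_cooling = 0.5        # deg C of cooling, for roughly a year
    scopex_grams = 100.0          # one balloon plume, per the Nature report

    ratio = scopex_grams / pinatubo_grams          # ~5e-12
    print(f"mass ratio:      {ratio:.1e}")
    print(f"implied cooling: {pinatubo_cooling * ratio:.1e} deg C")
    </pre>

    In other words, the experiment is about observing how a small plume disperses, not about producing any measurable cooling.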

    The Harvard team, led by scientists Frank Keutsch and David Keith, has been working on the SCoPEx project for several years, but they have not always been in total agreement. In fact, as Nature reported, Keutsch (who is not a climate scientist) previously thought the idea to be “totally insane.” He has since changed his mind. As Nature reports: “When he saw Keith talk about the SCoPEx idea at a conference after starting at Harvard in 2015, he says his initial reaction was that the idea was ‘totally insane.’ Then he decided it was time to engage. ‘I asked myself, an atmospheric chemist, what can I do?’ He joined forces with Keith and Anderson, and has since taken the lead on the experimental work.”

  160. Gore claims his climate-change predictions about 2016 have now come true

    Al Gore said his predictions from 2006 about climate change over the following ten years have come true and claimed part of the damage is irreversible: “You said back in 2006 that the world would reach the point of no return if drastic measures weren’t taken to reduce greenhouse gases by 2016. Is it already too late?” ABC News’ Jonathan Karl asked during “This Week with George Stephanopoulos” on Sunday. “Well, some changes, unfortunately, have already been locked in place,” Gore replied. “Sea level increases are going to continue no matter what we do now. But, we can prevent much larger sea level increases — much more rapid increases in temperatures. The heat wave was in Europe. Now, it’s in the Arctic, and we’re seeing huge melting of the ice there.”

    Gore, who wrote and starred in the 2006 climate documentary “An Inconvenient Truth,” praised the field of Democrats aiming to unseat President Trump in 2020 for making the environment a central issue in many of their campaigns. “So, the warnings of the scientists 10 years ago, 20 years ago, 30 years ago, unfortunately, were accurate,” he said. “Here’s the good news… In the Democratic contest for the presidential nomination this year, virtually all of the candidates are agreed that this is either the top issue or one of the top two issues. There’s both bad news and good news. The problem’s getting worse faster than we are mobilizing to solve it,” Gore added. “However, there’s also good news. We now have an upsurge in climate activism at the grassroots in all 50 states here in this country, and in every country in the world.”

  161. TEST POST TO CHECK HTML AND DISPLAY OF CONTENT:
    A comment made by ‘Beta Blocker’ in response to:
    ‘From The B-School: A Plan To Win The Millennials’ War On Carbon’, August 17, 2019
    Guest post by Timothy Nerenz, Ph.D.
    =========================================

    I live and work in the US Northwest. Most of the Democrats who now hold state and federal elective offices in California, Hawaii, Oregon, and Washington State strongly support the Green New Deal. They will have much influence in Washington DC if Democrats gain full control of the federal government in 2021.

    In its current form, the Green New Deal has a target date of 2030 for a 100% carbonless economy. However, the 2030 target date for 100% is merely an opening bargaining position for political negotiation. Over the next several years, the GND’s goal posts will gradually shift back towards President Obama’s original target of an 80% reduction in America’s carbon emissions by 2050.

    Even with a target date of 2050, it is impossible to reach an 80% reduction without imposing considerable sacrifice on the American people in the form of strictly enforced energy conservation measures combined with steep increases in the cost of fossil fuel energy. For one example, according to Mark Jacobson’s analysis, per capita consumption of electricity in America must drop to half of what it is today if we are to achieve 100% renewables without nuclear.

    Regardless of what Mark Jacobson believes, it is impossible to reach the Green New Deal’s targets without a massive commitment to nuclear power, and without managing the energy market transition in a way which leaves no choice but to sacrifice the benefits of market competition with natural gas in keeping a lid on nuclear’s costs.

    If the national polls are accurate, Green New Deal advocates are likely to be in full control of the federal government in 2021. They will then be forced to put real meat on their GND energy policies. Will their 2021 plan be something new and different? Or will it be a rehash of Barack Obama’s plan updated with Green New Deal rhetoric, but containing little else of real substance?

    If Democrats are truly serious about quickly reducing America’s GHG emissions, they must do what current law and what past practice demand they do. They must give the president and the EPA full responsibility for the tough job of reducing America’s carbon emissions.

    Here is a plan to reduce America’s GHG emissions 80% by 2050 using the Clean Air Act augmented by existing national security legislation. This plan is similar to the one that was being pushed a decade ago in 2009 by 350.org. In this new version, the original 350.org plan is augmented by a system of carbon pollution fines which is the functional equivalent of a legislated tax on carbon.

    Moreover, if carbon pricing combined with massive new spending on green energy projects doesn’t prove to be fully effective, the updated plan adds a provisional system for imposing direct government control over the production and distribution of all carbon fuels, implemented in the form of a carbon fuel rationing scheme.

    These are the six major phases of this plan:

    PHASE I: Establish a legal basis for regulating carbon dioxide and other carbon GHG’s as pollutants. (2007-2012. Now complete.)
    PHASE II: Expand and extend EPA regulation of carbon GHG’s to all major sources of America’s carbon emissions. (2021-2022)
    PHASE III: Establish a fully comprehensive EPA-managed regulatory framework for carbon. (2023-2025)
    PHASE IV: Implement the EPA’s carbon pollution regulatory framework. (2026-2050)
    PHASE V: Implement the provisional system for direct carbon fuel rationing. (Start and End dates contingent upon Phase IV progress.)
    PHASE VI: Declare success in reducing America’s carbon emissions 80% by 2050. (If complete by 2050 or earlier.)
    .

    A Detailed Description of the 80% by 2050 Plan:

    The plan might be accurately described as implementing Virtual Peak Oil. That is to say, the regulatory powers of government are applied in ways that make all fossil fuels as scarce and expensive by the year 2050 as they otherwise would be fifty years later in the year 2100.

    Phase I: Establish a legal basis for regulating carbon dioxide and other carbon GHG’s as pollutants. (2007-2012. Now Complete.)

    — File and win lawsuits to allow regulation of CO2 and other carbon GHG’s as pollutants under the Clean Air Act.
    — Publish a CAA Section 202 Endangerment Finding as a prototype test case for regulation of carbon GHG’s.
    — Defend the Section 202 Endangerment Finding in the courts.

    Phase II: Expand and extend EPA regulation of carbon GHG’s to all major sources of America’s carbon emissions. (2021-2022)

    — Issue a presidential executive order declaring a carbon pollution emergency.
    — Publish a CAA Section 108 Endangerment Finding which complements 2009’s Section 202 finding.
    — Establish a National Ambient Air Quality Standard (NAAQS) for carbon pollution.
    — Use the NAAQS for carbon pollution as America’s tie-in to international climate change agreements.
    — Defend the Section 108 Endangerment Finding and the NAAQS in the courts.

    Phase III: Establish a fully comprehensive EPA-managed regulatory framework for carbon. (2023-2025)

    — Publish a regulatory framework for carbon pollution under Clean Air Act sections 108, 111, 202, and other CAA sections as applicable.
    — Establish cooperative agreements with the states to enforce the EPA’s anti-carbon regulations.
    — Establish a system of carbon pollution fines which is the functional equivalent of a legislated tax on carbon.
    — Establish the legal basis for assigning all revenues collected from these carbon pollution fines to the states.
    — Research and publish a provisional system of direct carbon fuel rationing as a backup to the carbon fine system.
    — Defend the EPA’s comprehensive system of carbon pollution regulations in the courts.

    Phase IV: Implement the EPA’s carbon pollution regulatory framework. (2026-2050)

    — Commence operation of prior agreements with the states for enforcement of the EPA’s anti-carbon regulations.
    — Commence the collection of carbon pollution fines and the distribution of fine revenues to the states.
    — Monitor the effectiveness of the EPA’s carbon regulatory framework in reducing America’s GHG emissions.
    — Monitor the effectiveness of renewable energy projects in reducing America’s GHG emissions.
    — Monitor the effectiveness of energy conservation programs in reducing America’s GHG emissions.
    — Adjust the schedule of carbon pollution fines upward if progress in reducing America’s GHG emissions lags.
    — Assess the possible need for invoking the provisional system of direct carbon fuel rationing.
    — Defend the EPA’s system of carbon pollution regulations against emerging lawsuits.

    Phase V: Implement the provisional system for direct carbon fuel rationing. (Start and End dates contingent upon Phase IV progress.)

    — Issue a presidential proclamation declaring that Phase IV anti-carbon measures cannot meet the 80% by 2050 target.
    — Initiate the provisionally established system for imposing direct government control over production and distribution of all carbon fuels.
    — Apply the Phase IV system of carbon pollution fines in escalating steps as needed to incentivize Phase V compliance.
    — Defend the government-mandated carbon fuel rationing program in the courts.

    Phase VI: Declare success in reducing America’s carbon emissions 80% by 2050. (If complete by 2050 or earlier.)

    — Assess the need for continuing the EPA’s anti-carbon regulations and the US Government’s mandatory fuel rationing program beyond 2050.
    — Defend the government’s anti-carbon measures against emerging lawsuits if these measures continue beyond 2050.
    .

    The Political Landscape of 2021 and Beyond:

    If current polls are to be believed, Donald Trump will be soundly defeated in the 2020 election. Democrats will remain in control of the House of Representatives in 2021, and control of the Senate will pass into the hands of the Democrats as well. What will the Democrats do once they are back in full control of the federal government?

    If history is any guide, it is unlikely the Democrats in Congress will enact a legislated tax on carbon. It is just as unlikely the Congress will acknowledge the need for carbon fuel rationing if their massive spending on green energy projects isn’t achieving their carbon reduction targets.

    So the question arises, is new legislation from the Congress needed to pursue a highly aggressive, nationally-enforced anti-carbon policy based on strict enforcement of the Clean Air Act?

    The answer is no, not another word of new legislation is needed from Congress to begin the process of greatly reducing America’s GHG emissions as far and as fast as climate change activists claim is necessary.

    The Supreme Court has already ruled that the EPA has full authority under the Clean Air Act to regulate all sources of America’s carbon emissions. Furthermore, the court has ruled that the process used by the EPA in 2009 to determine that CO2 is a pollutant was properly followed.

    Some history concerning 350.org’s previous efforts at pursuing climate lawsuits through the courts is in order.

    Phase I of this plan, establishment of a legal basis for regulating carbon dioxide as a pollutant, was complete in 2012. The legal foundation needed to impose aggressive across-the-board regulation of all major sources of America’s carbon emissions remains in place awaiting the appearance of a president willing to use it.

    When Barack Obama was Chief Executive, his Clean Power Plan and his other anti-carbon measures might have achieved possibly one-third of his Year 2050 GHG reduction goals. But the remainder depended upon a highly uncertain combination of accelerated technological advancements and raw unvarnished hope.

    And yet, when President Obama had the opportunity and the means to move forward with the 350.org plan, he refused to go through with it. Nor were 350.org itself and the other climate activist groups willing to push hard for adoption of their 2009 plan after their initial victories in the courts.

    From 2012 onward, climate activists could have worked closely with the EPA using the ‘sue and settle’ process to put their 2009 plan into effect. If the dangers of climate change are as severe as they claim, then why didn’t the activists go forward with it while they were still in control of the Executive Branch?
    .

    The Intersection of Climate Action Moral Obligation with Climate Action Policy:

    Let’s examine the intersection of climate action moral obligation with climate action policy as it concerns those who have taken a strong stand in favor of the Green New Deal.

    As currently envisioned, the Green New Deal will eliminate most forms of carbon-fueled transportation in favor of electric vehicles and passenger trains. The GND relies on wind and solar backed by batteries and by pumped-hydro storage to power all of these EV cars and trains.

    As I said previously, most of the Democrats who now hold state and federal elective offices in California, Hawaii, Oregon, and Washington State support the Green New Deal. These politicians will have much greater influence if Democrats gain full control of the federal government in 2021.

    Because Boeing builds jet airliners in Everett and Renton, let’s address the question of what role the large jet aircraft manufactured in Washington State will be playing once the Green New Deal is adopted as our national economic and energy policy.

    1 – Jet airliners of the kind Boeing now manufactures cannot fly in a Green New Deal world. The GND recognizes this fact and shifts most air travel onto trains or onto other forms of EV-powered ground transportation.

    2 – If Governor Jay Inslee and his likely successor Bob Ferguson are to act in accordance with their professed beliefs, they must demand that Boeing commit to developing and producing an airliner which does not emit carbon dioxide.

    3 – If Boeing will not commit to producing a hydrogen-fueled or electric-powered airliner by the end of the 2020’s at the latest, then Inslee and Ferguson have a clear moral obligation to force an end to production of Boeing airliners in Washington State.

    4 – If current jet airliners continue to fly into the early 2030’s, then GND politicians in California, Hawaii, Oregon, and Washington must begin restricting the number of flights passing through the major airports in their respective states.

    Most voters in Washington State wouldn’t particularly care if Boeing was pushed out of the Northwest. Our regional businesses combined with the spending activities of a number of government agencies are easily capable of supporting the Northwest’s economy without Boeing’s presence.

    On the other hand, the situation is different concerning what most Washington and Oregon residents might think about restricting air travel for business and for pleasure.

    What happens in 2030 if you work in Seattle, Bellevue, Redmond, Spokane, or Portland and you can’t fly to Los Angeles or to San Francisco on business whenever you want to? What happens if you want to take a two-week vacation in Hawaii but the limited space available on airliners is booked eight months to a year in advance?

    What if you run a tourist business in Honolulu and most of your customers must travel to Hawaii using a wind & solar-powered ocean liner, because airline flights are restricted and the large fossil-powered cruise ships have been banned from Hawaiian waters?

    Furthermore, what if your tourists arrive in Hawaii after a two-week journey across the ocean only to discover that much of what had been a tropical forest is now covered with solar panels, wind turbines, and energy storage facilities?

    All this said, the EPA, operating under the authority of the Clean Air Act, has determined that carbon dioxide is a dangerous pollutant if present in the atmosphere in excessive concentrations. The courts have given the EPA full authority to regulate all of America’s carbon emissions.

    If Jay Inslee, Gavin Newsom, David Ige, Kate Brown, and our Representatives and Senators in Congress are to avoid taking extreme political heat for enforcing strong anti-carbon measures, they must do what current law and past practice demand they do, and that is to give the president and the EPA full responsibility for the tough job of reducing America’s carbon emissions 80 percent by 2050.
    .

    The Moral Imperatives of Climate Change Activism:

    The climate activists who are pushing the Green New Deal have not yet been forced to come to grips with the basic conundrum of their own position regarding the true dangers of climate change.

    As the activists are now promoting it, the Green New Deal can quickly reduce our greenhouse gas emissions while imposing little or no hardship on the American people. But what if the climate activists are wrong and their Green New Deal plan is completely unrealistic in terms of how far and how fast it can go?

    If the dangers of climate change are real and are severe, and if quick action is needed to reduce America’s GHG emissions — but if the Green New Deal can’t get us there nearly as far and as fast as the climate activists say is necessary — then how could hardship and sacrifice not be demanded of the American people?

    Phrasing the question of a moral imperative another way, if the Green New Deal won’t work, are the dangers of climate change so serious and so close on the horizon that GHG reductions must be quickly and forcefully imposed, not simply encouraged?

    If this is indeed the case — if the massive new spending legislated under the Green New Deal doesn’t prove effective — then is there not a strong moral imperative to begin using the broad powers of the federal government in coercing the needed reductions?

    Historically, over a period of more than forty years, the EPA operating under the authority of the Clean Air Act has been our most effective means of managing and coordinating the hard choices which have to be made in reducing emissions of substances identified as dangerous atmospheric pollutants.

    If one predicts that the Green New Deal won’t prove effective in actual practice, and assumes America’s voters won’t voluntarily commit to the necessary hardships and sacrifices when the GND doesn’t work, then climate activists must impose their GHG reductions through aggressive enforcement of anti-carbon regulations.
    .

    Summary:

    The US Supreme Court has determined that the EPA has authority under the Clean Air Act to regulate all of America’s carbon emissions. The basic legal foundation necessary to support what climate activists say needs to be done is already in place, waiting to be augmented by the enhanced regulatory tools needed to reduce America’s GHG emissions 80% by 2050.

    All that is lacking is a climate activist president and an EPA administrator who are willing to take the heat for doing what current law and past historical practice demand should be done in reducing an atmospheric pollutant identified through an Endangerment Finding as representing a clear threat to human health and the environment — assuming that CO2 is in fact the main driver of climate change, and assuming the dangers of climate change are in fact real and severe.

  162. i have been posting on WUWT for some time now, off and on.

    always under the handle “redc1c4” which i have been consistently using, to the point that many friends now call me “red” instead of my meat space name.

    i tried to comment the other day, from a new computer, and got the “moderation” notification

    that post never appeared. the next time i tried to comment, i realized i had typoed my e-mail address the first time and corrected it. (BTW: the moronhorde e-mail is an Ace of Spades HQ thing)

    that post didn’t even get the moderation warning, it just disappeared.

    can i please be un-banned?

    • it gets better: apparently there is a glitch with the domain: vanity addys are like that. i can also be reached via the same nick at “sbcglobal.net”

      TIA