IPCC Objectives and Methods Mandated the Elimination, Reduction, and Manipulation of Inadequate Real Data, and the Creation of False Data.

Guest opinion: Dr. Tim Ball

Intergovernmental Panel on Climate Change (IPCC) computer model projections are unfailingly wrong. Projections for three scenarios, High, Medium, and Low, are consistently high compared to the actual temperature. There is something fundamentally wrong with their work and claims. They should not be the basis of any policy, public or private. The following statement from the Fourth Assessment Report (AR4) is untenable given the projection results.

There is considerable confidence that climate models provide credible quantitative estimates of future climate change, particularly at continental scales and above. This confidence comes from the foundation of the models in accepted physical principles and from their ability to reproduce observed features of current climate and past climate changes. Confidence in model estimates is higher for some climate variables (e.g., temperature) than for others (e.g., precipitation). Over several decades of development, models have consistently provided a robust and unambiguous picture of significant climate warming in response to increasing greenhouse gases.

This is like saying that a soap box car is a good approximation of a Rolls Royce or a Ferrari. Their proof is that the soap box car appears to share some characteristics and moves down the road in the same direction – if it is on a hill.


Figure 1: Soap-Box-Car basic kit

Even a simple systems diagram of the atmosphere (Figure 2) is a thousand times more complicated than the soap box kit shown.


Figure 2: After Kellogg and Schneider (1974)

In the list of variables in the diagram, and the many excluded, there are no meaningful data. By meaningful, I mean that where the data exist at all, they are inadequate in length, coverage, or accuracy. They are insufficient as the basis for a computer model, and the model results are completely unrepresentative of reality and inadequate as the basis for any policy. Proof of that claim is the failure to validate the models except by adding or adjusting variables until the output appears to recreate known conditions. The failure of that sleight of hand shows in the failed projections. The only lesson they yield is the need for a total focus on data collection, because climate science is already fulfilling Sherlock Holmes’s warning:

“It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.”

I have written about the data problem often, but it is so fundamental that it requires constant repetition. It appears the Trump government is acting to stop the twisting of facts to suit theories. They may choose, justifiably considering the misuse of funds to date, to cancel funding for further climate change research. However, if they choose to go forward, any approach will founder on the lack of data.

Hubert Lamb, who probably gathered more climate data than any person before or since, explained in his autobiography (1997) that he created the Climatic Research Unit (CRU) because

“…it was clear that the first and greatest need was to establish the facts of the past record of the natural climate in times before any side effects of human activities could well be important.”

In our personal communications, he described the problems that lack of data would create and regretted that it was occurring at the CRU. As he wrote,

“My immediate successor, Professor Tom Wigley, was chiefly interested in the prospects of world climates being changed as result of human activities, primarily through the burning up of wood, coal, oil and gas reserves…” “After only a few years almost all the work on historical reconstruction of past climate and weather situations, which first made the Unit well known, was abandoned.”

Wigley was the grandfather of the IPCC, the ‘go-to person,’ the arbiter among the central players during the debacle that the CRU became as it prepared and controlled the 2001 Third Assessment Report (TAR). Read the released Climategate emails and notice how often they seek his input to resolve disputes. It is a surreal experience because, invariably, his comments are restricted to warning about issues that might threaten their AGW objective, not to seeking the truth.

Many commentators pithily describe some home truths about data and statistics.

“If you torture the data enough, nature will always confess.” – Ronald Coase

“Facts are stubborn things, but statistics are more pliable.” – Anonymous

“All of us are exposed to huge amounts of material, consisting of data, ideas, and conclusions — much of it wrong or misunderstood or just plain confused. There is a crying need for more intelligent commentary and review.” – Murray Gell-Mann

“Science is built of facts the way a house is built of bricks; but an accumulation of facts is no more science than a pile of bricks is a house.” – Henri Poincaré


I have written about the lack of data several times on this site and elsewhere.

In one article, I pointed out that the IPCC and key players in the AGW deception knew there was no data.

In 1993, Stephen Schneider, a primary player in the anthropogenic global warming hypothesis and the use of models, went beyond doubt to certainty when he said,

“Uncertainty about important feedback mechanisms is one reason why the ultimate goal of climate modeling – forecasting reliably the future of key variables such as temperature and rainfall patterns – is not realizable.”

A February 3, 1999, US National Research Council Report said,

Deficiencies in the accuracy, quality, and continuity of the records place serious limitations on the confidence that can be placed in the research results.

To which Kevin Trenberth responded,

It’s very clear we do not have a climate observing system….This may come as a shock to many people who assume that we do know adequately what’s going on with the climate, but we don’t.

Two CRU Directors, Tom Wigley and Phil Jones, said,

Many of the uncertainties surrounding the causes of climate change will never be resolved because the necessary data are lacking.

They didn’t hide the fact, because the lack of data allowed them to produce the data they needed to prove their hypothesis and to present the models as representative of the real world. They also knew, as with most of climate science, that the public didn’t know the data were inadequate.

This article was triggered by listening to a powerful advocate of the anthropogenic global warming (AGW) hypothesis talk about ‘synthetic’ data as if it were real data. Most people don’t know that most climate data are synthetic. To offset the dearth of data for global climate models, it is common practice to create synthetic data in one model and use it as ‘real’ data in another model. Since there is almost no data for any of the multitude of variables that combine to create weather and climate, they create a virtual reality.
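The practice described here can be sketched in a few lines of Python. Everything below is invented for illustration only – the function names, the numbers, and the structure are mine, not any modelling centre’s actual code – but it shows the shape of the problem: output of one model becomes the “observations” fed to the next.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def sub_model_sst(n_cells):
    """A hypothetical coarse sub-model that generates synthetic
    sea-surface temperatures (deg C) for n_cells grid cells.
    These values are model output -- no instrument measured them."""
    return [15.0 + random.gauss(0, 1.5) for _ in range(n_cells)]

def larger_model(sst_field):
    """A second, larger model that consumes the synthetic field
    as if it were real observed data and returns a global mean."""
    return sum(sst_field) / len(sst_field)

synthetic_obs = sub_model_sst(100)      # synthetic, not measured
global_mean = larger_model(synthetic_obs)
print(round(global_mean, 2))
```

The point of the sketch is that nothing in the second model can tell the difference between measured and generated input; the “data” carry no flag marking their origin.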

The IPCC has frozen climate science progress since its inception in 1988 by deliberately turning the focus onto human causes. It then contradicted the scientific method by setting out to prove, rather than trying to disprove, the Anthropogenic Global Warming (AGW) hypothesis. This required actions that created several dilemmas.

· National Weather Agencies appointed the IPCC members and controlled most climate funding in their countries.

· This essentially put the IPCC beyond political or normal scientific control. Skeptics were eliminated.

· Most funding was directed to theories and research that ‘proved’ the AGW hypothesis.

· In many cases, money was diverted from data collection as the political focus intensified. Here are comments by Ken Green from an investigation into what was happening at Environment Canada.

Properly supporting the contention of scientific dishonesty has been difficult, however, due to the confidential nature of much of the government’s internal communications and the obvious reluctance of civil servants to speak out. However, as a result of a recent Access to Information request, scientific dishonesty and “conclusion rigging” has been discovered in one of Canada’s largest ever science and technology projects – the effort to “manage” climate change and the science that supposedly “supports” the Kyoto Accord. This is revealed by the following analysis of the attached contract report by the consulting company, “The Impact Group”, and related internal communications by Environment Canada’s (EC) Meteorological Service of Canada (MSC) and others on the department’s “ADM Committee on Climate Change Science.”

· Weather stations were closed across the world, ostensibly to be replaced by satellite data. NASA GISS graphed these changes, showing two distinct declines, in the 1960s and the 1990s.


· The problem is even greater as data is lost, records terminated or truncated, and proxy reconstructions perverted by rewriting history for a political agenda.

I heard the comments about synthetic data following my involvement in Australia with Senator Malcolm Roberts and Tony Heller. I went to Australia to support Senator Roberts’s demand for empirical data as proof of the AGW claims made by the bureaucrats of the Commonwealth Scientific and Industrial Research Organization (CSIRO), the organization that advises the government on climate change. We sought actual data with accompanying evidence of cause and effect, not computer model output. They responded with a Report that provided nothing but the computer-generated claims of the Intergovernmental Panel on Climate Change (IPCC). But that was no surprise, because they were using data provided for them by IPCC member the Australian Bureau of Meteorology.

But it isn’t just about empirical data and evidence of AGW. The amount of real, directly measured data used to create the computer models, and their policy-directing evidence, is completely inadequate. The synthetic data the person spoke about was created in a computer model of one part of the vast atmospheric/ocean system and then input as real data into a larger model.

One of the biggest deliberate deceptions is the difference between the certainties of the IPCC Summary for Policymakers (SPM) and the frightening list of inadequacies in the Physical Science Report of Working Group I. The following quotes are directly from that Report, but very few people ever read them. This is because the SPM is released to great fanfare months before the Science Report is published. As IPCC expert reviewer David Wojick wrote:

Glaring omissions are only glaring to experts, so the “policymakers”—including the press and the public—who read the SPM will not realize they are being told only one side of a story. But the scientists who drafted the SPM know the truth, as revealed by the sometimes artful way they conceal it.

What is systematically omitted from the SPM are precisely the uncertainties and positive counter evidence that might negate the human interference theory. Instead of assessing these objections, the Summary confidently asserts just those findings that support its case. In short, this is advocacy, not assessment.

The Fifth Assessment Report (AR5) SPM claims:

Anthropogenic greenhouse gas emissions have increased since the pre-industrial era, driven largely by economic and population growth, and are now higher than ever. This has led to atmospheric concentrations of carbon dioxide, methane and nitrous oxide that are unprecedented in at least the last 800,000 years. Their effects, together with those of other anthropogenic drivers, have been detected throughout the climate system and are extremely likely to have been the dominant cause of the observed warming since the mid-20th century.

The designation “extremely likely” is 95-100%. Consider that assessment in the context of the following data limitations from the Science Report. They begin by acknowledging the serious limitations in a general statement.

Uncertainty in Observational Records


The vast majority of historical (and modern) weather observations were not made explicitly for climate monitoring purposes. Measurements have changed in nature as demands on the data, observing practices and technologies have evolved. These changes almost always alter the characteristics of observational records, changing their mean, their variability or both, such that it is necessary to process the raw measurements before they can be considered useful for assessing the true climate evolution. This is true of all observing techniques that measure physical atmospheric quantities. The uncertainty in observational records encompasses instrumental/recording errors, effects of representation (e.g., exposure, observing frequency or timing), as well as effects due to physical changes in the instrumentation (such as station relocations or new satellites). All further processing steps (transmission, storage, gridding, interpolating, averaging) also have their own particular uncertainties. Because there is no unique, unambiguous, way to identify and account for non-climatic artefacts in the vast majority of records, there must be a degree of uncertainty as to how the climate system has changed. The only exceptions are certain atmospheric composition and flux measurements whose measurements and uncertainties are rigorously tied through an unbroken chain to internationally recognized absolute measurement standards (e.g., the CO2 record at Mauna Loa; Keeling et al., 1976a).

Uncertainty in data set production can result either from the choice of parameters within a particular analytical framework—parametric uncertainty, or from the choice of overall analytical framework—structural uncertainty. Structural uncertainty is best estimated by having multiple independent groups assess the same data using distinct approaches. More analyses assessed now than in AR4 include published estimates of parametric or structural uncertainty. It is important to note that the literature includes a very broad range of approaches. Great care has been taken in comparing the published uncertainty ranges as they almost always do not constitute a like-for-like comparison. In general, studies that account for multiple potential error sources in a rigorous manner yield larger uncertainty ranges. This yields an apparent paradox in interpretation as one might think that smaller uncertainty ranges should indicate a better product. However, in many cases this would be an incorrect inference as the smaller uncertainty range may instead reflect that the published estimate considered only a subset of the plausible sources of uncertainty. Within the time series figures, where this issue would be most acute, such parametric uncertainty estimates are therefore not generally included. Consistent with AR4 HadCRUT4 uncertainties in GMST are included in Figure 2.19, which in addition includes structural uncertainties in GMST.

To conclude, the vast majority of the raw observations used to monitor the state of the climate contain residual non-climatic influences. Removal of these influences cannot be done definitively and neither can the uncertainties be unambiguously assessed. Therefore, care is required in interpreting both data products and their stated uncertainty estimates. Confidence can be built from: redundancy in efforts to create products; data set heritage; and cross-comparisons of variables that would be expected to co-vary for physical reasons, such as LSATs and SSTs around coastlines. Finally, trends are often quoted as a way to synthesize the data into a single number.

Why isn’t this placed at the front of the SPM?
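The Report’s distinction between parametric and structural uncertainty, quoted above, can be made concrete with a toy calculation. All numbers below are invented for illustration; they are not from any actual data set. The idea is simply that structural uncertainty is estimated from the spread of results when independent groups analyse the same record with different frameworks.

```python
import statistics

# Invented example: the same underlying temperature record analysed by
# four independent groups, each using its own analytical framework.
group_trend_estimates = [0.62, 0.71, 0.58, 0.69]  # deg C/century, invented

# Central estimate across frameworks
central = statistics.mean(group_trend_estimates)

# A crude measure of structural uncertainty: the spread between the
# highest and lowest independent estimates.
structural_spread = max(group_trend_estimates) - min(group_trend_estimates)

print(round(central, 3))
print(round(structural_spread, 2))
```

Note the paradox the Report itself flags: a single group’s narrow parametric range can look “better” than this spread while actually reflecting fewer considered error sources.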

The following quotes are directly from AR5 and were selected because they acknowledge the data problems. The Report uses various terms that indicate their measures of the availability of the evidence, that is, the amount, extent, and quality, while a second measure indicates their confidence in their knowledge for predictions. I have underlined and made bold their assessments, inserted percentages where appropriate, and commented on the inadequate, misleading analysis and phraseology.

In this Report, the following summary terms are used to describe the available evidence: limited, medium, or robust; and for the degree of agreement: low, medium, or high. A level of confidence is expressed using five qualifiers: very low, low, medium, high, and very high, and typeset in italics, e.g., medium confidence. For a given evidence and agreement statement, different confidence levels can be assigned, but increasing levels of evidence and degrees of agreement are correlated with increasing confidence (see Section 1.4 and Box TS.1 for more details).


In this Report, the following terms have been used to indicate the assessed likelihood of an outcome or a result: Virtually certain 99–100% probability, Very likely 90–100%, Likely 66–100%, About as likely as not 33–66%, Unlikely 0–33%, Very unlikely 0–10%, Exceptionally unlikely 0–1%. Additional terms (Extremely likely: 95–100%, More likely than not >50–100%, and Extremely unlikely 0–5%) may also be used when appropriate. Assessed likelihood is typeset in italics, e.g., very likely (see Section 1.4 and Box TS.1 for more details).

Because of large variability and relatively short data records, confidence in stratospheric H2O vapour trends is low. (This is important because ice crystals visible in the form of Noctilucent and Polar Stratospheric Clouds are important, especially to ozone levels, which is likely why they have problems in the next item.)


Confidence is medium in large-scale increases of tropospheric ozone across the Northern Hemisphere (NH) since the 1970s.


Confidence is low in ozone changes across the Southern Hemisphere (SH) owing to limited measurements. (The public think we are fully informed on ozone and that the Montreal Protocol resolved all issues.)


Satellite records of top of the atmosphere radiation fluxes have been substantially extended since AR4, and it is unlikely (0-33%) that significant trends exist in global and tropical radiation budgets since 2000.


Surface solar radiation likely (66-100%) underwent widespread decadal changes after 1950, with decreases (‘dimming’) until the 1980s and subsequent increases (‘brightening’) observed at many land-based sites. There is medium confidence for increasing downward thermal and net radiation at land-based observation sites since the early 1990s.


While trends of cloud cover are consistent between independent data sets in certain regions, substantial ambiguity and therefore low confidence remains in the observations of global-scale cloud variability and trends.


It is likely (66-100%) that since about 1950 the number of heavy precipitation events over land has increased in more regions than it has decreased. (A meaningless comment).


Confidence is low for a global-scale observed trend in drought or dryness (lack of rainfall) since the middle of the 20th century, owing to lack of direct observations, methodological uncertainties and geographical inconsistencies in the trends. (The precipitation data is far more limited in every way than the temperature data, and it is inadequate. Despite this, they tell the public the likelihood of drought is significantly increased because of AGW.)


Confidence remains low for long-term (centennial) changes in tropical cyclone activity, after accounting for past changes in observing capabilities. (Does this mean it is useless before accounting for past changes?)


Confidence in large-scale trends in storminess or storminess proxies over the last century is low owing to inconsistencies between studies or lack of long-term data in some parts of the world (particularly in the SH). (It is not just the SH, although that is half the planet).



Because of insufficient studies and data quality issues confidence is also low for trends in small-scale severe weather events such as hail or thunderstorms. (Storms referred to here and above are a major mechanism for transfer of greenhouse gases and latent heat throughout the atmosphere.)


It is likely (66-100%) that circulation features have moved poleward since the 1970s, involving a widening of the tropical belt, a poleward shift of storm tracks and jet streams, and a contraction of the northern polar vortex. (It is probably at the low end of “likely” because there are insufficient surface stations to determine extent).


Large variability on inter-annual to decadal time scales hampers robust conclusions on long-term changes in atmospheric circulation in many instances. (What does “robust” mean? The atmosphere, weather, and climate are all about circulation. This comment is so broad and vague that it implies they don’t know what is going on.)


Confidence in the existence of long-term changes in remaining aspects of the global circulation is low owing to observational limitations or limited understanding. (This combines with the last to confirm they don’t know.)


Uncertainties in air–sea heat flux data sets are too large (How large is too large?) to allow detection of the change in global mean net air-sea heat flux, of the order of 0.5 W m–2 since 1971, required for consistency with the observed ocean heat content increase. The products cannot yet be reliably used to directly identify trends in the regional or global distribution of evaporation or precipitation over the oceans on the time scale of the observed salinity changes since 1950. (These are massive engines of latent heat transfer and alone are sufficient to say that any results of their work are meaningless.)


Basin-scale wind stress trends at decadal to centennial time scales have been observed in the North Atlantic, Tropical Pacific and Southern Ocean with low to medium confidence. (Wind is the almost forgotten variable and with the least data, yet essential to accurate measurements of evaporation and energy transfer.)


Observed changes in water mass properties likely (66-100%) reflect the combined effect of long-term trends in surface forcing (e.g., warming of the surface ocean and changes in E – P) and inter-annual-to-multi-decadal variability related to climate modes. (Water mass properties determine the sea level, so this comment makes a mockery of the claims about AGW sea level increase.)


It is likely (66-100%) that the annual period of surface melt on Arctic perennial sea ice lengthened by 5.7 ± 0.9 days per decade over the period 1979–2012. (The sea ice data was not reliable until 1981 and 30 years is a completely inadequate sample size for any climate-related variable despite the WMO created 30-year Normal).
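The quoted trend can be converted into a total change by simple arithmetic; this is my calculation from the quoted numbers, not a figure from the Report:

```python
# Total melt-season lengthening implied by the quoted trend of
# 5.7 +/- 0.9 days per decade over 1979-2012 (33 years = 3.3 decades).
trend, trend_err = 5.7, 0.9     # days per decade
decades = (2012 - 1979) / 10.0  # 3.3 decades

total = trend * decades         # implied total lengthening in days
total_err = trend_err * decades # uncertainty scaled over the same span

print(f"{total:.1f} +/- {total_err:.1f} days")  # 18.8 +/- 3.0 days
```

So the claim amounts to roughly a nineteen-day lengthening over a record that, as noted, is barely one 30-year “Normal” long.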


After almost one decade of stable CH4 concentrations since the late 1990s, atmospheric measurements have shown renewed CH4 concentrations growth since 2007. The drivers of this renewed growth are still debated. (Apparently not. The media is full of stories about the growing threat of methane from human sources.)


Many of the cloudiness and humidity changes simulated by climate models in warmer climates are now understood as responses to large-scale circulation changes that do not appear to depend strongly on sub-grid scale model processes, increasing confidence in these changes. (But they just told us they don’t understand large scale circulation changes.) For example, multiple lines of evidence now indicate positive feedback contributions from circulation-driven changes in both the height of high clouds and the latitudinal distribution of clouds (medium to high confidence). However, some aspects of the overall cloud response vary substantially among models, and these appear to depend strongly on sub-grid scale processes in which there is less confidence. (How much less? In fact, they don’t know.)


Climate-relevant aerosol processes are better understood, and climate-relevant aerosol properties better observed, than at the time of AR4 (But they were not well understood or observed then, so this is a relative and meaningless statement.). However, the representation of relevant processes varies greatly in global aerosol and climate models and it remains unclear what level of sophistication is required to model their effect on climate. Globally, between 20 and 40% of aerosol optical depth (medium confidence) and between one quarter and two thirds of cloud condensation nucleus concentrations (low confidence) are of anthropogenic origin. (This is akin to saying 20/20 vision is like seeing 20% of the things 20% of the time. Look at the next quote.)


The quantification of cloud and convective effects in models, and of aerosol–cloud interactions, continues to be a challenge. Climate models are incorporating more of the relevant processes than at the time of AR4, but confidence in the representation of these processes remains weak. (Another relative statement that is meaningless. It is like saying I was useless before but I think I am slightly better now.)


Aerosol–climate feedbacks occur mainly through changes in the source strength of natural aerosols or changes in the sink efficiency of natural and anthropogenic aerosols; a limited number of modelling studies have bracketed the feedback parameter within ±0.2 W m–2 °C–1 with low confidence. (What is a limited number? How many would make it meaningful?)


Climate and Earth System models are based on physical principles, and they reproduce many important aspects of observed climate (Many but not all aspects.). Both aspects (This is misleading. They are talking about their aspects, not those of the observed climate) contribute to our confidence in the models’ suitability for their application in detection and attribution studies (Chapter 10) and for quantitative future predictions and projections (Chapters 11 to 14). In general, there is no direct means of translating quantitative measures of past performance into confident statements about fidelity of future climate projections. (This is a purely political, meaningless statement. Are they saying that their past performance is no measure or predictor of their future performance? So please let us keep stumbling forward, wasting billions of dollars, when the entire problem is insoluble because there is no data.)


The projected change in global mean surface air temperature will likely (66-100%) be in the range 0.3 to 0.7°C (medium confidence) (Does this mean they may be 66% certain they are 50% correct?)


Climate models have continued to be developed and improved since the AR4, and many models have been extended into Earth System models by including the representation of biogeochemical cycles important to climate change. (They have not improved since AR4). These models allow for policy-relevant calculations such as the carbon dioxide (CO2) emissions compatible with a specified climate stabilization target. (They do not allow for “policy-relevant calculations” and they have not improved because they have failed in every projection made.)


The ability of climate models to simulate surface temperature has improved in many, though not all, important aspects relative to the generation of models assessed in the AR4. (How many models improved and how many are needed to be meaningful? This confirms what is wrong with the preceding statement.)


The simulation of large-scale patterns of precipitation has improved somewhat since the AR4, although models continue to perform less well for precipitation than for surface temperature. (There is virtually no improvement. This is an enormous understatement.)


The simulation of clouds in climate models remains challenging. There is very high confidence that uncertainties in cloud processes explain much of the spread in modelled climate sensitivity. (A classic example of Orwellian double talk. They are very certain that they are uncertain.)


Models are able to capture the general characteristics of storm tracks and extratropical cyclones, and there is some evidence of improvement since the AR4. Storm track biases in the North Atlantic have improved slightly, but models still produce a storm track that is too zonal and underestimate cyclone intensity. (Well, which is it? Have they improved or not? I assume they have not. Otherwise, they would have said so.)


Many important modes of climate variability and intra-seasonal to seasonal phenomena are reproduced by models, with some improvements evident since the AR4. (That meaningless relative measure, improving on virtually nothing is still virtually nothing.) The statistics of the global monsoon, the North Atlantic Oscillation, the El Niño-Southern Oscillation (ENSO), the Indian Ocean Dipole and the Quasi-Biennial Oscillation are simulated well by several models, although this assessment is tempered by the limited scope of analysis published so far, or by limited observations. (More doublespeak: they have improved, except where they haven’t.)


Promoters of AGW and members of the IPCC lead the public to believe that they have a vast amount of data to support their analysis, and claim that they are 95 percent certain that human CO2 is causing global warming. They also promote the notion that 97 percent of scientists agree with their conclusion. They do this by specific statements, by failing to investigate the accuracy of the data, or by failing to speak out when they know it is incorrect.

Most people, probably at least 97 percent, have never read the SPM, including scientists, politicians, and the media. Probably 99 percent of people have never read the Science Report. How many of them would change their minds if they considered the information shown above? Maybe that is too much. Maybe all that is necessary is to learn that every projection the IPCC ever made was wrong.

This brief and limited look at what the IPCC are saying on its own gives credence to Emeritus Professor Hal Lewis’s charge in his October 2010 resignation letter from the American Physical Society:

“It (the global warming scam) is the greatest and most successful pseudoscientific fraud I have seen in my long life as a physicist.”

It is a pseudoscientific fraud because there were no data as the basis for any of their work. The scientists, determined to achieve the objective of the IPCC, that is, to prove ‘scientifically’ that human CO2 was causing global warming, had to modify or eliminate the inadequate real data and create false data. Even if, under the new regime, the fraud is exposed and proper science and scientific methods are applied, it will take a very long time to gather the minimum data required. Until that occurs, it is all just hand-waving. However, there is enough evidence to know that the precautionary principle is not applicable. The little evidence we have indicates we are safer to do nothing.

January 29, 2017 2:09 pm

Inference (i.e. created knowledge) based on assertions of assertions from limited, circumstantial data. The path to other logical domains is broad, while the scientific domain is uncomfortably, inconveniently restrictive.

January 29, 2017 2:21 pm

Intergovernmental Panel on Climate Change (IPCC) computer model projections are unfailingly wrong.
Bureaucracies value consistency!

Antti Naali
Reply to  PiperPaul
January 30, 2017 12:08 am

Do we even know if those results are from climate models? And how could we know, since no outsider has ever seen the original code and algorithms?
Taking into account the vast uncertainty and sparsity of data, the modelling results should be all over the place. It is highly unlikely you could get such consistent results from those models if the models are complex. My best guess is they either “estimate” the modelling results or the models are very, very simple.

Wim Röst
January 29, 2017 2:23 pm

“the SPM [IPCC Summary for Policymakers] is released to great fanfare months before the Science report is published”
WR: What a built-in trick!
What would happen if someone in business failed that much?

Reply to  Wim Röst
January 29, 2017 11:59 pm

This served the authors of the Summary for Policy Makers AR5 very well indeed. They flogged misleading graphs from the unfinished SPM by presenting them to the world at their press conference with no context or caption. It was an exercise in deception, using hindcasted ‘projections’ to appear like the rock-solid historical instrumental record.
Even if you took the trouble to download the SPM at that time to make head or tail of the graph (SPM10) there was no caption under it. The caption was buried in prose twenty pages earlier with no reference to it under the graph. This is why I say the SPM was unfinished. It needed much copy editing including moving the captions from the body of the text to the graphs. Only someone like you or Tim Ball would understand the SPM10 graph as is but it was presented to dupe a billion viewers around the world that temps were rising at an alarming rate even from 2000-2010 where it showed a truly massive hike. But this was a) the hindcast and b) decadal averages. There was no caption to tell the viewers this so they were roundly duped. Even perusing the graph in the unedited SPM didn’t clear this up. Stocker described the graph in the press conference in a way to suggest that the instrumental record showed a huge increase in temps right through the pause. It was a highly contrived ploy.
Add to this the fact that they played around with the graph in the all-nighter before the press conference and it gets even murkier. The graph they presented at the press conference wasn’t the same as the one that was (and still is) in the AR5 SPM.

Wim Röst
Reply to  scute1133
January 30, 2017 4:07 pm

scute1133, thanks for the information above. Every time I feel more tricked than I felt before.
I checked some data. Wikipedia told:
Current status
The Fifth Assessment Report (AR5) consists of three Working Group (WG) Reports and a Synthesis Report. The first Working Group Report was published in 2013 and the rest were completed in 2014.
• WG I: The Physical Science Basis – 30 September 2013, Summary for Policymakers published 27 September 2013.[4]
• WG II: Impacts, Adaptation and Vulnerability – 31 March 2014
• WG III: Mitigation of Climate Change – 11 April 2014
• AR5 Synthesis Report (SYR) – 2 November 2014
The first thing I thought when I read that the Summary for Policymakers (SPM) was released months before the final Science report was: “How could they make a summary and draw conclusions NOT on the basis of the final Science Report?”
The second thing was that I remembered that the text of the Summary for Policy Makers was approved by governments. By governments…..
The third thing was that I remembered that the Press Release of the SPM did NOT say that the SPM was in fact a government report, but instead suggested that scientists made the report. I checked:
“Rajendra Pachauri, Chair of the IPCC, said: “This Working Group I Summary for Policymakers provides important insights into the scientific basis of climate change. It provides a firm foundation for considerations of the impacts of climate change on human and natural systems and ways to meet the challenge of climate change.” These are among the aspects assessed in the contributions of Working Group II and Working Group III to be released in March and April 2014. The IPCC Fifth Assessment Report cycle concludes with the publication of its Synthesis Report in October 2014.
“I would like to thank the Co-Chairs of Working Group I and the hundreds of scientists and experts who served as authors and review editors for producing a comprehensive and scientifically robust summary. I also express my thanks to the more than one thousand expert reviewers worldwide for contributing their expertise in preparation of this assessment,” said IPCC Chair Pachauri.”

January 29, 2017 2:32 pm

As long as the greater temperature at the bottoms of atmospheres is claimed — without equation or experimental demonstration of the effect — to be due to some optical phenomenon, the models cannot be correct.
They conspicuously leave out gravity, which must be accounted for.

Brett Keane
Reply to  Bob Armstrong
January 31, 2017 12:09 am

Armstrong January 29, 2017 at 2:32 pm: Sadly, ristvan is pushing an optical belief here, for some reason we are wondering about… as if Prof. Wood, a great optics experimenter, had never lived and refuted the idea, which even Arrhenius left behind in time.

Bill Rocks
January 29, 2017 2:33 pm

Stunning. Frightening.

January 29, 2017 2:49 pm

The cause will self-destruct on its own. Oh, the stress of life without funding. I’m just going to sit here and enjoy watching the entire meltdown.

Scottish Sceptic
January 29, 2017 2:50 pm

The problem with the data python is that the bigger it grows (the more data you have), the less wriggle room there is.

January 29, 2017 2:55 pm

“There is considerable confidence … particularly at continental scales and above”
Lol. If we had really been going through a century of hockey-stick-style runaway warming then CLEARLY, on continental scales, the records for the hottest days would have been set very recently, and the coldest days would be a distant memory. Not so. In fact it’s the opposite:

By continent, all but one set their all-time cold temperature record more recently than their all-time high temperature record.
Data here: http://www.space.com/17816-earth-temperature.html

Plus, of course, the global record for the hottest day ever was set in 1913. The coldest: 1983. That’s BACKWARDS! Just like the records for the continents. From this data, which the leftist ideologues cannot manipulate, I’d say we’ve been cooling for the last century!!

Reply to  Eric Simpson
January 29, 2017 11:26 pm

If cold temperature records are being broken more often than high temperature records, it goes against the assertion that global warming raises low temperatures more than it raises high temperature averages. Something doesn’t add up.

Reply to  Louis
January 30, 2017 2:46 am

If the Average Winter Temperatures were steadily getting less cold (warmer) during the past 60+ years, …. which we know is an observational fact, …… and the Average Summer Temperatures remained about the same, …. which we also know is an observational fact, ……. then wouldn’t that produce an increase in Average Temperatures over said 60+ year time frame?

January 29, 2017 2:57 pm

I am no expert. Would it not be more accurate to flip a coin than to use climate models? Going back the length of them (about 75 years), taking the mean and the odds of flipping a coin from their start date, seems to give better odds. Would that be right?

Reply to  B.j.
January 29, 2017 4:32 pm

Perhaps casting a die . . coins would be right half the time ; )

Reply to  JohnKnight
January 29, 2017 4:42 pm

Heads warmer tails colder than the mean. It would be 50/50 so following the previous mean?
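The coin-flip thought experiment above can be sketched as a toy simulation. This is purely illustrative: the “observations” below are random placeholders, not real temperature data, and the 75-year span is just the figure mentioned in the comment.

```python
import random

# Hypothetical sketch of the coin-flip "forecaster" discussed above.
# Each year we guess "warmer" or "colder" than the long-term mean by
# tossing a coin, then count how often the guess matches a (synthetic,
# equally random) stand-in for the observed outcome.

random.seed(1)
years = 75  # roughly the model era mentioned in the comment
obs = [random.choice(["warmer", "colder"]) for _ in range(years)]
guesses = [random.choice(["warmer", "colder"]) for _ in range(years)]

hits = sum(o == g for o, g in zip(obs, guesses))
print(f"coin-flip hit rate: {hits / years:.2f}")  # hovers around 0.50
```

As expected, the hit rate sits near 50/50, which is the commenter's point of comparison: a forecast method has to beat that baseline to claim any skill.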

Reply to  JohnKnight
January 29, 2017 5:06 pm

Well, what about temps staying about the same? And extreme weather events, rising sea levels, hordes of climate refugees and so on? I’ve seen all manner of troubling things promoted as scientific revelations seen in the great crystal screens ; )

Reply to  JohnKnight
January 29, 2017 5:27 pm

Well, I’ve practiced statistics and science to know that to think that a global statistic like temperature as a proxy for energy content is quite the reach. There are more than a few questions about whether temperature is a leading or trailing indicator for CO2 in the atmosphere, I’ve seen enough scientific ‘evidence’ to lead me to believe that CO2 is a trailing indicator of temperature to make me question the temperature / CO2 correlative structure.
On top of that, one of the first things I learned in physics is the difference between temperature and energy content and to assume that all of these ‘approximations’ really reflect energy content and, energy content, is correlated to a highly manipulated temperature series that started after the Maunder Minimum where people skated on the Thames and when we had the year without a summer in 1815’s, just seems silly to me.
To me, these arguments makes about as much sense to the fact that, given the fact that I was born on a month and day of the year and notice this date above others when events happen that I’m part of some special and unique condition and should be deferred to by all. It makes no sense and there are very strong reason why I would have to question confirmation bias on my part.

Reply to  B.j.
January 29, 2017 7:38 pm

NO NO, NO… the Best way is to include “Fudge” in your AR5 models…

chiefio@Headend:/Climate/modelE2_AR5_branch$ grep -i fudge */*
model/BLK_DRV.f:      real*8  :: dum,fudgef,sum_dep
model/BLK_DRV.f:           fudgef = 0.9999
model/BLK_DRV.f:           IF( (dum.GT.0. .AND. SUM_DEP.GT.dum*FUDGEF) .
model/BLK_DRV.f:     +      OR. (dum.LT.0. .AND. SUM_DEP.LT.dum*FUDGEF) ) THEN
model/BLK_DRV.f:               mnuccd(K) = FUDGEF*mnuccd(K)*dum/SUM_DEP
model/BLK_DRV.f:               mcondi(K) = FUDGEF*mcondi(K)*dum/SUM_DEP
model/BLK_DRV.f:               mconds(K) = FUDGEF*mconds(K)*dum/SUM_DEP
model/TRCHEM_master.f:        PFASTJ2(LM+2)=PFASTJ2(LM+1)*0.2816 ! 0.00058d0/0.00206d0 ! fudge
model/TRCHEM_master.f:        PFASTJ2(LM+3)=PFASTJ2(LM+2)*0.4828 ! 0.00028d0/0.00058d0 ! fudge
model/TRCHEM_master.f:C       This is a fudge, so that we don't have to get mesosphere data:
grep: model/dd2d: Is a directory
model/thermf.f:      if (glue(i,j).gt.1. .and. saln(i,j,kn).gt.40.)                !  Med fudge

From: https://chiefio.wordpress.com/2017/01/25/gcms-frost-feedback/#comment-78415

Reply to  E.M.Smith
January 29, 2017 8:07 pm

To think governments spend trillions of dollars on the output of code like that.

Jack Simmons
Reply to  E.M.Smith
January 30, 2017 3:44 am

What’s the matter with fudge? My aunt used to make the most delicious fudge. Wish I could get the recipe.

Reply to  E.M.Smith
January 30, 2017 12:42 pm

The problem with fudge is that if I turned that in to the EPA for a permit application or emission inventory, I’d at the very least find myself told to redo it and maybe hit with fines. If I submitted that for an emission event calculation, multiplying by an arbitrary constant to get the answer I wanted in the middle of a hidden workbook, I would likely be charged with perjury.
They got a grant renewal and praise from the president.

January 29, 2017 3:12 pm

Figure 2 is a crappy diagram. Lines coming out of the temperature box suggest it is a source. Arrows should go towards temperature… It’s an outcome, not a source.

Reply to  Macha
January 29, 2017 3:29 pm

Macha wrote:
“Figure 2 is a crappy diagram. Lines coming out of the temperature box suggest it is a source. Arrows should go towards temperature… Its an outcome not a source..”
No – Temperature drives CO2 much more than CO2 drives temperature.

Reply to  Allan M.R. MacRae
January 29, 2017 8:49 pm

‘CO2 drives temperature.’
Nice try Allan, CO2 only drives temperatures a little bit if you’re a lukewarmer.

Reply to  Allan M.R. MacRae
January 30, 2017 5:13 am

Allan…. temperature is simply a measurement, not an entity in itself, i.e. a man-made product to describe an environment, which arises from pressure and volume.

January 29, 2017 3:17 pm

are consistently high….
well….what would you expect when they have tuned them to match temperature records that have been “adjusted” cooler in the past
They are pretty much spot on to the adjusted temperature record…
…so can’t predict squat

4 Eyes
January 29, 2017 3:20 pm

Thanks Dr. Ball. I have printed this, copied and pasted it into Word, and saved a link on every electronic device in my house. It will be circulated. I do hope you have some intelligent, informed and qualified successors who can guarantee this message and any additions are spread far and wide for years and years to come. Given his assertion that anthropogenic climate change is an enormous challenge facing mankind, surely Obama can use some of his free time to read some skeptical points of view like this and then start asking questions of his favored advisors? Or does he know all this already? He could actually help the world by admitting he has been duped.

Steve T
Reply to  4 Eyes
February 1, 2017 5:18 am

4 Eyes
January 29, 2017 at 3:20 pm
…..Given his assertion that anthropogenic climate change is an enormous challenge facing mankind, surely Obama can use some of his free time to read some skeptical points of view like this and then start asking questions of his favored advisors? Or does he know all this already? He could actually help the world by admitting he has been duped.

He could actually help the world by admitting he ~~has been duped~~ was the duper. There, fixed it for you. 🙂

January 29, 2017 3:25 pm

Dr. Tim Ball wrote above:
“This article was triggered by listening to a powerful advocate of the anthropogenic global warming (AGW) hypothesis talk about “synthetic’ data as if it was real data.”
Bravo Tim! Here is a recent similar commentary:
commieBob wrote:
“Our students are taught to fabricate castles in the sky based on almost no actual empirical facts. In fact, the postmodernists have it that facts don’t matter because facts are mere social constructs.”
Allan wrote:
Well said Bob.
One of my friends, an eminent meteorologist, told me about a presentation he attended, given by a warmist academic.
My friend said:
“This guy lives in a virtual world, not the real one. His entire presentation was the output of computer models. He never referred to real world data. His model output was nothing like observed, measured reality. He did not seem to realise this, or perhaps he did not think it relevant.”
That is the essence of the global warming scam. It is not real. It is virtual reality, actually virtual falsehood, the product of computer models that used highly inflated estimates of the sensitivity of climate to increasing CO2 – up to ~10 times too high.
The warmist climate models also ignore the observed fact that CO2 lags temperature at all measured time scales, from ~9 months in the modern data record to ~800 years in the ice core record, on a longer time cycle. The warmists ignore this fact, because it proves they are saying the future is causing the past.

January 29, 2017 3:49 pm

“My immediate successor, Professor Tom Wigley, was chiefly interested in…”
Fill in the blank. Chiefly interested in … MANIPULATING THE DATA!
ClimateGate email from warmist Tom Wigley to Phil Jones and Ben Santer:
2009: Phil, … if you look at the attached plot you will see that the land also shows the 1940s blip. So, if we could reduce the ocean blip by, say, 0.15 degC, then this would be significant for the global mean… http://tomnelson.blogspot.com/2011/12/climategate-email-warmist-tom-wigley.html
Wigley is the smoking gun that there WAS a conspiracy among the climate activists to rig the data to support their leftist “cause.” So Wigley, in a private email that he had no reason to believe would be revealed, said he’d like to reduce the 1940s blip, and lo and behold, that is just what was done.

January 29, 2017 4:23 pm

Let’s redirect all the money into gathering data.
We do NOT need even one more climate model simulation. Everything has been done 100 times over and every disaster scenario has been done 100 times over.
My position from day one is that the climate is too complicated to simulate until “really big data” systems are available and quantum physics is used, combined with appropriate time scales like picoseconds, and then the computers need to be 5 or 6 generations more advanced than we have now. Maybe 25 years from now.
Data collection needs to be the main objective now. Anything that is not doing this needs to have its funding completely cut off because it is just a complete waste of society’s resources. I would rather educate kids in Africa than give a single dime to another climate model simulation.

Wim Röst
Reply to  Bill Illis
January 29, 2017 5:09 pm

“Let’s redirect all the money into gathering data”
WR: Full support. In the future we will need the data to understand what happens and what will happen. With the data we could then know how to act. Without the (right) data we can’t do anything because ‘we will not know’.

Reply to  Bill Illis
January 29, 2017 5:49 pm

Aren’t you assuming that data collection in support of ANY real modeling is possible? Sorry, I just don’t see it!
Maybe it is and maybe it’s not, but I’ve been around enough projects with tens of millions or billions of $$$ thrown at them that I’ve realized that putting tens, or hundreds, or even millions of dollars into a project without first developing a workable approach is the height of futility.
It took decades of lost effort before Abel showed that you can’t solve certain higher-order polynomials with a formula. I suspect the same sort of ‘proof’ would show that a solution to long-range climate modeling is not feasible and we should stop wasting precious intellectual capital pursuing those models.

Reply to  Tony
January 30, 2017 3:29 am

a solution to long-range climate modeling is not feasible and we should stop wasting precious intellectual capital pursuing those models.

Absolutely correct.
One could have access to the greatest super-duper computer ever envisioned ……. and be collecting and processing scads n’ scads n’ scads of “up-to-the-second” real-time “weather” data ……. but would still be limited to ”guessing” what the weather (or climate) was going to be ten (10) days hence.
And that is exactly why hurricane “forecasters” always guess at or project (predict) three (3) to five (5) different potential “storm tracks”.

Reply to  Bill Illis
January 30, 2017 3:43 am

Hi Bill,
Re your statement: “Let’s redirect all the money into gathering data.”
I tend to agree. I would like to see if it is possible to rehabilitate the Surface Temperature (ST) data to rid it of the many corrupted “adjustments” that have been done in the past decade or so, as well-documented by Tony Heller and others.
The “improved” ST data would still have its shortcomings, but it’s (hopefully) better than nothing – hopefully better than some of the proxies like tree rings that have produced such nonsense.
I would like to see more work on simple “models” such as the one you produced that has a 3-month predictive capability:
Tropics Troposphere Temp = 0.288 * Nino 3.4 Index (of 3 months previous) + 0.499 * AMO Index + -3.22 * Aerosol Optical Depth volcano Index + 0.07 Constant + 0.4395*Ln(CO2) – 2.59 CO2 constant
I think yours would work well and be more useful as a 4-month predictor of Global Lower Troposphere Temperature, such as improving mine:
UAHLTcalc Global (Anom. in degC, ~four months later) = 0.20*Nino3.4IndexAnom + 0.15 [+ (factor3*volcano index) + (factor4*AMO Index)] {Note I did not include a factor for CO2 because it is ~irrelevant}.
I also think it is useful to pursue my 2008 paper that shows that CO2 lags temperature by about 9 months in the modern data record. There is more work to be done here, but most climate scientists are reluctant to even discuss this apparent heresy. Maybe under Trump they will take off the blinders.
Another area of simple models that bears examination is solar-climate models such as that proposed by Dan Pangburn and others. Dan’s model makes sense, but I have not found the time to verify it in detail. It models longer-term global temperature as a function of the integral of solar activity and a sawtooth (or similar) function to simulate the PDO.
For all the claims that climate is too complicated to be modelled, there are simple models that seem to work, within certain limitations. We should pursue what works, not what clearly does not work, as the mega-modellers have done.
Best personal regards, Allan
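The simple lagged-index formulas quoted above are, in spirit, ordinary least-squares fits of temperature on a lagged index. A minimal sketch of how such a fit could be done follows; note the index series, the 4-month lag, and the noise level are synthetic placeholders, not the real Nino 3.4 or UAH records, and the coefficients (0.20, 0.15) are simply the illustrative values quoted in the comment.

```python
import numpy as np

# Hypothetical sketch: fit T(t) = a * Nino34(t - lag) + b by least squares.
# All data here are synthetic stand-ins, NOT real climate series.

rng = np.random.default_rng(0)
n = 240                      # 20 years of monthly values
lag = 4                      # months of lag, as assumed in the comment
nino34 = rng.normal(0.0, 1.0, n)          # stand-in ENSO index
true_a, true_b = 0.20, 0.15               # illustrative coefficients from the comment
# Temperature responds to the index `lag` months earlier, plus small noise
temp = true_a * nino34[:-lag] + true_b + rng.normal(0.0, 0.05, n - lag)

# Ordinary least squares on the lagged predictor plus an intercept column
X = np.column_stack([nino34[:-lag], np.ones(n - lag)])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
print(coef)  # recovered [a, b], close to the (0.20, 0.15) used to generate the data
```

The point of the sketch is only that models of this shape are trivially cheap to fit and test out-of-sample, which is what makes them attractive compared with the large GCMs discussed in the post.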

Darrell Demick
Reply to  Allan M.R. MacRae
January 30, 2017 10:56 am

Truly excellent posts by a number of people on this topic.
As an almost retired Reservoir Engineer, I have always tried to follow the advice of the late Laurie Dake, basically the “guru of RE gurus”. One of his best pieces of advice is (doing my best to be exact on the quote):
“The more complex the problem, the simpler the solution should be to arrive at meaningful and defendable results.”
I would daresay that the earth’s climate is orders of magnitude more complex than the most complex reservoir that we extract oil and/or natural gas from. I cannot agree more on simpler solutions being more defendable.
Thank you for the great posts! And onwards towards the truth finally being forefront on this topic!

Jim Gorman
Reply to  Bill Illis
January 30, 2017 9:30 am

Using taxpayer money to create a multitude of models that cannot be validated, and that use outputs from one as inputs to others, simply leads to confirmation bias and circular logic.
If a model doesn’t use actual data for the item being modeled, it shouldn’t be allowed in a scientific paper as a stand-alone scientific advance. It is only a useful tool if it is validated that it can accurately describe empirical physical data. Otherwise it is not even a useful tool for scientific study.
Here is a thought. Would anyone have accepted a computer model’s output proving/disproving Einstein’s space-time? Didn’t think so.

January 29, 2017 4:29 pm

They have to go to such great lengths to make CO2 the villain. CO2 is a very weak GHG with no dipole, whose relationship with temperature is logarithmic. H2O is by far the most significant GHG, and yet they ignore it.
Climate “Science” on Trial; CO2 is a Weak GHG, it has no DiPole

January 29, 2017 4:35 pm

A multi-billion year old planet. About a hundred years of collected data, sporadic, and of doubtful accuracy. But they got it all figured out. Climate science is still in the data collection phase. Save the theories for later, please!

Reply to  Ronald P Ginzler
January 29, 2017 6:16 pm

Not being argumentative, but “still in the data collection phase”? Yet somehow earth, with all it has been through, naturally maintained conditions suitable for life for billions of years. Earth has seen conditions far worse than a few hundred extra ppm of CO2 will ever cause. Earth has natural negative feedback mechanisms; billions of years prove it, and there is lots of data already. Climate Scientists just started with the wrong hypothesis (agenda) and went out of their way to prove it.

Reply to  Duncan
January 30, 2017 6:33 am

Our most pressing future ‘problem’ has more to do with the possibility of another Ice Age… all interglacials were short and Ice Ages long, and I see no change in this pattern.

H. D. Hoese
January 29, 2017 4:40 pm

I have been watching, sometimes with a little understanding, this development of biogeochemical pathways since before the word model came to be overused. The odd, well actually obvious, thing is that ecosystem and fisheries models, the latter sometimes part of the former, depend heavily on all these mass and energy fluxes that are so tied up together. For simplicity (blame?) they sometimes consider only human causes, but regardless clear cases of climate effects give some guidance producing at least a few great papers. The concept of marine science laboratories was to throw different disciplines together for this purpose, but they seem to mostly specialize nowadays.
Throw in all that organic matter, dead or alive and less in places, and the ecosystem models would presumably not be adequate until the climate ones work. The few I read about have lots of problems. You read about the “best available science,” sometimes codified in law, a phrase that needs some reconstructing.

January 29, 2017 4:47 pm

Dr. Ball, independent of this interesting post, your best attack on Mann is direct, frontal, brutal. Some ideas. Except for a now rapidly cooling 2015-16 blip, no warming this century despite ~1/3 of all atmospheric CO2 rise since 1958 occurring this century. Or, the de-centered PCA Mann technique automatically produces false hockey sticks from red noise (McIntyre). Or, to this post, AR4 WG1 SPM fig. 8.2 said the warming ~1920-1945 was not AGW. Not enough CO2 delta. BUT it is essentially indistinguishable from the ~1975-2000 warming that supposedly was. Natural variation did not stop in 1975. Attribution problem. Regards for a yet again postponed trial.

Reply to  ristvan
January 30, 2017 4:06 am

Hi Rud,
Tim probably knows all of the above, but I am adding one more to your list:
The global cooling that happened from ~1940 to ~1975 during the time that fossil fuel combustion strongly accelerated essentially disproves the CAGW hypothesis.
Imagine IF we had a similar situation starting about now:
Hypothetically, let’s say from 2020 to 2055 there was continued fossil fuel combustion and a significant increase in atmospheric CO2, and yet average global temperature cooled by ~0.5C.
What would this say about ECS and the CAGW hypo? I suggest it would say that ECS ~=zero and that the CAGW hypo is falsified.
But this has already happened, which is why the warmists have falsified the temperature record in order to minimize this ~35-year past cooling.
In other words, we already have a good bound on the magnitude of ECS (near-zero = insignificant) and we already have strong evidence that the CAGW hypo is false.
Regards, Allan

Matheus Carvalho
January 29, 2017 4:52 pm

Wow, thanks for the hard work compiling this long list…

January 29, 2017 5:01 pm

I think we’re using the wrong mathematical discipline here, we’re being forced to respond to statistical analyses but the real mathematical discipline is game theory. Playing the ‘stats’ game only puts us at a disadvantage, recognizing this as a game that uses stats is a totally different game.

January 29, 2017 5:01 pm

I noted the following to WUWT gadfly Nick Stokes in a previous post: “1934 was the hottest prior to homogenization. According to Greenland ice core studies, a bit above 90% of the past 10,000 years were warmer than any one of the past 100. For Santa Rosa and Ukiah, California, roughly half of the years from 1925 to 1940 were warmer than 2016 (before homogenization). During the Eemian interglacial, 125,000 years ago, almost all of the years were warmer than the average of the Holocene interglacial. Current warming is much sound and fury signifying nothing.”

January 29, 2017 5:30 pm

Always pay attention to the weasel words:
“There is considerable confidence that climate models provide CREDIBLE quantitative ESTIMATES of future climate change, particularly at continental scales and above.”
The word “credible” simply means something that isn’t easily dismissed as being ridiculous, i.e not “incredible.” When combined with the term “estimates,” the IPCC is simply saying that their computer models are spitting out numbers that you can’t look at and say that they are outside the reasonable bounds of how the climate might respond.
Unfortunately, to those who lack critical reasoning skills, e.g. most journalists and virtually all politicians, this statement is misinterpreted as being a representation that the IPCC has concluded that the model estimates of future climate are accurate. But that is not what this quote states.
This deception is surely deliberate.

Reply to  Kurt
January 31, 2017 6:43 am

The wordsmithing is part of the strategy of using “narratives” to change society. You don’t need any hard data – all you need is a scary story. See, e.g., the precautionary principle.

John Robertson
January 29, 2017 5:37 pm

Policy based evidence manufacturing.
The preferred policy has not changed, hence the quality of evidence manufactured remains the same.
Up till 20th January 2017.
Now that President Trump appears to be abandoning the policy, the product from past production runs is useless.

January 29, 2017 5:41 pm

er, Pinewood Derby car, not a Soapbox car. They do both work on a hill.

Frederick Michael
Reply to  kim
January 29, 2017 7:36 pm

Yes. Pinewood derby.

Reply to  Frederick Michael
January 30, 2017 6:36 am

And you can ride in a soapbox if you are small like a kid. We loved doing that years ago. No more soap boxes, though. You can use a styrofoam box…

January 29, 2017 5:43 pm

Dr. Ball, you write clearly, forcefully, and very convincingly. You have done much to educate the non-scientist like me. The science is roached. No Kung Fu grip whatsoever. So what’s left? Hubris, greed, arrogance, fanaticism, and all of the other failings of the human condition.
I believe that Dr. Feynman once related his involvement with early computer modeling work at Oak Ridge National Laboratory on this site. If I recall correctly he was not impressed with the modeling approach at that time. It may be helpful to note, as I have before, that modeling continues at Oak Ridge, except now it is performed by one of the world’s most powerful supercomputers, Titan. There is growing demand for a yet more powerful machine. The funds involved in building those hyper-expensive machines are vast and the well-paying positions involved are many. As nuclear work has gone dry, climate modeling has to some extent taken its place.
I’m afraid that East Tennessee hillbilly politicians who believe in God, Roads, and ORNL bear much responsibility for today’s climate. Like one of our local commissioners said to a city TV reporter when asked how his daughter got a county job she had no qualifications for: “What would you do if your child needed a job? I did that.”

January 29, 2017 6:03 pm

Came in to suggest pinewood, not soapbox, but Kim beat me to it.

Reply to  harkin1
January 30, 2017 3:42 am

HA, …… soapboxes were made of pinewood.
No need to use a hardwood for making a wooden box to ship soap in.

charles nelson
January 29, 2017 6:57 pm

Was that diagram by Schneider, Schneider? Like the Schneider?

January 29, 2017 7:36 pm

“Most funding was directed to theories and research that ‘proved’ the AGW hypothesis.”
That was Thatcher’s aim when she set up the Hadley Centre for Climate Prediction and Research

Reply to  RoHa
January 29, 2017 7:40 pm

Our leadership often do the most damage when they are pushing the same agenda. Think of the housing bubble in the US. This was pushed by Clinton and Bush.

Joel O’Bryan
January 29, 2017 7:39 pm

So true, the last statement.
The best course of action is to continue to use sound engineering to build better flood control measures, better sea walls for coastal communities, better newer bridges and roads. Nature will always show man how puny we are in her fury.
It is the height of hubris to believe that throwing tens of trillions of dollars at reducing a few parts per million CO2 is better spent than using that money for infrastructure hardening. That kind of climate change thinking only warms the hearts of witch doctors, charlatans, and George Soros and his ilk.

January 29, 2017 7:40 pm

Minor nit Figure 1 is a pinewood derby kit not a soap box derby kit

January 29, 2017 8:59 pm

Dr. Tim Ball:
I have developed a simple model which perfectly matches the rise in average global temperatures between 1975 and 2011 (awaiting necessary data for later years).
Would you be willing to comment on the model and its conclusions?
(It explains why current modeling attempts are doomed to failure).
It can be found by a Google search for “Climate Change Deciphered”

January 29, 2017 9:07 pm

Facts are stubborn things, but computer models are more pliable.

January 29, 2017 9:38 pm

The plethora of models the IPCC has been involved with indicates that a lot of guesswork has been involved. Their modeling work can have no credibility until they narrow it down to the one model that adequately represents the Earth’s climate system.
In their first report the IPCC published a wide range for their guesses as to what the climate sensitivity of CO2 really is. There is nothing more important for them than to make an accurate determination of the climate sensitivity of CO2. In their last report the IPCC published the exact same values, meaning that after more than two decades of effort they have learned nothing that would allow them to narrow their range of guesses one iota. Of course, if the climate sensitivity of CO2 is really zero then there is nothing for the IPCC to measure, but the IPCC will not even consider that idea for fear of losing their funding. I think they should lose their funding anyway because of a total lack of performance.

Chris Hanley
January 29, 2017 10:54 pm

“Cooking the books would be a concern if that was what scientists were apt to do. But I don’t think scientists generally operate that way …” Tom Wigley PBS interview 2000.
Something I have just discovered through Wood for Trees, which has probably been discussed here before, is the adjustment to the HadCRUT record from HadCRUT3 to HadCRUT4:
The change is mainly due to a smoothing away of the 1945 – 1975 dip in the Southern Hemisphere record to bring it in line with the GISS record.
The ’45 – ’75 dip was always problematic to the narrative (the coming Ice Age etc.) but explained away by the supposed cooling effect of sulphate aerosols from industrial activity, controlled after ~1980.
But if the post-war warming was mainly due to well-mixed human CO2 emissions the cooling effect of aerosols would be mainly in the NH whereas HadCRUT3 was showing the opposite, the SH cooling more than the NH.
Voila, fifty year old data has now been ‘corrected’.

Wim Röst
Reply to  Chris Hanley
January 30, 2017 1:32 am

“The change is mainly due to a smoothing away of the 1945 – 1975 dip in the Southern Hemisphere record to bring it in line with the GISS record.
The ’45 – ’75 dip was always problematic to the narrative (the coming Ice Age etc.) but explained away by the supposed cooling effect of sulphate aerosols from industrial activity, controlled after ~1980.”
WR: ‘smoothing away of the 1945 – 1975 dip’. In combination with lowering the 1880 temperatures and the emphasized rise in modern temperatures… In Holland we say: “Aan de vruchten kent men de boom”, which can be translated as something like: “The fruits tell what type of tree it is”.

Reply to  Chris Hanley
January 30, 2017 4:13 am

The aerosol data for the cooling period ~1940 to ~1975 is false. See my post here:

January 30, 2017 12:24 am

We don’t understand all the variables, nor do we even know what all the variables are.
We don’t have sufficient data, nor can we guarantee the data is accurate.
Yet still the model outcomes are accepted as correct, and the science is settled.

Leo Smith
January 30, 2017 12:54 am

The scientists were determined to achieve the objective of the IPCC, that is, to prove ‘scientifically’ that human CO2 was causing global warming.
Er, no. That is not the objective of the IPCC. The IPCC takes for granted that humans cause catastrophic global warming: their objective is to ignore any different view in order to pretend to be describing just how catastrophic it is…
“The role of the IPCC is to assess on a comprehensive, objective, open and transparent basis the
scientific, technical and socio-economic information relevant to understanding the scientific basis of
risk of human-induced climate change, its potential impacts and options for adaptation and mitigation.”


January 30, 2017 2:06 am

Dr Ball – please send your whole article to Prince Charles…

Reply to  sherlock1
January 30, 2017 3:57 am

PC lacks the intellectual rigour needed to evaluate Dr Ball’s post. But he should send it anyway, if there is a mailing address that will work.

Peta from Cumbria, now Newark
January 30, 2017 2:49 am

Because Climate Change is a Chimera – all things to all people and universally to be feared.
Nobody can even define Climate.
Sorry – it’s a Coupled Chaotic Non-linear blah blah.
Coupled to what?
It’s talked about as a global thing, all in one, all-encompassing – so what’s it coupled to?
Just like one end of a brick is ‘coupled’ to the other end.
But wait, we’ve now created two parts to the brick, left and right, so there’s more than one ‘brick’, hence more than one climate. That gives the lie to Global temperatures and Global Climate.
Chaotic -OK.
But don’t they see that that means it’s unpredictable? It’s never in the same state twice, or if it is, it occurs at random and unpredictable times. Yet they blunder right on and try to predict it. Am *I* going crazy here?
Even around here folks talk about linear to mean exclusively ‘straight line’
Sorry, but straight lines are a very particular case of linearity – sines, cosines, quadratics, logs and exponentials are linear, meaning you can calculate them.
A non-linear system contains singularities – effectively attempts to divide by zero.
And yes, there are non-linearities, the main one being that heat/energy transfer is always from hot to cold – just like a diode, the semiconductor diode being an almost perfect example of a non-linear device.
And yet climate models routinely trample that by saying the cold sky can send heat to the warm dirt.
They have invented negative energy. THAT is the non-linearity.
In a linear universe, you would have negative energy and time could go backwards.
But you don’t, and it doesn’t.

tony mcleod
January 30, 2017 4:47 am

“Little Alice fell
down the hole,
bumped her head
and bruised her soul”

Stephen Greene
January 30, 2017 5:26 am

What really bothers me is the number of ‘scientists’ who know that much of it is a lie and continue to cover up and ride the gravy train. Liberals have no morals. Well, severely lacking anyway. The number of Liberalism-associated pathologies is growing by leaps. Can you see the media reporting on their own DSM-characterized disorders? I doubt it!

Johann Wundersamer
January 30, 2017 5:57 am

“It is likely (66-100%) that since about 1950 the number of heavy precipitation events over land has increased in more regions than it has decreased. (A meaningless comment).”
I don’t think that comment is meaningless when comparing precipitation in the 1970s to the distribution of precipitation over land and through the year nowadays.
From the 1970s in the EU I remember the ‘Salzburger Schnürlregen’ (days of steady, string-like drizzle).
Nowadays, since controls of atmospheric particulate matter – also known as particulate matter (PM) or particulates – e.g. by the EPA and local authorities, we don’t have that ‘Salzburger Schnürlregen’ anymore.
Instead we have clouds crossing the landscapes without rainfall – until for two consecutive days it pours and the streets are flooded.


January 30, 2017 6:04 am

The term “feedback” has been used in several contexts both by the IPCC and its supporting organizations and by many of their critics, positive and negative, pro and con.
Feedback appears in the IPCC reports both directly, in defining terms such as equilibrium climate sensitivity, and in the computer models. I have less time than Dr. Ball, but there are many references from outsiders to that effect.
The IPCC usage is apparently derived from basic control theory (see https://en.wikipedia.org/wiki/Control_theory ; it’s adequate). In electronics (the most often used example) even simple control theory applies. Almost all circuits feed back either a direct voltage or, sometimes, a signal coupled magnetically through a transformer. But the feedback loop is not just another input to the system; it is designed to directly control the process in an explicit way – turn a valve, open a damper, limit a signal, change a frequency, etc.
The climate system does not have direct feedback controls, and basic control theory doesn’t really apply. We do not know the domains (as you might call them) or exactly how they interact. Most of the ones I know about – cloud formation height, wind speeds, moisture gradients, pressure gradients, and many others – are thermodynamic interactions. There is no single, direct feedback control in any of the interactions. The interactions are all continuous and direct, and respond to where and how the imbalances of energy in the system move toward an equilibrium – without ever reaching it, because the system is constantly perturbed by the daily rise and fall of energy from the sun, gravitational effects causing tides, water currents, wind, etc.
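The distinction drawn here – an explicit feedback controller versus a passive thermodynamic relaxation – can be sketched in a few lines. All names and parameter values below are hypothetical illustrations, not climate-model code:

```python
def thermostat_step(temp, setpoint, gain, dt=1.0):
    # Explicit feedback control in the control-theory sense: an error
    # signal directly drives an actuator (heater power).
    error = setpoint - temp
    heater_power = gain * error          # proportional controller
    return temp + heater_power * dt

def passive_step(temp, surroundings, k, dt=1.0):
    # No controller: heat simply flows from hot to cold (Newtonian
    # cooling) - the continuous thermodynamic interaction described
    # above, with nothing explicitly steering it to a target.
    return temp + k * (surroundings - temp) * dt

t = 15.0
for _ in range(50):
    t = thermostat_step(t, setpoint=20.0, gain=0.2)
print(round(t, 2))  # the controller drives temp to the setpoint: 20.0
```

The controller converges on its designed setpoint; the passive system merely drifts toward whatever its surroundings happen to be – which is the comment's point about why the engineering sense of "feedback" maps poorly onto climate.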
I think this is a h-ll of a lot more complicated than anyone appears to be thinking. I’m with Dr. Lamb: data, data, data! I am not a mathematician, but I do know that many differential equations can be solved. Many partial differential equations, however, cannot be solved analytically; non-linear partial differential equations can only be solved using numerical approximations, which is what the current climate models use. They’ve shown their inadequacy.
We’re all too old to see an end to this, so good luck to x^nth generation.
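On the numerical-approximation point: the general route to an analytically unsolvable PDE is a discretized approximation, the same idea (on an enormously larger scale) underlying climate models. A minimal sketch using the 1-D heat equation as a stand-in; all parameters are illustrative:

```python
def heat_step(u, alpha, dx, dt):
    # One explicit finite-difference step of u_t = alpha * u_xx.
    # Stability requires dt <= dx**2 / (2 * alpha).
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = u[i] + alpha * dt / dx**2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    return new

u = [0.0] * 21
u[10] = 100.0                       # an initial hot spot
for _ in range(200):
    u = heat_step(u, alpha=1.0, dx=1.0, dt=0.25)
print(round(max(u), 2))             # the spike has diffused outward
```

Push the time step past the stability limit (e.g. dt = 0.6 here) and the scheme blows up – a small example of why numerical approximations to such systems demand care, and why their adequacy cannot simply be assumed.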

Reply to  philohippous
January 31, 2017 6:49 am

It all started with the Hansen paper that defined the issue back in 1984: J. Hansen et al., AGU Geophys. Monogr. 29, 130 (1984). This paper was important because it revised the climate sensitivity parameter (one of those “dials” in the computer models) from a low value to one that produces much scarier “projections of the future.” This paper is a real laugh to a real engineer. It combines electrical feedback equations with some bald, unproven assertions about the effect of changing a number of parameters, and then throws it all together (in a calculation!) to come up with a new sensitivity number.
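The feedback algebra being mocked is easy to reproduce. In the electrical-gain form, a no-feedback warming ΔT₀ is amplified to ΔT₀ / (1 − f), so modest changes in the assumed summed feedback factor f produce dramatically larger sensitivities. The numbers below are illustrative, not Hansen's actual values:

```python
def amplified_sensitivity(no_feedback_dT, f):
    # Classic electrical feedback gain applied to climate:
    # dT = dT0 / (1 - f), where f is the summed feedback factor.
    if f >= 1.0:
        raise ValueError("f >= 1 implies a runaway response")
    return no_feedback_dT / (1.0 - f)

# A ~1.2 C no-feedback warming at doubled CO2, amplified by
# successively larger assumed feedback factors:
for f in (0.0, 0.5, 0.7):
    print(f"f = {f}: dT = {amplified_sensitivity(1.2, f):.1f} C")
# f = 0.7 turns 1.2 C into 4.0 C - the "scarier" number comes
# entirely from the assumed value of the dial f.
```

The gain 1/(1 − f) diverges as f approaches 1, so the conclusion is extremely sensitive to exactly the parameter the paper asserted rather than measured.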

Johann Wundersamer
January 30, 2017 7:31 am

Gotcha, Dr. T. –
They’re trying to defend their ‘climate change’ prediction models, which have predicted wrong for 30 years:
Many important modes of climate variability and intra-seasonal to seasonal phenomena are reproduced by models, with some improvements evident since the AR4. (That meaningless relative measure, improving on virtually nothing is still virtually nothing.)
All’s well that ends well –
if models are needed, then new ‘climate change’ models must be built from scratch.

January 30, 2017 9:54 am

Great article
Remember the Climate Science Oath:
(1) The future climate is known with great accuracy
(2) The past climate keeps changing.
Note: It requires a PhD to understand the math.

January 30, 2017 12:04 pm

Where’s Nick and Toneb and the gang today on this thread? Aren’t they big old IPCC report /modeling supporters?

Joel Snider
January 30, 2017 12:19 pm

‘The little evidence we have indicates we are safer to do nothing.’
Regardless of the ultimate influence of humans on climate change – and to my eye, the difference between the skeptics and the (ostensibly honest) warmists is fairly small, basically a restricted debate over long-term consequences – we do not have REGULATORY power.
‘Don’t wait for it to happen… just watch what DOES happen’ (Sean Connery)
THEN take appropriate action to deal with the real world. Don’t try to control the real world because you can’t.

January 30, 2017 1:44 pm

How can something be true when there are hundreds of different pieces of “evidence” for it? The biggest self-deception of most scientists investigating the causes of global warming and climate change is the effort spent on models that are based on these curious assumptions.
I do not know how many of you will read this, but I know that all the previous “evidence” for the causes of these phenomena is totally wrong and not based at all on the laws of nature and knowledge of the structure of the universe.
Consider whether what I list here is realistic:
THE MAIN CAUSES OF CLIMATE CHANGE on all planets are the mutual relations and influences of the planets and the sun. A little more clearly: the causes are changes in the magnetic field.
Accept this; but if you do not know how, I offer cooperation and correspondence – of course, only to anyone who feels they could reach the result. I have written proof!!

January 30, 2017 5:57 pm

The IPCC is a political organization designed to politicize “science ” to facilitate a globalization agenda ,
increase taxes and funnel $Billions to climate con-men .
The IPCC’s purpose was to package a fraud using climate models as the “evidence”. The climate models have been successful at disproving the hypothesis advanced by the IPCC and by those who do not even pretend to follow the basic scientific method taught to high school students.
When the perpetrators knew the jig was up they did a quick name change (climate change ) to hide the con further .
Climate change is just repackaged global warming garbage pushed by Al Gore and mini Al Dicraprio .
Climate changes , it’s warming and we should be very happy it is . Nobody vacations in Antarctica .

January 31, 2017 11:13 am

I truly respect your work, Dr. Ball, but must pick this nit: That is not a soap box kit. It is a BSA pinewood derby car kit. http://www.scoutstuff.org/bsa/crafts/pinewood-derby/vehicle/official-pinewood-derby-car-kit.html#.WJDfKvJ8O48
