Video follows. I had at first thought that this press release from NASA Goddard was telling me that they had taken the data from their new Orbiting Carbon Observatory and put it into a model that used wind data so that distribution and mixing could be tracked. Seems sensible, right? But no, they’ve created a model that projects such things years before the OCO even made it into orbit, while touting that they have it. The model “simulates May 2005 to June 2007.” They write (bold mine):
But the simulation – the product of a new computer model that is among the highest-resolution ever created – is the first to show in such fine detail how carbon dioxide actually moves through the atmosphere.
Uh, sorry, no. Model simulations aren’t actual movements; you need hard tracking data for that. One wonders what sort of science mindset exists where they can substitute modeled output for actual data and publish a press release like this with a straight face. Hopefully, somebody at NASA Goddard will actually use the OCO data instead of model data to make claims. The high-resolution model itself has merit, but without hard atmospheric CO2 data put into it, like we now have from the new OCO, it really is little more than a model with guesswork data.
NASA Computer Model Provides a New Portrait of Carbon Dioxide
An ultra-high-resolution NASA computer model has given scientists a stunning new look at how carbon dioxide in the atmosphere travels around the globe.
Plumes of carbon dioxide in the simulation swirl and shift as winds disperse the greenhouse gas away from its sources. The simulation also illustrates differences in carbon dioxide levels in the northern and southern hemispheres and distinct swings in global carbon dioxide concentrations as the growth cycle of plants and trees changes with the seasons.
Scientists have made ground-based measurements of carbon dioxide for decades and in July NASA launched the Orbiting Carbon Observatory-2 (OCO-2) satellite to make global, space-based carbon observations. But the simulation – the product of a new computer model that is among the highest-resolution ever created – is the first to show in such fine detail how carbon dioxide actually moves through the atmosphere.
“While the presence of carbon dioxide has dramatic global consequences, it’s fascinating to see how local emission sources and weather systems produce gradients of its concentration on a very regional scale,” said Bill Putman, lead scientist on the project from NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “Simulations like this, combined with data from observations, will help improve our understanding of both human emissions of carbon dioxide and natural fluxes across the globe.”
The carbon dioxide visualization was produced by a computer model called GEOS-5, created by scientists at NASA Goddard’s Global Modeling and Assimilation Office. In particular, the visualization is part of a simulation called a “Nature Run.” The Nature Run ingests real data on atmospheric conditions and the emission of greenhouse gases and both natural and man-made particulates. The model is then left to run on its own and simulate the natural behavior of the Earth’s atmosphere. This Nature Run simulates May 2005 to June 2007.
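For what it’s worth, the distinction NASA is drawing here — a “Nature Run” that is initialized from real data and then runs freely, versus a run that continuously assimilates observations — can be sketched in a few lines of toy code. Everything below is invented for illustration; GEOS-5 itself is vastly more complex:

```python
def step(x, forcing=0.0):
    """One toy model timestep: damped state plus external forcing.
    A stand-in for the full atmospheric physics of a model like GEOS-5."""
    return 0.95 * x + forcing

def nature_run(x0, forcings):
    """Initialize from real conditions (x0), then let the model run freely."""
    states = [x0]
    for f in forcings:
        states.append(step(states[-1], f))
    return states

def assimilating_run(x0, forcings, observations, nudge=0.3):
    """Same model, but continuously nudged toward observations
    (a crude stand-in for data assimilation)."""
    states = [x0]
    for f, obs in zip(forcings, observations):
        x = step(states[-1], f)
        x += nudge * (obs - x)   # pull the model state toward the measurement
        states.append(x)
    return states
```

The press release describes the first kind of run; feeding real OCO-2 retrievals into the model would correspond to the second.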
While Goddard scientists have been tweaking a “beta” version of the Nature Run internally for several years, they are now releasing this updated, improved version to the scientific community for the first time. Scientists are presenting a first look at the Nature Run and the carbon dioxide visualization at the SC14 supercomputing conference this week in New Orleans.
“We’re very excited to share this revolutionary dataset with the modeling and data assimilation community,” Putman said, “and we hope the comprehensiveness of this product and its ground-breaking resolution will provide a platform for research and discovery throughout the Earth science community.”
In the spring of 2014, for the first time in modern history, atmospheric carbon dioxide – the key driver of global warming – exceeded 400 parts per million across most of the northern hemisphere. Prior to the Industrial Revolution, carbon dioxide concentrations were about 270 parts per million. Concentrations of the greenhouse gas in the atmosphere continue to increase, driven primarily by the burning of fossil fuels.
Despite carbon dioxide’s significance, much remains unknown about the pathways it takes from emission source to the atmosphere or carbon reservoirs such as oceans and forests. Combined with satellite observations such as those from NASA’s recently launched OCO-2, computer models will help scientists better understand the processes that drive carbon dioxide concentrations.
The Nature Run also simulates winds, clouds, water vapor and airborne particles such as dust, black carbon, sea salt and emissions from industry and volcanoes.
The resolution of the model is approximately 64 times greater than that of typical global climate models. Most other models used for long-term, high-resolution climate simulations resolve climate variables such as temperatures, pressures, and winds on a horizontal grid consisting of boxes about 50 kilometers (31 miles) wide. The Nature Run resolves these features on a horizontal grid consisting of boxes only 7 kilometers (4.3 miles) wide.
The Nature Run simulation was run on the NASA Center for Climate Simulation’s Discover supercomputer cluster at Goddard Space Flight Center. The simulation produced nearly four petabytes (million billion bytes) of data and required 75 days of dedicated computation to complete.
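The figures quoted above are easy to sanity-check with back-of-envelope arithmetic (spherical-earth approximation; the square-box geometry here is a simplification of however the model actually tiles the globe):

```python
import math

EARTH_RADIUS_KM = 6371.0
surface_km2 = 4 * math.pi * EARTH_RADIUS_KM**2   # ~5.1e8 km^2

# Horizontal grid-box counts at the two resolutions quoted above
boxes_50km = surface_km2 / 50**2
boxes_7km = surface_km2 / 7**2
ratio = boxes_7km / boxes_50km
print(f"{boxes_50km:.2e} boxes at 50 km, {boxes_7km:.2e} at 7 km, ratio ~{ratio:.0f}x")
# (50/7)^2 is ~51x; the quoted "64x" implies a slightly coarser
# ~56 km baseline, i.e. 8x finer linear spacing.

# Sustained output rate for 4 PB over 75 days
rate_MB_s = 4e15 / (75 * 86400) / 1e6
print(f"~{rate_MB_s:.0f} MB/s sustained output")
```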
In addition to providing a striking visual description of the movements of an invisible gas like carbon dioxide, as it is blown by the winds, this kind of high-resolution simulation will help scientists better project future climate. Engineers can also use this model to test new satellite instrument concepts to gauge their usefulness. The model allows engineers to build and operate a “virtual” instrument inside a computer.
Using GEOS-5 in tests known as Observing System Simulation Experiments (OSSE) allows scientists to see how new satellite instruments might aid weather and climate forecasts.
“While researchers working on OSSEs have had to rely on regional models to provide such high-resolution Nature Run simulations in the past, this global simulation now provides a new source of experimentation in a comprehensive global context,” Putman said. “This will provide critical value for the design of Earth-orbiting satellite instruments.”
For detailed views of various parts of the world, visit:
www.nasa.gov/content/goddard/a-closer-look-at-carbon-dioxide
For more information about NASA’s Orbiting Carbon Observatory-2, visit:
Back in the ’70s, when Littlewoods Football Pools (in the UK) was the biggest thing, before the National Lottery, I had to write six Fortran IV programmes for my exams (on an ICL 1900 series!!!). One of my programmes analysed the last 12 years of UK football league results (input on paper tape!) and another ‘modelled’ the likely results for the current year’s fixture list: I’d written a Pools Prediction Program Model!
Now, according to the ‘model’ (and they’re infallible, no?) I should have been able to retire with a fortune in the bank after modelling the results of the season: I won a total of £1.50 (about £20 today) in the season!!
Models? Phtt. Of course, it might have had something to do with my programming skills…
Hi Harry,
I am an old Singer System 10 techo – and your comments on modelling hypothetical concepts really made me chuckle. It reminds me of the old Peter Cook movie “The Rise and Rise of Michael Rimmer” – it said it all. Well worth digging it up…
this makes me wonder about how the data from OCO-2 is being ‘processed’
http://www.caltech.edu/content/checking-first-data-oco-2
“The data retrieval method that Yung and his colleagues designed for OCO-2 compares the light spectra collected by the satellite to a model of how light spectra would look—based on the laws of physics and knowledge of how efficiently CO2 absorbs sunlight. This knowledge, in turn, is derived from laboratory measurements made by Caltech professor of chemical physics Mitchio Okumura and his colleagues at JPL and the National Institute of Standards and Technology.”
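The “compare observed spectra to modeled spectra” retrieval described in that quote is, at its core, curve fitting: adjust the assumed CO2 column until the modeled spectrum best matches the measurement. A deliberately crude sketch — all absorptivities and numbers below are invented, and the real OCO-2 retrieval fits many more parameters:

```python
import math

# Made-up absorption cross-sections at a few wavelengths across a CO2 band
ABSORPTIVITY = [0.10, 0.45, 0.80, 0.45, 0.10]   # hypothetical, per unit column
I0 = 1.0                                        # incident sunlight (normalized)

def forward_model(co2_column):
    """Beer-Lambert transmission: what the spectrum *should* look like
    for a given CO2 column amount."""
    return [I0 * math.exp(-co2_column * a) for a in ABSORPTIVITY]

def retrieve(observed, candidates):
    """Pick the CO2 column whose modeled spectrum best matches the
    observation (least squares over the band)."""
    def misfit(c):
        model = forward_model(c)
        return sum((m - o) ** 2 for m, o in zip(model, observed))
    return min(candidates, key=misfit)

# Simulate an observation made with a "true" column of 1.7 units
observed = forward_model(1.7)
candidates = [i / 100 for i in range(400)]
print(retrieve(observed, candidates))   # → 1.7 (recovers the true column)
```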
Most satellite measurements are highly indirect. Gravity measurement, for example, is inferred from subtle changes in the satellite’s orbit. How satellite radar altimetry copes with waves to arrive at a millimeter precision is beyond me.
Thank you for providing the link. I cannot say that I understand how this calibration and final concentrations will be arrived at, but I am anxious to see the concentration gradient in the column, if indeed there is such a gradient. I can’t wait.
” I am anxious to see the concentration gradient in the column, if indeed there is such a gradient.”
If indeed. I’d be interested too.
Scientists have made ground-based measurements of carbon dioxide for decades and in July NASA launched the Orbiting Carbon Observatory-2 (OCO-2) satellite to make global, space-based carbon observations. But the simulation – the product of a new computer model that is among the highest-resolution ever created – is the first to show in such fine detail how carbon dioxide actually moves through the atmosphere.
The operative words here are “but” and “actually”.
They really do believe the simulation to be the ground truth or gold standard. And observational data to be of inferior status.
It’s the un-Renaissance.
Does anyone know if this satellite data will affect the price of fish? What difference will detailed knowledge of total column CO2 have on anything?
Other than overfeeding a bunch of trough dwelling software fiddlers? absolutely nothing at all
Well it makes the colors change and swirl around, so you can watch the simulation instead of dropping a tab of LSD.
“We’re very excited to share this revolutionary dataset with the modeling and data assimilation community”
A tad too excited. It’s not data, you fools.
It’s a combination of measured data and the model physics, as Steve Mosher alludes to, with continuous assimilation of weather data and CO2 source data. The flow patterns are probably pretty accurate, kind of like weather forecast model flows of total integrated water vapor, which have been done for many years. It’s just a visualization of how point atmospheric sources of anything end up flowing around the world…the new thing here is their estimates of CO2 emissions from various sources. Very cool visually, but of little consequence to the global warming issue.
Roy
It’s crap. Your words: “It’s just a visualization”, “the new thing here is their estimates of CO2 emissions from various sources”, and lastly your really accurate point, which says it’s crap: “of little consequence to the global warming issue”.
…but it sure is pretty!!
I wonder how much this crap cost?
I know super computers and their CPU cycles cost a lot. People using the super computers for fun but frivolous stuff, should not be paid a salary, in fact they should be charged for the cycles they used.
Mr. Layman here.
I think the “fear” is that it will be “Gerbered” into something it is not. No matter what the satellite later shows, this model of “Carbon Pollution” will be twisted to try to hype-notize the public into supporting anyone who will continue the war on carbon.
It’s a shame that something that may give an in-the-ball-park projection of anything is viewed with suspicion just because it came from a computer.
Thank you, Hansen and the rest of CAGW Climate “Science”!
Typo
“I think the “fear” is that it will be “Gerbered” into something it is not.”
Should be:
“I think the “fear” is that it will be “Grubered” into something it is not.”
(They may feed us BS because they think we’re stupid, but we’re not babies. Babies grow up.)
“Gerbered” = Homogenisation. So very apt.
Roy – do you know how many actual CO2 measurements went into this simulation. Did they do any out of sample measurements. How would they measure the CO2 over remote areas of the globe?
Did I miss it, or did they forget the triumphal indication that the modeled results at the end of 2007 matched the actual measured results?
Oh, I forgot! That’s not the point of this exercise, is it?
Three things:
1- I love that positioning the Americas on the left gives the initial visual impression that it’s all North America’s fault – as my eye observes it. I vote we have them put Europe on the left. We all know Europe is the real reason climate models do what they do anyways. Don’t question it. Just make it happen.
2 – There’s always a lot of chatter on this and other blogs about Australia and their CO2 political issues, yet the model says they don’t have any. Can they finally put that all to bed, now that a model says so? It seems their resources would be better spent saving the world from their natural wildfires.
3 – I’m glad the oceans don’t emit any CO2 in the model; that would make it unrealistic.
I like it – it’s a very interesting visualization. They need to have fake data like this so that they can validate and exercise the ground stations in preparation for the real data. I would have preferred that a large sign was overlaid onto the video saying “not real data” but perhaps that would have cost too much 🙂
The choice of colours was disappointing. Given that CO2 a) is a so-called “greenhouse gas” and b) causes vegetation to green-up, the obvious choice would have been to plot high CO2 concentrations in green rather than red. Perhaps a political decision?
Note as well that in this simulation the CO2 appears to go to zero during the summer, but of course CO2 goes to “below average” during the summer. So they could have shown a more standard colour overlay where red means “CO2 below normal” and green means “CO2 above normal.”
There’s nothing quite like “high res” garbage out. No more of that low-res garbage out stuff.
Yes sirree, nothing says high quality, high-res garbage output from the boyz at GISS like another computer simulation without any validation against real world data.
I just noticed the date on the graphic: April 1st. (2006 04 01) – and eight years ago?
I have been looking in my crystal ball and it tells us….
It tells me that this new satellite will provide a new source of alarms concerning CO2 so that the climateers can fabricate CO2 data, just as the altimetry satellite has enabled the University of Colorado to fabricate a sea level rise and the Grace satellite has enabled the fabrication of ice volume loss in Antarctica.
See a model, fund a model
GIGO.
That is all.
With the use of false colours in a visualization anything can be made to look dramatic.
Are they modeling natural CO2 sources here?
From the video – “CO2 is the most important greenhouse gas affected by human activity”.
I guess if he added,
“…and human activity is only a tiny, teeny bit of the total CO2 produced on Earth, and we have no idea why only human-generated CO2 is not absorbed back into the system, but we like to make that claim anyway.”
That would put kind of a bummer spin on the video, like what in the world are they doing at NASA, are they idiots? No they are smart, but are like kids with toys who need to have their electronic toys taken away from them so they can go out in the world and do something productive.
Sorry, no more super computers for NASA until they get a clue.
CO2 is, of course, invisible. So what color do you use to illustrate in a simulation move? How about yellow, orange, red and brown, colors commonly used in the media for warm, hot, fire and dirty. Ah yes, that will impress the viewer that CO2 is a pollutant; a bad thing. It may seem unimportant to you, but it delivers a message to the casual viewer.
BINGO!
If the Arctic isn’t really ice free, let’s make it look like it should be.
This is what I want climate scientists to research: who is causing the Earth to tilt. Over here in North America it’s getting colder and colder and the days shorter and shorter. It sucks, don’t ski, not into it.
I want answers climate science. Who is tilting the earth, how are they doing it???? You’ve saved us from ozone, you are saving us from CO2, you have stopped global warming, get on the ball now and stop the earth from tilting. Snap to it. Move!
PS. I have a model of how the earth tilts if you want to borrow it.
The main thing that all that beneficial CO2 is doing is this:
http://www.csiro.au/Portals/Media/Deserts-greening-from-rising-CO2.aspx
“Increased levels of carbon dioxide (CO2) have helped boost green foliage across the world’s arid regions over the past 30 years through a process called CO2 fertilisation, according to CSIRO research.”
Sunshine +H2O +CO2 = Sugars/Food + O2
How absurd it is to argue with humans that think they know the perfect temperature of the earth and define it in terms of the temperature from 150 years ago. The debate is over how “bad” it is to exceed that perfect temperature by a certain amount. The debate (which was supposed to be settled a long time ago) has gone on for decades.
There are all sorts of arguments for why increasing CO2 has already exceeded dangerous levels to life on this planet and it will only get worse.
Plants and creatures doing the actual living on our planet, who can’t read models say otherwise.
What would happen if we threw away global climate models?
Many scientists might be forced to look at what is actually going on in the real world.
College students in Colorado, actually living on the planet have said otherwise.
CO2 is good, they know it. Added already to their plant growing greenhouse. No need for modeling.
They learned it from their high school science teachers. This is where the problem is, high school science teachers.
This model, GEOS-5, is nothing new. Looks like it was developed at Oak Ridge National Lab (ORNL) a decade or so ago. I’m guessing that the years of the simulation runs (2000, 2001, 2005, 2007 etc) also represent the development years. So it’s hardly a “new portrait of carbon dioxide” from OCO-2.
http://www.climatemodeling.org/~forrest/pubs/abstracts/Erickson_AGU_20091218.html
But I’m guessing they could use GEOS-5 as a visualization tool as part of the processing and integration with atmospheric plume tracking software (data assimilation etc). But nothing seems to be mentioned about that.
I am curious why we’re not seeing any of the data collected since July 2014. There have been various news releases about the first data downloads, and test data being collected. The latest news blurb, Oct 2, sounded like there’s at least 90 days of data.
So, where are the promised plume tracks leading to sources and sinks of CO2?
Clicking on the OCO-2 “ProductInfo” link (http://oco.jpl.nasa.gov/science/ProductInfo/) we are informed that the data “will start coming in 45 days after the launch”.
But this is all in the future tense, so it is seriously outdated.
Very little OCO-2 product data available, especially the latest 3km gridded xCO2 column data and the ‘sources and sinks’ data are both “N/A”.
Are they having technical problems, or is the data being suppressed/ignored for other reasons?
Yes. And if they release data and something is wrong there will be “you see the bastards ….”
There is no hurry. Why can’t we just wait? I am just tired, tired, tired of “the data are being suppressed/ignored!!!!!!”
So what if they are having technical problems. You never had any? Are you actually doing anything but sitting in your air conditioned office?
We are about to get very important data. Give them the time to check and double check and triple check that everything is correct. Or are you and so many here on this post just waiting to get another hockey stick to bitch at.
> There is no hurry. Why can’t we just wait.
Actually I’m a very patient fellow, but am very interested in tracking CO2 plumes. (I have tracked plumes on global scale for 30 years). So I am very disappointed because they said they would release data after 45 days, but are holding back for some reason.
Everything is fine they say. So where is the promised data?
Maybe it’s just website needs to be updated. (We know the US Govt is not very handy at maintaining websites).
Until we have many years, or even decades, of CO2 data to compare and do correlations with other factors (Mauna Loa data, LandSat ChlA-vegetation data), it seems like we really won’t know what we are looking at in any meaningful way.
It may be rather presumptuous on my part, but I also question the overall design of the OCO-2. Why did they decide to track the short-wave CO2 absorption bands (1.61 µ and 2.04 µ), which require reflected sunlight for detection, thus imposing rather severe visibility constraints on the detection of CO2?
Why didn’t OCO use the long-wave CO2 bands (4.3 µ and 12-18 µ), which can be viewed directly against the 288K black-body earth-shine?
http://i58.tinypic.com/2ldul1z.png
Color Key:
Orange: O2 absorption ( 0.76µ reference used by OCO-2)
Yellow: CO2 absorption (short-wave) used by OCO-2
Green: CO2 absorption (long-wave)
Blue: H2O absorption (6-7µ long-wave)
Disadvantages of short-wave
1. Requires sunlight (daytime only)
2. Very small signals (compared to long-wave)
3. Channels shared with water vapor absorption bands
Advantages of long-wave
1. Already proven by GOES Channel 3 water vapor imagery
2. Available day and night
3. Stronger signal
4. Less obscured by sharing with H2O absorption (4.3µ is a “clear channel” for CO2)
My guess is that 4µ and 16µ imagers have already been tried for imaging CO2 and probably don’t show much CO2 (compared to H2O water vapor at 6µ). How does short-wave imagery improve the CO2 detection? Perhaps I’m overlooking something here.
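One way to quantify the day/night trade-off raised above is Planck’s law: a 288 K Earth emits essentially nothing near 1.6–2 µ, so those bands can only be observed in reflected sunlight, while 15 µ sits near the peak of the Earth’s own thermal emission. A quick check (standard physical constants; the 288 K figure is the effective surface temperature used in the discussion above):

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def planck(wavelength_m, temp_k):
    """Spectral radiance B(lambda, T) in W * sr^-1 * m^-3 (Planck's law)."""
    x = H * C / (wavelength_m * K * temp_k)
    return (2 * H * C**2 / wavelength_m**5) / math.expm1(x)

T_EARTH = 288.0

b_shortwave = planck(1.61e-6, T_EARTH)   # OCO-2 short-wave CO2 band
b_longwave = planck(15e-6, T_EARTH)      # thermal (long-wave) CO2 band

# The 288 K Earth emits roughly 1e8 times more per unit wavelength at
# 15 microns than at 1.61 microns -- so the short-wave bands carry
# negligible earth-shine and must be measured in reflected sunlight.
print(b_longwave / b_shortwave)
```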
Correction: One of the yellow circled OCO-2 bands (the left one) is wrong. The OCO-2 CO2 bands are 2.04 and 1.61 microns. So just shift the yellow ovals one band to the right.
Correction to the correction: Never mind. The yellow circles are correctly placed at 1.61µ and 2.04µ. Don’t know why they looked wrong last night.
Look at that poor fellow at 1.61µ. Completely buried in H2O noise. At least the CO2 signal at 15µ is partially riding above the big H2O noise.
Of course we clearly see why H2O dominates the so-called “greenhouse effect” (i.e. it explains atmospheric warming correctly, but that’s not how real ‘greenhouses’ are warmed).
CO2 seems to be a ‘bit player’ at most. But let’s see some global OCO-2 renderings and we can make some better judgments about that!
I suggest you look at some real spectra rather than the cartoon version you’re showing. With the spacing of the lines, the high-resolution spectra used are well able to differentiate between the spectra. The optical depths of the H2O spectral lines are about two orders of magnitude less than those of the CO2, and the lines are much sparser.
Ok, thanks for the info. So do you think short-wave IR was the right choice for OCO-2? If so, why?
Yes, for the reasons I gave at November 19, 2014 at 8:38 am
I think the O2 absorption band (not on your chart) read-out is the reference signal. Thus they needed downgoing Vis-IR.
The O2 line is at 0.76µ, the orange circle above. I think it’s to help discriminate CO2 from H2O and aerosols (the CO2 bands are shared with H2O).
They need down-welling visible light because the Earth doesn’t emit visible light on its own.
But the Earth does shine on its own in the longwave IR spectrum. That’s why the satellites can see IR on the dark side.
The 15micron band is way too opaque, the data would only be from the upper atmosphere.
True, but why not use the clear channel 4.3µ as a ‘reference’? Then it should be possible to do some further discrimination on the 15µ channel.
That’s exactly what they’re doing with the OCO-2 shortwave channels, which are both blocked by H2O, so they use O2 as the reference signal. But it’s only useful in full sunlight.
Having a ‘reference’ channel won’t permit you to see deeper into the atmosphere. The bands chosen allow penetration to the surface; also, at those wavelengths there is negligible thermal radiation from the surface and atmosphere to interfere with the signal. Of course, in the 15micron band there is considerable thermal radiation. As a practical consideration, using the 15micron band would require different optical materials and lenses.
The shortwave channels are not blocked by H2O.
The orbit is sun synchronous so there never is a night-time image.
First sentence of the video – oxymoron
“This will provide critical value for the design of Earth-orbiting satellite instruments.”
Still doubt the satellite data will support the simulation project?
If I understand correctly (from NASA’s cryptically worded article above) the modeling technique uses real surface and upper air wind data, integrated with gridded samples of some airborne substance.
Actually this technique has been used elsewhere and works rather well. For example the Cooperative Institute for Meteorological Satellite Studies (CIMSS) group at the University of Wisconsin calls this technique ‘morphing’ and uses it to track that ‘other’ well-known GHG, water vapor from samples of absorption lines in the microwave spectrum. At such low frequencies they can compute the total precipitable water (TPW).
So they morph the microwave data integrating with wind data to produce their model (MIMIC-TPW) which renders the movement of TPW over the earth. It’s not really a model in the sense that all of the data comes from external measurements, but the morphing is the secret sauce which makes it work. Works rather well as you can see yourself:
http://tropic.ssec.wisc.edu/real-time/mimic-tpw/global/main.html
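As I understand the CIMSS description, morphing amounts to advecting the last-seen field with the winds and overwriting cells whenever a fresh satellite swath arrives. A one-dimensional caricature — all values invented, and the real technique interpolates rather than shifting whole cells:

```python
def advect(field, wind_cells):
    """Shift the 1-D field by the wind displacement (whole cells here,
    purely for illustration)."""
    n = len(field)
    return [field[(i - wind_cells) % n] for i in range(n)]

def morph(field, wind_cells, swath):
    """One morphing step: advect the prior field with the winds, then
    overwrite the cells a new satellite swath actually observed."""
    field = advect(field, wind_cells)
    for i, value in swath.items():   # swath: {cell index: observed value}
        field[i] = value
    return field

# A blob of "total precipitable water" at cell 2, drifting one cell per step
field = [0.0, 0.0, 5.0, 0.0, 0.0, 0.0]
field = morph(field, wind_cells=1, swath={})         # no new pass: pure advection
field = morph(field, wind_cells=1, swath={0: 3.0})   # a new pass observes cell 0
print(field)   # → [3.0, 0.0, 0.0, 0.0, 5.0, 0.0]
```

Swap “total precipitable water” for “CO2 column” and, in principle, the same machinery applies to OCO-2 swaths.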
So I think that GEOS-5 is doing something similar to MIMIC-TPW. If GEOS-5 only used the real OCO-2 data it would be quite useful.
Did some more googling and found this 2010 NASA document, which shows NASA weather forecasters making flight forecasts for Sept 18, 2010, using both MIMIC and GEOS-5 as tools to assess global patterns for three ‘agencies’ (who are not specifically identified):
https://fcportal.nsstc.nasa.gov/grip/reporting/sites/fcportal.nsstc.nasa.gov.grip.reporting/files/TriAgencyForecastSeptember18.pdf
See the bottom of page 4, “SAL/Dust”, discussing dust plumes from the Saharan Air Layer, which is the spawning ground for many easterly tropical waves that tend to develop into Caribbean hurricanes. MIMIC-TPW and GEOS-5 are apparently used to track these kinds of patterns.
So hooking GEOS-5 to OCO-2 should be a ‘no-brainer’. Then why haven’t they released the products? Inquiring minds want to know!
I said:
” It’s not really a model in the sense that all of the data comes from external measurements, but the morphing is the secret sauce which makes it work. “
Of course MIMIC does perform a fair amount of ‘modeling’ in the sense of interpolating/extrapolating around the collected microwave return samples. In fact, CIMSS does not try to hide this and advises that MIMIC-TPW can produce some rather bizarre looking glitches in sparsely sampled areas.
Here on WUWT we tend to badmouth ‘models’ a lot. So it’s important to remember that all measurements require a ‘model’ of some kind. Some models are more useful (“reliable”) than others, of course:
http://wattsupwiththat.com/2014/10/14/yet-another-significicant-paper-finds-low-climate-sensitivity-to-co2-suggesting-there-is-no-global-warming-crisis-at-hand/#comment-1762165
http://wattsupwiththat.com/2014/10/21/john-cooks-claim-of-a-warmer-southern-ocean-is-proven-wrong/#comment-1769227
So MIMIC-TPW (like _any_ measuring device) always generates some errors, but these are fairly well understood. But, importantly, overall it is a very useful tool for tracking airborne parcels.
In fact, as you can read in this description of the MIMIC algorithm, it works exactly as intended for OCO-2. Namely the “sources” of TPW are convection cells and the “sinks” are precipitation cells. Morphing does the magical “modeling” in between. Hooking up to OCO-2 is merely a change in the data and definition of sources and sinks.
ftp://ftp.cira.colostate.edu/ftp/Kidder/201006221557515-16151.pdf
“Bottom-to-top” models?
There they go again.
They need the Marvel Comics treatment (courtesy of Callisto): “You? AGAIN?! *WHAM!*”
(Like the army mule, one needs to get their attention first.)