UCI oceanographers predict increase in phytoplankton by 2100

Machine learning Earth system model projects higher numbers in low-latitude regions

University of California – Irvine

Irvine, Calif. – A neural network-driven Earth system model has led University of California, Irvine oceanographers to a surprising conclusion: phytoplankton populations will grow in low-latitude waters by the end of the 21st century.

The unexpected simulation outcome runs counter to the longstanding belief by many in the environmental science community that future global climate change will make tropical oceans inhospitable to phytoplankton, which are the base of the aquatic food web. The UCI researchers provide the evidence for their findings in a paper published today in Nature Geoscience.

Senior author Adam Martiny, UCI professor in oceanography, explained that the prevalent thinking on phytoplankton biomass is based on an increasingly stratified ocean. Warming seas inhibit mixing between the heavier cold layer in the deep and lighter warm water closer to the surface. With less circulation between the levels, fewer nutrients reach the higher strata where they can be accessed by hungry plankton.

“All the climate models have this mechanism built into them, and it has led to these well-established predictions that phytoplankton productivity, biomass and export into the deep ocean will all decline with climate change,” he said. “Earth system models are largely based upon laboratory studies of phytoplankton, but of course laboratory studies of plankton are not the real ocean.”

According to Martiny, scientists traditionally account for plankton by measuring the amount of chlorophyll in the water. There is considerably less of the green stuff in low-latitude regions that are very hot compared to cooler regions further away from the equator.

“The problem is that chlorophyll is not everything that’s in a cell, and actually in low latitudes, many plankton are characterized by having a very small amount of it; there’s so much sunlight, plankton only need a few chlorophyll molecules to get enough energy to grow,” he noted. “In reality, we have had so far very little data to actually demonstrate whether or not there is more or less biomass in regions undergoing stratification. As a result, the empirical basis for less biomass in warmer regions is not that strong.”

These doubts led Martiny and his UCI colleagues to conduct their own phytoplankton census. Analyzing samples from more than 10,000 locations around the world, the team created a global synthesis of the key phytoplankton groups that grow in warm regions.

The vast majority of these species are very tiny cells known as picophytoplankton. Ten times smaller in diameter than the strains of plankton one would find off the California coast – and 1,000 times less voluminous – picophytoplankton are nonetheless great in number, making up 80 to 90 percent of plankton biomass in most warm regions.

The group built global maps and compared the quantity of biomass along the gradient of temperature, a key parameter, according to Martiny. Conducting a machine learning analysis to determine the difference now versus the year 2100, they found a big surprise: “In many regions there would be an increase of 10 to 20 percent of plankton biomass, rather than a decline,” Martiny said.

“Machine learning is not biased by the human mind,” he said. “We just give the model tons and tons of data, but they can help us challenge existing paradigms.”

One of the theories the team explored to explain the growth, with help from co-author Francois Primeau, UCI professor of Earth system science, had to do with what happens to phytoplankton at the end of their life cycle.

“When plankton die – especially these small species – they sit around for a while longer, and maybe at high temperature other plankton can more easily degrade them and recycle the nutrients back to build new biomass,” Martiny said.

Such ecosystem features are not easily taken into account by traditional, mechanistic Earth system models, according to Martiny, but they were part of the geographically diverse dataset the team used to train its neural network-derived quantitative niche model.
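The release gives no implementation details, but a "quantitative niche model" of the general kind it describes (environmental predictors in, biomass out) can be sketched as a small regression network. Everything below is illustrative: the data are synthetic and the architecture is a guess, not the authors' actual model.

```python
import numpy as np

# Synthetic stand-in data: picophytoplankton biomass (arbitrary units)
# versus sea-surface temperature. NOT the UCI dataset.
rng = np.random.default_rng(0)
temp = rng.uniform(0.0, 30.0, 500)                      # deg C
biomass = 5.0 + 0.4 * temp + rng.normal(0.0, 1.0, 500)

# One-hidden-layer regression network trained by full-batch gradient
# descent: a "quantitative niche model" in miniature (temperature in,
# biomass out).
x = temp[:, None] / 30.0          # scale predictor to [0, 1]
y = biomass[:, None]
W1 = rng.normal(0.0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(3000):
    h = np.tanh(x @ W1 + b1)              # hidden layer
    err = (h @ W2 + b2) - y               # residuals
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    gh = (err @ W2.T) * (1.0 - h ** 2)    # backprop through tanh
    gW1 = x.T @ gh / len(x); gb1 = gh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def predict(t_celsius):
    """Predicted biomass at a given temperature."""
    h = np.tanh(np.array([[t_celsius / 30.0]]) @ W1 + b1)
    return (h @ W2 + b2).item()
```

A real niche model would use many predictors (nutrients, light, salinity) and the observed biomass from the roughly 10,000 sampling stations, but the training loop is the same idea: the fitted curve simply recovers whatever relationship the data contain.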

Martiny said that this study, a follow-up to research published last summer, is further evidence of the diversity and resilience of phytoplankton.

“We could obviously let climate change get out of hand and go into completely uncharted territory, and then all bets are off,” he said. “But at least for a while, I think the adaptive capabilities in these diverse plankton communities will help them maintain high biomass despite these environmental changes.”

Joining Martiny and Primeau were fellow authors Pedro Flombaum, former UCI postdoctoral researcher and later visiting scholar in Earth system science (currently a professor at the University of Buenos Aires, Argentina), and Weilei Wang, UCI postdoctoral scholar in Earth system science. The study received support from the National Science Foundation’s Ten Big Ideas program and the U.S. Department of Energy Office of Biological and Environmental Research.

###

About the University of California, Irvine: Founded in 1965, UCI is the youngest member of the prestigious Association of American Universities. The campus has produced three Nobel laureates and is known for its academic achievement, premier research, innovation and anteater mascot. Led by Chancellor Howard Gillman, UCI has more than 36,000 students and offers 222 degree programs. It’s located in one of the world’s safest and most economically vibrant communities and is Orange County’s second-largest employer, contributing $5 billion annually to the local economy. For more on UCI, visit http://www.uci.edu.

Media access: Radio programs/stations may, for a fee, use an on-campus ISDN line to interview UCI faculty and experts, subject to availability and university approval. For more UCI news, visit news.uci.edu. Additional resources for journalists may be found at communications.uci.edu/for-journalists.

From EurekAlert!

64 thoughts on “UCI oceanographers predict increase in phytoplankton by 2100”

      • It’s an AI model, but it’s still a model: calibrated on short-term data and extrapolated 80 years outside that data.

        Now it may be possible to argue that the AI process is more objective than a politically partisan climate “scientist” on a mission “to save the planet”.

      • I’ve also read that extra CO2 has been shown to increase the individual size of the plankton, but no mention of that here.

        The only constant seems to be that these people are always “surprised” when they actually learn something, from either models or the real world. It probably says something about their mindset.

    • This is a good example of science with the blinders removed slightly. I’m surprised but pleased it even got published. The authors examined an unstudied, unaccounted for carbon sink that isn’t included in the climate models. Great to see some real ecoclimate research as opposed to fine tuning models of phenomena discovered over 100 years ago and studied nearly to death.

      Next Big Question: What Other Processes That We Haven’t Studied Are Affecting The Climate?

  1. Didn’t the Goracle tell us that the science was settled? So the science is not settled. What else have the climate models gotten wrong?

    • “…but of course laboratory studies of plankton are not the real ocean.” – article

      I can only say “No schist, Sherlock!” The ocean is real world stuff. Laboratories are not.

      Thanks for the article.

      • We need ocean models to analyze phytoplankton growth rates because it’s too hard to get oceanographers (who are devoting their lives to study….um….oceans) to procure liters of water at various latitudes, filter out the microorganisms, and perform microscopy and weighing on the samples.

        It is impossible that oceanographers are not already doing this ALL THE TIME.

        I guess this microorganism growth rate analysis done by models is for those oceanographers with aquaphobia.

      • The science is “settled”, with a probability approaching 1 as the frame of reference approaches 0.

        This reminded me of the generalist/specialist distinction:
        The specialist knows more and more about less and less until he knows everything about nothing.
        The generalist knows less and less about more and more until he knows nothing about everything.

  2. “Machine learning is not biased by the human mind,” he said. “We just give the model tons and tons of data, but they can help us challenge existing paradigms.”
    hear, hear, hear 😀

    • They are biased.

      Biased by the data presented to the learning algorithm. That data itself can contain human bias.

      Alternatively, the actual bias they follow is completely opaque to us. That in itself can contain many unknown risks.

      ‘Machine Learning’ is a crock, and has not improved much in the 30 years I’ve been studying and programming such systems.

      • Zig Zag has wandered astray.
        Machine learning is not a crock. Are you aware of neural network learning? That wasn’t around 30 years ago. I have been teaching computer science since the late 1960s.

        Are you claiming that in this particular case you know that the input data is bad?

        • In this ‘article’ there is no information about what data was collected; how it was preprocessed; what was used as input to the learning process; what the results were in terms of prediction accuracy vis-à-vis the learning data set; how many models were run; which algorithms were used… basically nothing. The only response I have to it is, ‘tell me more’.
          There is a huge difference between neural network algorithms and machine learning algorithms. 30 years ago neural networks were a concept. Today, with tools such as Keras and Python, the concepts are put into practice.
          That being said, Zig Zag is right to question the data which is being processed and whether it has been subject to human bias. The answer is, we don’t know, given the paucity of detail in this ‘pop-sci’ article.
          Frankly, I think that posting this article is a waste of time by WUWT. It accomplishes nothing. If I want to read shi£e, I would buy Greta’s book.
          It pains me to say it, but recently this site has degenerated into rabid political doggerel which achieves nothing.

          • “python”… ah, you included one stolen buzzword too many; now the truth is on the washing line for all to see how filthy, dishonest and crapulant your comment is:

            Python is actually just a programming language. It has nothing to do with machine learning, artificial intelligence, Monte Carlo stochastic self-modifying algorithms, agile off-the-shelf sidesourcing or whatever jargon you think might impress us. Leave it to people who actually know what they’re talking about; after all, “delegation over abstraction”.

          • Thank you, Kate.

            I was building neural networks in C++, python is very similar but not as effective (in this area) according to what I have seen. It’s certainly just another procedural language, and they all work the same way.

          • Finn (nice name, btw),

            How do you think the data are selected? How can they be selected without human intervention?

            This is a genuine question. I really can’t see how you can eliminate human bias in some form.

          • I was building neural networks in C++, python is very similar

            LOL. Port some of your C++ networks to python and let us know how it runs.

            One is a highly efficient compiled language; the other is a highly inefficient interpreted scripting language, more comparable to Visual Basic, and it attracts those with similar levels of competence.

            Anyone suggesting python is a “tool” which represents part of the advancement of AI in the last 30 years knows nothing about AI or python.

          • So can someone send this to the chaps featured here the other day saying the penguins were starving due to CO2 killing the krill?
            Cos don’t krill eat phytoplankton?

          • I believe the first machine learning computer was shown to the world in 1962 in Scientific American (when it still published lots of more-than-toe-deep math and science). Invented in 1960 by Donald Michie, apparently part of the Bletchley Park code-breaking team, the Matchbox Educable Noughts And Crosses Engine consisted of 304 matchboxes, each with a possible TicTacToe position on it and 4 beads chosen from 9 colors, one for each TicTacToe square. The two machine operators sit across from each other and play a game of TicTacToe, recording the moves and keeping the beads played. The winning side gets rewarded for the good color moves it made; the loser is punished by losing bad color moves.

            It’s almost the simplest learning computer possible (Martin Gardner designed Hexapawn, which only requires 24 boxes).

            Neural networks are more complex. Depending on the problem, the number of possible moves is much larger than 304, and there are ways to add and subtract nodes that weren’t foreseen or are unnecessary. Intelligence in one sense means the ability to expand or limit choices to get the best outcome, and even to change/add/modify desirable outcomes.

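The matchbox scheme described above takes only a few lines to emulate: one "box" of beads per board position, moves drawn in proportion to bead counts, beads added after a win and removed after a loss. This is a generic sketch of the reward/punish mechanism, not a faithful 304-box MENACE; the class and parameter names are invented for illustration.

```python
import random

# Sketch of Michie's matchbox learner: one "box" per game state holding
# beads for each legal move. Moves are drawn in proportion to bead counts;
# beads are added after a win and removed after a loss.
class MatchboxPlayer:
    def __init__(self, initial_beads=4, reward=3, penalty=1):
        self.boxes = {}        # state -> {move: bead count}
        self.history = []      # (state, move) pairs played this game
        self.initial_beads = initial_beads
        self.reward = reward
        self.penalty = penalty

    def choose(self, state, legal_moves):
        box = self.boxes.setdefault(
            state, {m: self.initial_beads for m in legal_moves})
        # Draw a bead at random: well-rewarded moves are proportionally likelier.
        beads = [m for m, n in box.items() for _ in range(n)]
        move = random.choice(beads) if beads else random.choice(list(box))
        self.history.append((state, move))
        return move

    def learn(self, won):
        # Reinforce every move of a winning game; punish a losing one.
        for state, move in self.history:
            box = self.boxes[state]
            if won:
                box[move] += self.reward
            else:
                box[move] = max(0, box[move] - self.penalty)
        self.history = []
```

Play full noughts-and-crosses games with choose(), call learn(True/False) at the end, and the bead counts slowly steer the player toward winning lines — the same matchbox-and-bead bookkeeping the comment describes.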

        • Zig Zag has wandered astray.
          Machine learning is not a crock. Are you aware of neural network learning? That wasn’t around 30 years ago.

          Odd how my degree project over 30 years ago was to build neural networks, then. I’ll admit they were not mainstream back then (otherwise I couldn’t have used that as a project – it had to be something novel) but they had existed for several years according to my research at the time.

          Everything I’ve seen since does not seem to have advanced much, except in terms of speed.

          My point about bias was not directed at this article, but at the claim that ‘Machine Learning’ is free from bias. It’s obviously not if the data is chosen by a human, and I’m not sure there’s any other effective way to do that.

          • I infer that the model(s) picked as best fit is free from bias, as the machine will pick whichever model(s) fit closest mathematically. But it’s not just the input data that is potentially a source of bias; surely the choice of which models they implement (or don’t) and include in the smorgasbord the machine is offered introduces bias too?

            Not to show my ignorance, but it just sounds like computers are applying every workable permutation of non-lambda-expressible functions, doing all the heavy lifting thanks to their speed.

          • Hi Zig Zag.
            In my comment, I actually agreed with your suspicion of bias. However, it is impossible to criticise the people involved in the study without any idea of what they actually did. This is a ‘pop-sci’ article which could have been lifted straight from the pages of the Daily Mail. As such, it’s worthless.
            Back in the ’80s I was programming geological models in Fortran77. That was the scientific language of the time.
            Python was released in 1991 so it’s hardly a new kid on the block. It is an interpreted language as you say and the execution time is longer than a compiled language. However, it has such a plethora of different libraries for so many different purposes that development time is very short.
            Why bother writing AI from scratch in C++? Someone has already done it.
            Today, python and R are the scientific programming languages.
            Unlike Greg and Kate, you seem to know what you are talking about, so you may find it interesting to have a look at how python and R are used today.
            Perhaps then you wouldn’t be so biased 🙂

          • “However, it has such a plethora of different libraries for so many different purposes that development time is very short.”

            So the main advantage of python is that you don’t need to know much about programming; that figures. You just cobble together some unreliable, poorly written, unstructured code from a series of other libraries and tie all the loose ends together with python-flavoured gaffer tape.

            Obviously the way to go for calculation intensive work like AI.

            The python interpreter is written in C, does that tell you anything?

    • Machine learning is biased by the anthropogenic fitness function. It is essentially a sophisticated function fitting system and process.

    • After its code is written, machine learning algorithms are not further biased by human minds, BUT WHILE THE CODE IS BEING WRITTEN machine learning algorithms are very much biased by human minds (unless, of course, these algorithms are not written by humans at all and magically pop out of the anuses of magical unicorns, in which case I retract my snarky comment).

  3. I have visited UCI once on a trip to Irvine from Ireland.
    A wonderful place, as is Irvine with wonderful people.
    So much can be done there. I recommend a trip to Long Beach and Newport Beach.

  4. Krishna Gans January 27, 2020 at 2:28 pm
    “Machine learning is not biased by the human mind,” he said. “We just give the model tons and tons of data, but they can help us challenge existing paradigms.”
    hear, hear, hear 😀

    Beat me to it.

  5. The fundamental weakness of CAGW predictions is that they are based on the assumption that the biosphere will always be catastrophically affected because it cannot change sufficiently rapidly.

    Surprise, surprise, it can 🤡

  6. The neural networks here are basically doing a statistical analysis. They can determine correlation, but not cause.

    • And we mostly don’t know what correlation they are making. They are also extremely biased by the data presented to them, which itself contains human bias.

      Like many new technologies, there is a feeling of magical, silver-bullet attributes about Machine Learning (its sexed-up new title). If you’ve programmed them and used them extensively, you’ll realise how inadequate and flawed they really are. The maths behind them hasn’t changed much in 30 years in my experience. The only thing that’s got better is the speed, because of hardware improvements.

      • Agreed. Neural nets are a form of curve fitting.

        Certain classes of problems can be solved by curve fitting. Many cannot. The trick is to ask the right question.

        For example: if you sample life on earth you find most life is in the tropics. A neural net will most likely predict that as the planet warms, this will lead to more life not less.

        However this sort of answer is not going to attract funding, so it isn’t likely to be studied by neural nets. Surprising this study got past the censors.
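The curve-fitting point above is easy to demonstrate without a neural net at all; any fitted function behaves the same way, tracking the data inside its range and saying whatever its functional form dictates outside it. A toy example (sine "observations" and a polynomial "model", both arbitrary stand-ins):

```python
import numpy as np

# Fit "observations" inside x in [0, 10], then ask the fitted curve about
# x = 15, well outside the data.
x = np.linspace(0.0, 10.0, 50)
y = np.sin(x)                           # pretend these are measurements
model = np.poly1d(np.polyfit(x, y, 5))  # 5th-degree polynomial "model"

err_inside = abs(model(5.0) - np.sin(5.0))     # interpolation
err_outside = abs(model(15.0) - np.sin(15.0))  # extrapolation
print(err_inside, err_outside)
```

Inside the data the residual stays small; at x = 15 the polynomial's tails take over and the error is vastly larger. A model extrapolated 80 years past its training data has the same structural problem, just with a more opaque functional form.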

    • Neural networks predicted it? Abandon all hope – it just must be true. Never dispute neural networks.

  7. This is nuts. Who needs future models when there are extrapolable experimental data? The news value is only that this model runs counter to the ‘stagnant carbon sinks’ Bern model used by the IPCC. See my recent guest post on ‘Carbon Sequestration’ for details.

    Actual plankton measurements in the Atlantic have shown up to a 10x increase in phytoplankton in the past 3 decades. Admittedly the North Atlantic rather than the tropical Atlantic, but still. And the tropical Atlantic should be more fertile than the North Atlantic because of ‘fertilizer’ dust from the Sahara, a well known and well covered phenomenon. (The limiting ocean phytoplankton nutrient is iron, which Sahara dust provides in copious reddish Fe2O3 quantities to the ocean euphotic zone.)

    • The Bern model of sources-sinks and their annual/seasonal/ENSO-related fluxes was refuted by the OCO-2 data. AR5 of course came out before OCO-2 data started rolling in in any quantity in late 2014. AR6 WG1 drafters have those OCO-2 findings sitting in front of them in the form of the OCO-2 October 2017 data-based papers in Science Mag. I suspect it is why the OCO-2 team went silent/was forced to shut up after those papers. The implications in the details of those papers were “extraordinary” for settled CO2 emissions-sources science.
      It will be interesting to see how the IPCC AR6 authors manage to avoid or ignore that rather huge blunder of climate science.

      • The implications in the details of those papers were “extraordinary” for settled CO2 emissions-sources science.

        That sounds very significant. Can you provide a ref or a link to the papers you are referring to ? I think I should have a look at that.

    • “The limiting ocean phytoplankton nutrient is iron”

      It depends on the phytoplankton. Diatoms are a type of phytoplankton with silica shells, ferocious competitors for the dinoflagellates which fix carbon dioxide. When the spring bloom starts, diatoms dominate, suppressing the growth of the others until the silica runs out and the DMS-producing dinoflagellates can flourish.

      Questions: has massive agriculture/land disturbance/vegetation destruction led to an increase in dissolved silica run-off? If so, has this led to a longer diatom bloom? If so, has this reduced low level cloud cover during the dominance period of the diatoms?

      Less cloud, warming. Warming, stratification. Less DMS. Less cloud….

      JF

    • There are also now some high-latitude ocean areas, like the Chukchi and Beaufort Seas, that have recently become more ice-free in summer and that I would assume are supporting more (or much more) phytoplankton than before.

  8. Reality rears its ugly head once more: actual observations show quite the reverse of model predictions, with phytoplankton having increased significantly. Yet another instance of computer model garbage output. The significance of this finding is hugely positive, as phytoplankton use photosynthesis to absorb carbon dioxide to fabricate sugars and release oxygen into the seawater. This carbon dioxide therefore does not “acidify” the oceans but provides nourishment for all marine life and oxygenates the water, and thence the atmosphere above it. Carbon dioxide is therefore undoubtedly proving highly beneficial to both land and marine plant and animal life. Funny how things turn out when you emerge from computerland and start to look around you….

  9. The depleted carbon index of CO2 from the low-latitude oceans indicates that the fraction dissolved in the ocean has an index of tightly around -13.5. The amount of that fraction has been increasing significantly for decades. This is very strong evidence that the biomass in the oceans has been steadily increasing. Phytoplankton is the bottom of the food chain that ends in the ocean with large fish and animals. I consider that a good thing, because it is a source of food for the world’s growing human population.

  10. Agreed. Neural nets are a form of curve fitting.

    Certain classes of problems can be solved by curve fitting. Many cannot. The trick is to ask the right question.

    For example: if you sample life on earth you find most life is in the tropics. A neural net will most likely predict that as the planet warms, this will lead to more life not less.

    However this sort of answer is not going to attract funding, so it isn’t likely to be studied by neural nets. Surprising this study got past the censors.

    • Neural nets, much like our own brains, are effective means to discover order in chaos, but are similarly constrained to a limited frame of reference. But this is where the similarity ends. Whereas neural nets are essentially grandiose pattern matchers, our own brains seem capable of inference or created knowledge, maybe. Or, perhaps, an illusion of inference through deduction from ever larger masses of knowledge enabled by skill (engineering, mathematics) in the near-domain.

  11. “Machine learning is not biased by the human mind,” he said. “We just give the model tons and tons of data, but they can help us challenge existing paradigms.”

    Except when the defects are built into the data…

  12. Just amazing the things we can learn by cranking climate models and by throwing in new variables until it all works out. ETCW, for example (early twentieth century warming). Even as we have events in 2100 all figured out we are still struggling with the weirdness of ETCW and LTCW.

    I guess it’s easier to talk about things that haven’t happened yet so that there is no pesky data to worry about. So what the model says is the final word.

    https://tambonthongchai.com/2020/01/28/etcw-ltcw/

  13. We’re doomed! There will be no escape from the Phytoplankton! All hope is lo …

    Er – what do phytoplankton do?

  14. Phytoplankton use CO2 dissolved in ocean water for photosynthesis. If the CO2 concentration in the air over the water has increased from about 350 ppm to 410 ppm in the last 40 years or so, by Henry’s Law this would lead to higher concentrations of dissolved CO2 in the ocean. Although the Argo floats have said that the sea surface temperature warmed by 0.1 C or so, this would not be nearly enough to counteract the effect of the increased CO2 in the atmosphere. The increase in CO2 in the air is beneficial to the phytoplankton, and to all marine animals which feed on them.

    This might lead to more fish in the sea…
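The Henry's-law arithmetic in the comment above is straightforward to check. At constant temperature the dissolved concentration is proportional to the partial pressure (C = kH · p), so the 350-to-410 ppm rise alone implies roughly a 17 percent increase in dissolved CO2, all else equal (warming and carbonate chemistry set aside). The constant below is the commonly quoted value for CO2 in water near 25 °C; treat it as approximate.

```python
# Henry's law: at fixed temperature, dissolved concentration C = kH * p.
K_H = 0.034  # mol/(L*atm) for CO2 in water near 25 C (approximate textbook value)

def dissolved_co2(ppm):
    """Equilibrium dissolved CO2 (mol/L) for a given atmospheric mixing ratio."""
    p_atm = ppm * 1e-6      # ppm -> partial pressure in atm (at 1 atm total)
    return K_H * p_atm

c_then = dissolved_co2(350)     # roughly 40 years ago
c_now = dissolved_co2(410)      # today, per the comment
increase = (c_now - c_then) / c_then
print(f"{increase:.1%}")        # 60/350, i.e. about a 17% rise, all else equal
```

Since C is linear in p, the percentage increase is just 60/350 regardless of the exact value of kH; the constant only matters for the absolute concentrations.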

  15. “The unexpected simulation outcome runs counter to the longstanding belief by many in the environmental science community that future global climate change will make tropical oceans inhospitable to phytoplankton”

    How many?

    An important tool within paleoclimate studies is the classification of foraminifera (planktonic protists) shell fossils within deep marine borehole cores. I recall doing this under a microscope during grad earth science. We were to define sea temperature and various other parameters based around species. These changed with time down through the core, which represented several hundred thousand years. We would also look at species in cores from different latitudes.

    Deep marine cores are useful due to the sedimentation rate of a very fine grain size being very slow. This results in a lot of time being represented within one meter of core.

    As time went by species would migrate north or south in a cyclical manner. We could link these to glacials and interglacials plus changes in salinity and other factors. Hardly surprising.

    Maybe the environmental science community went to the wrong classes. One does not need a simulation. One just needs to study the recognised science and have a brain.

    M

  16. Doesn’t this mean more Whale food? Don’t environmentalists love the Whales? They should be all for this, right?

  17. ““When plankton die – especially these small species – they sit around for a while longer, and maybe at high temperature other plankton can more easily degrade them and recycle the nutrients back to build new biomass,” Martiny said.

    Such ecosystem features are not easily taken into account by traditional, mechanistic Earth system models, according to Martiny, but they were part of the geographically diverse dataset the team used to train its neural network-derived quantitative niche model.”

    ____________________________________

    What good is training a “neural network niche model” when they already had to “build global maps and compare the quantity of biomass along the gradient of temperature, a key parameter”, according to Martiny?

    Maybe the plankton stayed around longer because it was raining outside.
