Public Release: 3-Dec-2018
Researchers created and trained a machine learning algorithm to assign risk probabilities to 150,000 plant species worldwide
University of Maryland
The International Union for Conservation of Nature’s (IUCN) Red List of Threatened Species is a powerful tool for researchers and policymakers working to stem the tide of species loss across the globe. But adding even a single species to the list is no small task, demanding countless hours of expensive, rigorous and highly specialized research.
As a result of these limitations, a large number of known species have not yet been formally assessed by the IUCN and ranked in one of five categories, from least concern to critically endangered. This deficit is quite apparent in plants: Only about 5 percent of all currently known plant species appear on IUCN’s Red List in any capacity.
A new method co-developed by Anahí Espíndola, an assistant professor of entomology at the University of Maryland, uses the power of machine learning and open-access data to predict species that could be eligible for at-risk status on the IUCN Red List. The research team created and trained a machine learning algorithm to assess more than 150,000 species of plants from all corners of the world, making their project among the largest assessments of conservation risk to date. According to the results, more than 10 percent of these species are highly likely to qualify for an at-risk IUCN classification.
The algorithm is a predictive model that can be applied to any grouping of species at any scale, from the entire globe to a single city park. Espíndola and her colleagues published their findings online in the Proceedings of the National Academy of Sciences on December 3, 2018.
“Our method isn’t meant to replace formal assessments using IUCN protocols. It’s a tool that can help prioritize the process, by calculating the probability that a given species is at risk,” Espíndola said. “Ultimately, we hope it will help governments and resource managers decide where to devote their limited resources for conservation. This could be especially useful in regions that are understudied.”
Espíndola and her collaborators built their predictive model using open-access data from the Global Biodiversity Information Facility (GBIF) and the TRY Plant Trait Database. Lead author Tara Pelletier, an assistant professor of biology at Radford University, worked together with Espíndola to perform the machine learning analysis.
Espíndola and Pelletier then trained the model using GBIF and TRY data from the relatively small group of plant species already on the IUCN Red List. This allowed the researchers to assess and fine-tune the model’s accuracy by checking its predictions against the listed species’ known IUCN risk status. The Red List sorts non-extinct species into one of five classification categories: least concern, near-threatened, vulnerable, endangered and critically endangered.
The researchers then applied the model to the many thousands of plant species that remain unlisted by IUCN. According to the results, more than 15,000 of the species–roughly 10 percent of the total assessed by the team–have a high probability of qualifying as near-threatened, at a minimum.
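The study's actual pipeline is not reproduced in this release; as a rough illustration of the train-then-predict idea described above, here is a minimal sketch using a toy 1-nearest-neighbour classifier in place of the team's real model. All features, numbers and labels are invented stand-ins (the real analysis used occurrence and trait data from GBIF and TRY).

```python
# Minimal sketch of the train-then-predict workflow: fit on species whose
# IUCN status is already known, then predict a status for unassessed ones.
# All feature values and labels here are invented for illustration.

def nearest_label(x, training):
    """Return the label of the training species whose features are closest to x."""
    return min(training, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], x)))[1]

# "Training" species: (features, label). The two toy features might stand in
# for things like range size and a climate variable; label 1 means at-risk.
listed = [
    ((0.1, 0.2), 1), ((0.2, 0.1), 1),   # small-range species, Red-Listed as at-risk
    ((0.8, 0.9), 0), ((0.9, 0.7), 0),   # widespread species, least concern
]

# Species not yet assessed by IUCN get a predicted status from the model.
unlisted = [(0.15, 0.15), (0.85, 0.80)]
predictions = [nearest_label(x, listed) for x in unlisted]
print(predictions)  # [1, 0] -- the first species resembles the at-risk group
```

The same pattern scales from a toy classifier to the study's machine-learning model: the labelled Red List species serve both to fit the model and to check its accuracy, and the fitted model is then applied to the much larger unlabelled set.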
Espíndola and her colleagues mapped the data and noted several major geographical trends in the model’s predictions. At-risk species tended to cluster in areas already known for their high native biodiversity, such as the Central American rainforests and southwestern Australia. The model also flagged regions such as California and the southeastern United States, which are home to a large number of endemic species, meaning that these species do not naturally occur anywhere else on Earth.
“When I first started thinking about this project, I suspected that many regions with high diversity would be well-studied and protected. But we found the opposite to be true,” Espíndola said. “Many of the high-diversity areas corresponded to regions with the highest probability of risk. When we saw the maps, we were surprised it was that clear. Endemic species also tend to be more at risk because they are usually confined to smaller areas.”
The model also flagged a few surprising areas not typically known for their biodiversity, such as the southern coast of the Arabian Peninsula, as having a high number of at-risk species. Some of the most imperiled regions have not received enough attention from researchers, according to Espíndola. She hopes that her method can help to fill in some of these knowledge gaps by identifying regions and species in need of further study.
“Let’s say you wanted to assess every species of wild bee on one continent. So you do the assessment and find that only one species is at risk. Now you’ve used all those resources to identify an area with low risk, which is still helpful, but not ideal when resources are limited. We want to help prevent that from happening,” Espíndola said. “Our analysis was global, but the model can be adapted for use at any geographic scale. Everything we’ve done is 100 percent open access, highlighting the power of publicly-available data. We hope people will use our model–and we hope they point out errors and help us fix them, to make it better.”
###
The research paper, “Predicting plant conservation priorities on a global scale,” Tara Pelletier, Bryan Carstens, David Tank, Jack Sullivan and Anahí Espíndola, was published online in the Proceedings of the National Academy of Sciences on December 3, 2018.
This work was supported by the National Science Foundation (Award Nos. DEB-1457519, DEB-1457726 and EPS-809935), the National Institutes of Health (Award Nos. NCRR 1P20RR016454-01 and NCRR 1P20RR016448-01), DIVERSITAS/Future Earth and the German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig. The content of this article does not necessarily reflect the views of these organizations.
University of Maryland
College of Computer, Mathematical, and Natural Sciences
2300 Symons Hall
College Park, MD 20742
http://www.cmns.umd.edu
@UMDscience
About the College of Computer, Mathematical, and Natural Sciences
The College of Computer, Mathematical, and Natural Sciences at the University of Maryland educates more than 9,000 future scientific leaders in its undergraduate and graduate programs each year. The college’s 10 departments and more than a dozen interdisciplinary research centers foster scientific discovery with annual sponsored research funding exceeding $175 million.
Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.
GIGO ….
GIGO on steroids. There is the case of the chatbot that learned to be very offensive indeed. link
The trouble with AI is that the technology is easy to abuse. link The other thing is that, since AI learns rather than being programmed, nobody is sure how any given AI actually reaches its decisions.
I remember an interaction with a clerk once. She gave me a total that was obviously wrong. I objected and she replied that it must be right because the computer said so. Extrapolating from that, it seems certain that people will uncritically accept anything an AI says.
To err is human, to really foul things up requires a computer.
AI is programmed … it cannot go outside the programming … all of its “learning” is predefined …
Neural net AI is not 'programmed', and it has no limits.
So it can be even stupider than real people, because it cannot die from its mistakes.
A neural network is ‘taught’ its rules. In that way it is effectively ‘programmed’, just not in the traditional sense, more in the sense some animals learn.
There are very clear limits. These are the limits of the lessons used to teach them.
I recall a classic case where a system was taught to identify friendly and enemy tanks. The only problem was that all friendly tank images had been in daylight, so at night, the system classed all tanks as enemy tanks.
That is the kind of serious limitation neural networks suffer from. As always, its what we teach them (or tell them to do in a traditional program) that matters. They have no intelligence at all.
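The tank anecdote above can be made concrete with a toy "decision stump" that learns a threshold on whichever single feature best separates the training classes. With a biased training set where every friendly tank happened to be photographed in daylight, the stump keys on brightness rather than anything about the tank itself. All data here are invented.

```python
# Toy illustration of training-set bias: a decision stump fit to data where
# brightness spuriously separates the classes, so it "learns" day vs. night
# instead of friendly vs. enemy. All data are invented.

# (brightness, gun_length) -> label; friendly tanks were only photographed
# in daylight, so brightness alone gives zero training error.
train = [((0.9, 0.30), "friendly"), ((0.8, 0.35), "friendly"),
         ((0.2, 0.30), "enemy"),    ((0.3, 0.35), "enemy")]

def fit_stump(data, feature):
    """Threshold one feature at the midpoint between the two class means."""
    vals = [(x[feature], y) for x, y in data]
    mean = lambda lbl: (sum(v for v, y in vals if y == lbl)
                        / sum(1 for _, y in vals if y == lbl))
    cut = (mean("friendly") + mean("enemy")) / 2
    return lambda x: "friendly" if x[feature] > cut else "enemy"

stump = fit_stump(train, feature=0)  # brightness: perfect on the biased data

# A friendly tank photographed at night (low brightness) is misclassified.
print(stump((0.1, 0.30)))  # enemy
```

The model is exactly as good as its training data: nothing in the fitting procedure can tell an intended signal from a spurious one that happens to correlate with the labels.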
Anthropogenic neural networks are optimizing, pattern-matching algorithms. They are fully characterized and manageable, thus bounded, and predictable.
No.
It is almost certain that any sufficiently complex artificial intelligence will produce emergent behaviors that are completely unpredictable to the designers of the system. link
I knew a fellow in college who had a model that predicted 9 of the last 3 recessions.
If you throw enough darts, one of them will eventually hit the bullseye.
The Texas method is superior. After you throw the darts, paint the target around them.
1. You make up probabilities that species are at risk, assumed from a change in climate. Take polar bears as an example.
2. You use these made up probabilities to assess the risk for other species.
Now, if you happened to find evidence of species that were at risk, and have now gone extinct, and you can prove the cause was a change in climate, I might begin to take it seriously.
Wow, the number of at risk species in a region is proportional to the number of species found in that region. Who would’ve guessed …
The first thing I looked at was the risk table. High risk to Low risk.
Is there any species that is “Not at risk”? How about Likely to Flourish? These seem to be missing from the possibilities.
This is surely a one-eyed view of the world. They only see the bad, there is no good.
— and what about areas of increasing plant diversity due to invasive species?
Just like our fire warnings in Australia changed from low / medium / high to low-moderate / high / very high / severe / extreme / catastrophic. So no matter when or where you look, the risk of fire is immense.
Does this take into account that some plant species are very rare for other reasons than environmental pressure? For example, evolutionary dead ends? I suspect (know) that many species are NATURALLY headed for extinction. Evolution is an on-going process, a constant state of flux – and we are always right in the middle of it. Taking a static survey can only be incomplete and could be confusing and/or misleading.
All species are headed for extinction, but some leave descendants.
As with all models it depends on what settings are programmed into it.
So the “what if” factor is always there. Let’s make it too cold, so some plants will not make it. Too hot and again some will die out; too wet or too dry, same thing again applies.
So its a typical GIGO setup.
MJE
So – we learn which species (not yet clearly defined) are to be classified as ‘at risk’. And this means what? Can – and should – we attempt to do anything about it? Even if we could?
Precisely how many ‘at risk’ species have disappeared over the millennia? And this is bad? or good?
We need to get good answers to these questions before we get all a-flutter about self-learning machines even identifying these species with unknown issues.
its neither bad nor good … its what evolution is and can’t be stopped nor should it be …
Species are supposed to go extinct … trying to stop that is silly and ignorant …
A whole generation of “researchers” have fallen prey to the belief that computer modelling is a substitute for reality. Funny how there’s no computer model which has predicted the massive loss and near extinction of intelligent, critically thoughtful, independently minded human beings which appears to be happening right now.
Have they not heard of Plant Nurseries? Specialist Propagation Nurseries? Plant collectors and propagators? Tecomanthe speciosa had 1 plant left in the wild when discovered in 1945. It is now widespread in gardens in New Zealand, and I believe in other countries also.
As Willis says, “show me the dead bodies of all the species going extinct from CAGW.”
There aren’t any…
Sure, overhunting, overfishing, shrinking natural habitats, excessive use of agrochemicals, air/water pollution etc., have decreased populations of some species, but these issues have nothing to do with CAGW.
Ironically, increases in CO2 fertilization have been a huge benefit to plant, animal and insect species, with increases in global greening land area equivalent to twice the size of the mainland US…
CO2 fertilization has also increased phytoplankton, which has been a huge boon for all ocean organisms…
The Mother of All Ironies, is that the tiny amount of actual CO2 forcing will help offset some of the coming global cooling from a Grand Solar Minimum event, and when the PDO/AMO/NAO are all in their respective 30-year cool cycles from around the early 2020’s…
Less models and hand waving, and more dead animal bodies, please..
https://www.youtube.com/watch?v=JYfM-frIWlQ
Nough said 😉
Biodiversity is such a crock. Plant species mutate in many ways, some of them not completely understood. Worrying about diversity is senseless when we have the tools of gene splicing and cloning and genetically modified food. By the way, GMO is just as safe as the old botanist way of grafting one plant species onto another. The GMO threat is yet another one of the Left’s BIG lies.
Color diversity, maybe. Individual diversity, probably less value, in principle, than a human life. In practice, their value is determined by the politically congruent model, which is notable selective, opportunistic, that optimizes profit and leverage.
On the other hand there are a lot more varieties of pot, so it all balances out.
And its very likely that Pot or some other mind altering substance is at the root of the Climate change nonsense.
MJE
Save the world’s most endangered plants, ban “vegetarians” now ! ….
WUWT is obediently recycling every single warmist-alarmist claim, clearly working hard trawling the media to make sure they don’t miss any.
Did I miss the memo?
Is WUWT now a CAGW-compliant warmist blog?
Speaking of recycling. You’ve made the same post on almost all of today’s new articles.
If you are so offended by the articles the moderators choose to post, why are you still here?
I noticed. Maybe he’s a bot.
Oh look, a new toy to play with!
Most previous commenters are skeptical, but to this botanist this looks like a valuable tool. Before judging its usefulness it’s necessary to evaluate the decision-making criteria, however. If reasonable, it’s a way to filter out the species doing just fine, leaving the remainder for expert evaluation. Knowing a species’ vulnerability is different from demanding all efforts to preserve it. Conservation efforts don’t have to be draconian.
Plant species going extinct? Gee whiz!!!
I have fossils of plants that no longer exist, because they died out, 300 million years ago, at the end of the Carboniferous period. The only things left from that epoch are horsetail rushes, club mosses and true ferns. Well, that and algae – algae never die. All the precursors to modern trees like conifers are gone.
Plant species going extinct happens all the time, everywhere, every geological period. We have nothing to do with it. Now, I realize that these people have to do something to excuse the grant money that they get, but not acknowledging the reality of the past epochs on this planet, when humans didn’t exist at all, is kind of silly.
One of these days, the folks funding all this Faerie Counting are gonna put on their Hi Viz coats and demand a recount.
If they don’t get it, I for one would not like to be in the census taker’s shoes.
Do we hope the Katowice Klimate Klowns have taken *any* notice of France……
There seems to be an assumption that if you quantify an assumption (make it a mathematical equation) it becomes scientific data, and loses the ‘questionability’ of an assumption.
With the help of ‘distinguished people with authority’, I wrote a model that determined that the percentage of modeled climate change studies based entirely on mathematically expressed assumptions (and, therefore, essentially worthless) was 99.8%, plus or minus 0.2%.
Unfortunately, my model became self-aware, and in a wave of depression, brought on by a feeling of worthlessness, erased itself.
I see newspaper stories about new species being found all the time.
Can computer models predict those?
No. They are not shown to be independent or causative. They almost certainly are both based on many common sources, common authors, common you-name-it. It is almost certainly “prediction” based on already known data, however hard they try to persuade themselves otherwise.
This is what people like Steven “kinetics can tell you nothing” Mosher still don’t seem to get. TIME is still currently the best way to eliminate incorrect causality assertions. If A happens before B, then A may, or may not, cause B. But B does not cause A. Einstein? Newton? Not interested. This is still demonstrably the best approach in the universe we have to live in.
Somebody called this “GIGO on steroids” and I tend to agree. First, one strange thing: the map at the top of this post. It is nowhere in the paper and nowhere in the supplementary data. It is probably something cooked up specifically for the press release. However, it is a very odd map indeed.
It is fairly well known where there are concentrations of threatened species. It is in areas with high numbers of narrow endemics (species with very limited range), especially when these areas are also under heavy pressure from agriculture, forestry, overgrazing or urbanization.
Now this map “finds” some such areas e. g. California, S W Australia and Borneo, but it completely misses others e. g. central Chile, the southwestern Cape, the circum-Mediterranean area, the Macaronesian islands and the West Indies.
It also “finds” some very odd “hotspots” like the Faeroes-Hebrides, the Yucatan peninsula and the Florida panhandle. Does anyone seriously believe that there are many more threatened plant species in the Florida Panhandle than in Haiti? If so he has for sure never been to Haiti, or Florida for that matter. And the Faeroes? The islands were completely ice-covered during the last glaciation, so there is just one single endemic plant Euphrasia atropurpurea, and it is indeed considered to be declining, so I suspect this 100% negative record made the machine-learning go berserk.
And how come neither Hawaii nor New Zealand apparently have any threatened species at all?
This might be a useful project, but if so it has obviously been fed with substandard data.
Great bit of snark on some of the limits of machine learning. This was the keynote from USENIX 2018.
Eurekalert has been pumping out the BS lately.
The Red List sorts non-extinct species into one of five classification categories: least concern, near-threatened, vulnerable, endangered and critically endangered.
The researchers then applied the model to the many thousands of plant species that remain unlisted by IUCN. According to the results, more than 15,000 of the species–roughly 10 percent of the total assessed by the team–have a high probability of qualifying as near-threatened, at a minimum.
__________________________________________________
But first Espíndola and her colleagues have to teach their machine to distinguish
1. real danger of extinction
2. probably mitigated
3. we just can’t find them, sob.