NASA takes first step to allow computers to decide what to tell us in search for life on Mars

GOLDSCHMIDT CONFERENCE

IMAGE: Artist’s impression of the Rosalind Franklin rover on Mars. Credit: ESA/ATG medialab. HTTPS://WWW.ESA.INT/SCIENCE_EXPLORATION/HUMAN_AND_ROBOTIC_EXPLORATION/EXPLORATION/EXOMARS/EXOMARS_2022_ROVER

NASA has stepped closer to allowing remote onboard computers to direct the search for life on other planets. Scientists from the NASA Goddard Space Flight Center have announced first results from new intelligent systems, to be installed in space probes, that can identify geochemical signatures of life in rock samples. Allowing these systems to choose both what to analyse and what to report back to Earth will overcome the severe limits on transmitting information over the huge distances involved in searching for life on distant planets. The systems will debut on the 2022/23 ExoMars mission, before fuller implementation on more distant bodies in the Solar System.

Presenting the work at the Goldschmidt Geochemistry conference, lead researcher Victoria Da Poian said: “This is a visionary step in space exploration. It means that over time we’ll have moved from the idea that humans are involved with nearly everything in space, to the idea that computers are equipped with intelligent systems, and they are trained to make some decisions and are able to transmit in priority the most interesting or time-critical information.”

Eric Lyness, software lead in the Planetary Environments Lab at NASA’s Goddard Space Flight Center (GSFC), emphasized the need for smart instruments in planetary exploration: “It costs a lot of time and money to send data back to Earth, which means scientists can’t run as many experiments or analyse as many samples as they would like. By using AI to do an initial analysis of the data after it is collected, but before it is sent back to Earth, NASA can optimise what we receive, which greatly increases the scientific value of space missions.”

Victoria Da Poian and Eric Lyness (both at NASA’s Goddard Space Flight Center) have trained artificial intelligence systems on hundreds of rock samples and thousands of experimental spectra from the Mars Organic Molecule Analyzer (MOMA), an instrument that will land on Mars aboard the ExoMars Rosalind Franklin rover in 2023. MOMA is a state-of-the-art mass-spectrometer-based instrument, capable of analyzing and identifying organic molecules in rock samples, and it will search for signs of past or present life on the Martian surface and subsurface. The system sent to Mars will still transmit most of its data back to Earth, but later systems for the outer Solar System will be given autonomy to decide what information to return.

First results show that when the system’s neural network algorithm processes the spectrum of an unknown compound, it can categorize the compound with up to 94% accuracy and match it to previously seen samples with 87% accuracy. The system will be refined further before being incorporated into the 2023 mission.
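The two-step behaviour described above, assigning an unknown spectrum to a compound class and reporting a confidence, can be sketched in miniature. Everything below is illustrative: the real MOMA model, its spectral features, and its compound classes are not public, and the simple nearest-centroid classifier here merely stands in for whatever neural network the team actually trained.

```python
import math

def cosine(a, b):
    """Cosine similarity between two binned spectra."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def train_centroids(spectra, labels):
    """Average the training spectra belonging to each compound class."""
    centroids = {}
    for c in set(labels):
        members = [s for s, lab in zip(spectra, labels) if lab == c]
        centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return centroids

def categorize(centroids, spectrum):
    """Return (best_class, confidence) for an unknown spectrum."""
    scores = {c: max(cosine(v, spectrum), 0.0) for c, v in centroids.items()}
    best = max(scores, key=scores.get)
    total = sum(scores.values())
    return best, (scores[best] / total if total else 0.0)

# Synthetic 4-bin "spectra" for two invented compound classes.
train = [[9.0, 1.0, 0.0, 0.0], [8.0, 2.0, 0.0, 0.0],
         [0.0, 0.0, 9.0, 1.0], [0.0, 1.0, 8.0, 2.0]]
labels = ["aromatic", "aromatic", "aliphatic", "aliphatic"]

centroids = train_centroids(train, labels)
cls, conf = categorize(centroids, [9.0, 2.0, 0.0, 0.0])
```

On this toy data the unknown spectrum is assigned to the “aromatic” class with high confidence; a real classifier would work on far higher-dimensional spectra and report calibrated probabilities.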

Victoria Da Poian continued:

“What we get from these unmanned missions is data, lots of it; and sending data over hundreds of millions of kilometres can be very challenging in different environments and extremely expensive; in other words, bandwidth is limited. We need to prioritize the volume of data we send back to Earth, but we also need to ensure that in doing that we don’t throw out vital information. This has led us to begin to develop smart algorithms which can for now help the scientists with their analysis of the sample and their decision-making process regarding subsequent operations, and as a longer-term objective, algorithms that will analyse the data itself, will adjust and tune the instruments to run next operations without the ground-in-the-loop, and will transmit home only the most interesting data.”
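The prioritization Da Poian describes, sending home the most interesting data first within a limited bandwidth, can be thought of as a scheduling problem. The sketch below is a toy illustration only: the scores, data-product names, sizes, and byte budget are all invented, and the greedy strategy stands in for whatever policy a real mission would use.

```python
def plan_downlink(results, budget_bytes):
    """results: list of (name, interest_score, size_bytes).
    Greedily pack the downlink with the highest-scoring items that fit."""
    chosen, used = [], 0
    for name, score, size in sorted(results, key=lambda r: -r[1]):
        if used + size <= budget_bytes:
            chosen.append(name)
            used += size
    return chosen

# Hypothetical onboard queue of data products awaiting transmission.
queue = [("spectrum_A", 0.91, 400), ("housekeeping", 0.10, 100),
         ("spectrum_B", 0.87, 700), ("context_image", 0.30, 900)]

selected = plan_downlink(queue, budget_bytes=1200)
```

Here the two high-interest spectra fill most of the budget and the large, low-interest context image is deferred to a later pass, which is the essence of “transmit in priority the most interesting information.”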

The team used raw data from initial laboratory tests with an Earth-based MOMA instrument to train computers to recognize familiar patterns. When new raw data arrives, the software tells the scientists which previously encountered samples best match it.
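That lookup step, matching a new spectrum against a library of previously analysed samples, can be sketched as a nearest-neighbour search. The library entries, sample names, and spectra below are invented for illustration; the actual MOMA matching pipeline is not public.

```python
import math

def cosine(a, b):
    """Cosine similarity between two binned spectra."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def best_match(library, spectrum):
    """library: {sample_id: binned spectrum}. Return (sample_id, similarity)
    for the closest previously seen sample."""
    return max(((sid, cosine(ref, spectrum)) for sid, ref in library.items()),
               key=lambda pair: pair[1])

# Hypothetical library of previously analysed lab samples.
library = {
    "2018-07-24_phospholipid": [5.0, 1.0, 0.0, 3.0],
    "2018-08-02_amino_acid":   [0.0, 4.0, 4.0, 1.0],
}

sample_id, similarity = best_match(library, [4.5, 1.2, 0.1, 2.8])
```

The returned sample ID and similarity score correspond to the kind of report the scientists receive: which archived sample the new data most resembles, and how closely.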

Eric Lyness said:

“The mission will face severe time limits. Once we are operating on Mars, samples will only remain in the rover for at most a few weeks before the rover dumps the sample and moves to a new place to drill. So, if we need to retest a sample, we need to do it quickly, sometimes within 24 hours. In the future, as we move on to explore the moons of Jupiter such as Europa, and of Saturn such as Enceladus and Titan*, we will need real-time decisions to be made onsite. With these moons it can take 5 to 7 hours for a signal from Earth to reach the instruments, so this will not be like controlling a drone, with an instant response. We need to give the instruments the autonomy to make rapid decisions to reach our science goals on our behalf.”

*See NASA’s planned Dragonfly mission to Titan, part of NASA’s “New Frontiers” program.

Eric Lyness commented: “When first gathered, the data produced by the MOMA life-searching instrument is difficult to interpret. It will not shout out “I’ve found life here”, but will give us probabilities which will need to be analyzed. These results will largely tell us about the geochemistry that the instruments find. We’re aiming for the system to give scientists directions, for example our system might say “I’ve got 91% confidence that this sample corresponds to a real world sample and I’m 87% sure it is phospholipids, similar to a sample tested on July 24th, 2018 and here is what that data looked like”. We’ll still need humans to interpret the findings, but the first filter will be the AI system”.

The researchers note that data is expensive to send back from Mars, and gets more expensive as landers travel further from Earth. “Data from a rover on Mars can cost up to 100,000 times as much as data on your cell phone, so we need to make those bits as scientifically valuable as possible,” said Eric Lyness.

Commenting, Dr Joel Davis (postdoctoral researcher in planetary geology at the Natural History Museum, London) said: “One of the main challenges for planetary missions is getting the data back to Earth – it costs both time and money. On Mars, the travel time delay is around 20 minutes, and this increases the further out you go in the solar system. Given the finite lifespans of missions, scientists have to be very selective about the data they choose to bring back. These results certainly seem promising; having greater autonomy onboard spacecraft is one way of ensuring the usefulness of the data returned.”

Dr Davis was not involved in this work; his is an independent comment.

The Goldschmidt conference thanks the NASA Goddard Space Flight Center for its assistance in the preparation of this material. ExoMars is a joint European Space Agency-Roscosmos project. One of the central goals of the mission is to search for traces of past and present life. A key instrument is the Mars Organic Molecule Analyzer (MOMA), a joint German-French-American investigation led by the Max Planck Institute for Solar System Research in Göttingen.

The Goldschmidt conference is the world’s leading geochemistry conference, hosted by the Geochemical Society and the European Association of Geochemistry. Held annually, it covers topics such as climate change, astrobiology, planetary and stellar development, the chemistry of Earth materials, pollution, the undersea environment, volcanoes, and many other subjects. For 2020 the scheduled Hawaii congress has been moved online and takes place from 21-26 June; see https://goldschmidt.info/2020/index. Future congresses will be held in Lyon, France (2021) and at the rescheduled Hawaii congress (2022).

###

From EurekAlert!

20 thoughts on “NASA takes first step to allow computers to decide what to tell us in search for life on Mars”

  1. Jupiter probe Galileo was commanded in 2003 to dive into Jupiter’s atmosphere to destroy itself, to prevent a possible collision with and contamination of a Jovian moon.
    Saturn probe Cassini was commanded in 2017 to dive into Saturn’s atmosphere to destroy itself to prevent a possible collision and contamination of a Saturnian moon.

    Next up:
    “I’m sorry Dave, I’m afraid I can’t do that.”

    • — and Dave, being a fallible human, had even left his spacesuit helmet back in the locker room. Classic.

    • And believe it or not, some hand-wringers were “concerned” about the nuclear-powered probes going deep into those planets’ atmospheres, getting their plutonium fuel-cells compressed & causing fission-bomb explosions. ROFLMAO.

  2. But what if the MOMA instrument locates itself by consulting a large stored database, and wrongly concludes that the answer is Toronto, much like what happened to Watson while playing Jeopardy back in 2011?

    For background on the Watson system and its problems, try:

    https://www.aol.com/2011/02/17/the-watson-supercomputer-isnt-always-perfect-you-say-tomato/

    (Quote from the above)
    On Day 2, Watson missed one clue by a country mile — better make that an entire country. During a Final Jeopardy! segment that included the “U.S. Cities” category, the clue was: “Its largest airport was named for a World War II hero; its second-largest, for a World War II battle.”

    Watson responded “What is Toronto???,” while contestants Jennings and Rutter correctly answered Chicago — for the city’s O’Hare and Midway airports.

    • It’s already a given that you can’t send all the data the craft is capable of collecting.
      So the question boils down to, how do you decide what to send and what not to send.
      Either humans decide beforehand, without being able to see the data, or a computer tries to analyze the data and make decisions in real time.
      In both cases, you are bound to have errors, and valuable data is lost. Which option has the best chance of making the correct decisions the majority of the time?

  3. It’s a compromise, driven by physical constraints. I would hope it is also programmed to examine and report anomalous repetitive patterns beyond the previously ‘trained’ earthly pattern data.

  4. Maybe it will even be able to tell the difference between kilometers and miles on re-entry. Hint: it might be better to concentrate on robust software designed to get good sensor packages to the surface than it is to worry about what it might detect, or to “decide” what is important.

    “…humans are involved with nearly everything in space, to the idea that computers are equipped with intelligent systems, and they are trained to make some decisions and are able to transmit in priority the most interesting or time-critical information”.

    If / then statements…..yawn

  6. Fans of 2001, Joel and David? Me too, even though Kubrick and Clarke deliberately left the ending incomprehensible, which in my view abrogates a duty a storyteller must keep with his listeners. A cool joke: HAL is programmed never to yell or verbally abuse human beings so, when he’s trying to talk Dave out of disconnecting him, he has to be polite. (“I honestly think you should sit down, take a stress pill, and think things over.”)

  7. I think they will need about 30 separate computers, all doing their own thing, then amalgamate the results to average out the resulting data opinions. Maybe set up a specific bureau to oversee the combined efforts of the “intelligent” computers. Give the bureau a suitable name that conveys the collective, pan-Solar-System nature of the effort; how about the Inter-Planetary Computer Committee, or IPCC for short?
    What could possibly go wrong?…….

  8. While I think robotic explorers are nifty and cost effective, I’m not a big fan of AI. It’s not that it never works or is useless. It’s that it’s often not easy to tell quickly if it has come to a ludicrous conclusion. The results really need to be confirmed by more conventional analysis. I think there is a real risk of initial “NASA finds life on Mars” headlines and two years later a one paragraph article on page 10 indicating that NASA has been unable to confirm its previous conclusion of possible life on Mars.

    But, what the hell. The NASA folks probably have a good idea what they are doing. And it doesn’t really matter anyway. There’s life there or there isn’t, and eventually we’ll know for sure.

    Good going. (… I guess …)

  9. Reasonable; computers have been making decisions for a long time, why not in space and on other planets? Computers diagnose, troubleshoot, calibrate, guide, pilot, assist in driving, send you your speeding ticket, and more.

    Probably a lot of ifs and ands in the training. Maybe a couple of or and nors. Good move.

    • I am not sure that current software design practice is up to that task. Remember the Boeing 737 MAX? After an initial dismissal of the possibility of a bug, it took 18 months to review the design – test flights start today, I hope.

  10. Computers only tell us what the programmers told them to tell us. Personification of computers is kind of crazy. Until true AI appears in a few hundred years, computers are just a product of the programmer’s software.

Comments are closed.