But an innovative new Mars rover camera, called TextureCam, could streamline robotic planetary exploration by equipping future rovers with the ability to choose their own science targets. The technology, developed at NASA's Jet Propulsion Laboratory in Pasadena, Calif., is based on the maxim that the more science a rover can do by itself, the less of a burden there is for Earthlings to analyze the images of rocks and other features.
The goal of TextureCam is to make interplanetary robots more capable. A radio signal traveling at the speed of light takes about 20 minutes, on average, to cross the distance between Mars and Earth, which already introduces long delays between commands and their execution. To work around this, rover drivers send up entire task lists of commands at once. The problem only grows at destinations farther from Earth, such as Jupiter's icy moon Europa.
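The 20-minute figure is an average; the actual one-way light time varies with orbital geometry. A rough back-of-the-envelope calculation (using approximate minimum and maximum Earth-Mars distances, not values from the article) shows the range:

```python
# One-way light-travel time between Earth and Mars.
# Distances are approximate; actual values vary with orbital geometry.
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_minutes(distance_km: float) -> float:
    """Minutes for a radio signal to cover distance_km at light speed."""
    return distance_km / C_KM_PER_S / 60

closest_km = 54.6e6   # approximate minimum Earth-Mars distance, km
farthest_km = 401e6   # approximate maximum Earth-Mars distance, km

print(f"closest:  {one_way_delay_minutes(closest_km):.1f} min")   # ~3.0 min
print(f"farthest: {one_way_delay_minutes(farthest_km):.1f} min")  # ~22.3 min
```

Even at closest approach, a command-and-response round trip takes several minutes, which is why rovers execute pre-planned task lists rather than being driven in real time.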
"We currently have a micromanaging approach to space exploration," senior researcher Kiri Wagstaff, a computer scientist and geologist at JPL, said in a statement.
"While this suffices for our rovers on Mars, it works less and less well the further you get from the Earth. If you want to get ambitious and go to Europa and asteroids and comets, you need more and more autonomy to even make that feasible."
The researchers recently took TextureCam for some test runs in the Mojave Desert in California after "training" it on images other rovers took on the Red Planet. The results, although very early-stage, showed the technology could pick out the most scientifically interesting rocks.
Training for textures
Future rovers would have more smarts than Curiosity, the comparatively "brainless" rover now exploring the Red Planet: it is highly capable of gathering data, but it cannot perform the science analysis by itself, TextureCam officials explained.
Curiosity can autonomously zero in on the rocks it needs to photograph, but it has to beam the pictures back to Earth for scientists to analyze remotely. If Curiosity is out of range of a Mars orbiter, the transmissions are painfully slow, roughly 250 times slower than what a person on Earth experiences on a typical 3G cell-phone network.
TextureCam would instead take a 3D picture of the rock using stereo cameras. A processor separate from the rover's main computer would then scan the image for textures, allowing the machine to distinguish rocks, sand and the background sky.
The processor could also determine the rocks' size and distance, as well as whether they contain layers that could be important for science analysis. It would then prioritize its transmissions to Earth, sending the most interesting targets back to controllers first.
"You do have to provide it with some initial training, just like you would with a human, where you give it example images of what to look for," Wagstaff said. "But once it knows what to look for, it can make the same decisions we currently do on Earth."
The technology could fly on NASA's 2020 Mars rover mission, or on trips to more distant destinations such as Europa, project scientists said.
Source of Article: Space.com