April 23 (UPI) -- A new 3D motion tracking system could help autonomous technologies navigate their environs without the help of cameras or LiDAR.
The system utilizes nanoscale graphene photodetectors, which are highly sensitive to light.
Light absorbed by the photodetectors can be used to generate images in real time, helping autonomous technologies "see" and move through their surroundings.
Scientists described the technology's potential in a new paper, published Friday in the journal Nature Communications.
"The in-depth combination of graphene nanodevices and machine learning algorithms can lead to fascinating opportunities in both science and technology," lead author Dehui Zhang said in a news release.
"Our system combines computational power efficiency, fast tracking speed, compact hardware and a lower cost compared with several other solutions," said Zhang, a doctoral student in electrical and computer engineering at the University of Michigan.
Because the photodetectors are designed to absorb just 10 percent of the light they're exposed to, they appear nearly transparent.
To produce 3D images, researchers stacked the photodetectors, allowing each layer to image a specific focal plane.
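Because each layer absorbs only a fraction of the light, each successive layer in the stack receives a slightly dimmer signal. The sketch below illustrates that idea using the article's 10 percent figure; the layer count and the helper function are illustrative, not details from the study.

```python
# Hypothetical sketch: each nearly transparent photodetector layer absorbs
# ~10% of the light reaching it (per the article) and passes the rest down
# to the next layer, which images a deeper focal plane.

def light_per_layer(num_layers, absorb_fraction=0.10):
    """Return the fraction of the original light each layer absorbs,
    plus the fraction that passes through the whole stack."""
    remaining = 1.0
    absorbed = []
    for _ in range(num_layers):
        captured = remaining * absorb_fraction
        absorbed.append(captured)
        remaining -= captured
    return absorbed, remaining

layers, passed_through = light_per_layer(4)
# With four layers, the first absorbs 10% of the light, the second 9%,
# the third 8.1%, and about 65.6% passes through the entire stack.
```

The falloff stays gentle precisely because each layer is nearly transparent, which is what makes stacking many focal planes practical.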
Of course, seeing the surroundings is just part of the challenge for autonomous technologies. Whether it's a submersible exploring the deep sea or a robot building a car, autonomous technologies also have to sense how they're moving through space.
Typically, LiDAR systems and light-field cameras -- with the help of computer algorithms -- help autonomous technologies sense their movements, but these systems have a variety of limitations.
To grant their new 3D motion tracking system spatial intelligence, researchers paired the graphene photodetectors with a neural network.
Computer engineers trained the network to survey an entire scene and home in on and track specific objects, like a pedestrian about to enter a crosswalk or a merging car.
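Once a network has picked out object positions in each frame, tracking amounts to associating detections across successive frames. The greedy nearest-neighbor matcher below is a simplified stand-in for that association step, not the study's neural network; the object names and coordinates are invented for illustration.

```python
# Illustrative stand-in for the tracking stage: match each previously
# tracked object to its nearest detection in the new frame.
import math

def track(prev_positions, curr_detections):
    """Greedily pair each tracked object with its closest new detection."""
    matches = {}
    unclaimed = list(curr_detections)
    for obj_id, (px, py) in prev_positions.items():
        if not unclaimed:
            break
        nearest = min(unclaimed,
                      key=lambda d: math.hypot(d[0] - px, d[1] - py))
        matches[obj_id] = nearest
        unclaimed.remove(nearest)
    return matches

prev = {"pedestrian": (2.0, 1.0), "car": (10.0, 4.0)}
curr = [(10.5, 4.2), (2.3, 1.1)]
# The pedestrian is matched to (2.3, 1.1) and the car to (10.5, 4.2).
```

A real system would use the network's learned features rather than raw distance to disambiguate nearby objects, but the frame-to-frame association problem is the same.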
The network, which is particularly well suited for stable environments, could be used to guide automated medical technologies or manufacturing robots.
"It takes time to train your neural network," said co-author Ted Norris, project leader and professor of electrical and computer engineering at Michigan. "But once it's done, it's done. So when a camera sees a certain scene, it can give an answer in milliseconds."
To develop their neural network, scientists augmented signal processing algorithms used for other imaging systems, including X-ray and MRI technologies.
In the lab, scientists paired their network with two small photodetector arrays. Using the setup, researchers successfully tracked the movement of a laser beam, as well as a ladybug.
Though the technology is still in its infancy, researchers are confident in the potential of their 3D motion tracking system.
According to the study's authors, the neural network can be easily scaled up to produce higher resolution images. Researchers say the production process for the graphene photodetectors can also be easily scaled.
"Graphene is now what silicon was in 1960," Norris said. "As we continue to develop this technology, it could motivate the kind of investment that would be needed for commercialization."