A team of UCLA bioengineers and a former postdoctoral scholar have developed a new class of bionic 3D camera systems that mimic flies' multiview vision and bats' natural sonar sensing, achieving multidimensional imaging with extraordinary depth range that can also scan through blind spots.
Powered by computational image processing, the camera can decipher the shape of objects hidden around corners or behind other items. The technology could be incorporated into autonomous vehicles or medical imaging tools with sensing capabilities far beyond what's considered state of the art today. The research has been published in Nature Communications.
At night, bats can conjure a vivid picture of their surroundings using a form of echolocation, or sonar. Their high-frequency squeaks bounce off their surroundings and are picked back up by their ears. The minuscule differences in how long it takes for the echo to reach the nocturnal animals, and the intensity of the sound, tell them in real time where things are, what's in the way and the proximity of potential prey.
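The timing principle behind echolocation can be shown in a few lines. This is a minimal sketch (not from the study) assuming sound travels at about 343 m/s in air: because the squeak travels out and back, the round-trip delay is halved before converting to distance.

```python
# Minimal sketch: estimating distance from an echo delay,
# the same principle a bat's sonar relies on.

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def echo_distance(round_trip_s: float) -> float:
    """Distance to a reflector: the sound travels out and back,
    so halve the round-trip time before multiplying by speed."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

# An echo returning after 10 ms puts the target about 1.7 m away.
print(round(echo_distance(0.010), 3))  # 1.715
```

The same halved round trip underlies any pulse-echo ranging scheme; only the propagation speed changes with the medium and signal.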
Many insects have geometric-shaped compound eyes, in which each "eye" comprises hundreds to thousands of individual units for sight, making it possible to see the same thing from multiple lines of sight. For instance, flies' bulbous compound eyes give them a near-360-degree view, even though their eyes have a fixed focal length, which makes it difficult for them to see anything far away, such as a flyswatter held aloft.
Inspired by these two natural phenomena found in flies and bats, the UCLA-led team set out to design a high-performance 3D camera system with advanced capabilities that leverage these advantages while also addressing nature's shortcomings.
"While the idea itself has been tried, seeing across a range of distances and around occlusions has remained a major hurdle," said study leader Liang Gao, an associate professor of bioengineering at the UCLA Samueli School of Engineering. "To address that, we developed a novel computational imaging framework, which for the first time enables the acquisition of a wide and deep panoramic view with simple optics and a small array of sensors."
Called "Compact Light-field Photography," or CLIP, the framework allows the camera system to "see" with an extended depth range and around objects. In experiments, the researchers demonstrated that their system can "see" hidden objects that are not spotted by conventional 3D cameras.
The researchers also used a form of LiDAR, or "Light Detection and Ranging," in which a laser scans the environment to create a 3D map of the area.
Conventional LiDAR, without CLIP, would take a high-resolution snapshot of the scene but miss hidden objects, much as our own eyes would.
Using seven LiDAR cameras with CLIP, the array takes a lower-resolution image of the scene, processes what the individual cameras see, then reconstructs the combined scene in high-resolution 3D. The researchers demonstrated that the camera system could image a complex 3D scene with several objects, all set at different distances.
"If you're covering one eye and looking at your laptop, and there's a coffee mug just slightly hidden behind it, you might not see it, because the laptop blocks the view," explained Gao, who is also a member of the California NanoSystems Institute. "But if you use both eyes, you'll notice you'll get a better view of the object. That's sort of what's happening here, but now imagine seeing the mug with an insect's compound eye. Now multiple views of it are possible."
According to Gao, CLIP helps the camera array make sense of what's hidden in a similar way. Combined with LiDAR, the system can achieve the bat echolocation effect, sensing a hidden object by how long it takes for light to bounce back to the camera.
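The light-based version of that round-trip timing can be sketched the same way. A minimal illustration (an assumption about the general time-of-flight principle, not the paper's specific implementation): depth is half the round-trip time multiplied by the speed of light, and the numbers show why LiDAR needs nanosecond-or-better timing.

```python
# Minimal sketch: the time-of-flight principle behind LiDAR ranging.

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth(round_trip_s: float) -> float:
    """Depth of a reflector from the laser pulse's round-trip time:
    the light travels out and back, so halve the time before scaling."""
    return C * round_trip_s / 2.0

# A 1-nanosecond round trip corresponds to roughly 15 cm of depth,
# so centimeter-scale depth maps demand sub-nanosecond timing.
print(round(tof_depth(1e-9), 3))  # 0.15
```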
The co-lead authors of the published research are UCLA bioengineering graduate student Yayao Ma, a member of Gao's Intelligent Optics Laboratory, and Xiaohua Feng, a former UCLA Samueli postdoc who worked in Gao's lab and is now a research scientist at the Research Center for Humanoid Sensing at Zhejiang Laboratory in Hangzhou, China.
More info: Xiaohua Feng et al, Compact light field photography towards versatile three-dimensional vision, Nature Communications (2022). DOI: 10.1038/s41467-022-31087-9
Citation: Bug eyes and bat sonar: Bioengineers turn to animal kingdom for creation of bionic super 3D cameras (2022, August 12) retrieved 14 August 2022 from https://phys.org/news/2022-08-bug-eyes-sonar-bioengineers-animal.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.