AI technology creates 3D objects from 2D photos in record time
San Francisco, Apr. 6, 2022
Researchers from Google, Waymo and UC Berkeley used AI technology to create 3D renderings of San Francisco neighborhoods based on millions of photos.
Their technique, called "Block-NeRF," reconstructed the large-scale scenes from images taken by self-driving vehicles.
The technique builds on "neural radiance fields" (NeRF), a method introduced in 2020 by researchers at UC Berkeley and Google that learns color and light from 2D photos. It is now being used to create 3D objects from 2D photos in seconds.
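At its core, a NeRF is a small neural network that maps a 3D position and viewing direction to a color and a density, which a renderer then accumulates along camera rays. The following is a minimal illustrative sketch of that mapping, not Google's or Waymo's actual model: the network here is a tiny, randomly initialized MLP (an untrained stand-in), and the `positional_encoding` frequencies and layer sizes are assumptions chosen for brevity.

```python
import numpy as np

def positional_encoding(x, num_freqs=4):
    # Map each coordinate to sin/cos features at increasing frequencies,
    # in the spirit of NeRF's positional encoding; num_freqs is an assumption.
    freqs = 2.0 ** np.arange(num_freqs)           # (F,)
    angles = x[..., None] * freqs * np.pi         # (..., 3, F)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)         # (..., 3 * 2F)

class TinyNeRF:
    """Toy MLP: (position, view direction) -> (RGB color, density sigma)."""
    def __init__(self, num_freqs=4, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = 3 * 2 * num_freqs + 3            # encoded position + raw direction
        self.num_freqs = num_freqs
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, 4))  # 3 color channels + 1 density

    def __call__(self, xyz, viewdir):
        feat = np.concatenate(
            [positional_encoding(xyz, self.num_freqs), viewdir], axis=-1)
        h = np.maximum(feat @ self.w1, 0.0)          # ReLU hidden layer
        out = h @ self.w2
        rgb = 1.0 / (1.0 + np.exp(-out[..., :3]))    # sigmoid keeps color in [0, 1]
        sigma = np.maximum(out[..., 3], 0.0)         # density is non-negative
        return rgb, sigma

model = TinyNeRF()
points = np.random.default_rng(1).uniform(-1, 1, (5, 3))
dirs = np.tile(np.array([0.0, 0.0, 1.0]), (5, 1))
rgb, sigma = model(points, dirs)
```

Training such a network against millions of posed photographs is what lets it later synthesize views of the scene from new camera positions.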
In this case, researchers fed Block-NeRF 2.8 million images taken over a three-month period (day and night) by cameras mounted on Waymo vehicles. The images were used to reconstruct the Alamo Square neighborhood in SF. Researchers used multiple NeRFs to combine all renderings based on the visual data.
Each block-NeRF represents a different block of the neighborhood. The researchers also recreated the Bay Bridge and Embarcadero area, Moscone Center, and downtown San Francisco, each of which required millions of photos to render in 3D.
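When a view is rendered near the seam between neighborhood blocks, the renderings from the relevant block-NeRFs have to be combined into one image. A simple way to sketch this idea is an inverse-distance-weighted blend of per-block renderings; the weighting power and the treatment of block centers below are illustrative assumptions, not the published method's exact compositing procedure.

```python
import numpy as np

def blend_block_renders(camera_pos, block_centers, block_rgbs, power=4):
    """Blend renderings from several blocks, weighted by proximity.

    camera_pos:    (3,) camera position
    block_centers: (B, 3) center of each block's region
    block_rgbs:    (B, H, W, 3) one rendered image per block
    power:         falloff exponent (an assumption; higher = sharper handoff)
    """
    dist = np.linalg.norm(block_centers - camera_pos, axis=-1)  # (B,)
    weights = 1.0 / np.maximum(dist, 1e-8) ** power
    weights /= weights.sum()                                    # normalize to 1
    # Weighted sum over the block axis -> a single (H, W, 3) image.
    return np.tensordot(weights, block_rgbs, axes=1)

# Two toy 2x2 "renderings": one all-red block, one all-blue block.
centers = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
red = np.tile(np.array([1.0, 0.0, 0.0]), (2, 2, 1))
blue = np.tile(np.array([0.0, 0.0, 1.0]), (2, 2, 1))
renders = np.stack([red, blue])

# A camera sitting at the first block's center is dominated by that block.
near_first = blend_block_renders(np.array([0.0, 0.0, 0.0]), centers, renders)
# A camera midway between the blocks gets an even mix.
midway = blend_block_renders(np.array([5.0, 0.0, 0.0]), centers, renders)
```

The falloff exponent controls how quickly one block's contribution fades as the camera moves away from it, which is what keeps seams between neighboring blocks from being visible.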