3D Depth Mapping Xilinx FPGA IP

Omnitek’s 3D Depth Mapping IP allows depth calculations to be made from the 3D information contained within left and right video streams. The results can then be displayed graphically in the form of Depth (distance) maps and histograms.

Functional block diagram of the 3D Depth Mapping IP

The diagram above illustrates the use of the 3D Depth Mapping IP with Omnitek's Image Signal Processor (ISP) IP and Warp IP to process images taken directly from a pair of cameras ('left' and 'right') via the MIPI camera interfaces. Each camera output is fed into an ISP pipeline, which is configured to crop the image as required and correct any camera anomalies before the images are de-Bayered by the Colour Filter Array stage. Each image is then passed to the Omnitek Warp Processor to correct any lens distortion and apply any size and shape transformations that are required before the images are analysed by the 3D Depth Mapping IP.
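To make the flow of data concrete, the following is a minimal software analogue of this pipeline written in Python with OpenCV. It is a sketch only: the camera matrix, distortion coefficients and Bayer pattern are placeholder assumptions, and the actual Omnitek IP cores implement these stages in FPGA fabric rather than software.

```python
import cv2
import numpy as np

def isp_and_warp(raw_bayer, K, dist):
    """ISP + Warp stages: de-Bayer the raw sensor data, then correct lens distortion."""
    rgb = cv2.cvtColor(raw_bayer, cv2.COLOR_BayerRG2BGR)  # Colour Filter Array interpolation
    return cv2.undistort(rgb, K, dist)                    # lens-distortion correction

def depth_mapping(left_bgr, right_bgr):
    """3D Depth Mapping stage: compute a disparity map from the corrected pair."""
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
    # SGBM returns disparity in 16.4 fixed point, hence the divide by 16
    return matcher.compute(left, right).astype(np.float32) / 16.0
```

The ordering matters in hardware as well as software: lens correction and warping must be applied before matching so that corresponding points in the left and right images lie on the same scan line.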

Applications for such 3D analysis include stereo microscopy, virtual reality, augmented reality, automotive driver assist systems (ADAS), drones and surveillance systems, to name just a few.

Typically the 3D Depth Mapping IP is used with Omnitek's Image Signal Processor (ISP) IP and Warp IP to perform lens correction and perspective mapping before the 3D depth calculations are performed, as shown in the diagram above.



Key Features:

  • Very small FPGA resource footprint
  • Very low latency
  • Image resolutions up to 4096 pixels × 2160 lines at up to 120Hz
  • Wide Dynamic Range support
  • Automated disparity map calculation for L/R images
  • Automated 3D geometry extraction
  • 3D Image Correction
  • Real time geometry correction for 3D camera toe-in or other 3D rig artefacts using Warp IP
  • Bare Metal and Linux Support Libraries
  • Fully compatible with Omnitek OSVP Suite, HDR, Warp and other IP Cores to provide a comprehensive image processing package.

Applications

The 3D Depth Mapping Subsystem can be used in a range of applications including:

  • Stereo Microscopy
  • Medical Imaging
  • Virtual & Augmented Reality
  • Automotive Driver Assist Systems (ADAS)

Implementation Examples:

Depth Mapping

Using the left and right input images, a Depth Map can be generated by calculating, for each pixel, the distance of each point in the left image compared with the corresponding point in the right image. From this, the 3D software colour-codes the display from red to violet to indicate distance; a simplified sketch of this calculation is given below the images.

Left Image

Right Image

The thistle image above shows the colour mapping of the image elements according to their relative distance from the viewer or screen.
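As an illustration of this calculation, the sketch below converts a disparity map (such as the one produced in the pipeline sketch earlier) into a colour-coded depth image. The focal length, baseline and depth range are placeholder calibration values, and an OpenCV colormap stands in for the red-to-violet palette used by the Omnitek tools.

```python
import cv2
import numpy as np

def disparity_to_depth(disparity, f=1200.0, baseline=0.065):
    """Depth in metres from disparity in pixels: Z = f * B / d (placeholder f and B)."""
    d = np.where(disparity > 0, disparity, np.nan)  # mask pixels with no valid match
    return f * baseline / d

def colour_code(depth, near=0.5, far=10.0):
    """Normalise depth into an 8-bit range and apply a rainbow colormap."""
    clipped = np.clip(np.nan_to_num(depth, nan=far), near, far)
    scaled = ((clipped - near) / (far - near) * 255).astype(np.uint8)
    return cv2.applyColorMap(scaled, cv2.COLORMAP_RAINBOW)
```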


3D Depth Projection

Using the 3D Depth Map information, 3D Projection can be used to calculate the XYZ coordinates for every point in the display. This then allows the 3D Depth Map to be positioned in 3D space.

The image above shows the 3D depth-mapped image rotated in X, Y, Z space to reveal the elements in front of and behind the screen/viewing plane.
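A minimal sketch of this back-projection is shown below, assuming a simple pinhole camera model; the focal length and principal point are placeholders and would come from camera calibration in practice.

```python
import numpy as np

def depth_to_xyz(depth, fx=1200.0, fy=1200.0, cx=None, cy=None):
    """Back-project a depth map into per-pixel XYZ coordinates (pinhole model)."""
    h, w = depth.shape
    cx = w / 2.0 if cx is None else cx   # principal point defaults to the image centre
    cy = h / 2.0 if cy is None else cy
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    X = (u - cx) * depth / fx            # horizontal offset from the optical axis
    Y = (v - cy) * depth / fy            # vertical offset from the optical axis
    return np.stack([X, Y, depth], axis=-1)   # shape (H, W, 3)
```

Once every pixel has an XYZ coordinate, the whole point cloud can be rotated or translated with a single matrix multiplication to view the scene from any angle.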


3D Depth Plan

Using the 3D Depth Map information, a 3D Depth Plan can be generated to establish the total depth of an object.

The image above shows the 3D depth-mapped image viewed from above to display the maximum and minimum depth budget of the elements in front of and behind the screen/viewing plane.
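The sketch below illustrates one way such a depth budget could be measured from the depth map: relative to an assumed screen (convergence) depth, it reports how far the nearest and farthest elements sit in front of and behind the viewing plane. The screen depth and object mask are assumptions for illustration only.

```python
import numpy as np

def depth_budget(depth, screen_depth, mask=None):
    """Return (front, behind): extreme depths relative to the screen/viewing plane."""
    valid = np.isfinite(depth) if mask is None else (mask & np.isfinite(depth))
    relative = depth[valid] - screen_depth   # negative values lie in front of the screen
    return relative.min(), relative.max()
```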

The examples shown have been generated using the Depth Mapping tools of Omnitek’s OTM and OTR Waveform Analysers.