A Brief Analysis of the Principles of Depth Cameras: Structured Light, TOF, and Stereo Vision

In today’s technological age, smartphones have become an indispensable part of our daily lives. When you pick up your phone, facial recognition features activate swiftly to instantly unlock the screen. This all happens so quickly that you may have never truly considered the technical principles behind it. Likewise, when you see robots adeptly navigating complex environments and avoiding obstacles, do you wonder how they perceive depth in order to make judgments?

Depth cameras, a magical technology, open up a whole new visual world for us. A depth camera is not simply an ordinary camera; rather, it combines various advanced technologies to capture three-dimensional information about objects. Today we will briefly introduce the principles behind depth cameras, hoping this article gives you a better basis for selecting one for your applications.

Please note that the brands and products mentioned in this article are purely for explaining relevant principles and do not constitute purchase recommendations, so please feel free to continue reading.

Introduction to Depth Cameras

What are depth cameras?

Simply put, a depth camera is a special kind of camera. Not only can it photograph how objects look, it can also sense how far objects are from it. Imagine shining a flashlight at a wall: the beam not only reveals the wall's color and shape, it also gives you a sense of how far away the wall is. In the same way, depth cameras provide a three-dimensional view that helps us better understand the surrounding environment.

Scientifically speaking, depth cameras are actually a type of specialized imaging device that can capture the depth information of each pixel in a scene - that is, the distance from the pixel to the camera. This type of imaging device not only records the appearance of objects, but also measures the position of objects in three-dimensional space.
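To make this concrete, here is a minimal sketch (all values invented for illustration) of per-pixel depth as a small 2-D grid of distances, and one way an application might use it:

```python
# A depth map is a 2-D grid in which each cell holds the distance
# (here in meters) from the camera to the scene at that pixel.
depth_map = [
    [2.1, 2.0, 0.6],
    [2.1, 0.5, 0.6],
    [2.2, 2.1, 2.0],
]

# A robot, for instance, can flag every pixel closer than one meter
# as a potential obstacle.
obstacles = [(row, col)
             for row, line in enumerate(depth_map)
             for col, dist in enumerate(line)
             if dist < 1.0]
print(obstacles)  # the three pixels closer than 1 m
```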

The output of a depth camera is usually a depth map, where the value of each pixel represents the distance from that pixel to the camera. This depth information is very useful in many applications, such as robot navigation, augmented reality, gesture recognition, and more. Depth cameras typically work based on the following technologies:

Structured Light Cameras

When it comes to structured light depth cameras, we have to mention the Kinect. The Kinect is a depth camera launched by Microsoft, originally designed for its Xbox gaming platform to allow players to control games through body movements. One of the core technologies of the Kinect is structured light depth imaging.

The basic principle of structured light depth imaging is:

  1. Light projection: An infrared light source inside the Kinect projects a series of specific infrared light patterns (such as dot-matrix patterns) onto the scene.
  2. Pattern deformation: When these infrared light patterns are reflected by objects in the scene, the patterns become deformed according to the geometric shapes and positions of the objects.
  3. Infrared camera capture: The Kinect is also equipped with an infrared camera that captures the reflected light patterns.
  4. Depth calculation: By comparing the original projected patterns with the deformed reflected patterns, the Kinect can calculate the depth of each pixel - that is, the distance from that pixel to the camera.
  5. Depth map generation: From this depth information, the Kinect generates a depth map in which the value of each pixel represents the distance from that pixel to the camera.
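The depth calculation step above is essentially triangulation: the projector and the infrared camera sit a known baseline apart, so the sideways shift of each dot encodes its depth. A hedged sketch of that relation under a simplified pinhole model (the function name and all numbers are illustrative, not the Kinect's actual parameters):

```python
def depth_from_dot_shift(focal_px, baseline_m, shift_px):
    """Triangulate depth from how far a projected dot appears shifted
    between the reference pattern and the captured pattern.

    focal_px:   camera focal length, in pixels
    baseline_m: projector-to-camera baseline, in meters
    shift_px:   observed dot shift (disparity), in pixels
    """
    if shift_px <= 0:
        raise ValueError("shift must be positive")
    return focal_px * baseline_m / shift_px

# A dot shifted 20 px, with a 580 px focal length and a 7.5 cm
# baseline, lies 580 * 0.075 / 20 = 2.175 m away.
print(depth_from_dot_shift(580, 0.075, 20))
```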

Besides the Kinect and similar consumer devices, the other common structured light depth cameras are those used in reverse-engineering metrology. These typically project a sequence of stripe patterns onto the object being measured, halving the stripe width from one pattern to the next and, at appropriate points, switching to vertical stripes. Because the stripe widths are known, the geometry of the measured object can be determined quite accurately, yielding relatively precise measurement data. However, since multiple projections are required, this method needs more time and computing power, and the measured object must remain largely still.
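The halving-stripe scheme works because the sequence of bright/dark observations at a pixel spells out a binary index of the stripe that pixel lies in. A toy decoder as a sketch (plain binary coding; real systems often use Gray code for robustness at stripe boundaries):

```python
def decode_stripe_index(bits):
    """Recover the stripe index of a pixel from its bright/dark (1/0)
    observations across successively halving-width stripe patterns,
    coarsest pattern first (plain binary coding)."""
    index = 0
    for bit in bits:
        index = (index << 1) | bit
    return index

# A pixel seen bright, dark, bright over three patterns lies in
# stripe 0b101 = 5 of the 8 stripes those patterns distinguish.
print(decode_stripe_index([1, 0, 1]))
```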

Beyond the Kinect, many other depth cameras now also use structured light. For example:

  1. Apple Face ID: Apple's Face ID technology also uses structured light techniques to capture facial information from users to enable facial recognition capabilities. It is used on iPhone X and later iPhone models, as well as iPad Pro.
  2. Intel RealSense: This is a series of structured light depth cameras launched by Intel, mainly for computer vision and gesture recognition applications.

Advantages and Disadvantages:

Advantages:
  1. High precision: Structured light techniques generally provide relatively high depth accuracy, especially for close-range imaging.
  2. Stability: With fixed ambient lighting conditions, structured light cameras can provide stable depth information.
  3. Cost-effectiveness: Manufacturing costs may be lower compared to some other depth imaging technologies.
  4. Compact size: Structured light components can be designed to be relatively compact, suitable for integration into mobile devices and other small form factors.
  5. No need for external light source: Structured light cameras have built-in projection modules, so no additional external light source is needed.

Disadvantages:
  1. Ambient light interference: Performance may be affected under intense sunlight or other strong lighting conditions.
  2. Limited working range: Structured light cameras typically have an optimized working distance range, beyond which accuracy may decrease.
  3. Complexity: Structured light systems require precise calibration and synchronization to ensure accurate alignment between projection and detection modules.
  4. Processing time: Computing and decoding structured light patterns may require more processing time, especially for high-resolution imaging.
  5. Reliance on object surface properties: Some object surfaces like transparent, highly reflective, or very dark surfaces may not be suitable for structured light imaging because they may not correctly reflect the structured light patterns.

Application Scenarios

Structured light is generally used in close-range, precision-oriented areas such as facial recognition (for example, Apple's Face ID), motion-sensing gaming (for example, the Kinect), gesture recognition, and reverse-engineering metrology.

Time-of-Flight (TOF) Cameras

With the development of TOF technology, more and more manufacturers have started offering depth cameras based on TOF.

The basic principle of TOF is to use an infrared emitter to emit modulated light pulses. When the light pulses reflect off an object and return to the receiver, the time of flight of the light pulse is used to calculate the distance to the object. This modulation method places very high requirements on the emitter and receiver - light travels so fast that extremely high precision is needed to measure time.

In practical systems, the light is usually amplitude-modulated as a continuous wave (typically a sine wave). When it is diffusely reflected by an obstacle, a specially designed CMOS sensor receives the reflected wave, which has undergone a phase shift relative to the emitted wave. The distance to the object can then be calculated from this phase shift.
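For continuous-wave modulation, the phase shift maps to distance as d = c·Δφ / (4π·f). A minimal sketch of that formula (the 10 MHz modulation frequency is chosen purely for illustration):

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(phase_shift_rad, mod_freq_hz):
    """Distance from the measured phase shift of a sinusoidally
    modulated light signal: d = c * phi / (4 * pi * f).
    The result is unambiguous only up to c / (2 * f)."""
    return SPEED_OF_LIGHT * phase_shift_rad / (4 * math.pi * mod_freq_hz)

# At 10 MHz the unambiguous range is c / (2f), roughly 15 m; a
# half-cycle phase shift (pi radians) corresponds to about 7.49 m.
print(round(tof_distance(math.pi, 10e6), 2))
```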

The principle is not complex, but achieving high measurement precision with miniaturized, integrated emitters and receivers is difficult.

For example, DFRobot recently launched the RGB-D 3D TOF Sensor Camera. The VCSEL is responsible for emitting infrared lasers, while the TOF camera receives the reflected signals. Meanwhile, the RGB camera also captures color images of the scene.

Since measuring the time of flight of light requires extremely high frequencies and precision, early TOF devices had issues with size and cost, limiting their use mostly to industrial applications. The miniaturization of TOF relies heavily on breakthroughs in integrated circuit and sensor technologies in recent years, making on-chip measurements of light pulse phases increasingly feasible. Having chip-level solutions paved the way for smaller, lower cost products.

Some good TOF depth cameras currently on the market include:

  1. CS20 Dual-Resolution 3D TOF Solid-state LiDAR (5V, 5m): a TOF image sensor with 640*480 resolution.
  2. 3D ToF Depth Sensor Camera (supports ROS1 and ROS2): integrates a BL702 and a Juyou 100x100 TOF sensor, measures distances up to 1.5 m, and includes a 1.14-inch LCD screen for real-time depth map previews.

Advantages and Disadvantages:

Advantages:
  1. Real-time: TOF cameras can provide depth information in real time, suitable for applications requiring fast response.
  2. Wide working range: Compared to other technologies, TOF cameras typically have larger working ranges and can measure depth at longer distances.
  3. Environmental adaptability: TOF cameras are less affected by ambient light interference so can work in various lighting conditions.
  4. Compact size: TOF sensors can be designed to be very compact for integration into various devices.
  5. No need for complex algorithms: Since TOF cameras directly measure time of flight, complex algorithms to calculate depth are not needed.

Disadvantages:
  1. Precision issues: In some cases, TOF cameras may not provide depth information as precise as other technologies like structured light.
  2. Multipath interference: In complex scenes, light can reflect multiple times, leading to errors in depth calculations.
  3. High power consumption: Some TOF cameras may consume relatively high power when running continuously.
  4. Cost: High resolution, high precision TOF cameras can be costly.
  5. Reactivity to certain materials: TOF cameras may have difficulty obtaining accurate depth information for transparent, highly reflective, or other types of materials.

Application Scenarios:

TOF is generally used in areas that need fast response over longer distances, such as robot navigation and obstacle avoidance, real-time motion tracking, and industrial measurement.

Stereo Vision Cameras

Stereo vision uses the principles of triangulation to obtain 3D information - that is, the image planes of two cameras and the measured object form a triangle. Given the positional relationship between the two cameras and the coordinates of the object in their left and right images, the 3D dimensions and spatial coordinates of feature points on the object can be obtained. Thus, stereo vision systems generally consist of two cameras.

Since stereo vision only requires two cameras, it is easy to build your own stereo depth camera system, and OpenCV already provides mature algorithms and tools for the task.
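As a rough sketch of the idea behind OpenCV's block-matching stereo algorithms (StereoBM/StereoSGBM), here is a toy one-dimensional version: slide a small patch from the left scanline along the right scanline, pick the shift with the lowest sum of absolute differences, then triangulate. All parameter names and values here are illustrative:

```python
def best_disparity(left_row, right_row, x, patch=2, max_disp=8):
    """Disparity of pixel x in the left scanline, found by sliding a
    (2*patch+1)-pixel window along the right scanline and minimizing
    the sum of absolute differences (SAD)."""
    ref = left_row[x - patch: x + patch + 1]
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        xr = x - d            # the same point appears shifted left
        if xr - patch < 0:    # in the right camera's image
            break
        cand = right_row[xr - patch: xr + patch + 1]
        cost = sum(abs(a - b) for a, b in zip(ref, cand))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# A bright feature at x=12 in the left row appears at x=9 in the
# right row, so its disparity is 3 pixels.
left = [10] * 20
right = [10] * 20
left[11:14] = [80, 200, 80]
right[8:11] = [80, 200, 80]
print(best_disparity(left, right, 12))
```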

Advantages and Disadvantages:

Advantages:
  1. Natural depth perception: Stereo cameras mimic human binocular vision to provide natural depth awareness.
  2. No need for external light source: Unlike structured light or TOF techniques, stereo cameras do not require external light sources or lasers to measure depth.
  3. Cost-effectiveness: Stereo cameras are typically cheaper than other depth camera technologies since they do not require additional light sources or sensors.
  4. Environmental adaptability: Stereo cameras work under various lighting conditions, including outdoors.
  5. Simple hardware requirements: Stereo cameras mainly rely on two standard cameras without the need for specialized sensors or light sources.

Disadvantages:
  1. Computationally intensive: Stereo depth estimation requires extensive computations, especially at high imaging resolutions.
  2. Limited depth range: The depth perception range of stereo cameras is limited by the baseline, i.e. the distance between the two cameras.
  3. Reliance on texture: Stereo cameras may have difficulty accurately estimating depth in low texture or repeating texture scenes.
  4. Disparity errors: In some cases like fast moving objects or imperfect camera alignment, parallax errors can occur.
  5. Scene dependence: For certain scenes like transparent, reflective, or very dark scenes, stereo cameras may have trouble obtaining accurate depth information.

Application Scenarios:

Stereo vision is generally used in areas such as robot navigation and outdoor perception under varied lighting, and in other applications where low cost and simple hardware matter.

Comparing these Technologies

In fact, many depth cameras now incorporate more than one depth measurement technique. While structured light provides high precision, TOF is faster and more versatile, and stereo vision may be more natural and cost-effective.

In terms of accuracy, structured light has a greater advantage as it provides more information. But structured light requires more demanding encoding and decoding, so it is not suitable for low latency scenarios. It also needs additional hardware to project the structured patterns onto targets, increasing costs.

For TOF technology, precision may not be its strong suit - light travels so fast that even very precise electronic components have difficulty achieving millimeter-level accuracy. But the benefit of TOF is that it obtains depth data directly without requiring the system to calculate it. This provides a natural advantage in real-time motion scenarios. Using modulated lasers also means it has excellent interference resistance.

Stereo vision can be considered a more universal technique that people discovered early on by mimicking human binocular vision. Stereo vision does not require complex hardware - just two cameras. In the past when computing power was insufficient, depth data acquisition may have been challenging, but this is no longer a concern. It also has its own problems though - stereo vision may not work as well in dim lighting, since insufficient light means insufficient information to perform calculations. Structured light and TOF do not appear to have this issue.


Depth cameras have completely changed how machines see and interact with the world. Whether structured light, TOF, or stereo vision, each technology has its own advantages and disadvantages.

Technology summary:

Structured light depth cameras: Project known light patterns onto the scene and decode depth information from the reflected light. Provide high-precision depth data, but may be affected by ambient light and surface material properties.

TOF depth cameras: Calculate depth by measuring the time of flight of a light signal between emission and reflection. Provide real-time depth suitable for applications needing fast response, but may not match the precision of structured light in some cases.

Stereo depth cameras: Mimic human binocular vision, calculating depth by comparing the parallax between images from two cameras. Provide natural depth perception, but depth estimation requires extensive computation.


We believe that with technological advancement, depth camera technologies will integrate further rather than being considered separately. Many products already combine these technologies in complementary ways, like the RGB-D 3D TOF camera and some Intel RealSense models that incorporate both stereo vision and structured light. We will see more applications of these depth cameras on robots in the future, as robots and humans will co-exist in the physical world going forward.

So how would you use depth cameras? We look forward to your discussion on the Forum.