Project 3 – Computer Vision – Stereo Depth Estimation
This is a class project for CSCI 4527/6527 Computer Vision at George Washington University.
1. Overview
In this project, we explore stereo vision, an important module of computer vision. Humans can recognize the objects around them thanks to the visual system, and we can also estimate whether an object is near or far. If we want robots to perceive the world in a similar way, we must give them "eyes": not only a system that can recognize objects, but one that can also judge how far away each object is. A stereo vision system meets this requirement well.
2. How It Works
2.1 The principle of stereo vision
In general, a stereo vision system requires the support of two (or more) cameras, much like a pair of human eyes. In a stereo system, if we can find corresponding points in the images of the same scene taken by the left and right cameras, we can use triangulation to recover the depth.
It can be clearly seen from figure 1 that, in a stereo system, the projections of the space points P and Q onto the target plane T do not land on the same point; each has its own image point, p' and q' respectively.
When we have a Reference image and a Target image, how do we establish the correspondence between them? This is where the epipolar constraint comes in. Once we know the epipolar geometry of the stereo system, as figure 2 shows, the two-dimensional search for matching features between the two images reduces to a one-dimensional search along the epipolar lines. Shrinking the search region from 2D to 1D makes stereo matching much more tractable. In our case the left and right image planes are parallel (a rectified setup), and our focus is on how to compute the depth of an object, which is the most critical step. The depth calculation for this standard stereo configuration is given below.
In figure 3, we assume that P is a point in space, OR is the optical center of the left camera, OT is the optical center of the right camera, f is the focal length of the cameras (the distance from the optical center to the imaging plane), and the imaging plane is drawn as a pink line. B denotes the distance between the two cameras' optical centres, also called the baseline. The image points of P in the left and right imaging planes are p and p' respectively, with horizontal coordinates xR and xT. Z is the depth we want to recover.
According to the triangle similarity theorem,

(B - (xR - xT)) / (Z - f) = B / Z,

which rearranges to

Z = f * B / (xR - xT) = f * B / d,

where d = xR - xT is what we usually call the disparity.
In image processing, we usually encode disparity as a grey value: the greater the disparity, the greater the grey value, so nearby objects appear brighter in the disparity image; the further an object is from us, the smaller its disparity and grey value, so it appears darker. Given a pair of rectified input images, our task is therefore stereo matching, from which the disparity, and hence the depth map, is computed.
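The disparity-to-depth relation Z = f * B / d can be sketched in a few lines of Python (a minimal illustration with NumPy; the function name and conventions are my own, not from the project code):

```python
import numpy as np

def depth_from_disparity(disparity, focal_length, baseline):
    """Convert a disparity map to a depth map via Z = f * B / d.

    disparity    -- disparity values in pixels (d = xR - xT)
    focal_length -- focal length f, in pixels
    baseline     -- distance B between the two optical centres
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full(disparity.shape, np.inf)
    valid = disparity > 0              # zero disparity => point at infinity
    depth[valid] = focal_length * baseline / disparity[valid]
    return depth
```

For example, with f = 700 px, B = 0.1 m, a disparity of 10 px yields a depth of 7 m, matching the inverse relationship between disparity and distance described above.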
2.2 Stereo matching and the depth map
Among image matching algorithms, we select a local matching algorithm.
2.2.1 Matching Cost Computation
For each pixel, within a given disparity range, we compute the matching cost for every candidate disparity d:

C(x, y, d) = | IL(x, y) - IR(x - d, y) |

We take the minimum cost to indicate the correct match, and the d that attains this minimum is the disparity we are looking for.
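The per-pixel cost search can be sketched as follows (a simplified illustration; the function names are hypothetical and greyscale images are assumed, indexed as image[row, column]):

```python
import numpy as np

def matching_cost(left, right, x, y, d):
    """Absolute-difference cost for matching left-image pixel (x, y)
    against the right-image pixel shifted d columns to the left."""
    return abs(float(left[y, x]) - float(right[y, x - d]))

def best_disparity(left, right, x, y, max_disparity):
    """Pick the disparity d in [0, max_disparity] with the smallest cost."""
    costs = [matching_cost(left, right, x, y, d)
             for d in range(min(max_disparity, x) + 1)]
    return int(np.argmin(costs))
```

As the next subsection explains, this single-pixel cost alone is too noisy in practice, which motivates aggregating it over a window.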
2.2.2 Cost Aggregation
However, the disparity obtained this way is not good: single-pixel matches are easily corrupted by noise. So we set up a window around each pixel and compare pixel blocks instead. The basic principle is: given a point in one image, select a sub-window in its neighborhood; then, within a search region of the other image, find the sub-image most similar to that window under some similarity measure. The pixel at the centre of the best-matching sub-image is taken as the match. For the grey-scale matching measure, we choose the Mean Absolute Differences (MAD) algorithm.
Its advantages are:
1. Simple and easy to understand.
2. The computation is simple and the matching accuracy is high.
Suppose we have a 3 x 3 window. We first aggregate over the window in the left image; similarly, for the right image, the aggregation must be done over the whole disparity range. The next step is to compute the matching cost for the pixel under consideration and find the disparity with the minimum cost:

MAD(x, y, d) = (1 / (m*n)) * Σ | IL(x+i, y+j) - IR(x+i-d, y+j) |

where the sum runs over the m x n window. The final cost is a function of the disparity d.
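The MAD block-matching procedure above can be sketched as follows (a slow but readable reference sketch, not the project's actual implementation; border pixels are simply left at disparity 0):

```python
import numpy as np

def mad_block_matching(left, right, max_disparity, window=3):
    """Disparity map via Mean Absolute Differences over a square window.

    For every pixel we slide a (window x window) block over the disparity
    range and keep the d with the lowest mean absolute difference.
    """
    left = np.asarray(left, dtype=np.float64)
    right = np.asarray(right, dtype=np.float64)
    h, w = left.shape
    r = window // 2                      # window radius
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            best_d, best_cost = 0, np.inf
            # only disparities that keep the right-image patch in bounds
            for d in range(min(max_disparity, x - r) + 1):
                patch_l = left[y - r:y + r + 1, x - r:x + r + 1]
                patch_r = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                cost = np.mean(np.abs(patch_l - patch_r))
                if cost < best_cost:
                    best_d, best_cost = d, cost
            disp[y, x] = best_d
    return disp
```

The `window` parameter here is exactly the trade-off discussed next: a small window keeps detail but admits noise, while a large window smooths both.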
The window sizes I have tested are 5 x 5 and 9 x 9. Sliding windows of different sizes affect the computed depth maps as follows:
Small windows: higher accuracy and more detail, but especially sensitive to noise.
Large windows: less accurate and less detailed, but robust to noise.
I use rectified images as follows:
Figure 6. Rectified image from the left camera.
Figure 7. Rectified image from the right camera.
3. Possible Extensions
I chose to solve the following challenge: “One challenge is images that have large blank regions; develop some other heuristics or rules to "guess" what the best correspondence is for these large regions.”
For this problem I take the following approach.
First, a global optimization method based on dynamic programming is used to estimate the disparity: a global energy function is established, and the optimal disparity values are obtained by minimizing it. Then a disparity refinement step uses dynamic programming again to perform an accurate calculation on the rough disparity map obtained in the previous step. Figure 12 below shows the result; we can recover more detail in the blank regions.
Figure 12. The result of the global optimization method with disparity refinement
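A minimal scanline version of this idea can be sketched as follows (my own simplified illustration, not the project's actual implementation: per row, it minimizes a data term plus a disparity-smoothness penalty by dynamic programming, which lets plausible disparities propagate across textureless regions):

```python
import numpy as np

def dp_scanline_disparity(left, right, max_disparity, smoothness=2.0):
    """Per-row dynamic-programming disparity estimation (simplified sketch).

    Energy per row: sum of |L(x) - R(x - d)| data costs, plus
    smoothness * |d(x) - d(x-1)| for each disparity change.
    """
    left = np.asarray(left, dtype=np.float64)
    right = np.asarray(right, dtype=np.float64)
    h, w = left.shape
    D = max_disparity + 1
    big = 1e9                                    # cost of invalid disparities
    disp = np.zeros((h, w), dtype=np.int32)
    # penalty[d, d_prev] = smoothness * |d - d_prev|
    penalty = smoothness * np.abs(np.arange(D)[:, None] - np.arange(D)[None, :])
    for y in range(h):
        # data cost: C[x, d] = |L(x) - R(x - d)|, invalid where x < d
        C = np.full((w, D), big)
        for d in range(D):
            C[d:, d] = np.abs(left[y, d:] - right[y, :w - d])
        # forward pass: accumulate minimal energy column by column
        E = C.copy()
        back = np.zeros((w, D), dtype=np.int32)
        for x in range(1, w):
            total = E[x - 1][None, :] + penalty  # total[d, d_prev]
            back[x] = np.argmin(total, axis=1)
            E[x] += np.min(total, axis=1)
        # backward pass: trace the minimal-energy disparity path
        d = int(np.argmin(E[-1]))
        for x in range(w - 1, -1, -1):
            disp[y, x] = d
            if x > 0:
                d = back[x, d]
    return disp
```

Because the smoothness term charges for every disparity change, a blank region with near-uniform data cost inherits the disparity of its textured neighbours instead of collapsing to noise, which is the "guessing" heuristic the challenge asks for.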