Self-Driving Cars: Lane Line Detection in Practice (with Source Code)
Having finished this project, I wrote up this technical summary, both for my own review and as a reference for others.
I am also setting up a WeChat group on self-driving topics; if you are interested, add me on WeChat: wxl609278502 and note your name and affiliation (company/school).
Project description:
Use OpenCV to process road images from an on-board camera in real time, detect the current lane, and compute both the vehicle's offset from the lane center and the curvature of the current lane.
Project code on GitHub: https://github.com/xlwang123/SEU_LaneDetect
A few detection results on highway straightaways and ramp curves:
Test location: Nanjing
Hardware and software requirements
- An ordinary dashcam (lens distortion should be mild; heavily distorted footage needs image correction first)
- An industrial PC (a GPU helps, but is not required)
- Ubuntu 16.04 + OpenCV 3.2
Implementation steps
- Image correction: for cameras with significant distortion, first compute the camera matrix and distortion coefficients, then undistort each image.
- Crop a region of interest, so that only the part of the image containing lane-line information is processed.
- Apply a perspective transform to convert the region of interest into a bird's-eye view.
- Handle lane lines of different colors, under different lighting conditions, and at different levels of sharpness by applying different gradient thresholds and color thresholds in different color spaces, then fuse the individual binary outputs into a single binary lane-line image.
- Extract the pixels belonging to the lane lines from the binary image.
- Build a histogram of the binary image's pixel columns; the peaks in the left and right halves serve as the starting x-coordinates of the left and right lane lines for curve fitting.
- Fit a second-order polynomial to the pixels of each lane line (for noisy pixels, apply filtering first, or use RANSAC for the curve fit).
- Compute the lane curvature and the vehicle's offset from the lane center.
- Display the results: drivable-area overlay plus curvature and position readouts. (A sketch of how these steps chain together follows this list.)
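For orientation, here is a minimal sketch of how these steps chain together for a single frame, using the helper functions developed in the rest of this article. The combined-threshold helper, called thresholding here, and the cal_imgs list of chessboard images are assumed names; the repo's actual entry point may differ.

import cv2

# One-time setup: calibration data and perspective matrices
object_points, img_points = get_obj_img_points(cal_imgs, grid=(9, 6))
M, Minv = get_M_Minv()

def process_frame(img):
    # 1. undistort  2. threshold  3. warp  4. fit  5. measure  6. draw
    undist = cal_undistort(img, object_points, img_points)
    thresholded = thresholding(undist)  # assumed combined-threshold helper
    warped = cv2.warpPerspective(thresholded, M, img.shape[1::-1], flags=cv2.INTER_LINEAR)
    left_fit, right_fit, _, _ = find_line(warped)
    curvature, distance = calculate_curv_and_pos(warped, left_fit, right_fit)
    result = draw_area(undist, warped, Minv, left_fit, right_fit)
    return draw_values(result, curvature, distance)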
Camera calibration
Here OpenCV is used to compute the camera calibration matrix and distortion coefficients from a set of chessboard images. The first step is to obtain the world coordinates of the chessboard's inner corners ("object points") and their corresponding image coordinates ("image points").
Assuming the inner corners lie in the z = 0 plane, i.e. the chessboard sits in the (x, y) plane, the object points are identical for every image in the set. The image points are then obtained with OpenCV's cv2.findChessboardCorners(), passing in each grayscale chessboard image and the number of inner corners along each axis.
import cv2
import numpy as np

def get_obj_img_points(images, grid=(9, 6)):
    object_points = []
    img_points = []
    for img in images:
        # Generate the object points (z = 0; the board lies in the (x, y) plane)
        object_point = np.zeros((grid[0]*grid[1], 3), np.float32)
        object_point[:, :2] = np.mgrid[0:grid[0], 0:grid[1]].T.reshape(-1, 2)
        # Convert to grayscale
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        # Find the image points (the chessboard's inner corners)
        ret, corners = cv2.findChessboardCorners(gray, grid, None)
        if ret:
            object_points.append(object_point)
            img_points.append(corners)
    return object_points, img_points
Passing the object_points and img_points obtained above into cv2.calibrateCamera() yields the camera calibration matrix and distortion coefficients; cv2.undistort() then produces the corrected image.
def cal_undistort(img, objpoints, imgpoints):
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img.shape[1::-1], None, None)
    dst = cv2.undistort(img, mtx, dist, None, mtx)
    return dst
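Note that cal_undistort recomputes the calibration for every image, which is wasteful: the camera matrix and distortion coefficients are fixed for a given camera. A common refinement (a sketch, not the repo's code) is to calibrate once and cache the results:

# Calibrate once; image size is (width, height) of any calibration image
img_size = cal_imgs[0].shape[1::-1]
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(object_points, img_points, img_size, None, None)

def undistort(img):
    # Reuse the cached camera matrix and distortion coefficients
    return cv2.undistort(img, mtx, dist, None, mtx)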
Below is a before/after comparison for one of the chessboard images:
Testing undistortion on road images
# Load the chessboard images
cal_imgs = utils.get_images_by_dir('camera_cal')
# Compute object_points and img_points
object_points, img_points = utils.calibrate(cal_imgs, grid=(9, 6))
# Load the test images
test_imgs = utils.get_images_by_dir('test_images')
# Undistort the test images
undistorted = []
for img in test_imgs:
    img = utils.cal_undistort(img, object_points, img_points)
    undistorted.append(img)
Perspective transform: bird's-eye view
Here cv2.getPerspectiveTransform() is used to compute the transform matrix that warps the image into a bird's-eye view.
def get_M_Minv():
    src = np.float32([[(203, 720), (585, 460), (695, 460), (1127, 720)]])
    dst = np.float32([[(320, 720), (320, 0), (960, 0), (960, 720)]])
    M = cv2.getPerspectiveTransform(src, dst)
    Minv = cv2.getPerspectiveTransform(dst, src)
    return M, Minv
The source points and destination points need to be chosen to suit your own images.
Then cv2.warpPerspective() is called with these matrices to obtain the warped image:
thresholded_warped = cv2.warpPerspective(thresholded, M, img.shape[1::-1], flags=cv2.INTER_LINEAR)
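A quick way to sanity-check hand-picked points (a hypothetical helper, not from the repo) is to draw the source polygon on the undistorted image and the destination polygon on the warped result; straight lane lines should come out vertical and parallel in the bird's-eye view:

import numpy as np

def draw_quad(img, pts, color=(0, 0, 255)):
    # Draw a closed quadrilateral so the chosen region can be inspected
    out = img.copy()
    cv2.polylines(out, [np.int32(pts).reshape(-1, 1, 2)], True, color, 3)
    return out

# src, dst, and M are the values from get_M_Minv(); undist is an undistorted frame
src_vis = draw_quad(undist, src)
warped = cv2.warpPerspective(undist, M, undist.shape[1::-1])
dst_vis = draw_quad(warped, dst)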
Thresholding in different color spaces
Here gradient thresholds and color thresholds are applied to the undistorted images to capture the pixels where the lane lines sit. (The gradient here refers to the gradient of color change.)
One color-space filter and one gradient filter are shown below; more variants can be found in the source code.
The following applies a threshold to the S channel of the HLS color space:
def hls_select(img, channel='s', thresh=(0, 255)):
    hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
    if channel == 'h':
        channel = hls[:, :, 0]
    elif channel == 'l':
        channel = hls[:, :, 1]
    else:
        channel = hls[:, :, 2]
    binary_output = np.zeros_like(channel)
    binary_output[(channel > thresh[0]) & (channel <= thresh[1])] = 1
    return binary_output
s_thresh = utils.hls_select(img,channel='s',thresh=(180, 255))
Where the road surface is relatively light and the lane markings are yellow, the lane lines are still captured clearly, but the results are less satisfactory elsewhere (fourth and eighth images).
To cope with varying road conditions, multiple thresholding methods therefore need to be combined, e.g. by OR-ing their binary outputs (see the sketch after the gradient-threshold example below).
The following method uses cv2.Sobel() to compute the color-gradient derivative along the x or y axis and thresholds the result into a binary image:
def abs_sobel_thresh(img, orient='x', thresh_min=0, thresh_max=255):
    # Convert to grayscale
    gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    # Use cv2.Sobel() to compute the derivative in the x or y direction
    if orient == 'x':
        abs_sobel = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 1, 0))
    if orient == 'y':
        abs_sobel = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 0, 1))
    # Scale to 8-bit and apply the threshold
    scaled_sobel = np.uint8(255*abs_sobel/np.max(abs_sobel))
    binary_output = np.zeros_like(scaled_sobel)
    binary_output[(scaled_sobel >= thresh_min) & (scaled_sobel <= thresh_max)] = 1
    return binary_output
Testing showed that thresholding the x-direction gradient to the range 35-100 yields a binary image that captures the lane lines fairly cleanly:
x_thresh = utils.abs_sobel_thresh(img, orient='x', thresh_min=35, thresh_max=100)
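As noted above, the individual filters are complementary, so the project fuses several of them. A minimal fusion (the full combination lives in the repo's source; this is just the OR of the two filters shown here) keeps a pixel if either filter fires:

# Fuse the gradient and color thresholds into one binary image
combined = np.zeros_like(x_thresh)
combined[(x_thresh == 1) | (s_thresh == 1)] = 1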
Extracting the pixel coordinates of the left and right lane lines
The binary image above still contains some noise pixels, so to detect the lane boundaries accurately we first have to decide which pixels belong to the lane lines.
The first step is to locate each lane line's base point (the x-coordinate where the lane line meets the bottom of the image). Since a lane line's pixels are concentrated within a narrow x-range, the image is split in half; the peak of the pixel distribution along the x-axis in each half is very likely the base point of that side's lane line, as sketched below.
Below is the x-axis pixel distribution for a test image:
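The base points can be read directly off a column-wise histogram of the bottom half of the binary bird's-eye image; the same computation appears at the top of find_line below. A minimal sketch, assuming binary_warped is the thresholded bird's-eye image:

import numpy as np
import matplotlib.pyplot as plt

# Count lane pixels in each column of the bottom half of the image
histogram = np.sum(binary_warped[binary_warped.shape[0] // 2:, :], axis=0)
midpoint = histogram.shape[0] // 2
leftx_base = np.argmax(histogram[:midpoint])              # left base point
rightx_base = np.argmax(histogram[midpoint:]) + midpoint  # right base point

plt.plot(histogram)
plt.show()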
Fitting the lane-line pixels
Once the base points of the left and right lanes are located, a second-order polynomial is fitted to each lane line. Nine sliding windows, each 200 px wide (a margin of ±100 px), are used to trace each line's pixels:
def find_line(binary_warped):
    # Take a histogram of the bottom half of the image
    histogram = np.sum(binary_warped[binary_warped.shape[0]//2:, :], axis=0)
    # Find the peak of the left and right halves of the histogram
    # These will be the starting point for the left and right lines
    midpoint = int(histogram.shape[0]/2)
    leftx_base = np.argmax(histogram[:midpoint])
    rightx_base = np.argmax(histogram[midpoint:]) + midpoint
    # Choose the number of sliding windows
    nwindows = 9
    # Set height of windows
    window_height = int(binary_warped.shape[0]/nwindows)
    # Identify the x and y positions of all nonzero pixels in the image
    nonzero = binary_warped.nonzero()
    nonzeroy = np.array(nonzero[0])
    nonzerox = np.array(nonzero[1])
    # Current positions to be updated for each window
    leftx_current = leftx_base
    rightx_current = rightx_base
    # Set the width of the windows +/- margin
    margin = 100
    # Set minimum number of pixels found to recenter window
    minpix = 50
    # Create empty lists to receive left and right lane pixel indices
    left_lane_inds = []
    right_lane_inds = []
    # Step through the windows one by one
    for window in range(nwindows):
        # Identify window boundaries in x and y (and right and left)
        win_y_low = binary_warped.shape[0] - (window+1)*window_height
        win_y_high = binary_warped.shape[0] - window*window_height
        win_xleft_low = leftx_current - margin
        win_xleft_high = leftx_current + margin
        win_xright_low = rightx_current - margin
        win_xright_high = rightx_current + margin
        # Identify the nonzero pixels in x and y within the window
        good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
                          (nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]
        good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
                           (nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]
        # Append these indices to the lists
        left_lane_inds.append(good_left_inds)
        right_lane_inds.append(good_right_inds)
        # If you found > minpix pixels, recenter next window on their mean position
        if len(good_left_inds) > minpix:
            leftx_current = int(np.mean(nonzerox[good_left_inds]))
        if len(good_right_inds) > minpix:
            rightx_current = int(np.mean(nonzerox[good_right_inds]))
    # Concatenate the arrays of indices
    left_lane_inds = np.concatenate(left_lane_inds)
    right_lane_inds = np.concatenate(right_lane_inds)
    # Extract left and right line pixel positions
    leftx = nonzerox[left_lane_inds]
    lefty = nonzeroy[left_lane_inds]
    rightx = nonzerox[right_lane_inds]
    righty = nonzeroy[right_lane_inds]
    # Fit a second order polynomial to each
    left_fit = np.polyfit(lefty, leftx, 2)
    right_fit = np.polyfit(righty, rightx, 2)
    return left_fit, right_fit, left_lane_inds, right_lane_inds
Below is the result of the sliding-window polynomial fitting:
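The implementation-steps list mentioned RANSAC as an option for noisy frames. One way to swap it in for the plain np.polyfit calls (a sketch assuming scikit-learn is available; the repo may implement this differently):

import numpy as np
from sklearn.linear_model import RANSACRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

def ransac_fit(y, x):
    # Fit x = f(y) with a degree-2 polynomial while ignoring outlier pixels
    model = make_pipeline(PolynomialFeatures(2), RANSACRegressor())
    model.fit(y.reshape(-1, 1), x)
    return model

# leftx/lefty are the lane pixels extracted inside find_line
left_model = ransac_fit(lefty, leftx)
ploty = np.linspace(0, binary_warped.shape[0] - 1, binary_warped.shape[0])
left_fitx = left_model.predict(ploty.reshape(-1, 1))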
Calculating curvature and offset from lane center
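For a lane line fitted in world space as x = A·y² + B·y + C, the radius of curvature at a point y is

R = (1 + (2·A·y + B)²)^(3/2) / |2·A|

evaluated at the bottom of the image, the point closest to the vehicle. Since the polynomial was fitted in pixel space, the code below first re-fits it in meters, using approximate scale factors of 30 m per 720 px vertically and 3.7 m per 700 px horizontally (a standard lane width projected into the bird's-eye view), then applies this formula to each line and reports the average of the two radii.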
def calculate_curv_and_pos(binary_warped, left_fit, right_fit):
    # Define y-values where we want the radius of curvature
    ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0])
    leftx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
    rightx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
    # Define conversions in x and y from pixel space to meters
    ym_per_pix = 30/720  # meters per pixel in y dimension
    xm_per_pix = 3.7/700  # meters per pixel in x dimension
    y_eval = np.max(ploty)
    # Fit new polynomials to x, y in world space
    left_fit_cr = np.polyfit(ploty*ym_per_pix, leftx*xm_per_pix, 2)
    right_fit_cr = np.polyfit(ploty*ym_per_pix, rightx*xm_per_pix, 2)
    # Calculate the new radii of curvature
    left_curverad = ((1 + (2*left_fit_cr[0]*y_eval*ym_per_pix + left_fit_cr[1])**2)**1.5) / np.absolute(2*left_fit_cr[0])
    right_curverad = ((1 + (2*right_fit_cr[0]*y_eval*ym_per_pix + right_fit_cr[1])**2)**1.5) / np.absolute(2*right_fit_cr[0])
    curvature = ((left_curverad + right_curverad) / 2)
    # Estimate the meters-per-pixel scale from the detected lane width at the
    # bottom row (row 719 of a 720-px-high image), assuming a 3.7 m lane
    lane_width = np.absolute(leftx[719] - rightx[719])
    lane_xm_per_pix = 3.7 / lane_width
    # Lane midpoint vs. image midpoint (camera center), both in meters
    veh_pos = ((leftx[719] + rightx[719]) * lane_xm_per_pix) / 2.
    cen_pos = (binary_warped.shape[1] * lane_xm_per_pix) / 2.
    distance_from_center = veh_pos - cen_pos
    return curvature, distance_from_center
Display
The inverse transform matrix is used to project the lane detected in the bird's-eye binary image back onto the original image, with the lane area highlighted:
def draw_area(undist, binary_warped, Minv, left_fit, right_fit):
    # Generate x and y values for plotting
    ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0])
    left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
    right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
    # Create an image to draw the lines on
    warp_zero = np.zeros_like(binary_warped).astype(np.uint8)
    color_warp = np.dstack((warp_zero, warp_zero, warp_zero))
    # Recast the x and y points into usable format for cv2.fillPoly()
    pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
    pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
    pts = np.hstack((pts_left, pts_right))
    # Draw the lane onto the warped blank image
    cv2.fillPoly(color_warp, np.int_([pts]), (0, 255, 0))
    # Warp the blank back to original image space using inverse perspective matrix (Minv)
    newwarp = cv2.warpPerspective(color_warp, Minv, (undist.shape[1], undist.shape[0]))
    # Combine the result with the original image
    result = cv2.addWeighted(undist, 1, newwarp, 0.3, 0)
    return result
使用”cv2.putText()”方法處理原圖展示車道曲率及車輛相對車道中心位置資訊:
def draw_values(img, curvature, distance_from_center):
    font = cv2.FONT_HERSHEY_SIMPLEX
    radius_text = "Radius of Curvature: %sm" % (round(curvature))
    if distance_from_center > 0:
        pos_flag = 'right'
    else:
        pos_flag = 'left'
    cv2.putText(img, radius_text, (100, 100), font, 1, (255, 255, 255), 2)
    center_text = "Vehicle is %.3fm %s of center" % (abs(distance_from_center), pos_flag)
    cv2.putText(img, center_text, (100, 150), font, 1, (255, 255, 255), 2)
    return img
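To close the loop on the real-time use case described at the top, a minimal per-frame video loop might look like this (the video path is a placeholder, and process_frame is the pipeline sketch from the implementation-steps section):

import cv2

cap = cv2.VideoCapture('road.mp4')  # placeholder input video
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    out = process_frame(frame)  # undistort -> threshold -> warp -> fit -> draw
    cv2.imshow('lane detection', out)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()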