Hands-On Self-Driving with MATLAB (9) -- Project 1: Detecting Lane Lines with a Monocular Camera
阿新 • Published 2018-12-16
This article introduces a project similar to the Udacity self-driving course material; it is worth working through.
Whether a car offers ADAS features or full autonomy built on a multi-sensor suite, the sensors involved typically include ultrasonic sensors, radar, lidar, and cameras. The project below implements part of an autonomous-driving pipeline using a single monocular camera. It covers:
1. Lane boundary detection
2. Detection of vehicles, pedestrians, and other objects
3. Estimating the distance from the ego vehicle to detected objects
1. Define the camera configuration
```matlab
focalLength    = [309.4362, 344.2161]; % [fx, fy] in pixel units
principalPoint = [318.9034, 257.5352]; % [cx, cy] optical center in pixel coordinates
imageSize      = [480, 640];           % [nrows, mcols]
camIntrinsics  = cameraIntrinsics(focalLength, principalPoint, imageSize);
```
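The two vectors above are just a compact way of writing the standard 3x3 pinhole intrinsic matrix. As a minimal sketch (the 3-D point and its coordinates below are hypothetical example values, not from the project), projecting a camera-frame point into the image works like this:

```matlab
% The intrinsics above form the usual pinhole matrix K.
fx = 309.4362; fy = 344.2161;   % focal lengths in pixels
cx = 318.9034; cy = 257.5352;   % principal point in pixels
K = [fx  0  cx;
      0 fy  cy;
      0  0   1];

% Project a hypothetical camera-frame point 10 m ahead (Z forward).
P  = [1; 0.5; 10];          % [X; Y; Z] in meters (example values)
p  = K * P;                 % homogeneous image coordinates
uv = p(1:2) / p(3);         % pixel coordinates [u; v]
```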
Next, define the camera's position relative to the vehicle chassis. Note that monoCamera uses a specific vehicle coordinate system: the X-axis points forward from the vehicle, the Y-axis points to the left of the vehicle, and the Z-axis points up from the ground. The origin of the coordinate system is on the ground, directly below the camera center.
The camera's pose relative to this coordinate system is given by:
```matlab
height = 2.1798;   % mounting height in meters from the ground
pitch  = 14;       % pitch of the camera in degrees
sensor = monoCamera(camIntrinsics, height, 'Pitch', pitch);
```
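Once the sensor object exists, it can map between image pixels and points on the road surface. A quick sketch of that mapping (the pixel coordinates below are hypothetical; the conversion assumes the point lies on the ground plane):

```matlab
% Map an image point assumed to lie on the road into vehicle coordinates.
imagePoint   = [360, 400];                          % [x, y] in pixels (example values)
vehiclePoint = imageToVehicle(sensor, imagePoint);  % [x, y] in meters

% The inverse mapping returns to (approximately) the same pixel.
reprojected  = vehicleToImage(sensor, vehiclePoint);
```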
2. Load the video
```matlab
videoName   = 'caltech_cordova1.avi';
videoReader = VideoReader(videoName);
```
Read a frame of interest, one that contains lane markers and a vehicle.
```matlab
timeStamp = 0.06667;                   % time from the beginning of the video
videoReader.CurrentTime = timeStamp;   % point to the chosen frame
frame = readFrame(videoReader);        % read frame at timeStamp seconds
imshow(frame)                          % display frame
```
3. Create a bird's-eye view (the code below is commented throughout)
```matlab
% Using vehicle coordinates, define area to transform
distAheadOfSensor = 30; % in meters, as previously specified in monoCamera height input
spaceToOneSide    = 6;  % all other distance quantities are also in meters
bottomOffset      = 3;
outView   = [bottomOffset, distAheadOfSensor, -spaceToOneSide, spaceToOneSide]; % [xmin, xmax, ymin, ymax]
imageSize = [NaN, 250]; % output image width in pixels; height is chosen automatically to preserve units per pixel ratio
birdsEyeConfig = birdsEyeView(sensor, outView, imageSize);
```
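Because the output width is fixed at 250 pixels while the height is left as NaN, birdsEyeView picks the height so that one pixel covers the same distance in both directions. A quick sanity check of the implied resolution (a sketch; the exact rounding is up to the toolbox):

```matlab
% Implied resolution of the bird's-eye-view image defined above.
lateralSpan      = 2 * spaceToOneSide;                 % 12 meters across the output
metersPerPixel   = lateralSpan / 250;                  % 0.048 m per pixel
longitudinalSpan = distAheadOfSensor - bottomOffset;   % 27 meters ahead
approxHeight     = longitudinalSpan / metersPerPixel;  % about 563 rows
```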
Generate the bird's-eye-view image:
```matlab
birdsEyeImage = transformImage(birdsEyeConfig, frame);
figure
imshow(birdsEyeImage)
```
Finding lane markers in vehicle coordinates
```matlab
% Convert to grayscale
birdsEyeImage = rgb2gray(birdsEyeImage);

% Lane marker segmentation ROI in world units
vehicleROI = outView - [-1, 2, -3, 3];  % look 3 meters to left and right, and 4 meters ahead of the sensor
approxLaneMarkerWidthVehicle = 0.25;    % 25 centimeters

% Detect lane features
laneSensitivity = 0.25;
birdsEyeViewBW = segmentLaneMarkerRidge(birdsEyeImage, birdsEyeConfig, approxLaneMarkerWidthVehicle, ...
    'ROI', vehicleROI, 'Sensitivity', laneSensitivity);
figure
imshow(birdsEyeViewBW)
```
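segmentLaneMarkerRidge looks for ridge-like features: pixels noticeably brighter than their neighbors roughly one marker-width away on either side. A simplified sketch of that idea, ignoring the sensitivity normalization the toolbox applies (the half-width and contrast threshold below are illustrative, not the toolbox's values):

```matlab
% Simplified ridge test: a pixel is a lane candidate when it is brighter
% than the pixels w columns to its left and to its right.
I = double(birdsEyeImage);               % grayscale bird's-eye image
w = 5;                                   % approximate marker half-width in pixels (example)
left   = circshift(I, [0,  w]);
right  = circshift(I, [0, -w]);
thresh = 10;                             % gray-level contrast threshold (example)
ridgeBW = (I - left > thresh) & (I - right > thresh);
```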
```matlab
% Obtain lane candidate points in vehicle coordinates
[imageX, imageY] = find(birdsEyeViewBW);
xyBoundaryPoints = imageToVehicle(birdsEyeConfig, [imageY, imageX]);
```
```matlab
maxLanes      = 2;                              % look for maximum of two lane markers
boundaryWidth = 3*approxLaneMarkerWidthVehicle; % expand boundary width
[boundaries, boundaryPoints] = findParabolicLaneBoundaries(xyBoundaryPoints, boundaryWidth, ...
    'MaxNumBoundaries', maxLanes, 'validateBoundaryFcn', @validateBoundaryFcn);
```
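findParabolicLaneBoundaries models each boundary as a parabola y = a*x^2 + b*x + c in vehicle coordinates and uses RANSAC to tolerate outliers. Stripped of the robustness machinery, the underlying model is an ordinary quadratic fit; a sketch on synthetic candidate points:

```matlab
% Plain least-squares version of the parabolic lane model.
x = (3:0.5:30)';                          % longitudinal positions in meters
y = 0.002*x.^2 - 0.01*x + 1.8 ...         % synthetic left-boundary offsets...
    + 0.02*randn(size(x));                % ...with a little noise
coeffs = polyfit(x, y, 2);                % [a, b, c]
lateralAt10m = polyval(coeffs, 10);       % boundary offset 10 m ahead, in meters
```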
Determining the lane boundaries
Some of the curves found above may be invalid; for example, a curve may have been fitted to crosswalk markings. Use additional heuristics to reject such curves:
```matlab
% Establish criteria for rejecting boundaries based on their length
maxPossibleXLength = diff(vehicleROI(1:2));
minXLength = maxPossibleXLength * 0.60; % establish a threshold

% Reject short boundaries
isOfMinLength = arrayfun(@(b)diff(b.XExtent) > minXLength, boundaries);
boundaries = boundaries(isOfMinLength);
```
```matlab
% To compute the maximum strength, assume all image pixels within the ROI
% are lane candidate points
birdsImageROI = vehicleToImageROI(birdsEyeConfig, vehicleROI);
[laneImageX, laneImageY] = meshgrid(birdsImageROI(1):birdsImageROI(2), birdsImageROI(3):birdsImageROI(4));

% Convert the image points to vehicle points
vehiclePoints = imageToVehicle(birdsEyeConfig, [laneImageX(:), laneImageY(:)]);

% Find the maximum number of unique x-axis locations possible for any lane
% boundary
maxPointsInOneLane = numel(unique(vehiclePoints(:,1)));

% Set the maximum length of a lane boundary to the ROI length
maxLaneLength = diff(vehicleROI(1:2));

% Compute the maximum possible lane strength for this image size/ROI size
% specification
maxStrength = maxPointsInOneLane/maxLaneLength;

% Reject weak boundaries
isStrong = [boundaries.Strength] > 0.4*maxStrength;
boundaries = boundaries(isStrong);
```
```matlab
boundaries = classifyLaneTypes(boundaries, boundaryPoints);

% Locate two ego lanes if they are present
xOffset = 0; % 0 meters from the sensor
distanceToBoundaries = boundaries.computeBoundaryModel(xOffset);

% Find candidate ego boundaries
leftEgoBoundaryIndex  = [];
rightEgoBoundaryIndex = [];
minLDistance = min(distanceToBoundaries(distanceToBoundaries > 0));
minRDistance = max(distanceToBoundaries(distanceToBoundaries <= 0));
if ~isempty(minLDistance)
    leftEgoBoundaryIndex = distanceToBoundaries == minLDistance;
end
if ~isempty(minRDistance)
    rightEgoBoundaryIndex = distanceToBoundaries == minRDistance;
end
leftEgoBoundary  = boundaries(leftEgoBoundaryIndex);
rightEgoBoundary = boundaries(rightEgoBoundaryIndex);
```
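The selection logic above evaluates each parabola at x = 0 and keeps the boundary with the smallest positive lateral offset (nearest on the left, since +y points left) and the one with the largest non-positive offset (nearest on the right). With some hypothetical offsets it looks like this:

```matlab
% Ego-boundary selection on hypothetical lateral offsets at x = 0.
offsets  = [3.5, 1.8, -1.7, -5.2];        % meters; +y is to the left of the vehicle
leftEgo  = min(offsets(offsets > 0));     % 1.8  -> nearest boundary on the left
rightEgo = max(offsets(offsets <= 0));    % -1.7 -> nearest boundary on the right
```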
Display the detected lane markers in the bird's-eye-view image and in the original camera view:
```matlab
xVehiclePoints = bottomOffset:distAheadOfSensor;
birdsEyeWithEgoLane = insertLaneBoundary(birdsEyeImage, leftEgoBoundary, birdsEyeConfig, xVehiclePoints, 'Color', 'Red');
birdsEyeWithEgoLane = insertLaneBoundary(birdsEyeWithEgoLane, rightEgoBoundary, birdsEyeConfig, xVehiclePoints, 'Color', 'Green');

frameWithEgoLane = insertLaneBoundary(frame, leftEgoBoundary, sensor, xVehiclePoints, 'Color', 'Red');
frameWithEgoLane = insertLaneBoundary(frameWithEgoLane, rightEgoBoundary, sensor, xVehiclePoints, 'Color', 'Green');

figure
subplot('Position', [0, 0, 0.5, 1.0]) % [left, bottom, width, height] in normalized units
imshow(birdsEyeWithEgoLane)
subplot('Position', [0.5, 0, 0.5, 1.0])
imshow(frameWithEgoLane)
```
Locating vehicles in vehicle coordinates
Detecting and tracking vehicles is critical for forward collision warning (FCW) and automatic emergency braking (AEB) systems.
```matlab
detector = vehicleDetectorACF();

% Width of a common vehicle is between 1.5 to 2.5 meters
vehicleWidth = [1.5, 2.5];

monoDetector = configureDetectorMonoCamera(detector, sensor, vehicleWidth);
[bboxes, scores] = detect(monoDetector, frame);

locations = computeVehicleLocations(bboxes, sensor);

% Overlay the detections on the video frame
imgOut = insertVehicleDetections(frame, locations, bboxes);
figure
imshow(imgOut)
```
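computeVehicleLocations is a helper function from the example whose source is not shown here. Its essential idea can be sketched as follows: assume each bounding box's bottom edge touches the road, and project the bottom-center pixel onto the ground plane with imageToVehicle (this is a sketch of that assumption, not the helper's exact code):

```matlab
% Sketch: estimate each detected vehicle's position from its bounding box.
locations = zeros(size(bboxes, 1), 2);
for i = 1:size(bboxes, 1)
    bbox = bboxes(i, :);                                    % [x, y, width, height]
    footPoint = [bbox(1) + bbox(3)/2, bbox(2) + bbox(4)];   % bottom-center pixel
    locations(i, :) = imageToVehicle(sensor, footPoint);    % [x, y] in meters
end
```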
Questions:
1. Do you know how to transform an image into a bird's-eye view?
2. Do you know how to fit lane boundaries?
3. How do you decide which lane markings are valid, given distractions such as crosswalks, and given that some lane lines are solid while others are dashed?
4. How do you locate vehicles in vehicle coordinates?