2D Features (Feature2D): Image Matching with Feature Points
阿新 · Published: 2019-02-04
Feature-point-based image matching is a problem that comes up constantly in image processing, and selecting feature points by hand is far too tedious. The classic methods for automatic feature extraction include Harris corners, SIFT, and SURF.
Let's start with SURF-based feature description. The functionality is wrapped in the SurfFeatureDetector class: its detect function finds the SURF keypoints and stores them in a vector container. The second step uses the SurfDescriptorExtractor class to compute the feature vectors, converting the earlier vector of keypoints into a descriptor matrix stored in a Mat. Finally, the feature vectors of the two images are brute-force matched using the match function of the BruteForceMatcher class. The code is as follows:
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/opencv.hpp>
// In newer 2.x versions SURF is declared in the header below:
#include <opencv2/nonfree/features2d.hpp>
//#include "opencv2/features2d/features2d.hpp"
#include <opencv2/legacy/legacy.hpp>

using namespace std;
using namespace cv;

void readme();

/**
 * @function main
 * @brief Main function
 */
int main( int argc, char** argv )
{
    // Note: both paths load lp1.jpg here, so the image is matched against itself
    Mat img_1 = imread( "/Users/liupeng/Desktop/my/opencvLearn/opencvLearn/lp1.jpg", CV_LOAD_IMAGE_GRAYSCALE );
    Mat img_2 = imread( "/Users/liupeng/Desktop/my/opencvLearn/opencvLearn/lp1.jpg", CV_LOAD_IMAGE_GRAYSCALE );

    if( !img_1.data || !img_2.data )
    { return -1; }

    //-- Step 1: Detect the keypoints using SURF Detector
    int minHessian = 400;
    SurfFeatureDetector detector( minHessian );
    std::vector<KeyPoint> keypoints_1, keypoints_2;
    detector.detect( img_1, keypoints_1 );
    detector.detect( img_2, keypoints_2 );

    //-- Step 2: Calculate descriptors (feature vectors)
    SurfDescriptorExtractor extractor;
    Mat descriptors_1, descriptors_2;
    extractor.compute( img_1, keypoints_1, descriptors_1 );
    extractor.compute( img_2, keypoints_2, descriptors_2 );

    imshow("descriptors_1", descriptors_1);
    imshow("descriptors_2", descriptors_2);

    //-- Step 3: Matching descriptor vectors with a brute force matcher
    BruteForceMatcher< L2<float> > matcher;
    std::vector< DMatch > matches;
    matcher.match( descriptors_1, descriptors_2, matches );

    //-- Draw matches
    Mat img_matches;
    drawMatches( img_1, keypoints_1, img_2, keypoints_2, matches, img_matches );

    //-- Show detected matches
    imshow("Matches", img_matches );
    waitKey(0);
    return 0;
}

/**
 * @function readme
 */
void readme()
{ std::cout << " Usage: ./SURF_descriptor <img1> <img2>" << std::endl; }
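Note that the code above targets OpenCV 2.x (the nonfree and legacy modules). In OpenCV 3.x and later, SURF moved to the opencv_contrib module xfeatures2d, and BruteForceMatcher was replaced by cv::BFMatcher. Below is a minimal sketch of the same pipeline under the newer API, assuming opencv_contrib is built in; the image paths are placeholders:

#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp> // requires opencv_contrib

using namespace cv;

int main()
{
    // Placeholder paths; substitute your own images
    Mat img_1 = imread("img1.jpg", IMREAD_GRAYSCALE);
    Mat img_2 = imread("img2.jpg", IMREAD_GRAYSCALE);
    if( img_1.empty() || img_2.empty() ) return -1;

    // Detect keypoints and compute descriptors in a single call
    Ptr<xfeatures2d::SURF> surf = xfeatures2d::SURF::create(400); // minHessian = 400
    std::vector<KeyPoint> keypoints_1, keypoints_2;
    Mat descriptors_1, descriptors_2;
    surf->detectAndCompute(img_1, noArray(), keypoints_1, descriptors_1);
    surf->detectAndCompute(img_2, noArray(), keypoints_2, descriptors_2);

    // Brute-force matching with L2 distance (SURF descriptors are float)
    BFMatcher matcher(NORM_L2);
    std::vector<DMatch> matches;
    matcher.match(descriptors_1, descriptors_2, matches);

    Mat img_matches;
    drawMatches(img_1, keypoints_1, img_2, keypoints_2, matches, img_matches);
    imshow("Matches", img_matches);
    waitKey(0);
    return 0;
}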
Of course, plain brute-force matching gives less than ideal results, so let's look at a FLANN-based matching approach. The first two steps are the same as in the code above; in the third step the FlannBasedMatcher class performs the matching, and only the good matches are kept. The code is as follows:
//-- Step 3: Matching descriptor vectors using FLANN matcher
FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches );

double max_dist = 0;
double min_dist = 100;

//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_1.rows; i++ )
{
    double dist = matches[i].distance;
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
}

printf("-- Max dist : %f \n", max_dist );
printf("-- Min dist : %f \n", min_dist );

//-- Draw only "good" matches (i.e. whose distance is less than 2*min_dist )
//-- PS.- radiusMatch can also be used here.
std::vector< DMatch > good_matches;
if( min_dist != 0 )
{
    for( int i = 0; i < descriptors_1.rows; i++ )
    {
        if( matches[i].distance < 2*min_dist )
        { good_matches.push_back( matches[i] ); }
    }
}
else
{
    good_matches = matches;
}

//-- Draw only "good" matches
Mat img_matches;
drawMatches( img_1, keypoints_1, img_2, keypoints_2,
             good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
             vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

//-- Show detected matches
imshow( "Good Matches", img_matches );
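A note on the filtering step: the 2*min_dist cutoff above is an ad hoc heuristic. A common alternative, not used in this post's original code, is Lowe's ratio test: ask knnMatch for the two nearest neighbors of each descriptor and keep a match only when the best distance is clearly below the second-best. A sketch, reusing descriptors_1 and descriptors_2 from Step 2:

// Lowe's ratio test: keep a match only if the best neighbor is
// significantly closer than the second-best (0.7 is a common ratio).
FlannBasedMatcher matcher;
std::vector< std::vector<DMatch> > knn_matches;
matcher.knnMatch( descriptors_1, descriptors_2, knn_matches, 2 );

std::vector<DMatch> good_matches;
for( size_t i = 0; i < knn_matches.size(); i++ )
{
    if( knn_matches[i].size() == 2 &&
        knn_matches[i][0].distance < 0.7f * knn_matches[i][1].distance )
    {
        good_matches.push_back( knn_matches[i][0] );
    }
}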
Building on FLANN matching, a homography mapping can be used to locate a known object. Concretely, the findHomography function computes the transform from the matched keypoints, and the point set is then mapped through it (the code below writes the projection out by hand; an equivalent call to perspectiveTransform is sketched after the code). The code is as follows:
//-- Localize the object from img_1 in img_2
std::vector<Point2f> obj;
std::vector<Point2f> scene;

for( int i = 0; i < good_matches.size(); i++ )
{
    //-- Get the keypoints from the good matches
    obj.push_back( keypoints_1[ good_matches[i].queryIdx ].pt );
    scene.push_back( keypoints_2[ good_matches[i].trainIdx ].pt );
}

Mat H = findHomography( obj, scene, CV_RANSAC );

//-- Get the corners from the image_1 ( the object to be "detected" )
Point2f obj_corners[4] = { cvPoint(0,0), cvPoint( img_1.cols, 0 ),
                           cvPoint( img_1.cols, img_1.rows ), cvPoint( 0, img_1.rows ) };
Point scene_corners[4];

//-- Map these corners in the scene ( image_2 )
for( int i = 0; i < 4; i++ )
{
    double x = obj_corners[i].x;
    double y = obj_corners[i].y;
    double Z = 1./( H.at<double>(2,0)*x + H.at<double>(2,1)*y + H.at<double>(2,2) );
    double X = ( H.at<double>(0,0)*x + H.at<double>(0,1)*y + H.at<double>(0,2) )*Z;
    double Y = ( H.at<double>(1,0)*x + H.at<double>(1,1)*y + H.at<double>(1,2) )*Z;
    scene_corners[i] = cvPoint( cvRound(X) + img_1.cols, cvRound(Y) );
}

//-- Draw lines between the corners (the mapped object in the scene - image_2 )
line( img_matches, scene_corners[0], scene_corners[1], Scalar(0, 255, 0), 2 );
line( img_matches, scene_corners[1], scene_corners[2], Scalar(0, 255, 0), 2 );
line( img_matches, scene_corners[2], scene_corners[3], Scalar(0, 255, 0), 2 );
line( img_matches, scene_corners[3], scene_corners[0], Scalar(0, 255, 0), 2 );

//-- Show detected matches
imshow( "Good Matches & Object detection", img_matches );
Now let's look at Harris corner detection. In computer vision we often need to find matching points between two frames: if we can determine how two images are related, we can extract information from both. The defining property of a feature is that it is uniquely identifiable; typical image feature types are edges, corners (interest points), and blobs (interest regions). A corner is a local image feature with a wide range of applications. Harris corner detection works directly on the grayscale image and is highly stable, with especially good accuracy on L-shaped corners. However, because it uses Gaussian filtering it is relatively slow, corner information can be lost or shifted in position, and the extracted corners tend to cluster. In OpenCV it is implemented by the cornerHarris function; a minimal usage sketch follows.
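In this sketch, the blockSize, apertureSize, k, and threshold values are common tutorial choices rather than values from the original post, and the input path is a placeholder:

// Harris corner response for a grayscale image
Mat gray = imread( "img1.jpg", IMREAD_GRAYSCALE ); // placeholder input image
Mat response = Mat::zeros( gray.size(), CV_32FC1 );
cornerHarris( gray, response, 2 /*blockSize*/, 3 /*apertureSize*/, 0.04 /*k*/ );

// Normalize the response and mark strong corners
Mat response_norm;
normalize( response, response_norm, 0, 255, NORM_MINMAX, CV_32FC1 );
Mat display;
cvtColor( gray, display, COLOR_GRAY2BGR );
for( int y = 0; y < response_norm.rows; y++ )
    for( int x = 0; x < response_norm.cols; x++ )
        if( response_norm.at<float>(y, x) > 150 ) // threshold chosen by eye
            circle( display, Point(x, y), 4, Scalar(0, 0, 255), 1 );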
Besides Harris, the Shi-Tomasi method can also be used for corner detection, via the goodFeaturesToTrack function, and it works well too. You can even build your own corner detector using the cornerMinEigenVal and minMaxLoc functions, with the final selection criterion tuned to your own data. If you need higher precision for the feature points, the cornerSubPix function refines the corners to sub-pixel accuracy, as in the sketch below.
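A sketch of the Shi-Tomasi route with sub-pixel refinement, reusing the gray image loaded in the previous sketch; all parameter values are illustrative:

// Shi-Tomasi corners, then refine each corner to sub-pixel accuracy
std::vector<Point2f> corners;
goodFeaturesToTrack( gray, corners,
                     100,    // maxCorners
                     0.01,   // qualityLevel
                     10 );   // minDistance (pixels)

// cornerSubPix iterates on each corner inside a small search window
TermCriteria criteria( TermCriteria::EPS + TermCriteria::COUNT, 40, 0.001 );
cornerSubPix( gray, corners, Size(5, 5), Size(-1, -1), criteria );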
Finally, the original post closes with a screenshot of the matching result.