Implementing SLAM Step by Step (3): Initial Pose Estimation, Part 2
By 阿新 • Published 2019-01-12
The previous step found the initial two frames and the feature matches between them. The next step is to compute the corresponding homography matrix and fundamental matrix from those matched points.
Concretely, eight matched point pairs are drawn at random, and two threads are launched to compute the homography matrix and the fundamental matrix in parallel. I won't repeat the derivation of the two matrices here; the code below carries fairly detailed comments.
Computing the homography matrix
```cpp
cv::Mat Initializer::calcHFromMatches(const std::vector<cv::Point2f> &points_ref,
                                      const std::vector<cv::Point2f> &points_cur)
{
    /**
     * If x = H y, then the cross product of x and H y is zero, i.e.:
     *
     * |  0  -1  v2|   |a b c|   |u1|   |0|
     * |  1   0 -u2| * |d e f| * |v1| = |0|
     * |-v2  u2   0|   |g h 1|   |1 |   |0|
     *
     * Expanding the product gives:
     *
     * ( -d + v2*g)*u1  + ( -e + v2*h)*v1  + ( -f + v2)        |0|
     * (  a - u2*g)*u1  + (  b - u2*h)*v1  + (  c - u2)      = |0|
     * (-v2*a + u2*d)*u1 + (-v2*b + u2*e)*v1 + (-v2*c + u2*f)  |0|
     *
     * 0*a + 0*b + 0*c - u1*d - v1*e - 1*f + v2*u1*g + v2*v1*h + v2 = 0
     * u1*a + v1*b + 1*c + 0*d + 0*e + 0*f - u2*u1*g - u2*v1*h - u2 = 0
     * -v2*u1*a - v2*v1*b - v2*c + u2*u1*d + u2*v1*e + u2*f + 0*g + 0*h + 0 = 0
     */
    const int N = points_ref.size();
    cv::Mat A(2 * N, 9, CV_32F);
    for (int i = 0; i < N; i++) {
        const float u1 = points_ref[i].x;
        const float v1 = points_ref[i].y;
        const float u2 = points_cur[i].x;
        const float v2 = points_cur[i].y;
        A.at<float>(2 * i, 0) = 0.0;
        A.at<float>(2 * i, 1) = 0.0;
        A.at<float>(2 * i, 2) = 0.0;
        A.at<float>(2 * i, 3) = -u1;
        A.at<float>(2 * i, 4) = -v1;
        A.at<float>(2 * i, 5) = -1;
        A.at<float>(2 * i, 6) = v2*u1;
        A.at<float>(2 * i, 7) = v2*v1;
        A.at<float>(2 * i, 8) = v2;
        A.at<float>(2 * i + 1, 0) = u1;
        A.at<float>(2 * i + 1, 1) = v1;
        A.at<float>(2 * i + 1, 2) = 1;
        A.at<float>(2 * i + 1, 3) = 0.0;
        A.at<float>(2 * i + 1, 4) = 0.0;
        A.at<float>(2 * i + 1, 5) = 0.0;
        A.at<float>(2 * i + 1, 6) = -u2*u1;
        A.at<float>(2 * i + 1, 7) = -u2*v1;
        A.at<float>(2 * i + 1, 8) = -u2;
    }
    cv::Mat u, w, vt;
    // Least-squares solution via SVD: the answer is the right singular
    // vector associated with the smallest singular value
    cv::SVDecomp(A, w, u, vt, cv::SVD::MODIFY_A | cv::SVD::FULL_UV);
    return vt.row(8).reshape(0, 3);
}
```
Computing the fundamental matrix
```cpp
cv::Mat Initializer::calcFFromMatches(const std::vector<cv::Point2f> &points_ref,
                                      const std::vector<cv::Point2f> &points_cur)
{
    /**
     * Build the constraint equations for the fundamental matrix. Given a pair of
     * corresponding points m = (u1,v1,1)^T and m' = (u2,v2,1)^T, the fundamental
     * matrix F satisfies m'^T F m = 0. Writing F = (f_ij), the constraint expands to:
     *   u2u1f_11 + u2v1f_12 + u2f_13 + v2u1f_21 + v2v1f_22 + v2f_23 + u1f_31 + v1f_32 + f_33 = 0
     * Let f = (f_11, f_12, f_13, f_21, f_22, f_23, f_31, f_32, f_33)^T; then
     *   (u2u1, u2v1, u2, v2u1, v2v1, v2, u1, v1, 1) f = 0.
     * Given N correspondences this yields the linear system A f = 0, where A is an
     * N x 9 matrix. Since the fundamental matrix is nonzero, f is a nonzero vector,
     * i.e. the system must have a nontrivial solution. In addition, the fundamental
     * matrix has rank 2, which is an important constraint.
     */
    const int N = points_ref.size();
    cv::Mat A(N, 9, CV_32F);
    for (int i = 0; i < N; i++) {
        const float u1 = points_ref[i].x;
        const float v1 = points_ref[i].y;
        const float u2 = points_cur[i].x;
        const float v2 = points_cur[i].y;
        A.at<float>(i, 0) = u2*u1;
        A.at<float>(i, 1) = u2*v1;
        A.at<float>(i, 2) = u2;
        A.at<float>(i, 3) = v2*u1;
        A.at<float>(i, 4) = v2*v1;
        A.at<float>(i, 5) = v2;
        A.at<float>(i, 6) = u1;
        A.at<float>(i, 7) = v1;
        A.at<float>(i, 8) = 1;
    }
    cv::Mat u, w, vt;
    cv::SVDecomp(A, w, u, vt, cv::SVD::MODIFY_A | cv::SVD::FULL_UV);
    cv::Mat Fpre = vt.row(8).reshape(0, 3);
    // Enforce the rank-2 constraint: zero out the smallest singular value
    cv::SVDecomp(Fpre, w, u, vt, cv::SVD::MODIFY_A | cv::SVD::FULL_UV);
    w.at<float>(2) = 0;
    return u*cv::Mat::diag(w)*vt;
}
```
The next question is model selection: when should the fundamental matrix be used and when the homography? The author recomputes the reprojection error under each estimated model and turns it into a score. I am not entirely clear on the exact thresholds the author chose, so I paste the relevant part of the paper here.
With this, the basic pose initialization is complete. The full code for adding the second frame is as follows:
```cpp
bool Initializer::addSecondFrame(FramePtr cur_frame)
{
    cur_features_ = cur_frame->features_;
    if (cur_features_.size() < 100) {
        OPENSLAM_WARN << "Second image has less than 100 features. Retry in more textured environment." << std::endl;
        return false;
    }
    // Find correspondences with the reference frame
    ORBmatcher matcher(0.9, true);
    int nmatches = matcher.searchForInitialization(ref_frame_, cur_frame, prev_matched_, init_matchex_, 100);
    // Check that there are enough matches
    if (nmatches < 100) {
        OPENSLAM_WARN << "Matching number has less than 100. " << std::endl;
        return false;
    }
    cur_features_ = cur_frame->features_;
    // Record the matches between the current frame and the reference frame
    matches_ref_cur_.clear();
    matches_ref_cur_.reserve(cur_features_.size());
    is_matched_ref_.resize(ref_features_.size());
    for (size_t i = 0, iend = init_matchex_.size(); i < iend; i++) {
        if (init_matchex_[i] >= 0) {
            matches_ref_cur_.push_back(std::make_pair(i, init_matchex_[i]));
            is_matched_ref_[i] = true;
        } else {
            is_matched_ref_[i] = false;
        }
    }
    const int N = matches_ref_cur_.size();
    // All indices, used for randomly drawing groups of 8
    std::vector<size_t> all_indices;
    all_indices.reserve(N);
    std::vector<size_t> available_indices;
    for (int i = 0; i < N; i++) {
        all_indices.push_back(i);
    }
    // Generate the sets of 8 points used by the RANSAC iterations
    ransac_sets_ = std::vector< std::vector<size_t> >(max_iter_num_, std::vector<size_t>(8, 0));
    // Seed the random number generator with time(0)
    srand(time(0));
    for (int it = 0; it < max_iter_num_; it++) {
        available_indices = all_indices;
        // Pick 8 samples out of the whole data set
        for (size_t j = 0; j < 8; j++) {
            int d = available_indices.size();
            int randi = int(((double)rand() / ((double)RAND_MAX + 1.0)) * d);
            int idx = available_indices[randi];
            ransac_sets_[it][j] = idx;
            available_indices[randi] = available_indices.back();
            available_indices.pop_back();
        }
    }
    // Launch threads to compute the fundamental and homography matrices in parallel
    std::vector<bool> matches_inliers_H, matches_inliers_F;
    float SH, SF;
    cv::Mat H, F;
    std::thread threadH(&Initializer::findHomography, this, std::ref(matches_inliers_H), std::ref(SH), std::ref(H));
    std::thread threadF(&Initializer::findFundamental, this, std::ref(matches_inliers_F), std::ref(SF), std::ref(F));
    // Wait until both threads have finished
    threadH.join();
    threadF.join();
    // Compute the ratio of the two scores
    float RH = SH / (SH + SF);
    cv::Mat R_cur_ref; // rotation of the current camera relative to the previous frame, whose pose is the identity
    cv::Mat t_cur_ref; // translation of the current camera
    std::vector<cv::Point3f> init_3d_points;
    std::vector<bool> is_triangulated; // whether triangulation succeeded for each initial match
    cv::Mat K = cur_frame->cam_->cvK();
    // Whether the initial structure is recovered by decomposing the fundamental
    // matrix or the homography depends on the ratio (0.40-0.45)
    if (RH > 0.40) {
        if (!reconstructH(matches_inliers_H, H, K, R_cur_ref, t_cur_ref, init_3d_points, is_triangulated, 1.0, 50))
            return false;
    } else {
        if (!reconstructF(matches_inliers_F, F, K, R_cur_ref, t_cur_ref, init_3d_points, is_triangulated, 1.0, 50))
            return false;
    }
    for (size_t i = 0, iend = init_matchex_.size(); i < iend; i++) {
        if (init_matchex_[i] >= 0 && !is_triangulated[i]) {
            init_matchex_[i] = -1;
            nmatches--;
        }
    }
    // Set the pose of the current frame
    cv::Mat Tcw = cv::Mat::eye(4, 4, CV_32F);
    R_cur_ref.copyTo(Tcw.rowRange(0, 3).colRange(0, 3));
    t_cur_ref.copyTo(Tcw.rowRange(0, 3).col(3));
    cur_frame->T_f_w_ = Tcw;
    return true;
}
```