RGB-D SLAM Learning Notes (7)

Lecture 7: Loop Closure Detection

This lecture mainly adds keyframe selection and loop closure detection.

Keyframe selection: decided from the estimated motion e between the reference frame and the new frame (a small decision sketch follows this list):

  • If e > E_{error}, the motion is too large and is probably a mis-estimation, so the frame is discarded;
  • If there are too few matches, the image quality of the frame is poor, so the frame is discarded;
  • If e < E_{key}, the frame is too close to the reference frame (the previous frame), so it is discarded.
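
A minimal sketch of this decision logic, assuming e is the motion magnitude computed by normofTransform and that E_{error} and E_{key} correspond to the max_norm and keyframe_threshold parameters used later in checkKeyframes (the helper name classifyFrame is purely illustrative):

    // sketch of the keyframe decision described above; names mirror slam.cpp
    enum CHECK_RESULT { NOT_MATCHED = 0, TOO_FAR_AWAY, TOO_CLOSE, KEYFRAME };

    CHECK_RESULT classifyFrame( int inliers, double e,
                                int min_inliers, double max_norm, double keyframe_threshold )
    {
        if ( inliers < min_inliers )   return NOT_MATCHED;   // too few matches: poor-quality frame
        if ( e >= max_norm )           return TOO_FAR_AWAY;  // motion too large, probably a mis-estimate
        if ( e <= keyframe_threshold ) return TOO_CLOSE;     // too close to the reference frame
        return KEYFRAME;                                     // accept as a new keyframe
    }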

The basic flow of the SLAM program, in pseudocode (a condensed C++ skeleton follows the list):

  1. Initialize the keyframe sequence F and put the first frame f0 into F;
  2. For each new frame I, decide whether it is a keyframe;
  3. Enter the loop closure routine. Nearby loops: match I against the last m keyframes of F; whenever a match succeeds, add an edge to the graph;
  4. Random loops: randomly pick n frames from F and match them against I; whenever a match succeeds, add an edge to the graph;
  5. Append I to the end of F. If there is more data, go back to step 2; otherwise run the optimization and stitch the map.
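
A condensed skeleton of this flow, mirroring how the slam.cpp listing below realizes these steps (abridged, not a separate implementation):

    vector<FRAME> keyframes;                 // F
    keyframes.push_back( firstFrame );       // f0
    for ( int idx = startIndex + 1; idx < endIndex; idx++ )
    {
        FRAME curr = readFrame( idx, pd );                       // new frame I
        computeKeyPointsAndDesp( curr, detector, descriptor );
        CHECK_RESULT r = checkKeyframes( keyframes.back(), curr, globalOptimizer ); // keyframe?
        if ( r == KEYFRAME )
        {
            checkNearbyLoops( keyframes, curr, globalOptimizer ); // nearby loops
            checkRandomLoops( keyframes, curr, globalOptimizer ); // random loops
            keyframes.push_back( curr );                          // append I to F
        }
    }
    // when there is no more data: optimize the pose graph and stitch the map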

Final code:

parameters.txt

# This is a parameter file

# parameters defined in part 4
detector=ORB
descriptor=ORB
good_match_threshold=10
# camera
camera.cx=325.5
camera.cy=253.5
camera.fx=518.0
camera.fy=519.0
camera.scale=1000.0

# part 5 
# data
# start and end indices
start_index=1
end_index=780
# data directories
rgb_dir=../data/rgb_png/
rgb_extension=.png
depth_dir=../data/depth_png/
depth_extension=.png
# point cloud resolution
voxel_grid=0.01
# whether to visualize in real time
visualize_pointcloud=yes
# minimum number of good matches
min_good_match=10
# minimum number of inliers
min_inliers=5
# maximum motion error
max_norm=0.2

# part 7
keyframe_threshold=0.08
max_norm_lp=2.0
nearby_loops=5
random_loops=5
check_loop_closure=yes

CMakeLists.txt

FIND_PACKAGE(PCL REQUIRED COMPONENTS common io visualization filters)

SET(OpenCV_DIR "/home/lzy/opencv-2.4.13.6/build")
FIND_PACKAGE(OpenCV REQUIRED)
INCLUDE_DIRECTORIES(${OpenCV_INCLUDE_DIRS})
#MESSAGE(STATUS "version:${OpenCV_VERSION}")

ADD_DEFINITIONS(${PCL_DEFINITIONS})
INCLUDE_DIRECTORIES(${PCL_INCLUDE_DIRS})
LINK_DIRECTORIES(${PCL_LIBRARY_DIRS})

# add the g2o dependency
LIST(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake_modules)
SET(G2O_ROOT "/usr/local/include/g2o")
FIND_PACKAGE(G2O)
# CSparse
FIND_PACKAGE(CSparse)
INCLUDE_DIRECTORIES(${G2O_INCLUDE_DIR} ${CSPARSE_INCLUDE_DIR})

#INCLUDE_DIRECTORIES("/usr/include/eigen3")

ADD_EXECUTABLE(generate_pointcloud generatePointCloud.cpp)
TARGET_LINK_LIBRARIES(generate_pointcloud ${OpenCV_LIBS} ${PCL_LIBRARIES})

ADD_LIBRARY(slambase slamBase.cpp)
TARGET_LINK_LIBRARIES(slambase ${OpenCV_LIBS} ${PCL_LIBRARIES})

ADD_EXECUTABLE(detect_features detectFeatures.cpp)
TARGET_LINK_LIBRARIES(detect_features slambase ${OpenCV_LIBS} ${PCL_LIBRARIES})

ADD_EXECUTABLE(join_pointcloud joinPointCloud.cpp)
TARGET_LINK_LIBRARIES(join_pointcloud slambase ${OpenCV_LIBS} ${PCL_LIBRARIES})

ADD_EXECUTABLE(visual_odometry visualOdometry.cpp)
TARGET_LINK_LIBRARIES(visual_odometry slambase ${OpenCV_LIBS} ${PCL_LIBRARIES})

ADD_EXECUTABLE(slamend slamEnd.cpp)
TARGET_LINK_LIBRARIES(slamend 
	slambase 
	${OpenCV_LIBS} 
	${PCL_LIBRARIES}
	g2o_core g2o_types_slam3d g2o_solver_csparse g2o_stuff g2o_csparse_extension ${CSPARSE_LIBRARY})

ADD_EXECUTABLE(slam slam.cpp)
TARGET_LINK_LIBRARIES(slam 
	slambase 
	${OpenCV_LIBS} 
	${PCL_LIBRARIES}
	g2o_core g2o_types_slam3d g2o_solver_csparse g2o_stuff g2o_csparse_extension ${CSPARSE_LIBRARY})

slamBase.h

#pragma once

// headers
// C++ standard library
#include <fstream>
#include <vector>
#include <map>
using namespace std;
// Eigen
#include <Eigen/Core>
#include <Eigen/Geometry>
// OpenCV
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/core/eigen.hpp>
#include <opencv2/features2d/features2d.hpp>
//#include <opencv2/nonfree/nonfree.hpp>
#include <opencv2/calib3d/calib3d.hpp>
// PCL
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
//#include <pcl/point_cloud.h>
#include <pcl/common/transforms.h>
#include <pcl/visualization/cloud_viewer.h>
#include <pcl/filters/voxel_grid.h>
// type definitions
typedef pcl::PointXYZRGBA PointT;
typedef pcl::PointCloud<PointT> PointCloud;

// camera intrinsic parameters
struct CAMERA_INTRINSIC_PARAMETERS 
{ 
    double cx, cy, fx, fy, scale;
};

// frame structure
struct FRAME
{
    int frameID;
    cv::Mat rgb, depth; // color and depth images of this frame
    cv::Mat desp;       // feature descriptors
    vector<cv::KeyPoint> kp; // keypoints
};

// PnP result
struct RESULT_OF_PNP
{
    cv::Mat rvec, tvec;
    int inliers;
};

// function interface
// image2PointCloud: convert an RGB image plus depth image into a point cloud
PointCloud::Ptr image2PointCloud( cv::Mat& rgb, cv::Mat& depth, CAMERA_INTRINSIC_PARAMETERS& camera );

// point2dTo3d: convert a single point from image coordinates to space coordinates
// input: Point3f (u,v,d)
cv::Point3f point2dTo3d( cv::Point3f& point, CAMERA_INTRINSIC_PARAMETERS& camera );

// computeKeyPointsAndDesp: extract keypoints and feature descriptors in one call
void computeKeyPointsAndDesp( FRAME& frame, string detector, string descriptor );

// estimateMotion: compute the motion between two frames
// input: frame 1, frame 2, camera intrinsics
RESULT_OF_PNP estimateMotion( FRAME& frame1, FRAME& frame2, CAMERA_INTRINSIC_PARAMETERS& camera );

// rvec, tvec --> T
Eigen::Isometry3d cvMat2Eigen(cv::Mat& rvec, cv::Mat& tvec);

// join point clouds
PointCloud::Ptr joinPointCloud(PointCloud::Ptr original, FRAME& newFrame, Eigen::Isometry3d T, CAMERA_INTRINSIC_PARAMETERS& camera);

// parameter reader class
class ParameterReader
{
public:
    ParameterReader( string filename="../parameters.txt" )
    {
        ifstream fin( filename.c_str() );
        if (!fin)
        {
            cerr<<"parameter file does not exist."<<endl;
            return;
        }
        while(!fin.eof())
        {
            string str;
            getline( fin, str );
            if (str.empty() || str[0] == '#')
            {
                // lines starting with '#' are comments
                continue;
            }

            int pos = str.find("=");
            if (pos == -1)
                continue;
            string key = str.substr( 0, pos );
            string value = str.substr( pos+1, str.length() );
            data[key] = value;

            if ( !fin.good() )
                break;
        }
    }
    string getData( string key )
    {
        map<string, string>::iterator iter = data.find(key);
        if (iter == data.end())
        {
            cerr<<"Parameter name "<<key<<" not found!"<<endl;
            return string("NOT_FOUND");
        }
        return iter->second;
    }
public:
    map<string, string> data;
};

inline static CAMERA_INTRINSIC_PARAMETERS getDefaultCamera()
{
    ParameterReader pd;
    CAMERA_INTRINSIC_PARAMETERS camera;
    camera.fx = atof( pd.getData( "camera.fx" ).c_str() );
    camera.fy = atof( pd.getData( "camera.fy" ).c_str() );
    camera.cx = atof( pd.getData( "camera.cx" ).c_str() );
    camera.cy = atof( pd.getData( "camera.cy" ).c_str() );
    camera.scale = atof( pd.getData( "camera.scale" ).c_str() );
    return camera;
}

//the following are UBUNTU/LINUX ONLY terminal color
#define RESET "\033[0m"
#define BLACK "\033[30m" /* Black */
#define RED "\033[31m" /* Red */
#define GREEN "\033[32m" /* Green */
#define YELLOW "\033[33m" /* Yellow */
#define BLUE "\033[34m" /* Blue */
#define MAGENTA "\033[35m" /* Magenta */
#define CYAN "\033[36m" /* Cyan */
#define WHITE "\033[37m" /* White */
#define BOLDBLACK "\033[1m\033[30m" /* Bold Black */
#define BOLDRED "\033[1m\033[31m" /* Bold Red */
#define BOLDGREEN "\033[1m\033[32m" /* Bold Green */
#define BOLDYELLOW "\033[1m\033[33m" /* Bold Yellow */
#define BOLDBLUE "\033[1m\033[34m" /* Bold Blue */
#define BOLDMAGENTA "\033[1m\033[35m" /* Bold Magenta */
#define BOLDCYAN "\033[1m\033[36m" /* Bold Cyan */
#define BOLDWHITE "\033[1m\033[37m" /* Bold White */

slamBase.cpp

#include "slamBase.h"

PointCloud::Ptr image2PointCloud( cv::Mat& rgb, cv::Mat& depth, CAMERA_INTRINSIC_PARAMETERS& camera )
{
    PointCloud::Ptr cloud ( new PointCloud );

    for (int m = 0; m < depth.rows; m+=2)
        for (int n=0; n < depth.cols; n+=2)
        {
            // fetch the value at (m,n) in the depth map
            ushort d = depth.ptr<ushort>(m)[n];
            // d may be missing; if so, skip this point
            if (d == 0)
                continue;
            // d is valid, so add a point to the cloud
            PointT p;

            // compute the spatial coordinates of this point
            p.z = double(d) / camera.scale;
            p.x = (n - camera.cx) * p.z / camera.fx;
            p.y = (m - camera.cy) * p.z / camera.fy;
            
            // fetch its color from the rgb image
            // rgb is a three-channel BGR image, so read the channels in this order
            p.b = rgb.ptr<uchar>(m)[n*3];
            p.g = rgb.ptr<uchar>(m)[n*3+1];
            p.r = rgb.ptr<uchar>(m)[n*3+2];

            // add p to the point cloud
            cloud->points.push_back( p );
        }
    // set the point cloud's properties and return it
    cloud->height = 1;
    cloud->width = cloud->points.size();
    cloud->is_dense = false;

    return cloud;
}

cv::Point3f point2dTo3d( cv::Point3f& point, CAMERA_INTRINSIC_PARAMETERS& camera )
{
    cv::Point3f p; // 3D point
    p.z = double( point.z ) / camera.scale;
    p.x = ( point.x - camera.cx) * p.z / camera.fx;
    p.y = ( point.y - camera.cy) * p.z / camera.fy;
    return p;
}

// computeKeyPointsAndDesp: extract keypoints and feature descriptors in one call
void computeKeyPointsAndDesp( FRAME& frame, string detector, string descriptor )
{
    cv::Ptr<cv::FeatureDetector> _detector;
    cv::Ptr<cv::DescriptorExtractor> _descriptor;

    //cv::initModule_nonfree();
    _detector = cv::FeatureDetector::create( detector.c_str() );
    _descriptor = cv::DescriptorExtractor::create( descriptor.c_str() );

    if (!_detector || !_descriptor)
    {
        cerr<<"Unknown detector or descriptor type!"<<detector<<","<<descriptor<<endl;
        return;
    }

    _detector->detect( frame.rgb, frame.kp );
    _descriptor->compute( frame.rgb, frame.kp, frame.desp );

    return;
}

// estimateMotion: compute the motion between two frames
// input: frame 1 and frame 2
// output: rvec and tvec
RESULT_OF_PNP estimateMotion( FRAME& frame1, FRAME& frame2, CAMERA_INTRINSIC_PARAMETERS& camera )
{
    RESULT_OF_PNP result;
    static ParameterReader pd;
    vector< cv::DMatch > matches;
    //cv::FlannBasedMatcher matcher;
    cv::BFMatcher matcher;
    matcher.match( frame1.desp, frame2.desp, matches );
   
    //cout<<"find total "<<matches.size()<<" matches."<<endl;
    vector< cv::DMatch > goodMatches;
    double minDis = 9999;
    double good_match_threshold = atof( pd.getData( "good_match_threshold" ).c_str() );
    for ( size_t i=0; i<matches.size(); i++ )
    {
        if ( matches[i].distance < minDis )
            minDis = matches[i].distance;
    }
    //cout << "min dis = " << minDis << endl;
    if(minDis<10)minDis=10;

    for ( size_t i=0; i<matches.size(); i++ )
    {
        if (matches[i].distance < good_match_threshold*minDis)
            goodMatches.push_back( matches[i] );
    }
    //cout<<"good matches: "<<goodMatches.size()<<endl;
    if (goodMatches.size() <= 5) 
    {
        result.inliers = -1;
        return result;
    }
    // 3D points in the first frame
    vector<cv::Point3f> pts_obj;
    // image points in the second frame
    vector< cv::Point2f > pts_img;

    for (size_t i=0; i<goodMatches.size(); i++)
    {
        // query refers to the first frame, train to the second
        cv::Point2f p = frame1.kp[goodMatches[i].queryIdx].pt;
        // be careful when reading d: x points right and y points down, so y is the row and x is the column!
        ushort d = frame1.depth.ptr<ushort>( int(p.y) )[ int(p.x) ];
        if (d == 0)
            continue;
        pts_img.push_back( cv::Point2f( frame2.kp[goodMatches[i].trainIdx].pt ) );

        // convert (u,v,d) to (x,y,z)
        cv::Point3f pt ( p.x, p.y, d );
        cv::Point3f pd = point2dTo3d( pt, camera );
        pts_obj.push_back( pd );
    }

    if(pts_obj.size()==0 || pts_img.size()==0)
    {
	result.inliers = -1;
	return result;
    }

    double camera_matrix_data[3][3] = {
        {camera.fx, 0, camera.cx},
        {0, camera.fy, camera.cy},
        {0, 0, 1}
    };

    //cout<<"solving pnp"<<endl;
    // build the camera matrix
    cv::Mat cameraMatrix( 3, 3, CV_64F, camera_matrix_data );
    cv::Mat rvec, tvec, inliers;
    // solve PnP
    cv::solvePnPRansac( pts_obj, pts_img, cameraMatrix, cv::Mat(), rvec, tvec, false, 100, 1.0, 100, inliers );

    
    result.rvec = rvec;
    result.tvec = tvec;
    result.inliers = inliers.rows;

    return result;
}

// rvec, tvec --> T
Eigen::Isometry3d cvMat2Eigen(cv::Mat& rvec, cv::Mat& tvec)
{
	cv::Mat R;
	cv::Rodrigues(rvec, R);
	Eigen::Matrix3d r;
	//cv::cv2eigen(R, r);
	for(int i=0; i<3; i++)
		for(int j=0; j<3; j++)
			r(i,j) = R.at<double>(i,j);
	
	Eigen::Isometry3d T = Eigen::Isometry3d::Identity();
	
	Eigen::AngleAxisd angle(r);
	//Eigen::Translation<double, 3> trans(tvec.at<double>(0,0), tvec.at<double>(0,1), tvec.at<double>(0,2));
	T = angle;
	T(0,3) = tvec.at<double>(0,0);
	T(1,3) = tvec.at<double>(1,0);
	T(2,3) = tvec.at<double>(2,0);
	return T;
}

// joinPointCloud
// input: the original point cloud, the new frame and its pose
// output: the original cloud with the new frame's cloud merged in
PointCloud::Ptr joinPointCloud(PointCloud::Ptr original, FRAME& newFrame, Eigen::Isometry3d T, CAMERA_INTRINSIC_PARAMETERS& camera)
{
	PointCloud::Ptr newCloud = image2PointCloud(newFrame.rgb, newFrame.depth, camera);
	// merge the point clouds
	PointCloud::Ptr output (new PointCloud());
	pcl::transformPointCloud(*original, *output, T.matrix());
	*newCloud += *output;
	// downsample with a voxel grid filter
	static pcl::VoxelGrid<PointT> voxel;
	static ParameterReader pd;
	double gridsize = atof(pd.getData("voxel_grid").c_str());
	voxel.setLeafSize(gridsize, gridsize, gridsize);
	voxel.setInputCloud(newCloud);
	PointCloud::Ptr tmp(new PointCloud());
	voxel.filter(*tmp);
	return tmp;
}

slam.cpp

#include <iostream>
#include <fstream>
#include <sstream>
using namespace std;

#include "slamBase.h"

#include <pcl/filters/voxel_grid.h>
#include <pcl/filters/passthrough.h>
// g2o headers
#include <g2o/types/slam3d/types_slam3d.h> // vertex types
#include <g2o/core/sparse_optimizer.h>
#include <g2o/core/block_solver.h>
#include <g2o/core/factory.h>
#include <g2o/core/optimization_algorithm_factory.h>
#include <g2o/core/optimization_algorithm_gauss_newton.h>
//#include <g2o/solvers/csparse/linear_solver_csparse.h>
#include <g2o/solvers/eigen/linear_solver_eigen.h>
#include <g2o/core/robust_kernel.h>
#include <g2o/core/robust_kernel_impl.h>
//#include <g2o/core/robust_kernel_factory.h>
#include <g2o/core/optimization_algorithm_levenberg.h>

// choose the optimization method
typedef g2o::BlockSolver_6_3 SlamBlockSolver;
typedef g2o::LinearSolverEigen< SlamBlockSolver::PoseMatrixType > SlamLinearSolver; 
//typedef g2o::LinearSolverCSparse<SlamBlockSolver::PoseMatrixType> SlamLinearSolver;
// read one frame of data, given its index
FRAME readFrame(int index, ParameterReader& pd);
// estimate the magnitude of a motion
double normofTransform(cv::Mat rvec, cv::Mat tvec);
// possible results of checking two frames
enum CHECK_RESULT {NOT_MATCHED=0, TOO_FAR_AWAY, TOO_CLOSE, KEYFRAME};
// function declarations
CHECK_RESULT checkKeyframes(FRAME& f1, FRAME& f2, g2o::SparseOptimizer& opti, bool is_loops=false);
// check loop closures against nearby keyframes
void checkNearbyLoops(vector<FRAME>& frames, FRAME& currFrame, g2o::SparseOptimizer& opti);
// check loop closures against randomly chosen keyframes
void checkRandomLoops(vector<FRAME>& frames, FRAME& currFrame, g2o::SparseOptimizer& opti);

int main(int argc, char** argv)
{
	// the first part is the same as the visual odometry
	ParameterReader pd;
	int startIndex = atoi(pd.getData("start_index").c_str());
	int endIndex = atoi(pd.getData("end_index").c_str());

	// all keyframes are stored here
	vector<FRAME> keyframes;
	// initialize
	cout << "Initializing ..." << endl;
	int currIndex = startIndex;
	FRAME currFrame = readFrame(currIndex, pd);

	string detector = pd.getData("detector");
	string descriptor = pd.getData("descriptor");
	CAMERA_INTRINSIC_PARAMETERS camera = getDefaultCamera();
	computeKeyPointsAndDesp(currFrame, detector, descriptor);
	PointCloud::Ptr cloud = image2PointCloud(currFrame.rgb, currFrame.depth, camera);
	
	//pcl::visualization::CloudViewer viewer("viewer");
	
	// whether to display the point cloud
	//bool visualize = pd.getData("visualize_pointcloud")==string("yes");
	
	/********************************
	// new in this lecture: g2o initialization
	********************************/
	// initialize the solver
	SlamLinearSolver* linearSolver = new SlamLinearSolver();
	linearSolver->setBlockOrdering(false);
	SlamBlockSolver* blockSolver = new SlamBlockSolver(linearSolver);
	g2o::OptimizationAlgorithmLevenberg* solver = new g2o::OptimizationAlgorithmLevenberg(blockSolver);
	
	g2o::SparseOptimizer globalOptimizer; // this is the optimizer we actually use
	globalOptimizer.setAlgorithm(solver);
	// do not print debug output
	globalOptimizer.setVerbose(false);

	// add the first vertex to globalOptimizer
	g2o::VertexSE3* v = new g2o::VertexSE3();
	v->setId(currIndex);
	v->setEstimate(Eigen::Isometry3d::Identity());
	v->setFixed(true);
	globalOptimizer.addVertex(v);

	keyframes.push_back(currFrame);

	// threshold used when deciding whether to add a new keyframe
	double keyframe_threshold = atof(pd.getData("keyframe_threshold").c_str());

	bool check_loop_closure = pd.getData("check_loop_closure")==string("yes");
	
	for(currIndex = startIndex+1; currIndex < endIndex; currIndex++)
	{
		cout << "Reading files " << currIndex << endl;
		FRAME currFrame = readFrame(currIndex, pd);
		computeKeyPointsAndDesp(currFrame, detector, descriptor);
		// match this frame against the last keyframe in keyframes
		CHECK_RESULT result = checkKeyframes(keyframes.back(), currFrame, globalOptimizer);
		switch(result)
		{
		case NOT_MATCHED:
			// not matched; simply skip this frame
			cout << RED"Not enough inliers." << endl;
			break;
		case TOO_FAR_AWAY:
			// too far away; probably an error, skip
			cout << RED"Too far away, may be an error." << endl;
			break;
		case TOO_CLOSE:
			// too close; no need to add it
			cout << RESET"Too close, not a keyframe." << endl;
			break;
		case KEYFRAME:
			cout << GREEN"This is a new keyframe." << endl;
			// check for loop closures
			if(check_loop_closure)
			{
				checkNearbyLoops(keyframes, currFrame, globalOptimizer);
				checkRandomLoops(keyframes, currFrame, globalOptimizer);
			}
			keyframes.push_back(currFrame);
			break;
		default:
			break;
		}
	}	
	//cout << "There are " << keyframes.size() << " keyframes." << endl;
	// optimization
	cout << "optimizing pose graph, vertices:" << globalOptimizer.vertices().size() << endl;
	globalOptimizer.save("../data/result_before.g2o");
	globalOptimizer.initializeOptimization();
	globalOptimizer.optimize(100);
	globalOptimizer.save("../data/result_after.g2o");
	cout << "Optimization done." << endl;

	// stitch the point cloud map
	cout << "saving the point cloud map..." << endl;
	PointCloud::Ptr output (new PointCloud()); // global map
	PointCloud::Ptr tmp (new PointCloud());

	pcl::VoxelGrid<PointT> voxel; // voxel grid filter, adjusts the map resolution
	pcl::PassThrough<PointT> pass; // pass-through filter on z; the RGB-D camera's valid depth range is limited, so drop points that are too far away

	pass.setFilterFieldName("z");
	pass.setFilterLimits(0.0, 4.0); // discard everything beyond 4 m

	double gridsize = atof(pd.getData("voxel_grid").c_str()); // the resolution can be adjusted in parameters.txt
	voxel.setLeafSize(gridsize, gridsize, gridsize);

	for(size_t i=0; i<keyframes.size(); i++)
	{
		// fetch one keyframe's vertex from g2o
		g2o::VertexSE3* vertex = dynamic_cast<g2o::VertexSE3*>(globalOptimizer.vertex(keyframes[i].frameID));
		Eigen::Isometry3d pose = vertex->estimate(); // the optimized pose of this frame
		PointCloud::Ptr newCloud = image2PointCloud(keyframes[i].rgb, keyframes[i].depth, camera); // convert to a point cloud
		// filtering
		voxel.setInputCloud(newCloud);
		voxel.filter(*tmp);
		pass.setInputCloud(tmp);
		pass.filter(*newCloud);
		// transform the cloud and add it to the global map
		pcl::transformPointCloud(*newCloud, *tmp, pose.matrix());
		*output += *tmp;
		tmp->clear();
		newCloud->clear();
	}

	voxel.setInputCloud(output);
	voxel.filter(*tmp);
	// save the map
	pcl::io::savePCDFile("../data/result.pcd", *tmp);

	cout << "Final map is saved." << endl;
	//globalOptimizer.clear();

	return 0;
}

FRAME readFrame(int index, ParameterReader& pd)
{
	FRAME f;
	string rgbDir = pd.getData("rgb_dir");
	string depthDir = pd.getData("depth_dir");
	string rgbExt = pd.getData("rgb_extension");
	string depthExt = pd.getData("depth_extension");

	stringstream ss;
	ss << rgbDir << index << rgbExt;
	string filename;
	ss >> filename;
	f.rgb = cv::imread(filename);
	
	ss.clear();
	filename.clear();
	ss << depthDir << index << depthExt;
	ss >> filename;
	f.depth = cv::imread(filename, -1);
	f.frameID = index;
	return f;
}

double normofTransform(cv::Mat rvec, cv::Mat tvec)
{
	return fabs(min(cv::norm(rvec), 2*M_PI-cv::norm(rvec))) + fabs(cv::norm(tvec));
}

CHECK_RESULT checkKeyframes(FRAME& f1, FRAME& f2, g2o::SparseOptimizer& opti, bool is_loops)
{
	static ParameterReader pd;
	static int min_inliers = atoi(pd.getData("min_inliers").c_str());
	static double max_norm = atof(pd.getData("max_norm").c_str());
	static double keyframe_threshold = atof(pd.getData("keyframe_threshold").c_str());
	static double max_norm_lp = atof(pd.getData("max_norm_lp").c_str());
	static CAMERA_INTRINSIC_PARAMETERS camera = getDefaultCamera();
	//static g2o::RobustKernel* robustKernel = g2o::RobustKernelFactory::instance()->construct("Cauchy");
	// compare f1 and f2
	RESULT_OF_PNP result = estimateMotion(f1, f2, camera);
	// not enough inliers: drop this frame
	if(result.inliers<min_inliers)return NOT_MATCHED;
	// check whether the motion is too large
	double norm = normofTransform(result.rvec, result.tvec);
	if(is_loops == false)
	{
		if(norm >= max_norm)return TOO_FAR_AWAY;
	}
	else
	{
		if(norm >= max_norm_lp)return TOO_FAR_AWAY;
	}
	// too close
	if(norm <= keyframe_threshold)return TOO_CLOSE;
	// add this vertex and the edge linking it to the previous frame to g2o
	// vertex part
	// the vertex only needs its id to be set
	if(is_loops == false)
	{
		g2o::VertexSE3 *v = new g2o::VertexSE3();
		v->setId(f2.frameID);
		v->setEstimate(Eigen::Isometry3d::Identity());
		opti.addVertex(v);
	}
	// edge part
	g2o::EdgeSE3* edge = new g2o::EdgeSE3();
	// the ids of the two vertices this edge connects
	//edge->vertices()[0] = opti.vertex(f1.frameID);
	//edge->vertices()[1] = opti.vertex(f2.frameID);
	edge->setVertex(0, opti.vertex(f1.frameID));
	edge->setVertex(1, opti.vertex(f2.frameID));
	//edge->setRobustKernel(robustKernel);
	edge->setRobustKernel(new g2o::RobustKernelHuber());
	// information matrix
	Eigen::Matrix<double, 6, 6> information = Eigen::Matrix<double, 6, 6>::Identity();
	// the information matrix is the inverse of the covariance matrix and expresses our prior estimate of the edge's accuracy
	// since the pose is 6D, the information matrix is 6x6; assuming the position and angle estimates both have a precision of 0.1 and are independent,
	// the covariance is a diagonal matrix with 0.01 on the diagonal, and the information matrix has 100 on the diagonal
	information(0,0) = information(1,1) = information(2,2) = 100;
	information(3,3) = information(4,4) = information(5,5) = 100;
	// the angular entries can also be set larger, meaning the angle estimate is trusted more
	edge->setInformation(information);
	// the edge's measurement is the result of the PnP solution
	Eigen::Isometry3d T = cvMat2Eigen(result.rvec, result.tvec);
	edge->setMeasurement(T.inverse());
	// add this edge to the graph
	opti.addEdge(edge);
	return KEYFRAME;
}

void checkNearbyLoops(vector<FRAME>& frames, FRAME& currFrame, g2o::SparseOptimizer& opti)
{
	static ParameterReader pd;
	static int nearby_loops = atoi(pd.getData("nearby_loops").c_str());
	// simply test currFrame against the last few frames in frames
	if(frames.size() <= nearby_loops)
	{
		// not enough keyframes; check all of them
		for(size_t i=0; i<frames.size(); i++)
		{
			checkKeyframes(frames[i], currFrame, opti, true);
		}
	}
	else
	{
		// check the nearest ones
		for(size_t i=frames.size()-nearby_loops; i<frames.size(); i++)
		{
			checkKeyframes(frames[i], currFrame, opti, true);
		}
	}
}

void checkRandomLoops(vector<FRAME>& frames, FRAME& currFrame, g2o::SparseOptimizer& opti)
{
	static ParameterReader pd;
	static int random_loops = atoi(pd.getData("random_loops").c_str());
	srand((unsigned int) time(NULL));
	// randomly pick some frames to check
	if(frames.size() <= random_loops)
	{
		// not enough keyframes; check all of them
		for(size_t i=0; i<frames.size(); i++)
		{
			checkKeyframes(frames[i], currFrame, opti, true);
		}
	}
	else
	{
		// randomly check loops
		for(int i=0; i<random_loops; i++)
		{
			int index = rand()%frames.size();
			checkKeyframes(frames[index], currFrame, opti, true);
		}
	}
}

Note in particular that the author replaced the SIFT features with ORB, which required the following changes:

1. The detector/descriptor parameters in parameters.txt (SIFT -> ORB);

2. In slamBase, the nonfree module initialization inside computeKeyPointsAndDesp is commented out:

      //cv::initModule_nonfree();

3. In slamBase, the matcher used for feature matching inside estimateMotion:

      cv::FlannBasedMatcher matcher;     ->     cv::BFMatcher matcher;    These are approximate nearest-neighbour matching and brute-force matching, respectively.

C++: BFMatcher::BFMatcher(int normType=NORM_L2, bool crossCheck=false )

Parameters:
  • normType: one of NORM_L1, NORM_L2, NORM_HAMMING, NORM_HAMMING2. L1 and L2 are for SIFT and SURF; the latter two are for ORB.
  • crossCheck: if true, cross-check filtering is applied.

A configuration sketch for ORB follows.
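
Since ORB produces binary descriptors, NORM_HAMMING is the appropriate norm for it. A minimal sketch (the estimateMotion listing above simply uses the default constructor, so the explicit norm here is an illustrative variation, not the author's code):

    // brute-force matcher configured for ORB's binary descriptors
    cv::BFMatcher matcher( cv::NORM_HAMMING );
    // cv::BFMatcher matcher( cv::NORM_HAMMING, true ); // same, with cross-check filtering enabled
    std::vector<cv::DMatch> matches;
    matcher.match( frame1.desp, frame2.desp, matches );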

Incorrect matches are very likely to occur during matching. They mainly come in two forms: a feature point is matched to the wrong counterpart, or a feature point in one image cannot be matched at all. Common ways of removing bad matches are:

  • Cross-check filtering: if a feature point in the first image matches a feature point in the second image, the check is also run in the opposite direction, i.e. the point in the second image is matched back against the first image; the pair is accepted only if this reverse match succeeds. OpenCV's BFMatcher already contains this filter: construct it as BFMatcher matcher(NORM_L2, true), i.e. set the second constructor argument to true.
  • Ratio test: use KNNMatch with K = 2, so that each query returns its two nearest descriptors; a match is accepted only when the distance of the first match is sufficiently smaller than that of the second (see the sketch after this list).
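
A minimal sketch of the ratio test, assuming the FRAME struct from slamBase.h and a ratio of 0.8 (the ratio value is an assumption, not taken from the original code):

    // Lowe-style ratio test on the two nearest neighbours of each descriptor
    cv::BFMatcher matcher( cv::NORM_HAMMING );
    std::vector< std::vector<cv::DMatch> > knnMatches;
    matcher.knnMatch( frame1.desp, frame2.desp, knnMatches, 2 ); // k = 2
    std::vector<cv::DMatch> goodMatches;
    for ( size_t i = 0; i < knnMatches.size(); i++ )
    {
        // keep a match only if it is clearly better than the second-best candidate
        if ( knnMatches[i].size() == 2 &&
             knnMatches[i][0].distance < 0.8 * knnMatches[i][1].distance )
            goodMatches.push_back( knnMatches[i][0] );
    }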

In addition, the descriptors produced by feature extraction differ in data type: some are float (e.g. SurfDescriptorExtractor, SiftDescriptorExtractor) while others are uchar (e.g. ORB, BriefDescriptorExtractor). Float descriptors can be matched with FlannBasedMatcher, NORM_L1 or NORM_L2; uchar descriptors are matched with NORM_HAMMING or NORM_HAMMING2. ORB and BRIEF descriptors can therefore only be used with the brute-force matcher.

4. The robust kernel used for the graph optimization in the main program changed:

      //static g2o::RobustKernel* robustKernel = g2o::RobustKernelFactory::instance()->construct("Cauchy");

      //edge->setRobustKernel(robustKernel);

      edge->setRobustKernel(new g2o::RobustKernelHuber());

along with the corresponding header:

      //#include <g2o/core/robust_kernel_factory.h>

      #include <g2o/core/robust_kernel_impl.h>

Graph optimization also makes use of a kind of kernel function. Kernels are introduced because SLAM can produce erroneous edges. Data association in SLAM has troubled researchers for a long time: because of changes in the scene, noise and so on, the robot cannot be sure that a landmark it currently sees really is a particular landmark in its database. What if it misidentifies one? What happens when an edge that should never have been added ends up in the graph?

In that case the optimizer is left helpless: it sees an edge with a huge error and tries to adjust the estimates of the nodes connected by that edge so that they satisfy this edge's unreasonable demand. Because the error of this edge really is large, it often drowns out the influence of the correct edges, and the optimization ends up concentrating on adjusting a wrong value.

This is why kernel functions exist. A kernel guarantees that the error of any single edge cannot grow without bound and mask the other edges. Concretely, the original squared-norm error metric is replaced by a function that grows more slowly, while remaining smooth so that it can still be differentiated. Because they make the whole optimization more robust, they are called robust kernels.
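
In symbols (a standard formulation, not spelled out in the original post): instead of minimizing \sum_{ij}\left\|e_{ij}\right\|^{2}, the optimizer minimizes \sum_{ij}\rho\left(\left\|e_{ij}\right\|\right), where \rho grows quadratically for small errors but only linearly for large ones.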

Many robust kernels are piecewise functions that grow only linearly for large inputs, for example the Cauchy kernel and the Huber kernel.
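
As a usage note (an illustrative variation on the code above; the delta value is an assumption, not from the original): in g2o the width of the kernel can be set explicitly via setDelta, so that errors below delta are penalized quadratically and larger ones only linearly.

    // attach a Huber kernel with an explicit width to an edge
    g2o::EdgeSE3* edge = new g2o::EdgeSE3();
    g2o::RobustKernelHuber* rk = new g2o::RobustKernelHuber();
    rk->setDelta( 1.0 );           // errors above this threshold grow only linearly
    edge->setRobustKernel( rk );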

Cauchy kernel: k(x,y)=\frac{1}{\|x-y\|^{2}/\sigma+1}

Huber kernel:
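
The original post does not spell this formula out; its standard definition, supplied here for reference, is \rho_{\delta}(e)=\begin{cases}\frac{1}{2}e^{2}, & |e|\le\delta\\ \delta\left(|e|-\frac{1}{2}\delta\right), & |e|>\delta\end{cases}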