Robot Vision Project: Visual Detection and Recognition + Robot Following (12)
Task Progress
This time I ran into OpenCV data-format conversion, mainly converting between CvMat, IplImage, cv::Mat, and InputArray. To extract image features I want to pass a grayscale image to the SURF feature detector and compute keypoints, and before that the image's data format has to be converted:
cv::xfeatures2d::SURF::detectAndCompute(InputArray image, InputArray mask, std::vector<KeyPoint> &keypoints, OutputArray descriptors, bool useProvidedKeypoints = false)
The data format has to be converted correctly, otherwise detectAndCompute() fails with the error message: no matching function...
detectAndCompute() requires the image parameter to be of type InputArray, but normally you just pass a Mat directly as the actual argument.
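For reference, a minimal sketch of the normal call (my own example; the SURF threshold 800 and the placeholder frame are arbitrary). A cv::Mat binds to an InputArray parameter implicitly, so no manual conversion is needed:

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/xfeatures2d.hpp>

int main()
{
    cv::Mat bgr(480, 640, CV_8UC3, cv::Scalar::all(128)); // placeholder 8-bit BGR frame
    cv::Mat gray, descriptors;
    std::vector<cv::KeyPoint> keypoints;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
    cv::Ptr<cv::xfeatures2d::SURF> surf = cv::xfeatures2d::SURF::create(800);
    surf->detectAndCompute(gray, cv::noArray(), keypoints, descriptors); // Mat binds to InputArray
    return 0;
}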
Using

CvMat *img02 = cvCreateMat(img2->height, img2->width, CV_64FC3);
cvConvert(img2, img02); // convert the IplImage img2 into the CvMat img02 (cvConvert is (src, dst))

to convert the IplImage img2 into the CvMat img02, then converting to grayscale and passing img_1 into detectAndCompute(). That should have worked, but the compiler still complained:
invalid initialization of type 'cv::InputArray {aka const cv::_InputArray&}' from expression of type 'cv::Mat() {aka cv::Mat (IplImage*)}'
meaning that the type of the expression passed in does not match InputArray.
Then I tried directly defining a Mat matrix, forcing the IplImage* image inside the parentheses to convert to Mat:

Mat img01(IplImage* img1); // directly define a Mat matrix
cvtColor(img01, img_1, COLOR_RGB2GRAY);
What I found about InputArray:
InputArray is an interface (proxy) class that can stand for Mat, Mat_<T>, Matx<T, m, n>, vector<T>, vector<vector<T>>, or vector<Mat>. So whenever you read the reference manual or the source and see a parameter of type InputArray, any of the above types can be passed as the argument.
Sometimes the InputArray argument should be empty; just pass cv::noArray(), though a lot of code passes cv::Mat() as the empty argument instead.
This class should only be used as the type of a formal parameter; do not try to declare a variable of type InputArray.
If you want a formal parameter of your own function to be InputArray so it accepts arguments of several types, you can call InputArray::getMat() inside the function to convert whatever was passed in into a Mat for convenient processing; when necessary, InputArray::kind() distinguishes a Mat from a vector<>, but that is usually not needed. A sketch follows.
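As an illustration, a minimal sketch of those rules (the function printMean is my own hypothetical example, not from the original code):

#include <iostream>
#include <vector>
#include <opencv2/core/core.hpp>

// Accepts Mat, vector<T>, etc.; getMat() views whatever was passed as a Mat.
void printMean(cv::InputArray src)
{
    cv::Mat m = src.getMat();
    std::cout << "mean: " << cv::mean(m) << std::endl;
}

int main()
{
    printMean(cv::Mat::ones(3, 3, CV_8U));        // a Mat argument
    printMean(std::vector<float>{1.f, 2.f, 3.f}); // a vector<T> argument
    return 0;
}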
So how can it still go wrong? It makes no sense... In hindsight the error text gives it away: the line Mat img01(IplImage* img1); hits C++'s "most vexing parse". It declares a function named img01 that takes an IplImage* and returns a Mat, rather than defining a Mat variable, which is exactly why the compiler reports an expression of the function type cv::Mat (IplImage*).
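A conversion sketch that should compile (my own suggestion, assuming OpenCV 3.x where the C API is still available): cv::cvarrToMat() wraps an IplImage* as a cv::Mat without any parsing ambiguity:

#include <opencv2/core/core_c.h> // C API: IplImage, cvCreateImage, cv::cvarrToMat
#include <opencv2/imgproc.hpp>

int main()
{
    IplImage *img1 = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 3);
    cvZero(img1);                                   // placeholder content
    cv::Mat img01 = cv::cvarrToMat(img1);           // IplImage* -> cv::Mat, shares the pixel data
    cv::Mat img_1;
    cv::cvtColor(img01, img_1, cv::COLOR_BGR2GRAY); // cv_bridge delivers bgr8, so BGR2GRAY
    // img_1 is a cv::Mat, so it binds directly to detectAndCompute()'s InputArray parameter.
    cvReleaseImage(&img1);
    return 0;
}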
Here is the latest code, tracking_node_8_12.cpp:
/******************************************************************************
*
* The MIT License (MIT)
*
* Copyright (c) 2018 Bluewhale Robot
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in all
* copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
* Author: Randoms
*******************************************************************************/
#include "tracking_node.h"
// begin: user includes
#include <iostream>
#include <stdio.h>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/xfeatures2d/nonfree.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/flann.hpp>
#include <opencv2/xfeatures2d.hpp>
// end: user includes
using namespace cv;
using namespace std;
using namespace cv::xfeatures2d;
using namespace XiaoqiangTrack;
sensor_msgs::Image last_frame;
XiaoqiangTrack::Tracker *tracker = NULL;
Rect2d body_rect;
ros::Publisher image_pub;
ros::Publisher target_pub;
std::mutex update_track_mutex;
int track_ok_flag = 0;
cv::Rect2d previous_body_rect;
cv::Rect2d body_track_rect;
sensor_msgs::Image get_one_frame() { return last_frame; }
void update_frame(const sensor_msgs::ImageConstPtr &new_frame) // update the latest frame
{
last_frame = *new_frame;
cv_bridge::CvImagePtr cv_ptr = cv_bridge::toCvCopy(new_frame, "bgr8"); // convert the ROS image to OpenCV format
cv::Mat cv_image = cv_ptr->image;
if (tracker == NULL)
return;
unique_lock<mutex> lock(update_track_mutex);
previous_body_rect = body_rect; // save the currently detected rect as previous_body_rect
track_ok_flag = tracker->updateFrame(cv_image, body_rect);
cv::rectangle(cv_image, body_rect, cv::Scalar(0, 255, 0));
image_pub.publish(cv_ptr->toImageMsg());
xiaoqiang_track::TrackTarget target;
target.x = body_rect.x + body_rect.width / 2;
target.y = body_rect.y + body_rect.height / 2;
target.width = body_track_rect.width;
target.height = body_track_rect.height;
if (track_ok_flag == 0) {
// send stop
target.x = 0;
target.y = 0;
}
target_pub.publish(target); // target.x, target.y are the coordinates of the tracked point (kalman...)
}
int main(int argc, char **argv) {
ros::init(argc, argv, "xiaoqiang_track_node"); // initialize the ROS node
// open multiple threads within one node; the thread count (e.g. 4) can be given to the constructor; AsyncSpinner has start() and stop() functions
ros::AsyncSpinner spinner(4);
spinner.start();
ros::NodeHandle private_nh("~");
ros::Publisher talk_pub = private_nh.advertise<std_msgs::String>("text", 10);
image_pub = private_nh.advertise<sensor_msgs::Image>("processed_image", 10);
target_pub = private_nh.advertise<xiaoqiang_track::TrackTarget>("target", 10);
int watch_dog;
private_nh.param("watch_dog", watch_dog, 2);
ros::Subscriber image_sub = private_nh.subscribe("image", 10, update_frame);
PoseTracker *client;
std::string pose_tracker_type;
ros::param::param<std::string>("~pose_tracker", pose_tracker_type, "");
if (pose_tracker_type == "baidu") // decide the tracking type: baidu track or body track
{
client = (PoseTracker *)new BaiduTrack(private_nh);
} else if (pose_tracker_type == "body_pose") {
client = (PoseTracker *)new BodyTrack(private_nh);
} else {
ROS_FATAL_STREAM("unknown pose tracker type " << pose_tracker_type);
ROS_FATAL_STREAM("supported pose trackers are body_pose and baidu");
exit(1);
}
std::string tracker_main_type;  // main tracker type
std::string tracker_aided_type; // aided (secondary) tracker type
ros::param::param<std::string>("~tracker_main", tracker_main_type, "");
ros::param::param<std::string>("~tracker_aided", tracker_aided_type, "");
tracker = new XiaoqiangTrack::Tracker(tracker_main_type, tracker_aided_type); // set up the tracker
// tell the user to stand in front of the robot
std_msgs::String words;
words.data = "請站在我前面"; // "Please stand in front of me"
talk_pub.publish(words);
// remind the user to adjust the distance
sensor_msgs::Image frame = get_one_frame(); // get one frame
body_rect.x = -1;
body_rect.y = -1;
while (!ros::isShuttingDown()) {
if (frame.data.size() != 0) {
cv::Rect2d rect = client->getBodyRect(frame); // get the body region from the frame
if (rect.x <= 1 || rect.y <= 1) {
words.data = "我沒有看到人,請站到我前面"; // "I don't see anyone, please stand in front of me"
talk_pub.publish(words);
} else if (rect.x + rect.width / 2 > 440 ||
rect.x + rect.width / 2 < 200) {
body_rect = rect;
words.data = "請站到鏡頭中間來"; // "Please move to the center of the camera"
talk_pub.publish(words);
} else {
body_rect = rect;
words.data = "我看到人了,開始追蹤"; // "I see you, starting to track"
talk_pub.publish(words);
break;
}
}
sleep(1);
frame = get_one_frame();
}
/*
  After analyzing the code, this is the place to add feature extraction and OpenCV
  feature matching. The idea:
  - Feature extraction pulls features out of one frame; the features could be ORB,
    FAST, SIFT, SURF, etc.
  - frame = get_one_frame() above fetches the newest frame (it returns last_frame).
  - Extract the desired feature information from that frame, get the keypoints, and
    store the detected target's features to match against images from later
    re-detections.
  - Only the extraction happens here; matching happens after the track is lost, when
    detection restarts, several targets are recognized, and they must be matched.
*/
/* fuck! Sorry, I didn't want to say that, but I just lost everything I did the whole
day by closing the tab without saving! So let me start again from the beginning...
*/
// begin: extract features
//int minHessian = 2000;
//int hessianThreshold = 2000;
//SurfFeatureDetector detector(minHessian);
//detector = SURF::create(hessianThreshold);
Ptr<SURF> surf = SURF::create(800);
vector<KeyPoint> keypoint1, keypoint2;
frame = get_one_frame();
body_rect = client->getBodyRect(frame);
//image1 = resize(frame, rect.x:(rect.x+rect.width), rect.y:(rect.y+rect.height))
cv_bridge::CvImagePtr cv_ptr1 = cv_bridge::toCvCopy(frame, "bgr8");
cv::Mat imframe = cv_ptr1->image;
CvRect rectInImage1 = cvRect(body_rect.x, body_rect.y, body_rect.width, body_rect.height);
CvSize size1 = cvSize(rectInImage1.width, rectInImage1.height);
IplImage *img1 = cvCreateImage(size1, IPL_DEPTH_8U, imframe.channels());
IplImage ipl_frame1 = imframe;            // cv::Mat -> IplImage header (OpenCV 3.x, shares data)
IplImage *im = cvCloneImage(&ipl_frame1); // deep copy
cvSetImageROI(im, rectInImage1);
cvCopy(im, img1); // img1 is the target box region extracted from im (cvCopy is (src, dst))
cv::Mat img01 = cv::cvarrToMat(img1); // IplImage* -> cv::Mat, avoids the vexing parse
cv::Mat img_1;
cvtColor(img01, img_1, COLOR_BGR2GRAY); // grayscale img_1 (cv_bridge gives bgr8)
// detect keypoints and compute descriptors; the output Mat holds one descriptor row per keypoint
cv::Mat descriptors1;
surf->detectAndCompute(img_1, noArray(), keypoint1, descriptors1);
// end: extract features
// tell the user they can start walking
sensor_msgs::Image tracking_frame = get_one_frame();
cv_bridge::CvImagePtr cv_ptr = cv_bridge::toCvCopy(tracking_frame, "bgr8");
cv::Mat cv_image = cv_ptr->image;
tracker->initTracker(cv_image, body_rect); // init
int repeat_count = 0;
int watch_dog_count = 0;
while (ros::ok()) {
usleep(1000 * 100); // 100ms
// if the position has not changed, the target may be lost
if (previous_body_rect == body_rect) {
repeat_count += 100;
if (repeat_count == 5 * 1000) // the detected rect has not changed for ~5 s, declare it Lost
{
ROS_WARN_STREAM("Target not move, may lost");
repeat_count = 0;
track_ok_flag = 0; // flag = 0 -> reset
}
} // end of lost-target check
else
{
repeat_count = 0;
}
if (body_rect.width < 300 && body_rect.height < 300 && track_ok_flag == 2 &&
body_rect.height > body_rect.width) // confirm the detected rect meets these constraints
{
watch_dog_count += 100;
if (watch_dog_count <= watch_dog * 1000) // watch_dog = 2, i.e. up to 20 confirmations at 100 ms each
{
continue;
}
} // end of rect sanity check
watch_dog_count = 0;
tracking_frame = get_one_frame(); // fetch a new frame again
body_track_rect = client->getBodyRect(tracking_frame); // track
// user code: begin
if(track_ok_flag == 0)
{
cv_bridge::CvImagePtr cv_ptr2 = cv_bridge::toCvCopy(tracking_frame, "bgr8");
cv::Mat imgframe = cv_ptr2->image;
CvRect rectInImage2 = cvRect(body_track_rect.x, body_track_rect.y,
body_track_rect.width, body_track_rect.height);
CvSize size2 = cvSize(rectInImage2.width, rectInImage2.height);
IplImage *img2 = cvCreateImage(size2, IPL_DEPTH_8U, imgframe.channels());
IplImage ipl_frame2 = imgframe;            // cv::Mat -> IplImage header (shares data)
IplImage *ime = cvCloneImage(&ipl_frame2); // deep copy
cvSetImageROI(ime, rectInImage2);
cvCopy(ime, img2); // img2 is the target box region extracted from tracking_frame
cv::Mat img02 = cv::cvarrToMat(img2); // IplImage* -> cv::Mat
cv::Mat img_2;
cvtColor(img02, img_2, COLOR_BGR2GRAY); // grayscale img_2
// detect keypoints and compute descriptors for the re-detected candidate
cv::Mat descriptors2;
surf->detectAndCompute(img_2, noArray(), keypoint2, descriptors2);
// computing descriptors with the old two-step API (kept for reference):
//SurfDescriptorExtractor extractor;
//Mat descriptor1, descriptor2;
//extractor.compute(img1, keypoint1, descriptor1);
//extractor.compute(img2, keypoint2, descriptor2);
// match with FLANN
FlannBasedMatcher matcher;
vector<DMatch> matches;
matcher.match(descriptors1, descriptors2, matches); // matching done
sort(matches.begin(), matches.end()); // sort the matches (DMatch::operator< compares distance)
vector<DMatch> good_matches;
int ptsPairs = std::min(50, (int)(matches.size() * 0.15));
for (int i = 0; i < ptsPairs; i++) // keep the best matches (at most 50)
{
good_matches.push_back(matches[i]);
}
//double averagematch = 0;
//for (int i = 0; i < ptsPairs; i++)
//{
//  averagematch += good_matches[i].distance / ptsPairs;
//}
//if (good_matches[0].distance < 1.0 && (good_matches[ptsPairs - 1].distance - good_matches[0].distance) < 4.0)
//  continue;
/*
double max_dist = 0;
double min_dist = 100;
for (int i = 0; i < descriptor1.rows; i++)
{
double dist = matches[i].distance;
if (dist < min_dist)
min_dist = dist;
if (dist > max_dist)
max_dist = dist;
} // get the minimum and maximum distances among the matches
// Use the match result: is the currently matched object the target? Can that be
// decided from the min/max match distance alone? (See the ratio-test sketch below.)
if (!(min_dist < 1.0 || (max_dist - min_dist) < 2.0)) // matching failed
{
//tracking_frame = tracking_frame;
//body_track_rect = body_track_rect;
continue;
}
*/
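// A hedged alternative to the min/max-distance check above (my addition, not part
// of the original code): Lowe's ratio test is a common way to judge match quality.
// It needs the two nearest neighbors per descriptor, via FlannBasedMatcher::knnMatch:
//
//   std::vector<std::vector<DMatch>> knn_matches;
//   matcher.knnMatch(descriptors1, descriptors2, knn_matches, 2);
//   good_matches.clear();
//   for (const std::vector<DMatch> &m : knn_matches)
//     if (m.size() == 2 && m[0].distance < 0.7f * m[1].distance)
//       good_matches.push_back(m[0]); // best match clearly better than the 2nd best
//
// If good_matches stays nearly empty, the candidate is probably not the target.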
}
// user code: end
if (body_track_rect.x <= 1 || body_track_rect.y <= 1) // stop tracking once the detected rect.x / rect.y falls to 1 or below
{
tracker->stop();
}
else
{
{
unique_lock<mutex> lock(update_track_mutex);
body_rect = body_track_rect;
cv_bridge::CvImagePtr cv_ptr =
cv_bridge::toCvCopy(tracking_frame, "bgr8");
cv::Mat cv_image = cv_ptr->image;
if (track_ok_flag == 0) // reset the tracker when the tracking flag is 0
{
tracker->reset(cv_image, body_rect, true); // watch out for the reset parameter
}
else
tracker->reset(cv_image, body_rect); /// reset
}
}
}
}