
OpenCV Video Object Tracking and Background Subtractors

Object Tracking

This article covers the video-analysis functions CamShift and MeanShift in cv2.

Goal:

Learn the MeanShift and CamShift algorithms to locate and track objects in video.

The MeanShift algorithm:

The idea behind MeanShift is simple. Suppose you have a set of points, for example the points produced by histogram back projection.
You also have a small window, perhaps circular, and you want to move that window to the region of maximum point density.


As shown in the figure below:

The initial window is the region bounded by the blue circle, labeled C1. The center of the blue circle is marked with a blue rectangle, labeled C1_o.

However, the centroid of the points inside that window lies at the blue dot, and the window's center does not coincide with this centroid. So we move the blue window until its center coincides with the centroid just computed. Within the moved window we again compute the centroid of the enclosed points and move again; in general the window center and the centroid will still differ. We repeat this process until the two roughly coincide.
In the end the circular window settles over the region of maximum pixel density, the green circle in the figure, labeled C2.

MeanShift is not limited to 2-D image problems; it applies to higher-dimensional data as well. By choosing different kernel functions we can change how points inside the window are weighted in the shift vector, and the MeanShift iteration can be proven to converge to some location.

Besides video tracking, MeanShift has important applications wherever data and unsupervised learning meet, such as clustering and smoothing; it is a widely used algorithm.
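To make the iteration concrete, here is a minimal plain-NumPy sketch of the window-shifting loop on a synthetic 2-D point set (the point cloud, window radius, and starting position are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic point set: a dense cluster around (5, 5) plus sparse uniform noise
points = np.vstack([
    rng.normal(loc=(5.0, 5.0), scale=0.5, size=(200, 2)),  # density peak
    rng.uniform(low=0.0, high=10.0, size=(50, 2)),         # background noise
])

def mean_shift(points, start, radius=2.0, max_iter=100, eps=1e-3):
    """Repeatedly move a circular window to the centroid of the points it
    covers, until the shift is smaller than eps."""
    center = np.asarray(start, dtype=float)
    for _ in range(max_iter):
        inside = points[np.linalg.norm(points - center, axis=1) < radius]
        if len(inside) == 0:
            break
        new_center = inside.mean(axis=0)          # centroid of covered points
        if np.linalg.norm(new_center - center) < eps:
            break
        center = new_center                       # shift the window
    return center

c = mean_shift(points, start=(6.0, 6.0))
print(c)  # should settle near the cluster center (5, 5)
```

Replacing the hard inside/outside test with a kernel-weighted average is exactly the kernel choice mentioned above.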

In 2-D, MeanShift operates on a set of discrete points, but an image is a matrix. How, then, do we use MeanShift to track a moving object in a video?

The rough pipeline is:

1. Select a target region in the image with a rectangular or circular window.
2. Compute the histogram of the selected region.
3. Compute the histogram of the next frame b in the same way.
4. Find the region of frame b whose histogram is most similar to that of the selected region, and use MeanShift to move the window toward it (the examples here use histogram back projection).
5. Repeat steps 3 and 4.

MeanShift in OpenCV:
To use MeanShift in OpenCV, first define the target and compute its histogram, so that the histogram can be back-projected onto every frame. We must supply an initial window position and compute a histogram over the H (hue) channel of the HSV model. To reduce the influence of low-brightness pixels, cv2.inRange() is used to mask them out.
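Back projection itself can be sketched without OpenCV: every pixel is replaced by the (normalized) histogram value of its hue bin, so pixels whose color is common in the ROI light up. A toy NumPy version (the 4x4 hue image and bin count are invented for illustration):

```python
import numpy as np

# toy "hue channel": values in [0, 180) as in OpenCV's HSV representation
hue = np.array([[10, 12, 90, 91],
                [11, 13, 92, 90],
                [10, 90, 91, 12],
                [11, 12, 10, 13]], dtype=np.uint8)

# the ROI is the top-left 2x2 block (hue around 10)
roi = hue[:2, :2]

bins = 18                                  # 10 degrees of hue per bin
hist, _ = np.histogram(roi, bins=bins, range=(0, 180))
hist = hist / hist.max() * 255             # normalize to [0, 255]

# back projection: each pixel takes the histogram value of its own bin
backproj = hist[(hue.astype(int) * bins) // 180].astype(np.uint8)
print(backproj)  # 255 where hue matches the ROI, 0 elsewhere
```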

import cv2
import numpy as np

# set the initial window position and size
r,h,c,w = 0,100,0,100
track_window = (c,r,w,h)

cap = cv2.VideoCapture(0)

ret, frame = cap.read()

# region of interest to track
roi = frame[r:r+h, c:c+w]
# HSV image of the ROI (note: convert the ROI, not the whole frame)
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
# keep only pixels with HSV values between (0,60,32) and (180,255,255)
mask = cv2.inRange(hsv_roi, np.array((0., 60., 32.)), np.array((180., 255., 255.)))
# compute the histogram; arguments: images, channels, mask, histogram size, range
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
# normalize to [0, 255]
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

# termination criteria: at most 10 iterations, or a shift of less than 1 pixel
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ret, frame = cap.read()
    if ret:
        # HSV image of the current frame
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # back projection of the ROI histogram
        dst = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)

        # run meanShift on dst; it returns the updated window
        ret, track_window = cv2.meanShift(dst, track_window, term_crit)
        # draw it on the image
        x, y, w, h = track_window
        img2 = cv2.rectangle(frame, (x, y), (x+w, y+h), 255, 2)
        cv2.imshow('img2', img2)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()

The CamShift algorithm:

In a video or camera feed, if the tracked object approaches the camera, perspective makes it appear larger, and the previously chosen window size no longer fits.

OpenCV Labs implemented CamShift to address this: it first finds the target with MeanShift, then adjusts the window size, also computes the orientation of the best-fitting ellipse for the target, and continues tracking with the adjusted window.

Its usage is the same as meanShift, except that it returns a rotated rectangle (a rectangle with an angle).

CamShift, the Continuously Adaptive MeanShift algorithm, improves on MeanShift by adapting the search-window size to the target's size during tracking; each frame of the video sequence is still processed with a MeanShift iteration to find the best fit. Little is written about how the window size is adapted automatically; my understanding is that it is derived from the zeroth moment computed in the MeanShift step.
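For what it is worth, the zeroth moment M00 is just the sum of the back-projection values under the window, and Bradski's original CamShift paper sets the new window width to about 2*sqrt(M00/256) (255 being the maximum back-projection value). A rough NumPy illustration on a synthetic back projection:

```python
import numpy as np

# synthetic back projection: a saturated 40x40 blob on a dark background
dst = np.zeros((200, 200), dtype=np.uint8)
dst[80:120, 80:120] = 255

window = dst[70:130, 70:130]          # current search window around the blob

M00 = window.astype(float).sum()      # zeroth moment over the window
s = 2 * np.sqrt(M00 / 256)            # CamShift-style window size estimate
print(round(s))                       # about twice the blob's 40-pixel side, i.e. 80
```

For a fully saturated blob the estimate is roughly twice its side, which deliberately oversizes the search window so the next iteration can re-center and shrink it.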

Code:

1. Python version: the region to track can be selected with the mouse.

import cv2
import numpy as np

xs, ys, ws, hs = 0, 0, 0, 0  # selection.x selection.y
xo, yo = 0, 0  # origin.x origin.y
selectObject = False
trackObject = 0


def onMouse(event, x, y, flags, param):
    global xs, ys, ws, hs, selectObject, xo, yo, trackObject
    if selectObject == True:
        xs = min(x, xo)
        ys = min(y, yo)
        ws = abs(x - xo)
        hs = abs(y - yo)
    if event == cv2.EVENT_LBUTTONDOWN:
        xo, yo = x, y
        xs, ys, ws, hs = x, y, 0, 0
        selectObject = True
    elif event == cv2.EVENT_LBUTTONUP:
        selectObject = False
        trackObject = -1


cap = cv2.VideoCapture(0)
ret, frame = cap.read()
cv2.namedWindow('imshow')
cv2.setMouseCallback('imshow', onMouse)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
while (True):
    ret, frame = cap.read()
    if trackObject != 0:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array((0., 30., 10.)), np.array((180., 256., 255.)))
        if trackObject == -1:
            track_window = (xs, ys, ws, hs)
            maskroi = mask[ys:ys + hs, xs:xs + ws]
            hsv_roi = hsv[ys:ys + hs, xs:xs + ws]
            roi_hist = cv2.calcHist([hsv_roi], [0], maskroi, [180], [0, 180])
            cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
            trackObject = 1
        dst = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        dst &= mask
        ret, track_window = cv2.CamShift(dst, track_window, term_crit)
        pts = cv2.boxPoints(ret)
        pts = np.int0(pts)
        img2 = cv2.polylines(frame, [pts], True, 255, 2)

    if selectObject == True and ws > 0 and hs > 0:
        cv2.imshow('imshow1', frame[ys:ys + hs, xs:xs + ws])
        cv2.bitwise_not(frame[ys:ys + hs, xs:xs + ws], frame[ys:ys + hs, xs:xs + ws])
    cv2.imshow('imshow', frame)
    if cv2.waitKey(10) == 27:
        break
cv2.destroyAllWindows()


The corresponding C++ version:

//---------------------------------【Headers and namespaces】----------------------------
//		Description: headers and namespaces used by the program
//-------------------------------------------------------------------------------------------------
#include "opencv2/video/tracking.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include <ctype.h>
#include <windows.h>
 
using namespace cv;
using namespace std;
 
 
 
//-----------------------------------【Global variables】-----------------------------------------
//		Description: global variable declarations
//-------------------------------------------------------------------------------------------------
Mat image;
bool backprojMode = false;
bool selectObject = false;
int trackObject = 0;
bool showHist = true;
Point origin;
Rect selection;
int vmin = 10, vmax = 256, smin = 30;
 
 
//--------------------------------【onMouse( ) callback】------------------------------------
//		Description: mouse event callback
//-------------------------------------------------------------------------------------------------
static void onMouse( int event, int x, int y, int, void* )
{
	if( selectObject )
	{
		selection.x = MIN(x, origin.x);
		selection.y = MIN(y, origin.y);
		selection.width = std::abs(x - origin.x);
		selection.height = std::abs(y - origin.y);
 
		selection &= Rect(0, 0, image.cols, image.rows);
	}
 
	switch( event )
	{
	//In OpenCV 2 this line is:
	//case CV_EVENT_LBUTTONDOWN:
	//In OpenCV 3 it is:
	case EVENT_LBUTTONDOWN:
		origin = Point(x,y);
		selection = Rect(x,y,0,0);
		selectObject = true;
		break;
	//In OpenCV 2 this line is:
	//case CV_EVENT_LBUTTONUP:
	//In OpenCV 3 it is:
	case EVENT_LBUTTONUP:
		selectObject = false;
		if( selection.width > 0 && selection.height > 0 )
			trackObject = -1;
		break;
	}
}
 
//--------------------------------【ShowHelpText( ) function】-------------------------------
//		Description: print the help text
//-------------------------------------------------------------------------------------------------
static void ShowHelpText()
{
	cout << "\n\n\t\t\tThis demo accompanies the book Introduction to OpenCV3 Programming\n"
		<< "\n\n\t\t\tIt is the 8th companion sample for the OpenCV 3 edition\n"
		<< "\n\n\t\t\t   Current OpenCV version: " << CV_VERSION
		<< "\n\n  ----------------------------------------------------------------------------";

	cout << "\n\n\tThis demo shows mean-shift based tracking\n"
		"\tSelect a colored object with the mouse to start tracking it\n";

	cout << "\n\n\tControls: \n"
		"\t\tselect an object with the mouse to initialize tracking\n"
		"\t\tESC - quit\n"
		"\t\tc - stop tracking\n"
		"\t\tb - toggle the back-projection view\n"
		"\t\th - show/hide the object histogram\n"
		"\t\tp - pause the video\n";
}
 
const char* keys =
{
	"{1|  | 0 | camera number}"
};
 
 
//-----------------------------------【main( ) function】--------------------------------------------
//		Description: console application entry point
//-------------------------------------------------------------------------------------------------
int main( int argc, const char** argv )
{
	ShowHelpText();
 
	VideoCapture cap;
	Rect trackWindow;
	int hsize = 16;
	float hranges[] = {0,180};
	const float* phranges = hranges;
 
	cap.open(0);
	//cap.open("H:\\opencv\\ai.avi");
 
	if( !cap.isOpened() )
	{
		cout << "Failed to open the camera\n";
		return -1;
	}
 
	namedWindow( "Histogram", 0 );//histogram window
	namedWindow( "CamShift Demo", 0 );//tracking window
	setMouseCallback( "CamShift Demo", onMouse, 0 );//attach the mouse callback
	createTrackbar( "Vmin", "CamShift Demo", &vmin, 256, 0 );//color-space thresholds
	createTrackbar( "Vmax", "CamShift Demo", &vmax, 256, 0 );
	createTrackbar( "Smin", "CamShift Demo", &smin, 256, 0 );
 
	Mat frame, hsv, hue, mask, hist, histimg = Mat::zeros(200, 320, CV_8UC3), backproj;
	bool paused = false;//pause flag
	LARGE_INTEGER  _start, _stop;
	double   start, stop;
	for(;;)
	{
		QueryPerformanceCounter(&_start);
		start = (double)_start.QuadPart;          //initial counter value
		if( !paused )
		{
			cap >> frame;
			if( frame.empty() )
				break;
		}
		QueryPerformanceCounter(&_stop);    //current counter value
		stop = (double)_stop.QuadPart;
		cout << (stop - start) * 10 / 25332 << endl;
		frame.copyTo(image);
 
		if( !paused )//if not paused
		{
			cvtColor(image, hsv, COLOR_BGR2HSV);//將影象轉換為hsv顏色空間
 
			if( trackObject )//tracking is skipped only while trackObject == 0
			{
				int _vmin = vmin, _vmax = vmax;//color-space limits
 
				inRange(hsv, Scalar(0, smin, MIN(_vmin,_vmax)),
					Scalar(180, 256, MAX(_vmin, _vmax)), mask);
				int ch[] = {0, 0};
				hue.create(hsv.size(), hsv.depth());//hue plane used for back projection
				mixChannels(&hsv, 1, &hue, 1, ch, 1);
 
				if( trackObject < 0 )//a region has just been selected with the mouse, so (re)initialize tracking
				{
					Mat roi(hue, selection), maskroi(mask, selection);
					calcHist(&roi, 1, 0, maskroi, hist, 1, &hsize, &phranges);
					//In OpenCV 3 this line is:
					normalize(hist, hist, 0, 255, NORM_MINMAX);
					//In OpenCV 2 it is:
					//normalize(hist, hist, 0, 255, CV_MINMAX);
 
					trackWindow = selection;
					trackObject = 1;
					histimg = Scalar::all(0);
					int binW = histimg.cols / hsize;
					Mat buf(1, hsize, CV_8UC3);
					for( int i = 0; i < hsize; i++ )
						buf.at<Vec3b>(i) = Vec3b(saturate_cast<uchar>(i*180./hsize), 255, 255);
 
					//In OpenCV 3 this line is:
					cvtColor(buf, buf, COLOR_HSV2BGR);
					//In OpenCV 2 it is:
					//cvtColor(buf, buf, CV_HSV2BGR);
 
					for( int i = 0; i < hsize; i++ )
					{
						int val = saturate_cast<int>(hist.at<float>(i)*histimg.rows/255);
						rectangle( histimg, Point(i*binW,histimg.rows),
							Point((i+1)*binW,histimg.rows - val),
							Scalar(buf.at<Vec3b>(i)), -1, 8 );
					}
				}
				calcBackProject(&hue, 1, 0, hist, backproj, &phranges);
				cv::imshow("backproj", backproj);
				backproj &= mask;
				RotatedRect trackBox = CamShift(backproj, trackWindow,
 
				//In OpenCV 3 this line is:
				TermCriteria( TermCriteria::EPS | TermCriteria::COUNT, 10, 1 ));
				//In OpenCV 2 it is:
				//TermCriteria( CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 10, 1 ));
 
				if( trackWindow.area() <= 1 )
				{
					int cols = backproj.cols, rows = backproj.rows, r = (MIN(cols, rows) + 5)/6;
					trackWindow = Rect(trackWindow.x - r, trackWindow.y - r,
						trackWindow.x + r, trackWindow.y + r) &
						Rect(0, 0, cols, rows);
				}
 
				if( backprojMode )
					cvtColor( backproj, image, COLOR_GRAY2BGR );
 
				//In OpenCV 3 this line is:
				ellipse( image, trackBox, Scalar(0,0,255), 3, LINE_AA );
				//In OpenCV 2 it is:
				//ellipse( image, trackBox, Scalar(0,0,255), 3, CV_AA );
 
			}
		}
		else if( trackObject < 0 )//i.e. once a region is selected with the mouse, the pause is lifted
			paused = false;
 
		if( selectObject && selection.width > 0 && selection.height > 0 )
		{
			Mat roi(image, selection);
			bitwise_not(roi, roi);
		}
 
		cv::imshow( "CamShift Demo", image );
		cv::imshow( "Histogram", histimg );
		char c = (char)waitKey(90);
		if( c == 27 )
			break;
		switch(c)
		{
		case 'b':
			backprojMode = !backprojMode;
			break;
		case 'c':
			trackObject = 0;
			histimg = Scalar::all(0);
			break;
		case 'h':
			showHist = !showHist;
			if( !showHist )
				destroyWindow( "Histogram" );
			else
				namedWindow( "Histogram", 1 );
			break;
		case 'p':
			paused = !paused;
			break;
		case 'k':
		{
			imwrite("pic.jpg", image);
			break;
		}
		default:
			;
		}
	}
 
	return 0;
}

 

Result: (screenshots omitted)

Or predefine the tracking window directly:




import numpy as np
import cv2

cap = cv2.VideoCapture(0)

# take first frame of the video
ret,frame = cap.read()

# setup initial location of window
r,h,c,w = 300,200,400,300  # simply hardcoded the values
track_window = (c,r,w,h)


roi = frame[r:r+h, c:c+w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)  # convert the ROI, not the whole frame
mask = cv2.inRange(hsv_roi, np.array((100., 30.,32.)), np.array((180.,120.,255.)))
roi_hist = cv2.calcHist([hsv_roi],[0],mask,[180],[0,180])
cv2.normalize(roi_hist,roi_hist,0,255,cv2.NORM_MINMAX)
term_crit = ( cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1 )

while(1):
    ret, frame = cap.read()

    if ret == True:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        dst = cv2.calcBackProject([hsv],[0],roi_hist,[0,180],1)

        ret, track_window = cv2.CamShift(dst, track_window, term_crit)
        pts = cv2.boxPoints(ret)
        pts = np.int0(pts)
        img2 = cv2.polylines(frame,[pts],True, 255,2)

        cv2.imshow('img2',img2)
        k = cv2.waitKey(60) & 0xff
        if k == 27:
            break

    else:
        break

cv2.destroyAllWindows()
cap.release()


import cv2
import numpy as np

# set the initial window position and size
r,h,c,w = 0,100,0,100
track_window = (c,r,w,h)

cap = cv2.VideoCapture(0)

ret, frame = cap.read()

# region of interest to track
roi = frame[r:r+h, c:c+w]
# HSV image of the ROI (note: convert the ROI, not the whole frame)
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
# keep only pixels with HSV values between (0,60,32) and (180,255,255)
mask = cv2.inRange(hsv_roi, np.array((0., 60., 32.)), np.array((180., 255., 255.)))
# compute the histogram; arguments: images, channels, mask, histogram size, range
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
# normalize to [0, 255]
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

# termination criteria: at most 10 iterations, or a shift of less than 1 pixel
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ret, frame = cap.read()
    if ret:
        # HSV image of the current frame
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # back projection of the ROI histogram
        dst = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)

        # run CamShift on dst; it returns a rotated rectangle and the window
        ret, track_window = cv2.CamShift(dst, track_window, term_crit)
        # draw the rotated rectangle on the image
        pts = cv2.boxPoints(ret)
        pts = np.int0(pts)
        img2 = cv2.polylines(frame, [pts], True, 255, 2)
        cv2.imshow('img2', img2)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()

 

Background Subtraction:


The video used here, traffic.flv, comes from the original author's GitHub repository:
https://github.com/techfort/pycv/tree/master/chapter8/surveillance_demo


OpenCV provides several background subtractors; here we use the two most common ones:
K-Nearest Neighbours (KNN)
Mixture of Gaussians (MOG2)

The KNN background subtractor:

# -*- coding:utf-8 -*-

import cv2

# Step 1. construct the VideoCapture object
cap = cv2.VideoCapture('traffic.flv')

# Step 2. create a background subtractor
# createBackgroundSubtractorKNN() accepts a detectShadows argument:
# detectShadows=True enables shadow detection, False disables it
knn = cv2.createBackgroundSubtractorKNN(detectShadows=True)

while True:
    ret, frame = cap.read() # read the video
    fgmask = knn.apply(frame) # subtract the background
    cv2.imshow('frame', fgmask) # show the foreground mask
    if cv2.waitKey(100) & 0xff == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

The result is as follows: (screenshot omitted)

A small MOG2 background-subtractor example:

# -*- coding:utf-8 -*-

import cv2

# Step 1. construct the VideoCapture object
cap = cv2.VideoCapture('traffic.flv')

# Step 2. create a background subtractor
# createBackgroundSubtractorMOG2() also accepts a detectShadows argument:
# detectShadows=True enables shadow detection, False disables it
mog = cv2.createBackgroundSubtractorMOG2()

while True:
    ret, frame = cap.read() # read the video
    fgmask = mog.apply(frame) # subtract the background
    cv2.imshow('frame', fgmask) # show the foreground mask
    if cv2.waitKey(100) & 0xff == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
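With detectShadows=True, both subtractors mark shadow pixels with an intermediate gray value (127 by default) instead of 255, so a simple comparison separates definite foreground from shadows. A minimal plain-NumPy sketch (the mask values are made up to mimic a subtractor's output):

```python
import numpy as np

# a fake foreground mask, as a shadow-detecting subtractor would produce:
# 0 = background, 127 = shadow, 255 = definite foreground
fgmask = np.array([[  0, 127, 255],
                   [127, 255,   0],
                   [  0,   0, 255]], dtype=np.uint8)

# keep only the definite foreground; shadows and background become 0
foreground = np.where(fgmask == 255, np.uint8(255), np.uint8(0))
print(foreground)
```

The motion-detection example below does the same thing with cv2.threshold at 244.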


 

 

A small motion-detection and tracking example:

 

# -*- coding:utf-8 -*-

import cv2

# Step 1. initialize the VideoCapture object
cap = cv2.VideoCapture('traffic.flv')

# Step 2. use the KNN background subtractor
knn = cv2.createBackgroundSubtractorKNN(detectShadows=True)

while True:
    ret, frame = cap.read()
    fgmask = knn.apply(frame) # subtract the background

    # threshold: set every pixel that is not near-white (244~255) to 0,
    # which discards the gray shadow pixels
    th = cv2.threshold(fgmask.copy(), 244, 255, cv2.THRESH_BINARY)[1]

    # dilate once to improve the result
    dilated = cv2.dilate(th, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3,3)), iterations=2)

    # find contours (OpenCV 3 returns three values; OpenCV 4 returns only contours and hierarchy)
    image, contours, hier = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # draw the contours
    for c in contours:
        if cv2.contourArea(c) > 1600:
            (x,y,w,h) = cv2.boundingRect(c)
            cv2.rectangle(frame, (x,y), (x+w, y+h), (0,255,0), 2)

    cv2.imshow('detection', frame)
    if cv2.waitKey(100) & 0xff == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

 

 

 

Surveillance Demo: Tracking Pedestrians in Camera Feed

#! /usr/bin/python
# object tracking
"""Surveillance Demo: Tracking Pedestrians in Camera Feed
The application opens a video (could be a camera or a video file)
and tracks pedestrians in the video.
"""
__author__ = "joe minichino"
__copyright__ = "property of mankind."
__license__ = "MIT"
__version__ = "0.0.1"
__maintainer__ = "Joe Minichino"
__email__ = "[email protected]"
__status__ = "Development"

import cv2
import numpy as np
import os.path as path
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-a", "--algorithm",
                    help="m (or nothing) for meanShift and c for camshift")
args = vars(parser.parse_args())


def center(points):
    """calculates centroid of a given matrix"""
    x = (points[0][0] + points[1][0] + points[2][0] + points[3][0]) / 4
    y = (points[0][1] + points[1][1] + points[2][1] + points[3][1]) / 4
    return np.array([np.float32(x), np.float32(y)], np.float32)


font = cv2.FONT_HERSHEY_SIMPLEX


class Pedestrian():
    """Pedestrian class
    each pedestrian is composed of a ROI, an ID and a Kalman filter
    so we create a Pedestrian class to hold the object state
    """

    def __init__(self, id, frame, track_window):
        """init the pedestrian object with track window coordinates"""
        # set up the roi
        self.id = int(id)
        x, y, w, h = track_window
        self.track_window = track_window
        self.roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
        roi_hist = cv2.calcHist([self.roi], [0], None, [16], [0, 180])
        self.roi_hist = cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

        # set up the kalman
        self.kalman = cv2.KalmanFilter(4, 2)
        self.kalman.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
        self.kalman.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
        self.kalman.processNoiseCov = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]],
                                               np.float32) * 0.03
        self.measurement = np.zeros((2, 1), np.float32)
        self.prediction = np.zeros((2, 1), np.float32)
        self.term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
        self.center = None
        self.update(frame)

    def __del__(self):
        print("Pedestrian %d destroyed" % self.id)

    def update(self, frame):
        # print "updating %d " % self.id
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back_project = cv2.calcBackProject([hsv], [0], self.roi_hist, [0, 180], 1)

        if args.get("algorithm") == "c":
            ret, self.track_window = cv2.CamShift(back_project, self.track_window, self.term_crit)
            pts = cv2.boxPoints(ret)
            pts = np.int0(pts)
            self.center = center(pts)
            cv2.polylines(frame, [pts], True, 255, 1)

        if not args.get("algorithm") or args.get("algorithm") == "m":
            ret, self.track_window = cv2.meanShift(back_project, self.track_window, self.term_crit)
            x, y, w, h = self.track_window
            self.center = center([[x, y], [x + w, y], [x, y + h], [x + w, y + h]])
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 255, 0), 2)

        self.kalman.correct(self.center)
        prediction = self.kalman.predict()
        cv2.circle(frame, (int(prediction[0]), int(prediction[1])), 4, (255, 0, 0), -1)
        # fake shadow
        cv2.putText(frame, "ID: %d -> %s" % (self.id, self.center), (11, (self.id + 1) * 25 + 1),
                    font, 0.6,
                    (0, 0, 0),
                    1,
                    cv2.LINE_AA)
        # actual info
        cv2.putText(frame, "ID: %d -> %s" % (self.id, self.center), (10, (self.id + 1) * 25),
                    font, 0.6,
                    (0, 255, 0),
                    1,
                    cv2.LINE_AA)


def main():
    #camera = cv2.VideoCapture(path.join(path.dirname(__file__), "traffic.flv"))
    camera = cv2.VideoCapture(path.join(path.dirname(__file__), "768x576.avi"))
    # camera = cv2.VideoCapture(path.join(path.dirname(__file__), "..", "movie.mpg"))
    # camera = cv2.VideoCapture(0)
    history = 20
    # KNN background subtractor
    bs = cv2.createBackgroundSubtractorKNN()

    # MOG subtractor
    # bs = cv2.bgsegm.createBackgroundSubtractorMOG(history = history)
    # bs.setHistory(history)

    # GMG
    # bs = cv2.bgsegm.createBackgroundSubtractorGMG(initializationFrames = history)

    cv2.namedWindow("surveillance")
    pedestrians = {}
    firstFrame = True
    frames = 0
    fourcc = cv2.VideoWriter_fourcc(*'XVID')
    out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640, 480))
    while True:
        print(" -------------------- FRAME %d --------------------" % frames)
        grabbed, frame = camera.read()
        if (grabbed is False):
            print("failed to grab frame.")
            break

        fgmask = bs.apply(frame)

        # this is just to let the background subtractor build a bit of history
        if frames < history:
            frames += 1
            continue

        th = cv2.threshold(fgmask.copy(), 127, 255, cv2.THRESH_BINARY)[1]
        th = cv2.erode(th, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)), iterations=2)
        dilated = cv2.dilate(th, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (8, 3)), iterations=2)
        image, contours, hier = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

        counter = 0
        for c in contours:
            if cv2.contourArea(c) > 500:
                (x, y, w, h) = cv2.boundingRect(c)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 1)
                # only create pedestrians in the first frame, then just follow the ones you have
                if firstFrame is True:
                    pedestrians[counter] = Pedestrian(counter, frame, (x, y, w, h))
                counter += 1

        for i, p in pedestrians.items():
            p.update(frame)

        firstFrame = False
        frames += 1

        cv2.imshow("surveillance", frame)
        out.write(frame)
        if cv2.waitKey(110) & 0xff == 27:
            break
    out.release()
    camera.release()


if __name__ == "__main__":
    main()
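The Kalman filter inside the Pedestrian class uses a constant-velocity model: the state is (x, y, vx, vy) and the transition matrix adds the velocity to the position at each step. The predict/correct cycle can be sketched in plain NumPy (transition and measurement matrices copied from the code above; the noise covariances and the noisy track are invented for illustration):

```python
import numpy as np

F = np.array([[1, 0, 1, 0],    # x  += vx
              [0, 1, 0, 1],    # y  += vy
              [0, 0, 1, 0],    # vx unchanged
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # only position is measured, not velocity
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.03           # process noise, as in the code above
R = np.eye(2)                  # measurement noise (assumed here)

x = np.zeros(4)                # state estimate (x, y, vx, vy)
P = np.eye(4)                  # state covariance

rng = np.random.default_rng(1)
for t in range(50):
    # noisy measurement of a target moving with true velocity (2, 1)
    z = np.array([2.0 * t, 1.0 * t]) + rng.normal(0.0, 1.0, 2)
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # correct
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P

print(x[2:])  # estimated velocity, should be close to the true (2, 1)
```

This is why the demo draws the blue predicted point slightly ahead of the tracked box: the filter extrapolates the position using the estimated velocity.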

 
