NCC (Normalized Cross Correlation): Principle and C++ Implementation
阿新 · Posted: 2018-12-15
NCC (Normalized Cross Correlation)
Image matching is the task of finding, among the sub-images of a known reference image of a target, the one most similar to a real-time image, in order to recognize and localize the target. The main families of methods are: gray-level correlation methods, feature-based methods, and neural-network/AI methods (still maturing). Gray-level matching algorithms are simple and accurate: they slide a one- or two-dimensional template over the image in the spatial domain, and individual algorithms differ mainly in the choice of template and similarity criterion. Their drawbacks are a heavy computational load, which hinders real-time processing, and sensitivity to gray-level changes, rotation, deformation, and occlusion. Feature-based methods are computationally cheaper and adapt better to gray-level changes, deformation, and occlusion; they extract salient features (points, lines, regions) from the original image as matching primitives, but their matching accuracy is lower.

Gray-level matching algorithms are commonly called correlation matching algorithms, and they fall into two classes. One class measures the difference between scenes, e.g. the sum of squared differences (SD) and the mean absolute difference (MAD); the other measures similarity, and itself splits into product-correlation methods and correlation-coefficient methods. This article covers the normalized cross-correlation coefficient method (NCC).
1. NCC principle: Suppose the smaller of the two images being matched (the template) is g, of size m×n, and the larger image is S, of size M×N. Let S_{x,y} denote the sub-block of S with top-left corner at (x,y) and the same size as g. Computing the correlation coefficient between the real-time image and the reference image at every position yields a correlation-coefficient matrix ρ(x,y); analyzing this matrix tells us whether, and where, the two images are correlated.

ρ(x,y) is defined as the (Pearson) correlation coefficient of the random variables S_{x,y} and g:

    ρ(x,y) = Cov(S_{x,y}, g) / sqrt(D_{x,y} · D)

where Cov(S_{x,y}, g) is the covariance of S_{x,y} and g, D_{x,y} is the variance of S_{x,y}, and D is the variance of g. Writing E(g) for the gray-level mean of g and E(S_{x,y}) for the gray-level mean of S_{x,y}, and substituting D_{x,y} and D, we obtain:

    ρ(x,y) = Σ_{i=1..m} Σ_{j=1..n} [S_{x,y}(i,j) − E(S_{x,y})] · [g(i,j) − E(g)]
             ─────────────────────────────────────────────────────────────────
             sqrt( Σ_{i,j} [S_{x,y}(i,j) − E(S_{x,y})]² · Σ_{i,j} [g(i,j) − E(g)]² )

The correlation coefficient satisfies −1 ≤ ρ(x,y) ≤ 1, so it measures similarity on an absolute scale. It characterizes how close the two images are to a linear relationship: generally, the closer ρ is to 1, the more nearly linear the relationship.

2. C++ Code Implementation
The program below searches for the horizontal overlap width between two grayscale images (the right strip of image1 against the left strip of image2, assumed to share the same height) that maximizes the correlation coefficient:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>

using namespace cv;
using namespace std;

int main() {
    Mat image1 = imread("E:xx.tif", IMREAD_GRAYSCALE);
    Mat image2 = imread("E:yy.tif", IMREAD_GRAYSCALE);

    float pearsonCorrelationCoefficientMax = 0;
    int overlapMaxCorrelationCoefficient = 0;

    // Try candidate overlap widths of 350, 400, ..., 600 columns.
    for (int overlap = 350; overlap < 650; overlap += 50) {
        // Left strip of image2, `overlap` columns wide.
        Mat imageTemp = image2(Rect(0, 0, overlap, image1.rows));

        // Mean gray level of the image2 strip.
        long double tempTotalcount = 0, tempTotalPixel = 0;
        for (int i = 0; i < overlap; i++) {
            for (int j = 0; j < image1.rows; j++) {
                tempTotalcount += 1;
                tempTotalPixel += imageTemp.at<uchar>(j, i);
            }
        }
        long double tempAvg = tempTotalPixel / tempTotalcount;

        // Standard deviation of the image2 strip.
        long double tempSubstract = 0;
        for (int i = 0; i < overlap; i++) {
            for (int j = 0; j < image1.rows; j++) {
                long double d = imageTemp.at<uchar>(j, i) - tempAvg;
                tempSubstract += d * d;
            }
        }
        long double tempDeviation = sqrt(tempSubstract / tempTotalcount);

        // Right strip of image1, same width.
        Mat imageBase = image1(Rect(image1.cols - overlap, 0, overlap, image1.rows));

        // Mean gray level of the image1 strip.
        long double baseTotalcount = 0, baseTotalPixel = 0;
        for (int i = 0; i < overlap; i++) {
            for (int j = 0; j < image1.rows; j++) {
                baseTotalcount += 1;
                baseTotalPixel += imageBase.at<uchar>(j, i);
            }
        }
        long double baseAvg = baseTotalPixel / baseTotalcount;

        // Standard deviation of the image1 strip.
        long double baseSubstract = 0;
        for (int i = 0; i < overlap; i++) {
            for (int j = 0; j < image1.rows; j++) {
                long double d = imageBase.at<uchar>(j, i) - baseAvg;
                baseSubstract += d * d;
            }
        }
        long double baseDeviation = sqrt(baseSubstract / baseTotalcount);

        // Covariance: sum of SIGNED products of the centered pixels.
        // (Taking abs() of each product would break the Pearson formula.)
        long double dotMul = 0;
        for (int i = 0; i < overlap; i++) {
            for (int j = 0; j < image1.rows; j++) {
                dotMul += (imageBase.at<uchar>(j, i) - baseAvg) *
                          (imageTemp.at<uchar>(j, i) - tempAvg);
            }
        }
        long double dotMulAvg = dotMul / baseTotalcount;
        float pearsonCorrelationCoefficient = dotMulAvg / (baseDeviation * tempDeviation);

        if (pearsonCorrelationCoefficientMax < pearsonCorrelationCoefficient) {
            pearsonCorrelationCoefficientMax = pearsonCorrelationCoefficient;
            overlapMaxCorrelationCoefficient = overlap;
        }
    }
    cout << "Maximum correlation coefficient: " << pearsonCorrelationCoefficientMax << endl;
    cout << "Overlap width at the maximum: " << overlapMaxCorrelationCoefficient << endl;
    return 0;
}
```