
Learning and Understanding the Homography Transformation

Content sourced from:

What is Homography ?

Consider two images of a plane (top of the book) shown in Figure 1. The red dot represents the same physical point in the two images. In computer vision jargon we call these corresponding points. Figure 1 shows four corresponding points in four different colors — red, green, yellow and orange. A Homography is a transformation (a 3×3 matrix) that maps the points in one image to the corresponding points in the other image.

Figure 1 : Two images of a 3D plane (top of the book) are related by a Homography

Now since a homography is a 3×3 matrix we can write it as

\[ H = \left[ \begin{array}{ccc} h_{00} & h_{01} & h_{02} \\ h_{10} & h_{11} & h_{12} \\ h_{20} & h_{21} & h_{22} \end{array} \right] \]

Let us consider the first set of corresponding points — (x_1, y_1) in the first image and (x_2, y_2) in the second image. Then, the Homography H maps them in the following way:

\[ \left[ \begin{array}{c} x_1 \\ y_1 \\ 1 \end{array} \right] = H \left[ \begin{array}{c} x_2 \\ y_2 \\ 1 \end{array} \right] = \left[ \begin{array}{ccc} h_{00} & h_{01} & h_{02} \\ h_{10} & h_{11} & h_{12} \\ h_{20} & h_{21} & h_{22} \end{array} \right] \left[ \begin{array}{c} x_2 \\ y_2 \\ 1 \end{array} \right] \]
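Strictly speaking, the vectors above are homogeneous coordinates, so the equality holds up to a scale factor; dividing by the third coordinate makes the mapped pixel coordinates explicit:

\[ x_1 = \frac{h_{00} x_2 + h_{01} y_2 + h_{02}}{h_{20} x_2 + h_{21} y_2 + h_{22}}, \qquad y_1 = \frac{h_{10} x_2 + h_{11} y_2 + h_{12}}{h_{20} x_2 + h_{21} y_2 + h_{22}} \]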

Image Alignment Using Homography

The above equation is true for ALL sets of corresponding points as long as they lie on the same plane in the real world. In other words you can apply the homography to the first image and the book in the first image will get aligned with the book in the second image! See Figure 2.

Figure 2 : One image of a 3D plane can be aligned with another image of the same plane using Homography

But what about points that are not on the plane ? Well, they will NOT be aligned by a homography as you can see in Figure 2. But wait, what if there are two planes in the image ? Well, then you have two homographies — one for each plane.

Panorama : An Application of Homography

In the previous section, we learned that if a homography between two images is known, we can warp one image onto the other. However, there was one big caveat. The images had to contain a plane ( the top of a book ), and only the planar part was aligned properly. It turns out that if you take a picture of any scene ( not just a plane ) and then take a second picture by rotating the camera, the two images are related by a homography! In other words you can mount your camera on a tripod and take a picture. Next, pan it about the vertical axis and take another picture. The two images you just took of a completely arbitrary 3D scene are related by a homography. The two images will share some common regions that can be aligned and stitched and bingo you have a panorama of two images. Is it really that easy ? Nope! (sorry to disappoint) A lot more goes into creating a good panorama, but the basic principle is to align using a homography and stitch intelligently so that you do not see the seams. Creating panoramas will definitely be part of a future post.

How to calculate a Homography ?

To calculate a homography between two images, you need to know at least 4 point correspondences between the two images. If you have more than 4 corresponding points, it is even better. OpenCV will robustly estimate a homography that best fits all corresponding points. Usually, these point correspondences are found automatically by matching features like SIFT or SURF between the images, but in this post we are simply going to click the points by hand.

Let’s look at the usage first.

C++

// pts_src and pts_dst are vectors of points in source
// and destination images. They are of type vector<Point2f>.
// We need at least 4 corresponding points.

Mat h = findHomography(pts_src, pts_dst);

// The calculated homography can be used to warp
// the source image to destination. im_src and im_dst are
// of type Mat. Size is the size (width,height) of im_dst.

warpPerspective(im_src, im_dst, h, size);

Python

'''
pts_src and pts_dst are numpy arrays of points
in source and destination images. We need at least
4 corresponding points.
'''
h, status = cv2.findHomography(pts_src, pts_dst)

'''
The calculated homography can be used to warp
the source image to destination. Size is the
size (width,height) of im_dst.
'''
im_dst = cv2.warpPerspective(im_src, h, size)
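When the correspondences come from automatic feature matching rather than hand-clicked points, some matches will be wrong, and a robust estimator helps. The variant below is a minimal sketch: the cv2.RANSAC flag and the 5.0-pixel reprojection threshold are standard OpenCV options, not part of the snippet above, and status marks which correspondences were kept as inliers.

# Robust estimation: ignore outlier matches using RANSAC.
# 5.0 is the maximum reprojection error (in pixels) allowed for an inlier.
h, status = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC, 5.0)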

Let us look at a more complete example in both C++ and Python.

OpenCV C++ Homography Example

Images in Figure 2 can be generated using the following C++ code. The code below shows how to take four corresponding points in two images and warp one image onto the other.

#include "opencv2/opencv.hpp"

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
    // Read source image.
    Mat im_src = imread("book2.jpg");
    // Four corners of the book in source image
    vector<Point2f> pts_src;
    pts_src.push_back(Point2f(141, 131));
    pts_src.push_back(Point2f(480, 159));
    pts_src.push_back(Point2f(493, 630));
    pts_src.push_back(Point2f(64, 601));

    // Read destination image.
    Mat im_dst = imread("book1.jpg");
    // Four corners of the book in destination image.
    vector<Point2f> pts_dst;
    pts_dst.push_back(Point2f(318, 256));
    pts_dst.push_back(Point2f(534, 372));
    pts_dst.push_back(Point2f(316, 670));
    pts_dst.push_back(Point2f(73, 473));

    // Calculate Homography
    Mat h = findHomography(pts_src, pts_dst);

    // Output image
    Mat im_out;
    // Warp source image to destination based on homography
    warpPerspective(im_src, im_out, h, im_dst.size());

    // Display images
    imshow("Source Image", im_src);
    imshow("Destination Image", im_dst);
    imshow("Warped Source Image", im_out);
    waitKey(0);

    return 0;
}
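OpenCV Python Homography Example

The original post also includes a full Python listing, which is not reproduced in this copy. The sketch below mirrors the C++ program step by step, reusing the same image files and hand-clicked corner coordinates, and assumes the cv2 (OpenCV) and numpy packages are installed.

import cv2
import numpy as np

if __name__ == '__main__':

    # Read source image.
    im_src = cv2.imread('book2.jpg')
    # Four corners of the book in source image
    pts_src = np.array([[141, 131], [480, 159], [493, 630], [64, 601]], dtype=float)

    # Read destination image.
    im_dst = cv2.imread('book1.jpg')
    # Four corners of the book in destination image.
    pts_dst = np.array([[318, 256], [534, 372], [316, 670], [73, 473]], dtype=float)

    # Calculate Homography
    h, status = cv2.findHomography(pts_src, pts_dst)

    # Warp source image to destination based on homography.
    # warpPerspective expects the output size as (width, height).
    im_out = cv2.warpPerspective(im_src, h, (im_dst.shape[1], im_dst.shape[0]))

    # Display images
    cv2.imshow("Source Image", im_src)
    cv2.imshow("Destination Image", im_dst)
    cv2.imshow("Warped Source Image", im_out)
    cv2.waitKey(0)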
