Getting Started with OpenCV + iOS Development in Xcode (repost)
Before reading this article, first see: OpenCV iOS Development (Part 1) — Installation
Yesterday I spent a whole day getting the OpenCV + iOS environment working under Xcode and implementing a circle-detection program based on the Hough transform. Without further ado, here is the whole process:
------------------------------------------------------Installing OpenCV-------------------------------------------------------------------
The official site has a tutorial: http://docs.opencv.org/doc/tutorials/introduction/ios_install/ios_install.html#ios-installation
If everything were really as simple as the official docs make it sound, I would not need to write this post~ (Replace <my_working_directory> with the path where you want to install OpenCV.)
cd ~/<my_working_directory>
git clone https://github.com/Itseez/opencv.git
cd /
sudo ln -s /Applications/Xcode.app/Contents/Developer Developer
Everything works fine up to this point. (If you don't have git, download and install it from http://sourceforge.net/projects/git-osx-installer/.)
cd ~/<my_working_directory>
python opencv/platforms/ios/build_framework.py ios
This last step gets stuck on the final command, most likely because CMake is not installed. The CMake .dmg from the official site didn't seem to do the trick for me =.=, so I took a different route and installed it through Homebrew. First install Homebrew itself (Ruby ships with the Mac, so no worries there):
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Then install CMake:
brew install cmake
Once that succeeds, go back to the last command above and build the OpenCV library. Then comes a long wait, about half a shichen (one shichen = two hours, so roughly an hour). When the build finishes, you will find an ios folder under the installation path containing the hard-won OpenCV iOS framework. Catch your breath, and let's move on to configuring the Xcode project.
---------------------------------------------------------Configuring the Xcode OpenCV Environment------------------------------------------------------------------
Installation is not even the most painful part. For a complete beginner, using Xcode at all is already a big challenge (two days ago I could not develop for iOS at all...). Worse, the official tutorial http://docs.opencv.org/doc/tutorials/ios/hello/hello.html#opencvioshelloworld targets Xcode 5.0, while Xcode on Mac OS X 10.10 has already reached 6.3, and the two UIs differ in places, so all I could do was trust my luck... fortunately, it held~
In fact, most of that tutorial can be followed step by step:
1、Create a new XCode project.
2、Now we need to link opencv2.framework with Xcode. Select the project Navigator in the left hand panel and click on project name.
3、Under the TARGETS click on Build Phases. Expand Link Binary With Libraries option.
4、Click on Add others and go to directory where opencv2.framework is located and click open
5、Now you can start writing your application.
What this says is: create a new project, select it, go to Build Phases, and add the OpenCV framework we just built. But study the figure at this step carefully: it shows three additional frameworks that should be linked in as well.
Next comes configuring the precompiled header:
Link your project with OpenCV as shown in previous section.
Open the file named NameOfProject-Prefix.pch (replace NameOfProject with the name of your project) and add the following lines of code:

#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif
This says that the OpenCV headers should be pulled in through the project's .pch (prefix header) file. However, since Xcode 5.0, new projects no longer generate this file automatically, so you have to create it by hand. Choose File -> New, in the dialog select iOS -> Other, find the PCH File template, name the file after your project, and add the code above. Again, look carefully at the figure in the tutorial and add the other two imports it shows as well. Once the file is written, it has to be wired into the project. Select the project, switch from Build Phases to the neighboring Build Settings tab, click All in the row below, and search for "prefix". Under Apple LLVM 6.1 - Language, find the Prefix Header entry and enter $(SRCROOT)/<project folder>/<name>.pch; then set Precompile Prefix Header, just above it, to Yes. The file is now part of the project's precompilation. But that is still not everything:
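Putting the pieces together, the finished prefix header might look like the sketch below. This assumes the two extra imports shown in the tutorial's figure are the UIKit and Foundation imports from the standard Xcode template; HelloOpenCV is a placeholder project name:

```objc
// HelloOpenCV-Prefix.pch — sketch of a hand-made prefix header (placeholder project name)

// OpenCV goes first, guarded so that only C++ translation units (.mm files) see it;
// plain .m files would choke on the C++ headers otherwise.
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif

// The usual Xcode template imports for Objective-C sources.
#ifdef __OBJC__
#import <UIKit/UIKit.h>
#import <Foundation/Foundation.h>
#endif
```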
With the newer XCode and iOS versions you need to watch out for some specific details
The *.m file in your project should be renamed to *.mm.
You have to manually include AssetsLibrary.framework into your project, which is not done anymore by default.
This means every .m file that uses OpenCV must be renamed to .mm (so it is compiled as Objective-C++), and you have to add AssetsLibrary.framework to the project yourself (see the steps above for linking the OpenCV framework).
The environment is now essentially complete; next we use it to build a HelloWorld~
-----------------------------------------------HelloOpenCV----------------------------------------------------------------------
Step 1: open this page: http://docs.opencv.org/doc/tutorials/ios/image_manipulation/image_manipulation.html#opencviosimagemanipulation
- (cv::Mat)cvMatFromUIImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels (color channels + alpha)

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,                 // Pointer to data
                                                    cols,                       // Width of bitmap
                                                    rows,                       // Height of bitmap
                                                    8,                          // Bits per component
                                                    cvMat.step[0],              // Bytes per row
                                                    colorSpace,                 // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}

- (cv::Mat)cvMatGrayFromUIImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,                 // Pointer to data
                                                    cols,                       // Width of bitmap
                                                    rows,                       // Height of bitmap
                                                    8,                          // Bits per component
                                                    cvMat.step[0],              // Bytes per row
                                                    colorSpace,                 // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}

- (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;

    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(cvMat.cols,                                  // width
                                        cvMat.rows,                                  // height
                                        8,                                           // bits per component
                                        8 * cvMat.elemSize(),                        // bits per pixel
                                        cvMat.step[0],                               // bytesPerRow
                                        colorSpace,                                  // colorspace
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault, // bitmap info
                                        provider,                                    // CGDataProviderRef
                                        NULL,                                        // decode
                                        false,                                       // should interpolate
                                        kCGRenderingIntentDefault);                  // intent

    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return finalImage;
}
Before anything else, create a pair of files (.h + .mm) to house these three functions. Note that at the top you must import:
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>
#import <opencv2/opencv.hpp>
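As a sketch, the header of that file pair might look like this. OpenCVUtil is a hypothetical class name; the methods are declared as class methods (with +) so that the Hough example below can call them via [self ...] from another class method, whereas the official tutorial writes them as instance methods:

```objc
// OpenCVUtil.h — hypothetical name for the .h/.mm pair holding the converters
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>
#import <opencv2/opencv.hpp>

@interface OpenCVUtil : NSObject

+ (cv::Mat)cvMatFromUIImage:(UIImage *)image;     // UIImage -> RGBA cv::Mat
+ (cv::Mat)cvMatGrayFromUIImage:(UIImage *)image; // UIImage -> grayscale cv::Mat
+ (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat;     // cv::Mat -> UIImage

@end
```

Because this header mentions cv::Mat, any file that imports it must itself be compiled as Objective-C++, i.e. be a .mm file.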
These three functions convert a UIImage into a cv::Mat and back again. Once you have a cv::Mat you can do whatever you like with it, for example detect circles with the Hough transform:
+ (UIImage *)hough:(UIImage *)image
{
    cv::Mat img = [self cvMatFromUIImage:image]; // note: cvMatFromUIImage: must be a class (+) method for [self ...] to work here
    cv::Mat gray(img.size(), CV_8UC4);
    cv::Mat background(img.size(), CV_8UC4, cv::Scalar(255, 255, 255, 255)); // white canvas

    cvtColor(img, gray, CV_RGBA2GRAY); // the mat from cvMatFromUIImage: is RGBA, not BGR

    std::vector<cv::Vec3f> circles;    // each circle is (center_x, center_y, radius)
    HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 2, image.size.width / 8, 200, 100);

    for (size_t i = 0; i < circles.size(); i++) {
        cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        cv::circle(background, center, 3, cv::Scalar(0, 0, 0), -1, 8, 0);     // center dot
        cv::circle(background, center, radius, cv::Scalar(0, 0, 0), 3, 8, 0); // circle outline
    }

    UIImage *res = [self UIImageFromCVMat:background];
    return res;
}
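A minimal sketch of calling this from a view controller. Everything here is hypothetical: OpenCVUtil as the wrapper class holding the functions above, an imageView outlet wired up in the storyboard, and a bundled test image named coins. The file must be a .mm:

```objc
// ViewController.mm — usage sketch, assuming a hypothetical OpenCVUtil wrapper class
#import "ViewController.h"
#import "OpenCVUtil.h"

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    UIImage *input = [UIImage imageNamed:@"coins"];   // any test image containing circles
    self.imageView.image = [OpenCVUtil hough:input];  // detected circles drawn on a white canvas
}

@end
```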
------------------------------------------------------The Final Divider------------------------------------------------------
With this scaffolding in place, development gets convenient. The Objective-C + Cocoa syntax is bizarre beyond words and I cannot get used to it no matter what... but building interfaces with Storyboard turns out to be quite pleasant. Hopefully in the near future I can write something like a Meitu-style photo app for fun~
Reposted from: http://www.cnblogs.com/tonyspotlight/p/4568305.html