iOS Video Editing in Depth, Part 1: Adding Watermarks
Introduction
Programmatic video editing mostly comes down to beautification filters, watermarks (overlays), video trimming, video concatenation, and audio/video processing. For beautification, GPUImage already offers mature solutions: a wide range of filters, skin smoothing, and even real-time face-tracking effects. This series therefore focuses on the remaining topics: watermarks (overlays), trimming, concatenation, and audio/video processing. A complete demo is provided at the end of the series; it edits a video and saves the result to the system photo album. The articles concentrate on the points that deserve special attention.
1. Adding a Watermark to a Video
Since GPUImage was already in use for beautification, it was also the first candidate for adding watermarks. In practice, however, AVFoundation composites noticeably faster, while GPUImage handles a wider range of video formats. So if speed matters most, prefer AVFoundation; if format compatibility matters most, use GPUImage.
1.1 The GPUImage Approach
GPUImage re-renders the video with GPUImageUIElement and GPUImageMovieWriter, regenerating it by overlaying a blend filter. Any one of these three blend filters works: GPUImageDissolveBlendFilter, GPUImageAlphaBlendFilter, or GPUImageNormalBlendFilter.
Implementation:
/**
 Add a watermark with GPUImage

 @param vedioPath path of the source video
 @param img first watermark image
 @param coverImg second watermark image
 @param question text watermark
 @param fileName name of the generated video
 */
-(void)saveVedioPath:(NSURL*)vedioPath WithWaterImg:(UIImage*)img WithCoverImage:(UIImage*)coverImg WithQustion:(NSString*)question WithFileName:(NSString*)fileName
{
    [SVProgressHUD showWithStatus:@"Saving watermarked video to the photo album"];
    // Blend filter -- any of the three works:
    // filter = [[GPUImageDissolveBlendFilter alloc] init];
    // [(GPUImageDissolveBlendFilter *)filter setMix:0.0f];
    // or an alpha blend filter (mix is the overlay's opacity, 1.0 here):
    // filter = [[GPUImageAlphaBlendFilter alloc] init];
    // [(GPUImageAlphaBlendFilter *)filter setMix:1.0f];
    filter = [[GPUImageNormalBlendFilter alloc] init];
    AVAsset *asset = [AVAsset assetWithURL:vedioPath];
    // naturalSize belongs to the video track, not the asset itself
    CGSize size = [[[asset tracksWithMediaType:AVMediaTypeVideo] firstObject] naturalSize];
    movieFile = [[GPUImageMovie alloc] initWithAsset:asset];
    movieFile.playAtActualSpeed = NO;

    // Text watermark
    UILabel *label = [[UILabel alloc] init];
    label.text = question;
    label.font = [UIFont systemFontOfSize:30];
    label.textColor = [UIColor whiteColor];
    [label setTextAlignment:NSTextAlignmentCenter];
    [label sizeToFit];
    label.layer.masksToBounds = YES;
    label.layer.cornerRadius = 18.0f;
    [label setBackgroundColor:[UIColor colorWithRed:0 green:0 blue:0 alpha:0.5]];
    [label setFrame:CGRectMake(50, 100, label.frame.size.width+20, label.frame.size.height)];
    // First image watermark
    UIImage *coverImage1 = [img copy];
    UIImageView *coverImageView1 = [[UIImageView alloc] initWithImage:coverImage1];
    [coverImageView1 setFrame:CGRectMake(0, 100, 210, 50)];
    // Second image watermark
    UIImage *coverImage2 = [coverImg copy];
    UIImageView *coverImageView2 = [[UIImageView alloc] initWithImage:coverImage2];
    [coverImageView2 setFrame:CGRectMake(270, 100, 210, 50)];

    UIView *subView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, size.width, size.height)];
    subView.backgroundColor = [UIColor clearColor];
    [subView addSubview:coverImageView1];
    [subView addSubview:coverImageView2];
    [subView addSubview:label];
    GPUImageUIElement *uielement = [[GPUImageUIElement alloc] initWithView:subView];

    NSString *pathToMovie = [NSHomeDirectory() stringByAppendingPathComponent:[NSString stringWithFormat:@"Documents/%@.mp4",fileName]];
    unlink([pathToMovie UTF8String]); // remove any previous output
    NSURL *movieURL = [NSURL fileURLWithPath:pathToMovie];
    movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(720.0, 1280.0)];

    GPUImageFilter *progressFilter = [[GPUImageFilter alloc] init];
    [progressFilter addTarget:filter];
    [movieFile addTarget:progressFilter];
    [uielement addTarget:filter];
    movieWriter.shouldPassthroughAudio = YES;
    // Only attach the audio target when the source actually has an audio track;
    // otherwise GPUImageMovieWriter asserts in createDataFBO (see below)
    if ([[asset tracksWithMediaType:AVMediaTypeAudio] count] > 0) {
        movieFile.audioEncodingTarget = movieWriter;
    } else { // no audio
        movieFile.audioEncodingTarget = nil;
    }
    [movieFile enableSynchronizedEncodingUsingMovieWriter:movieWriter];
    [filter addTarget:movieWriter];
    [movieWriter startRecording];
    [movieFile startProcessing];
    // Optional progress display:
    // dlink = [CADisplayLink displayLinkWithTarget:self selector:@selector(updateProgress)];
    // [dlink setFrameInterval:15];
    // [dlink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    // [dlink setPaused:NO];

    __weak typeof(self) weakSelf = self;
    // Called once per rendered frame
    [progressFilter setFrameProcessingCompletionBlock:^(GPUImageOutput *output, CMTime time) {
        // The watermark can move
        CGRect frame = coverImageView1.frame;
        frame.origin.x += 1;
        frame.origin.y += 1;
        coverImageView1.frame = frame;
        // Hide coverImageView2 after the 5-second mark
        // (CMTimeGetSeconds avoids the integer division of time.value/time.timescale)
        if (CMTimeGetSeconds(time) >= 5.0) {
            [coverImageView2 removeFromSuperview];
        }
        [uielement update];
    }];
    // Save to the photo album when writing finishes
    [movieWriter setCompletionBlock:^{
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.2 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
            __strong typeof(self) strongSelf = weakSelf;
            if (!strongSelf) { return; }
            [strongSelf->filter removeTarget:strongSelf->movieWriter];
            [strongSelf->movieWriter finishRecording];
            __block PHObjectPlaceholder *placeholder;
            if (UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(pathToMovie)) {
                NSError *error;
                [[PHPhotoLibrary sharedPhotoLibrary] performChangesAndWait:^{
                    PHAssetChangeRequest *createAssetRequest = [PHAssetChangeRequest creationRequestForAssetFromVideoAtFileURL:movieURL];
                    placeholder = [createAssetRequest placeholderForCreatedAsset];
                } error:&error];
                if (error) {
                    [SVProgressHUD showErrorWithStatus:[NSString stringWithFormat:@"%@",error]];
                } else {
                    [SVProgressHUD showSuccessWithStatus:@"Video saved to the photo album"];
                }
            }
        });
    }];
}
Call it directly:
-(void)useGpuimage
{
    NSURL *videoPath = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:@"selfS" ofType:@"MOV"]];
    [self saveVedioPath:videoPath WithWaterImg:[UIImage imageNamed:@"avatar.png"] WithCoverImage:[UIImage imageNamed:@"demo.png"] WithQustion:@"Text watermark: hudongdongBlog" WithFileName:@"waterVideo"];
}
The callback set via setFrameProcessingCompletionBlock on progressFilter updates each element's position and visibility, which gives full control over when a watermark appears: shown at the start and hidden after five seconds, for example. The block effectively re-renders every frame, so for any animation effect you can simply update the element's frame there.
This approach essentially re-records the video once, re-drawing the watermark as it records, so compatibility is good: nearly any supported video format can be processed. Watch out for videos without sound, though. If the source has no audio and the new recording still tries to capture it, GPUImage raises Assertion failure in -[GPUImageMovieWriter createDataFBO] and crashes at
NSAssert(status == GL_FRAMEBUFFER_COMPLETE, @"Incomplete filter FBO: %d", status);
So check whether an audio source exists before wiring up the audio target:
if ([[asset tracksWithMediaType:AVMediaTypeAudio] count] > 0) {
    movieFile.audioEncodingTarget = movieWriter;
} else { // no audio
    movieFile.audioEncodingTarget = nil;
}
Overall, GPUImage offers only a filter-based workaround: the watermark image becomes a filter texture that is blended over each frame, nothing more. It provides no deeper editing features.
1.2 The AVFoundation Approach
In testing, AVFoundation processes the same video faster. It also operates on the video track, audio track, and so on individually, which gives more control. Anyone who has used editors like Adobe Premiere or Corel VideoStudio knows the model: to edit a video, you drop each resource onto its corresponding track.
Here the image is composited on the video track, but anyone who has tried it will have noticed that editing only the video track produces a silent result. Plenty of code samples online have exactly this bug, and the people copy-pasting them don't know the fix: the composition picked up only the video-track resource and never touched the audio track. If your edited video has no sound, you need to insert the audio-track resource as well.
Implementation:
/// Add a watermark with AVFoundation
- (void)AVsaveVideoPath:(NSURL*)videoPath WithWaterImg:(UIImage*)img WithCoverImage:(UIImage*)coverImg WithQustion:(NSString*)question WithFileName:(NSString*)fileName
{
    if (!videoPath) {
        return;
    }
    // 1 - Create an AVAsset instance; it holds all of the video's information
    NSDictionary *opts = [NSDictionary dictionaryWithObject:@(YES) forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
    videoAsset = [AVURLAsset URLAssetWithURL:videoPath options:opts]; // the source video
    CMTime startTime = CMTimeMakeWithSeconds(0.2, 600);
    CMTime endTime = CMTimeMakeWithSeconds(CMTimeGetSeconds(videoAsset.duration)-0.2, videoAsset.duration.timescale);
    // Asset used for the audio track
    AVURLAsset *audioAsset = [[AVURLAsset alloc] initWithURL:videoPath options:opts];
    // 2 - Create an AVMutableComposition. Per Apple's docs: "AVMutableComposition is a mutable subclass of AVComposition you use when you want to create a new composition from existing assets. You can add and remove tracks, and you can add, remove, and scale time ranges."
    AVMutableComposition *mixComposition = [[AVMutableComposition alloc] init];
    // 3 - Video track. A composition holds tracks (audio, video, ...) into which source segments are inserted
    AVMutableCompositionTrack *videoTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo
                                                                        preferredTrackID:kCMPersistentTrackID_Invalid];
    // Insert the source video segment; adjusting this time range is how trimming works.
    // Note: CMTimeRangeFromTimeToTime takes start and END time, while CMTimeRangeMake takes start and DURATION
    [videoTrack insertTimeRange:CMTimeRangeFromTimeToTime(startTime, endTime)
                        ofTrack:[[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0]
                         atTime:kCMTimeZero error:nil];
    // Audio track
    AVMutableCompositionTrack *audioTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
    // Source audio track
    AVAssetTrack *audioAssetTrack = [[audioAsset tracksWithMediaType:AVMediaTypeAudio] firstObject];
    [audioTrack insertTimeRange:CMTimeRangeFromTimeToTime(startTime, endTime) ofTrack:audioAssetTrack atTime:kCMTimeZero error:nil];
    // 3.1 - AVMutableVideoCompositionInstruction: one segment on the video track; supports scaling, rotation, etc.
    AVMutableVideoCompositionInstruction *mainInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, videoTrack.timeRange.duration);
    // 3.2 - AVMutableVideoCompositionLayerInstruction: per-track instructions covering all the track's segments
    AVMutableVideoCompositionLayerInstruction *videolayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
    AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
    // Detect portrait orientation from the preferred transform
    BOOL isVideoAssetPortrait_ = NO;
    CGAffineTransform videoTransform = videoAssetTrack.preferredTransform;
    if (videoTransform.a == 0 && videoTransform.b == 1.0 && videoTransform.c == -1.0 && videoTransform.d == 0) {
        // rotated right
        isVideoAssetPortrait_ = YES;
    }
    if (videoTransform.a == 0 && videoTransform.b == -1.0 && videoTransform.c == 1.0 && videoTransform.d == 0) {
        // rotated left
        isVideoAssetPortrait_ = YES;
    }
    [videolayerInstruction setTransform:videoAssetTrack.preferredTransform atTime:kCMTimeZero];
    [videolayerInstruction setOpacity:0.0 atTime:endTime];
    // 3.3 - Add instructions
    mainInstruction.layerInstructions = [NSArray arrayWithObjects:videolayerInstruction,nil];
    // AVMutableVideoComposition manages all video tracks and decides the final size; cropping happens here
    AVMutableVideoComposition *mainCompositionInst = [AVMutableVideoComposition videoComposition];
    CGSize naturalSize;
    if (isVideoAssetPortrait_) {
        naturalSize = CGSizeMake(videoAssetTrack.naturalSize.height, videoAssetTrack.naturalSize.width);
    } else {
        naturalSize = videoAssetTrack.naturalSize;
    }
    float renderWidth, renderHeight;
    renderWidth = naturalSize.width;
    renderHeight = naturalSize.height;
    mainCompositionInst.renderSize = CGSizeMake(renderWidth, renderHeight);
    mainCompositionInst.instructions = [NSArray arrayWithObject:mainInstruction];
    mainCompositionInst.frameDuration = CMTimeMake(1, 25);
    [self applyVideoEffectsToComposition:mainCompositionInst WithWaterImg:img WithCoverImage:coverImg WithQustion:question size:CGSizeMake(renderWidth, renderHeight)];
    // 4 - Output path
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *myPathDocs = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"%@.mp4",fileName]];
    unlink([myPathDocs UTF8String]); // remove any previous output
    NSURL *videoUrl = [NSURL fileURLWithPath:myPathDocs];
    // Progress HUD driven by a display link
    dlink = [CADisplayLink displayLinkWithTarget:self selector:@selector(updateProgress)];
    [dlink setFrameInterval:15];
    [dlink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    [dlink setPaused:NO];
    // 5 - Export
    exporter = [[AVAssetExportSession alloc] initWithAsset:mixComposition
                                                presetName:AVAssetExportPresetHighestQuality];
    exporter.outputURL = videoUrl;
    exporter.outputFileType = AVFileTypeQuickTimeMovie;
    exporter.shouldOptimizeForNetworkUse = YES;
    exporter.videoComposition = mainCompositionInst;
    [exporter exportAsynchronouslyWithCompletionHandler:^{
        dispatch_async(dispatch_get_main_queue(), ^{
            // The export is done; handle the result here
            [self exportDidFinish:exporter];
        });
    }];
}
- (void)applyVideoEffectsToComposition:(AVMutableVideoComposition *)composition WithWaterImg:(UIImage*)img WithCoverImage:(UIImage*)coverImg WithQustion:(NSString*)question size:(CGSize)size
{
    // 1 - Text watermark as a CATextLayer
    UIFont *font = [UIFont systemFontOfSize:30.0];
    CATextLayer *subtitle1Text = [[CATextLayer alloc] init];
    [subtitle1Text setFontSize:30];
    [subtitle1Text setString:question];
    [subtitle1Text setAlignmentMode:kCAAlignmentCenter];
    [subtitle1Text setForegroundColor:[[UIColor whiteColor] CGColor]];
    subtitle1Text.masksToBounds = YES;
    subtitle1Text.cornerRadius = 23.0f;
    [subtitle1Text setBackgroundColor:[UIColor colorWithRed:0 green:0 blue:0 alpha:0.5].CGColor];
    CGSize textSize = [question sizeWithAttributes:[NSDictionary dictionaryWithObjectsAndKeys:font,NSFontAttributeName, nil]];
    [subtitle1Text setFrame:CGRectMake(50, 100, textSize.width+20, textSize.height+10)];
    // First image watermark
    CALayer *imgLayer = [CALayer layer];
    imgLayer.contents = (id)img.CGImage;
    imgLayer.bounds = CGRectMake(0, 0, 210, 50);
    imgLayer.position = CGPointMake(size.width/2.0, size.height/2.0);
    // Second image watermark (the cover)
    CALayer *coverImgLayer = [CALayer layer];
    coverImgLayer.contents = (id)coverImg.CGImage;
    // [coverImgLayer setContentsGravity:kCAGravityResizeAspect];
    coverImgLayer.bounds = CGRectMake(50, 200, 210, 50);
    coverImgLayer.position = CGPointMake(size.width/4.0, size.height/4.0);
    // 2 - The usual overlay
    CALayer *overlayLayer = [CALayer layer];
    [overlayLayer addSublayer:subtitle1Text];
    [overlayLayer addSublayer:imgLayer];
    overlayLayer.frame = CGRectMake(0, 0, size.width, size.height);
    [overlayLayer setMasksToBounds:YES];
    CALayer *parentLayer = [CALayer layer];
    CALayer *videoLayer = [CALayer layer];
    parentLayer.frame = CGRectMake(0, 0, size.width, size.height);
    videoLayer.frame = CGRectMake(0, 0, size.width, size.height);
    [parentLayer addSublayer:videoLayer];
    [parentLayer addSublayer:overlayLayer];
    [parentLayer addSublayer:coverImgLayer];
    // Fade the cover out: gone after 5 seconds
    CABasicAnimation *anima = [CABasicAnimation animationWithKeyPath:@"opacity"];
    anima.fromValue = [NSNumber numberWithFloat:1.0f];
    anima.toValue = [NSNumber numberWithFloat:0.0f];
    anima.repeatCount = 0;
    anima.duration = 5.0f;
    [anima setRemovedOnCompletion:NO];
    [anima setFillMode:kCAFillModeForwards];
    anima.beginTime = AVCoreAnimationBeginTimeAtZero;
    [coverImgLayer addAnimation:anima forKey:@"opacityAniamtion"];
    composition.animationTool = [AVVideoCompositionCoreAnimationTool
        videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
}
// Update the export progress HUD
- (void)updateProgress
{
    [SVProgressHUD showProgress:exporter.progress status:NSLocalizedString(@"Exporting...", nil)];
    if (exporter.progress >= 1.0) {
        [dlink setPaused:true];
        [dlink invalidate];
        // [SVProgressHUD dismiss];
    }
}
The heart of the method is inserting the source segments into the composition's video and audio tracks:
[videoTrack insertTimeRange:CMTimeRangeFromTimeToTime(startTime, endTime)
                    ofTrack:[[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0]
                     atTime:kCMTimeZero error:nil];
[audioTrack insertTimeRange:CMTimeRangeFromTimeToTime(startTime, endTime) ofTrack:audioAssetTrack atTime:kCMTimeZero error:nil];
The later code for adding background music and concatenating videos mostly just adjusts what these two tracks contain.
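For instance, swapping in background music is just a matter of inserting a different asset's audio into the composition's audio track. A minimal sketch, assuming a bundled file named bgMusic.mp3 (that file name is hypothetical, not part of the demo):

```objc
// Sketch: use a music file's audio instead of the source video's own sound.
// bgMusic.mp3 is an assumed resource; audioTrack and videoTrack are the
// composition tracks built above.
NSURL *musicURL = [[NSBundle mainBundle] URLForResource:@"bgMusic" withExtension:@"mp3"];
AVURLAsset *musicAsset = [AVURLAsset URLAssetWithURL:musicURL options:nil];
AVAssetTrack *musicTrack = [[musicAsset tracksWithMediaType:AVMediaTypeAudio] firstObject];
// Clip the music to the edited video's length so both tracks end together
[audioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoTrack.timeRange.duration)
                    ofTrack:musicTrack
                     atTime:kCMTimeZero
                      error:nil];
```

If the music is shorter than the video, the insert would need to loop or be clamped; that case is left out of the sketch.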
float renderWidth, renderHeight;
These two variables control the render size of the output: when cropping a video, controlling the output dimensions means controlling these two values. The trimming article covers this in detail.
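As a rough sketch of what that looks like, the output could be cropped to a centered square by shrinking the render size and shifting the layer instruction's transform (the variables reuse the names from the method above; the translation math is illustrative, and the concatenation order may need adjusting for rotated assets):

```objc
// Sketch: crop to a centered square by rendering a smaller canvas and
// translating the video so the desired region lands on that canvas
CGFloat side = MIN(naturalSize.width, naturalSize.height);
mainCompositionInst.renderSize = CGSizeMake(side, side);
CGAffineTransform shift = CGAffineTransformTranslate(videoAssetTrack.preferredTransform,
                                                     -(naturalSize.width - side) / 2.0,
                                                     -(naturalSize.height - side) / 2.0);
[videolayerInstruction setTransform:shift atTime:kCMTimeZero];
```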
The watermark images themselves are composited in the - (void)applyVideoEffectsToComposition method, which mainly attaches image layers and layer animations on top of the video's layer, achieving the watermark editing.
Because this approach also edits the asset's video and audio tracks, some assets cause trouble: when the video track cannot be parsed, the generated frames come out solid blue, and when the audio track cannot be parsed, the export fails outright. The iPhone's time-lapse videos, for example, have no audio track, so the export fails. Both problems are handled in the trimming article: an asset whose video track cannot be parsed is first re-recorded once with GPUImage and then parsed again, while an asset with no parsable audio track simply has no sound, so just skip adding the audio track.
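A defensive check before building the composition guards against both failure modes. A minimal sketch, reusing the variable names from the AVFoundation method above:

```objc
// Sketch: bail out or skip audio depending on which tracks the asset exposes
NSArray *videoTracks = [videoAsset tracksWithMediaType:AVMediaTypeVideo];
NSArray *audioTracks = [videoAsset tracksWithMediaType:AVMediaTypeAudio];
if (videoTracks.count == 0) {
    // No readable video track: re-record once with GPUImage first, then parse again
    return;
}
if (audioTracks.count > 0) {
    // Only create and fill the audio track when the source actually has sound
    AVMutableCompositionTrack *audioTrack =
        [mixComposition addMutableTrackWithMediaType:AVMediaTypeAudio
                                    preferredTrackID:kCMPersistentTrackID_Invalid];
    [audioTrack insertTimeRange:CMTimeRangeFromTimeToTime(startTime, endTime)
                        ofTrack:audioTracks.firstObject
                         atTime:kCMTimeZero error:nil];
}
```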