iOS: Implementing Microphone Capture and AAC Encoding
阿新 • Published: 2019-02-19
On iOS, opening and capturing from the microphone is usually done with the AVCaptureSession component, which can capture not only audio but video as well. This article focuses on capturing microphone audio and encoding it.
There is plenty of code online for opening the microphone and capturing audio; here it is, lightly tidied up.
First, we need a variable of type AVCaptureSession. It acts as a bridge between the microphone device and the data output, giving us convenient access to the microphone's raw real-time data.
AVCaptureSession *m_capture;
Next, declare a set of methods for opening and closing the microphone. To receive the captured data, the class must also adopt the AVCaptureAudioDataOutputSampleBufferDelegate protocol.
-(void)open;
-(void)close;
-(BOOL)isOpen;
Below we implement each of the methods declared above to complete the capture setup.
-(void)open {
    NSError *error;
    m_capture = [[AVCaptureSession alloc] init];
    AVCaptureDevice *audioDev = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    if (audioDev == nil)
    {
        CKPrint("Couldn't create audio capture device");
        return;
    }
    // create mic input device
    AVCaptureDeviceInput *audioIn = [AVCaptureDeviceInput deviceInputWithDevice:audioDev error:&error];
    if (error != nil)
    {
        CKPrint("Couldn't create audio input");
        return;
    }
    // add mic input to the capture session
    if ([m_capture canAddInput:audioIn] == NO)
    {
        CKPrint("Couldn't add audio input");
        return;
    }
    [m_capture addInput:audioIn];
    // output for exporting the captured audio data
    AVCaptureAudioDataOutput *audioOutput = [[AVCaptureAudioDataOutput alloc] init];
    [audioOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
    if ([m_capture canAddOutput:audioOutput] == NO)
    {
        CKPrint("Couldn't add audio output");
        return;
    }
    [m_capture addOutput:audioOutput];
    [audioOutput connectionWithMediaType:AVMediaTypeAudio];
    [m_capture startRunning];
}
-(void)close {
if (m_capture != nil && [m_capture isRunning])
{
[m_capture stopRunning];
}
return;
}
-(BOOL)isOpen {
if (m_capture == nil)
{
return NO;
}
return [m_capture isRunning];
}
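One practical note: since iOS 10, an app must declare its microphone usage in Info.plist, or the system will deny access and the capture session will produce nothing. A minimal fragment (the description string here is only an example; write your own):

```xml
<key>NSMicrophoneUsageDescription</key>
<string>This app records audio from the microphone for streaming.</string>
```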
With these three methods, all the preparation for microphone capture is done; now we just wait for the data to come to us. For the data to actually arrive, we still need to implement one delegate method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    char szBuf[4096];
    int nSize = sizeof(szBuf);
#if SUPPORT_AAC_ENCODER
    if ([self encoderAAC:sampleBuffer aacData:szBuf aacLen:&nSize] == YES)
    {
        [g_pViewController sendAudioData:szBuf len:nSize channel:0];
    }
#else //#if SUPPORT_AAC_ENCODER
    AudioStreamBasicDescription outputFormat = *(CMAudioFormatDescriptionGetStreamBasicDescription(CMSampleBufferGetFormatDescription(sampleBuffer)));
    nSize = (int)CMSampleBufferGetTotalSampleSize(sampleBuffer);
    CMBlockBufferRef databuf = CMSampleBufferGetDataBuffer(sampleBuffer);
    if (CMBlockBufferCopyDataBytes(databuf, 0, nSize, szBuf) == kCMBlockBufferNoErr)
    {
        [g_pViewController sendAudioData:szBuf len:nSize channel:outputFormat.mChannelsPerFrame];
    }
#endif
}
At this point the work is mostly done; the data captured here is raw PCM.
Raw PCM is bulky, which makes it a poor fit for network transmission, so when the data needs to go over the network it should be encoded first. iOS supports several audio codecs out of the box; here we use AAC as an example and write a function that encodes PCM into AAC.
Examples of PCM-to-AAC encoding on iOS are easy to find online, but most are incomplete, and a good portion are in English only, which some readers find off-putting. I have tidied them up into a single function for convenience.
Before encoding, we first need to create a converter object:
AudioConverterRef m_converter;
#if SUPPORT_AAC_ENCODER
-(BOOL)createAudioConvert:(CMSampleBufferRef)sampleBuffer { // initialize a converter from the format of the input samples
if (m_converter != nil)
{
return TRUE;
}
AudioStreamBasicDescription inputFormat = *(CMAudioFormatDescriptionGetStreamBasicDescription(CMSampleBufferGetFormatDescription(sampleBuffer))); // input audio format
AudioStreamBasicDescription outputFormat; // output (AAC) audio format starts here
memset(&outputFormat, 0, sizeof(outputFormat));
outputFormat.mSampleRate = inputFormat.mSampleRate; // keep the input sample rate
outputFormat.mFormatID = kAudioFormatMPEG4AAC; // AAC encoding
outputFormat.mChannelsPerFrame = 2;
outputFormat.mFramesPerPacket = 1024; // one AAC packet covers 1024 PCM frames (samples per channel), not 1024 bytes
AudioClassDescription *desc = [self getAudioClassDescriptionWithType:kAudioFormatMPEG4AAC fromManufacturer:kAppleSoftwareAudioCodecManufacturer];
if (AudioConverterNewSpecific(&inputFormat, &outputFormat, 1, desc, &m_converter) != noErr)
{
CKPrint(@"AudioConverterNewSpecific failed");
return NO;
}
return YES;
}
-(BOOL)encoderAAC:(CMSampleBufferRef)sampleBuffer aacData:(char*)aacData aacLen:(int*)aacLen { // encode PCM into AAC
if ([self createAudioConvert:sampleBuffer] != YES)
{
return NO;
}
CMBlockBufferRef blockBuffer = nil;
AudioBufferList inBufferList;
if (CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &inBufferList, sizeof(inBufferList), NULL, NULL, 0, &blockBuffer) != noErr)
{
CKPrint(@"CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer failed");
return NO;
}
// set up an output buffer list
AudioBufferList outBufferList;
outBufferList.mNumberBuffers = 1;
outBufferList.mBuffers[0].mNumberChannels = 2;
outBufferList.mBuffers[0].mDataByteSize = *aacLen; // capacity of the output buffer
outBufferList.mBuffers[0].mData = aacData; // the AAC output buffer
UInt32 outputDataPacketSize = 1;
if (AudioConverterFillComplexBuffer(m_converter, inputDataProc, &inBufferList, &outputDataPacketSize, &outBufferList, NULL) != noErr)
{
CKPrint(@"AudioConverterFillComplexBuffer failed");
return NO;
}
*aacLen = outBufferList.mBuffers[0].mDataByteSize; // size of the encoded AAC data
CFRelease(blockBuffer);
return YES;
}
-(AudioClassDescription*)getAudioClassDescriptionWithType:(UInt32)type fromManufacturer:(UInt32)manufacturer { // look up a matching encoder
static AudioClassDescription audioDesc;
UInt32 encoderSpecifier = type, size = 0;
OSStatus status;
memset(&audioDesc, 0, sizeof(audioDesc));
status = AudioFormatGetPropertyInfo(kAudioFormatProperty_Encoders, sizeof(encoderSpecifier), &encoderSpecifier, &size);
if (status)
{
return nil;
}
uint32_t count = size / sizeof(AudioClassDescription);
AudioClassDescription descs[count];
status = AudioFormatGetProperty(kAudioFormatProperty_Encoders, sizeof(encoderSpecifier), &encoderSpecifier, &size, descs);
for (uint32_t i = 0; i < count; i++)
{
if ((type == descs[i].mSubType) && (manufacturer == descs[i].mManufacturer))
{
memcpy(&audioDesc, &descs[i], sizeof(audioDesc));
break;
}
}
return &audioDesc;
}
OSStatus inputDataProc(AudioConverterRef inConverter, UInt32 *ioNumberDataPackets, AudioBufferList *ioData, AudioStreamPacketDescription **outDataPacketDescription, void *inUserData) { // AudioConverterFillComplexBuffer calls this during encoding to pull input data, i.e. the raw PCM
    AudioBufferList bufferList = *(AudioBufferList*)inUserData;
    ioData->mBuffers[0].mNumberChannels = bufferList.mBuffers[0].mNumberChannels; // pass the input channel count through
    ioData->mBuffers[0].mData = bufferList.mBuffers[0].mData;
    ioData->mBuffers[0].mDataByteSize = bufferList.mBuffers[0].mDataByteSize;
    return noErr;
}
#endif
And that's it: one function does the whole job. Whenever you need AAC encoding, just call encoderAAC (the full code is above).
char szBuf[4096];
int nSize = sizeof(szBuf);
if ([self encoderAAC:sampleBuffer aacData:szBuf aacLen:&nSize] == YES)
{
// do something
}