(3) Audio Subsystem: AudioRecord.startRecording
In the previous article, "(2) Audio Subsystem: new AudioRecord()", we covered how the Audio system creates the AudioRecord object and the input stream, and how the RecordThread was created. Next, we continue with the implementation of startRecording in AudioRecord.
Function prototype:
public void startRecording() throws IllegalStateException
Purpose:
Starts recording.
Parameters:
None.
Return value:
None.
Throws:
IllegalStateException if the AudioRecord has not finished initializing.
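The documented contract above can be captured in a tiny plain-Java model (this is a sketch, not the Android SDK class; constant values are placeholders): calling startRecording() before initialization throws IllegalStateException, and a successful call flips the recording state.

```java
// A plain-Java model of the state guard that startRecording() documents:
// before initialization it throws IllegalStateException, afterwards the
// recording state transitions to RECORDING.
class AudioRecordModel {
    static final int STATE_UNINITIALIZED = 0;
    static final int STATE_INITIALIZED = 1;
    static final int RECORDSTATE_STOPPED = 1;
    static final int RECORDSTATE_RECORDING = 3;

    int mState = STATE_UNINITIALIZED;
    int mRecordingState = RECORDSTATE_STOPPED;

    void startRecording() {
        if (mState != STATE_INITIALIZED) {
            throw new IllegalStateException(
                    "startRecording() called on an uninitialized AudioRecord.");
        }
        mRecordingState = RECORDSTATE_RECORDING;
    }

    public static void main(String[] args) {
        AudioRecordModel m = new AudioRecordModel();
        try {
            m.startRecording();             // too early: must throw
        } catch (IllegalStateException e) {
            System.out.println("caught: uninitialized");
        }
        m.mState = STATE_INITIALIZED;       // what new AudioRecord() achieved
        m.startRecording();
        System.out.println(m.mRecordingState == RECORDSTATE_RECORDING);
    }
}
```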
Now let's step into the system and analyze the concrete implementation.
frameworks/base/media/java/android/media/AudioRecord.java
public void startRecording() throws IllegalStateException {
    if (mState != STATE_INITIALIZED) {
        throw new IllegalStateException("startRecording() called on an "
                + "uninitialized AudioRecord.");
    }

    // start recording
    synchronized(mRecordingStateLock) {
        if (native_start(MediaSyncEvent.SYNC_EVENT_NONE, 0) == SUCCESS) {
            handleFullVolumeRec(true);
            mRecordingState = RECORDSTATE_RECORDING;
        }
    }
}
First it checks whether initialization has completed; as shown in the previous article, mState is already STATE_INITIALIZED at this point, so we continue into the native_start function.
frameworks/base/core/jni/android_media_AudioRecord.cpp
static jint android_media_AudioRecord_start(JNIEnv *env, jobject thiz,
        jint event, jint triggerSession)
{
    sp<AudioRecord> lpRecorder = getAudioRecord(env, thiz);
    if (lpRecorder == NULL ) {
        jniThrowException(env, "java/lang/IllegalStateException", NULL);
        return (jint) AUDIO_JAVA_ERROR;
    }

    return nativeToJavaStatus(
            lpRecorder->start((AudioSystem::sync_event_t)event, triggerSession));
}
Continuing down: lpRecorder->start
frameworks\av\media\libmedia\AudioRecord.cpp
status_t AudioRecord::start(AudioSystem::sync_event_t event, int triggerSession)
{
    AutoMutex lock(mLock);

    if (mActive) {
        return NO_ERROR;
    }

    // reset current position as seen by client to 0
    mProxy->setEpoch(mProxy->getEpoch() - mProxy->getPosition());
    // force refresh of remaining frames by processAudioBuffer() as last
    // read before stop could be partial.
    mRefreshRemaining = true;

    mNewPosition = mProxy->getPosition() + mUpdatePeriod;
    int32_t flags = android_atomic_acquire_load(&mCblk->mFlags);

    status_t status = NO_ERROR;
    if (!(flags & CBLK_INVALID)) {
        ALOGV("mAudioRecord->start()");
        status = mAudioRecord->start(event, triggerSession);
        if (status == DEAD_OBJECT) {
            flags |= CBLK_INVALID;
        }
    }
    if (flags & CBLK_INVALID) {
        status = restoreRecord_l("start");
    }

    if (status != NO_ERROR) {
        ALOGE("start() status %d", status);
    } else {
        mActive = true;
        sp<AudioRecordThread> t = mAudioRecordThread;
        if (t != 0) {
            t->resume();
        } else {
            mPreviousPriority = getpriority(PRIO_PROCESS, 0);
            get_sched_policy(0, &mPreviousSchedulingGroup);
            androidSetThreadPriority(0, ANDROID_PRIORITY_AUDIO);
        }
    }

    return status;
}
The main work in this function is as follows:
1. Reset the write start position of the recording buffer as seen by the client (the buffer layout was covered in the first article);
2. Set mRefreshRemaining to true; per the comment, this forces processAudioBuffer() to refresh the remaining frames, since the last read before a stop may have been partial. Its role will become clearer later, so we won't dwell on it;
3. Load flags from mCblk->mFlags; here it is 0x0;
4. On this first call, mAudioRecord->start() is definitely taken;
5. If start fails, restoreRecord_l is called to re-create the input stream path; that function was analyzed in the previous article;
6. Call the AudioRecordThread's resume function.
Here we mainly analyze steps 4 and 6.
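Step 1 is worth a quick illustration. The trick in setEpoch(getEpoch() - getPosition()) is that the client-visible position is the raw shared-memory counter offset by the epoch, so shifting the epoch by the current position rebases the client's view to zero without touching the server counter. A small model (hypothetical names; it assumes the proxy adds the epoch to the raw count, which is what makes the rebasing come out to zero):

```java
// Models the epoch rebasing in AudioRecord::start(): client position =
// raw counter + epoch, so epoch -= position makes the client see 0 again.
class ProxyModel {
    long raw;    // frames counted by the shared-memory control block
    long epoch;  // client-side offset added to the raw count

    long getPosition()     { return raw + epoch; }
    long getEpoch()        { return epoch; }
    void setEpoch(long e)  { epoch = e; }

    public static void main(String[] args) {
        ProxyModel p = new ProxyModel();
        p.raw = 12345;  // frames captured before this start()
        // the line from AudioRecord::start():
        p.setEpoch(p.getEpoch() - p.getPosition());
        System.out.println(p.getPosition()); // client view rebased to 0
    }
}
```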
First, step 4 of AudioRecord.cpp::start(): mAudioRecord->start()
mAudioRecord is of type sp<IAudioRecord>, i.e. it is the Bp (proxy) side of a Binder interface, so we need to find the corresponding BnAudioRecord. The Bn-side definition can be found in AudioFlinger.h.
frameworks\av\services\audioflinger\AudioFlinger.h
// server side of the client's IAudioRecord
class RecordHandle : public android::BnAudioRecord {
public:
    RecordHandle(const sp<RecordThread::RecordTrack>& recordTrack);
    virtual ~RecordHandle();
    virtual status_t start(int /*AudioSystem::sync_event_t*/ event, int triggerSession);
    virtual void stop();
    virtual status_t onTransact(
            uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags);
private:
    const sp<RecordThread::RecordTrack> mRecordTrack;

    // for use from destructor
    void stop_nonvirtual();
};
So we look for where the RecordHandle class is implemented; note in passing that besides start there is also a stop method.
frameworks\av\services\audioflinger\Tracks.cpp
status_t AudioFlinger::RecordHandle::start(int /*AudioSystem::sync_event_t*/ event,
        int triggerSession) {
    return mRecordTrack->start((AudioSystem::sync_event_t)event, triggerSession);
}
In AudioFlinger.h we saw const sp<RecordThread::RecordTrack> mRecordTrack; it is also implemented in Tracks.cpp, so we keep going down.
status_t AudioFlinger::RecordThread::RecordTrack::start(AudioSystem::sync_event_t event,
        int triggerSession)
{
    sp<ThreadBase> thread = mThread.promote();
    if (thread != 0) {
        RecordThread *recordThread = (RecordThread *)thread.get();
        return recordThread->start(this, event, triggerSession);
    } else {
        return BAD_VALUE;
    }
}
The thread here is the Thread object that was passed to createRecordTrack_l from AudioRecord.cpp::openRecord_l(). Digging a little deeper: thread->createRecordTrack_l calls new RecordTrack(this, ...), and RecordTrack inherits from TrackBase, whose constructor TrackBase(ThreadBase *thread, ...) : RefBase(), mThread(thread), ... {} (also implemented in Tracks.cpp) stores the thread. So mThread here is the RecordThread.
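The mThread.promote() call at the top of RecordTrack::start is the standard wp<> pattern: the track holds its owning thread only weakly and must upgrade the reference before use, failing with BAD_VALUE if the thread is already gone. In Java the same guard looks roughly like this (a plain model with WeakReference standing in for wp<>):

```java
import java.lang.ref.WeakReference;

// Models RecordTrack::start(): hold the owning thread weakly, promote it
// before use, and fail with BAD_VALUE if it has already been destroyed.
class TrackModel {
    static final int NO_ERROR = 0;
    static final int BAD_VALUE = -22;

    final WeakReference<Object> mThread;

    TrackModel(Object thread) { mThread = new WeakReference<>(thread); }

    int start() {
        Object thread = mThread.get();      // wp<>::promote()
        if (thread != null) {
            return NO_ERROR;                // would call recordThread->start(...)
        }
        return BAD_VALUE;
    }

    public static void main(String[] args) {
        Object thread = new Object();
        TrackModel t = new TrackModel(thread);
        System.out.println(t.start() == NO_ERROR);   // thread still alive
        t.mThread.clear();                           // simulate thread destruction
        System.out.println(t.start() == BAD_VALUE);  // promote() fails
    }
}
```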
So this goes on to call RecordThread's start method.
frameworks\av\services\audioflinger\Threads.cpp
status_t AudioFlinger::RecordThread::start(RecordThread::RecordTrack* recordTrack,
                                           AudioSystem::sync_event_t event,
                                           int triggerSession)
{
    sp<ThreadBase> strongMe = this;
    status_t status = NO_ERROR;

    if (event == AudioSystem::SYNC_EVENT_NONE) {
        recordTrack->clearSyncStartEvent();
    } else if (event != AudioSystem::SYNC_EVENT_SAME) {
        recordTrack->mSyncStartEvent = mAudioFlinger->createSyncEvent(event,
                                       triggerSession,
                                       recordTrack->sessionId(),
                                       syncStartEventCallback,
                                       recordTrack);
        // Sync event can be cancelled by the trigger session if the track is not in a
        // compatible state in which case we start record immediately
        if (recordTrack->mSyncStartEvent->isCancelled()) {
            recordTrack->clearSyncStartEvent();
        } else {
            // do not wait for the event for more than AudioSystem::kSyncRecordStartTimeOutMs
            recordTrack->mFramesToDrop = -
                    ((AudioSystem::kSyncRecordStartTimeOutMs * recordTrack->mSampleRate) / 1000);
        }
    }

    {
        // This section is a rendezvous between binder thread executing start() and RecordThread
        AutoMutex lock(mLock);
        if (mActiveTracks.indexOf(recordTrack) >= 0) {
            if (recordTrack->mState == TrackBase::PAUSING) {
                ALOGV("active record track PAUSING -> ACTIVE");
                recordTrack->mState = TrackBase::ACTIVE;
            } else {
                ALOGV("active record track state %d", recordTrack->mState);
            }
            return status;
        }

        // TODO consider other ways of handling this, such as changing the state to :STARTING and
        //      adding the track to mActiveTracks after returning from AudioSystem::startInput(),
        //      or using a separate command thread
        recordTrack->mState = TrackBase::STARTING_1;
        mActiveTracks.add(recordTrack);
        mActiveTracksGen++;
        status_t status = NO_ERROR;
        if (recordTrack->isExternalTrack()) {
            mLock.unlock();
            status = AudioSystem::startInput(mId, (audio_session_t)recordTrack->sessionId());
            mLock.lock();
            // FIXME should verify that recordTrack is still in mActiveTracks
            if (status != NO_ERROR) {
                mActiveTracks.remove(recordTrack);
                mActiveTracksGen++;
                recordTrack->clearSyncStartEvent();
                ALOGV("RecordThread::start error %d", status);
                return status;
            }
        }
        // Catch up with current buffer indices if thread is already running.
        // This is what makes a new client discard all buffered data.  If the track's mRsmpInFront
        // was initialized to some value closer to the thread's mRsmpInFront, then the track could
        // see previously buffered data before it called start(), but with greater risk of overrun.
        recordTrack->mRsmpInFront = mRsmpInRear;
        recordTrack->mRsmpInUnrel = 0;
        // FIXME why reset?
        if (recordTrack->mResampler != NULL) {
            recordTrack->mResampler->reset();
        }
        recordTrack->mState = TrackBase::STARTING_2;
        // signal thread to start
        mWaitWorkCV.broadcast();
        if (mActiveTracks.indexOf(recordTrack) < 0) {
            ALOGV("Record failed to start");
            status = BAD_VALUE;
            goto startError;
        }
        return status;
    }

startError:
    if (recordTrack->isExternalTrack()) {
        AudioSystem::stopInput(mId, (audio_session_t)recordTrack->sessionId());
    }
    recordTrack->clearSyncStartEvent();
    // FIXME I wonder why we do not reset the state here?
    return status;
}
The main work in this function is as follows:
1. Check the event value passed in; from AudioRecord.java we know it is always SYNC_EVENT_NONE, so the SyncStartEvent is cleared here;
2. Check whether the given recordTrack is already in the mActiveTracks set. On this first call it cannot be; if it were (i.e. recording was already started for some reason), then a PAUSING track is promoted to ACTIVE and the function returns directly;
3. Set the recordTrack state to STARTING_1 and add it to mActiveTracks; an indexOf lookup from now on would find it;
4. Check whether the recordTrack is an external track; isExternalTrack is defined as follows:
bool isTimedTrack() const { return (mType == TYPE_TIMED); }
bool isOutputTrack() const { return (mType == TYPE_OUTPUT); }
bool isPatchTrack() const { return (mType == TYPE_PATCH); }
bool isExternalTrack() const { return !isOutputTrack() && !isPatchTrack(); }
Recall that when new RecordTrack was created, the mType passed in was TrackBase::TYPE_DEFAULT, so this recordTrack is an external track;
5. Since it is an external track, AudioSystem::startInput is called to start capturing. The sessionId is the one from the previous article. As for mId: in AudioSystem::startInput its type is audio_io_handle_t. In the previous article this io handle was obtained through AudioSystem::getInputForAttr, after which checkRecordThread_l(input) returned a RecordThread object. Looking at the RecordThread class (class RecordThread : public ThreadBase) and then its ThreadBase base class, whose constructor is implemented in Threads.cpp, we find that input is assigned to mId. In other words, the arguments to AudioSystem::startInput are exactly the input stream created earlier plus the generated sessionId.
AudioFlinger::ThreadBase::ThreadBase(const sp<AudioFlinger>& audioFlinger,
        audio_io_handle_t id, audio_devices_t outDevice, audio_devices_t inDevice, type_t type)
    :   Thread(false /*canCallJava*/),
        mType(type),
        mAudioFlinger(audioFlinger),
        // mSampleRate, mFrameCount, mChannelMask, mChannelCount, mFrameSize, mFormat, mBufferSize
        // are set by PlaybackThread::readOutputParameters_l() or
        // RecordThread::readInputParameters_l()
        //FIXME: mStandby should be true here. Is this some kind of hack?
        mStandby(false), mOutDevice(outDevice), mInDevice(inDevice),
        mAudioSource(AUDIO_SOURCE_DEFAULT), mId(id),
        // mName will be set by concrete (non-virtual) subclass
        mDeathRecipient(new PMDeathRecipient(this))
{
}
6. The track's mRsmpInFront index is caught up to the thread's mRsmpInRear, which is what makes a new client discard any previously buffered data, and the resampler (if one exists) is reset; since recording has not actually started yet, there is nothing buffered to discard here;
7. Set the recordTrack state to STARTING_2 and call mWaitWorkCV.broadcast() to wake all waiting threads. Note (a small spoiler): by the time we are inside AudioSystem::startInput, AudioFlinger::RecordThread is already running, so the broadcast actually has no effect on the RecordThread. Also pay special attention to the fact that recordTrack->mState is updated here to STARTING_2, whereas it was STARTING_1 when the track was added to mActiveTracks. This is an interesting detail; we'll flag it here and reveal the answer when we analyze the RecordThread;
8. Finally, check whether the recordTrack is still in the mActiveTracks set; if not, start failed, and stopInput etc. must be called.
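The track-type predicates from step 4 are simple exclusions, and a direct Java transcription (the enum values stand in for the TrackBase constants) shows why a TYPE_DEFAULT track counts as external:

```java
// Transcribes TrackBase's type predicates: a track is "external" unless it
// is an output track or a patch track; the TYPE_DEFAULT track created for a
// normal recording client is therefore external.
class TrackTypeModel {
    enum Type { TYPE_DEFAULT, TYPE_TIMED, TYPE_OUTPUT, TYPE_PATCH }

    final Type mType;
    TrackTypeModel(Type t) { mType = t; }

    boolean isOutputTrack()   { return mType == Type.TYPE_OUTPUT; }
    boolean isPatchTrack()    { return mType == Type.TYPE_PATCH; }
    boolean isExternalTrack() { return !isOutputTrack() && !isPatchTrack(); }

    public static void main(String[] args) {
        System.out.println(new TrackTypeModel(Type.TYPE_DEFAULT).isExternalTrack()); // external
        System.out.println(new TrackTypeModel(Type.TYPE_PATCH).isExternalTrack());   // not
    }
}
```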
Next, let's continue with AudioSystem::startInput:
frameworks\av\media\libmedia\AudioSystem.cpp
status_t AudioSystem::startInput(audio_io_handle_t input, audio_session_t session)
{
    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
    if (aps == 0) return PERMISSION_DENIED;
    return aps->startInput(input, session);
}
This goes on to call AudioPolicyService's startInput method:
frameworks\av\services\audiopolicy\AudioPolicyInterfaceImpl.cpp
status_t AudioPolicyService::startInput(audio_io_handle_t input, audio_session_t session)
{
    if (mAudioPolicyManager == NULL) {
        return NO_INIT;
    }
    Mutex::Autolock _l(mLock);

    return mAudioPolicyManager->startInput(input, session);
}
Which forwards again:
frameworks\av\services\audiopolicy\AudioPolicyManager.cpp
status_t AudioPolicyManager::startInput(audio_io_handle_t input, audio_session_t session)
{
    ssize_t index = mInputs.indexOfKey(input);
    if (index < 0) {
        ALOGW("startInput() unknown input %d", input);
        return BAD_VALUE;
    }
    sp<AudioInputDescriptor> inputDesc = mInputs.valueAt(index);

    index = inputDesc->mSessions.indexOf(session);
    if (index < 0) {
        ALOGW("startInput() unknown session %d on input %d", session, input);
        return BAD_VALUE;
    }

    // virtual input devices are compatible with other input devices
    if (!isVirtualInputDevice(inputDesc->mDevice)) {

        // for a non-virtual input device, check if there is another (non-virtual) active input
        audio_io_handle_t activeInput = getActiveInput();
        if (activeInput != 0 && activeInput != input) {

            // If the already active input uses AUDIO_SOURCE_HOTWORD then it is closed,
            // otherwise the active input continues and the new input cannot be started.
            sp<AudioInputDescriptor> activeDesc = mInputs.valueFor(activeInput);
            if (activeDesc->mInputSource == AUDIO_SOURCE_HOTWORD) {
                ALOGW("startInput(%d) preempting low-priority input %d", input, activeInput);
                stopInput(activeInput, activeDesc->mSessions.itemAt(0));
                releaseInput(activeInput, activeDesc->mSessions.itemAt(0));
            } else {
                ALOGE("startInput(%d) failed: other input %d already started", input, activeInput);
                return INVALID_OPERATION;
            }
        }
    }

    if (inputDesc->mRefCount == 0) {
        if (activeInputsCount() == 0) {
            SoundTrigger::setCaptureState(true);
        }
        setInputDevice(input, getNewInputDevice(input), true /* force */);

        // automatically enable the remote submix output when input is started if not
        // used by a policy mix of type MIX_TYPE_RECORDERS
        // For remote submix (a virtual device), we open only one input per capture request.
        if (audio_is_remote_submix_device(inputDesc->mDevice)) {
            ALOGV("audio_is_remote_submix_device(inputDesc->mDevice)");
            String8 address = String8("");
            if (inputDesc->mPolicyMix == NULL) {
                address = String8("0");
            } else if (inputDesc->mPolicyMix->mMixType == MIX_TYPE_PLAYERS) {
                address = inputDesc->mPolicyMix->mRegistrationId;
            }
            if (address != "") {
                setDeviceConnectionStateInt(AUDIO_DEVICE_OUT_REMOTE_SUBMIX,
                        AUDIO_POLICY_DEVICE_STATE_AVAILABLE,
                        address);
            }
        }
    }

    ALOGV("AudioPolicyManager::startInput() input source = %d", inputDesc->mInputSource);

    inputDesc->mRefCount++;
    return NO_ERROR;
}
The main work in this function is as follows:
1. Find the input's position in the mInputs collection and fetch its inputDesc;
2. Check whether the input device is a virtual device; if not, check whether another (non-virtual) input is already active. On this first call there is none;
3. Since this is the first call, SoundTrigger::setCaptureState(true) is invoked; that is related to voice recognition, so we won't go into it here;
4. Call setInputDevice; getNewInputDevice(input) returns the audio_devices_t device for this input, which in the previous article was obtained in AudioPolicyManager::getInputForAttr via getDeviceAndMixForInputSource, namely AUDIO_DEVICE_IN_BUILTIN_MIC, the built-in mic. That function also updates inputDesc->mDevice at the end;
5. Check for a remote_submix device and handle it accordingly;
6. Increment inputDesc's mRefCount.
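Step 2's arbitration policy (at most one non-virtual input active at a time, with AUDIO_SOURCE_HOTWORD as the only preemptable source) can be sketched as follows. This is a plain model, not the AudioPolicyManager API; the error constants are placeholders:

```java
// Models AudioPolicyManager::startInput()'s arbitration: if another
// (non-virtual) input is active, it is either preempted (HOTWORD) or the
// new request is rejected with INVALID_OPERATION.
class InputArbiterModel {
    static final int NO_ERROR = 0;
    static final int INVALID_OPERATION = -38;

    Integer activeInput;       // null when no input is active
    boolean activeIsHotword;

    int startInput(int input) {
        if (activeInput != null && activeInput != input) {
            if (activeIsHotword) {
                activeInput = null;        // stopInput()/releaseInput() on the old one
            } else {
                return INVALID_OPERATION;  // other input already started
            }
        }
        activeInput = input;
        activeIsHotword = false;
        return NO_ERROR;
    }

    public static void main(String[] args) {
        InputArbiterModel m = new InputArbiterModel();
        System.out.println(m.startInput(1) == NO_ERROR);          // first input starts
        System.out.println(m.startInput(2) == INVALID_OPERATION); // second is rejected
        m.activeIsHotword = true;                                 // pretend input 1 is HOTWORD
        System.out.println(m.startInput(2) == NO_ERROR);          // now it preempts
    }
}
```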
Let's continue with the setInputDevice function:
status_t AudioPolicyManager::setInputDevice(audio_io_handle_t input,
                                            audio_devices_t device,
                                            bool force,
                                            audio_patch_handle_t *patchHandle)
{
    status_t status = NO_ERROR;

    sp<AudioInputDescriptor> inputDesc = mInputs.valueFor(input);
    if ((device != AUDIO_DEVICE_NONE) && ((device != inputDesc->mDevice) || force)) {
        inputDesc->mDevice = device;

        DeviceVector deviceList = mAvailableInputDevices.getDevicesFromType(device);
        if (!deviceList.isEmpty()) {
            struct audio_patch patch;
            inputDesc->toAudioPortConfig(&patch.sinks[0]);
            // AUDIO_SOURCE_HOTWORD is for internal use only:
            // handled as AUDIO_SOURCE_VOICE_RECOGNITION by the audio HAL
            if (patch.sinks[0].ext.mix.usecase.source == AUDIO_SOURCE_HOTWORD &&
                    !inputDesc->mIsSoundTrigger) {
                patch.sinks[0].ext.mix.usecase.source = AUDIO_SOURCE_VOICE_RECOGNITION;
            }
            patch.num_sinks = 1;
            //only one input device for now
            deviceList.itemAt(0)->toAudioPortConfig(&patch.sources[0]);
            patch.num_sources = 1;
            ssize_t index;
            if (patchHandle && *patchHandle != AUDIO_PATCH_HANDLE_NONE) {
                index = mAudioPatches.indexOfKey(*patchHandle);
            } else {
                index = mAudioPatches.indexOfKey(inputDesc->mPatchHandle);
            }
            sp< AudioPatch> patchDesc;
            audio_patch_handle_t afPatchHandle = AUDIO_PATCH_HANDLE_NONE;
            if (index >= 0) {
                patchDesc = mAudioPatches.valueAt(index);
                afPatchHandle = patchDesc->mAfPatchHandle;
            }

            status_t status = mpClientInterface->createAudioPatch(&patch,
                                                                  &afPatchHandle,
                                                                  0);
            if (status == NO_ERROR) {
                if (index < 0) {
                    patchDesc = new AudioPatch((audio_patch_handle_t)nextUniqueId(),
                                               &patch, mUidCached);
                    addAudioPatch(patchDesc->mHandle, patchDesc);
                } else {
                    patchDesc->mPatch = patch;
                }
                patchDesc->mAfPatchHandle = afPatchHandle;
                patchDesc->mUid = mUidCached;
                if (patchHandle) {
                    *patchHandle = patchDesc->mHandle;
                }
                inputDesc->mPatchHandle = patchDesc->mHandle;
                nextAudioPortGeneration();
                mpClientInterface->onAudioPatchListUpdate();
            }
        }
    }
    return status;
}
The main work in this function is as follows:
1. At this point device and inputDesc->mDevice are both already AUDIO_DEVICE_IN_BUILTIN_MIC, but force is true, so we enter the branch anyway;
2. getDevicesFromType collects all devices of this type from mAvailableInputDevices; so far we have only added a single device to that collection;
3. Here let's look at struct audio_patch, defined in system\core\include\system\audio.h. The patch's source and sinks are filled in via toAudioPortConfig; note that mId (the audio_io_handle_t) is written into the patch as the stream handle, and the patch ends up carrying the InputSource, sample_rate, channel_mask, format, hw_module and so on — almost everything gets stored in it;
struct audio_patch {
    audio_patch_handle_t id;            /* patch unique ID */
    unsigned int      num_sources;      /* number of sources in following array */
    struct audio_port_config sources[AUDIO_PATCH_PORTS_MAX];
    unsigned int      num_sinks;        /* number of sinks in following array */
    struct audio_port_config sinks[AUDIO_PATCH_PORTS_MAX];
};

struct audio_port_config {
    audio_port_handle_t      id;           /* port unique ID */
    audio_port_role_t        role;         /* sink or source */
    audio_port_type_t        type;         /* device, mix ... */
    unsigned int             config_mask;  /* e.g AUDIO_PORT_CONFIG_ALL */
    unsigned int             sample_rate;  /* sampling rate in Hz */
    audio_channel_mask_t     channel_mask; /* channel mask if applicable */
    audio_format_t           format;       /* format if applicable */
    struct audio_gain_config gain;         /* gain to apply if applicable */
    union {
        struct audio_port_config_device_ext  device;  /* device specific info */
        struct audio_port_config_mix_ext     mix;     /* mix specific info */
        struct audio_port_config_session_ext session; /* session specific info */
    } ext;
};

struct audio_port_config_device_ext {
    audio_module_handle_t hw_module;  /* module the device is attached to */
    audio_devices_t       type;       /* device type (e.g AUDIO_DEVICE_OUT_SPEAKER) */
    char                  address[AUDIO_DEVICE_MAX_ADDRESS_LEN]; /* device address. "" if N/A */
};

struct audio_port_config_mix_ext {
    audio_module_handle_t hw_module;  /* module the stream is attached to */
    audio_io_handle_t     handle;     /* I/O handle of the input/output stream */
    union {
        //TODO: change use case for output streams: use strategy and mixer attributes
        audio_stream_type_t stream;
        audio_source_t      source;
    } usecase;
};
4. Call mpClientInterface->createAudioPatch to establish the audio path;
5. Update the patchDesc's fields;
6. If createAudioPatch returned NO_ERROR, call mpClientInterface->onAudioPatchListUpdate to refresh the AudioPatch list.
Here we focus on steps 4 and 6.
First, step 4 of AudioPolicyManager::setInputDevice in AudioPolicyManager.cpp: creating the audio path.
frameworks\av\services\audiopolicy\AudioPolicyClientImpl.cpp
status_t AudioPolicyService::AudioPolicyClient::createAudioPatch(const struct audio_patch *patch,
                                                                 audio_patch_handle_t *handle,
                                                                 int delayMs)
{
    return mAudioPolicyService->clientCreateAudioPatch(patch, handle, delayMs);
}
Continuing down:
frameworks\av\services\audiopolicy\AudioPolicyService.cpp
status_t AudioPolicyService::clientCreateAudioPatch(const struct audio_patch *patch,
                                                    audio_patch_handle_t *handle,
                                                    int delayMs)
{
    return mAudioCommandThread->createAudioPatchCommand(patch, handle, delayMs);
}
Still in the same file:
status_t AudioPolicyService::AudioCommandThread::createAudioPatchCommand(
                                                const struct audio_patch *patch,
                                                audio_patch_handle_t *handle,
                                                int delayMs)
{
    status_t status = NO_ERROR;

    sp<AudioCommand> command = new AudioCommand();
    command->mCommand = CREATE_AUDIO_PATCH;
    CreateAudioPatchData *data = new CreateAudioPatchData();
    data->mPatch = *patch;
    data->mHandle = *handle;
    command->mParam = data;
    command->mWaitStatus = true;
    ALOGV("AudioCommandThread() adding create patch delay %d", delayMs);
    status = sendCommand(command, delayMs);
    if (status == NO_ERROR) {
        *handle = data->mHandle;
    }
    return status;
}
The rest just wraps the audio_patch into a command and adds it to the AudioCommands queue, so let's go straight to how threadLoop processes it.
bool AudioPolicyService::AudioCommandThread::threadLoop()
{
    nsecs_t  waitTime = INT64_MAX;

    mLock.lock();
    while (!exitPending())
    {
        sp<AudioPolicyService> svc;
        while (!mAudioCommands.isEmpty() && !exitPending()) {
            nsecs_t curTime = systemTime();
            // commands are sorted by increasing time stamp: execute them from index 0 and up
            if (mAudioCommands[0]->mTime <= curTime) {
                sp<AudioCommand> command = mAudioCommands[0];
                mAudioCommands.removeAt(0);
                mLastCommand = command;

                switch (command->mCommand) {
                case START_TONE: {
                    mLock.unlock();
                    ToneData *data = (ToneData *)command->mParam.get();
                    ALOGV("AudioCommandThread() processing start tone %d on stream %d",
                            data->mType, data->mStream);
                    delete mpToneGenerator;
                    mpToneGenerator = new ToneGenerator(data->mStream, 1.0);
                    mpToneGenerator->startTone(data->mType);
                    mLock.lock();
                    }break;
                case STOP_TONE: {
                    mLock.unlock();
                    ALOGV("AudioCommandThread() processing stop tone");
                    if (mpToneGenerator != NULL) {
                        mpToneGenerator->stopTone();
                        delete mpToneGenerator;
                        mpToneGenerator = NULL;
                    }
                    mLock.lock();
                    }break;
                case SET_VOLUME: {
                    VolumeData *data = (VolumeData *)command->mParam.get();
                    ALOGV("AudioCommandThread() processing set volume stream %d, \
                            volume %f, output %d", data->mStream, data->mVolume, data->mIO);
                    command->mStatus = AudioSystem::setStreamVolume(data->mStream,
                                                                    data->mVolume,
                                                                    data->mIO);
                    }break;
                case SET_PARAMETERS: {
                    ParametersData *data = (ParametersData *)command->mParam.get();
                    ALOGV("AudioCommandThread() processing set parameters string %s, io %d",
                            data->mKeyValuePairs.string(), data->mIO);
                    command->mStatus = AudioSystem::setParameters(data->mIO, data->mKeyValuePairs);
                    }break;
                case SET_VOICE_VOLUME: {
                    VoiceVolumeData *data = (VoiceVolumeData *)command->mParam.get();
                    ALOGV("AudioCommandThread() processing set voice volume volume %f",
                            data->mVolume);
                    command->mStatus = AudioSystem::setVoiceVolume(data->mVolume);
                    }break;
                case STOP_OUTPUT: {
                    StopOutputData *data = (StopOutputData *)command->mParam.get();
                    ALOGV("AudioCommandThread() processing stop output %d", data->mIO);
                    svc = mService.promote();
                    if (svc == 0) {
                        break;
                    }
                    mLock.unlock();
                    svc->doStopOutput(data->mIO, data->mStream, data->mSession);
                    mLock.lock();
                    }break;
                case RELEASE_OUTPUT: {
                    ReleaseOutputData *data = (ReleaseOutputData *)command->mParam.get();
                    ALOGV("AudioCommandThread() processing release output %d", data->mIO);
                    svc = mService.promote();
                    if (svc == 0) {
                        break;
                    }
                    mLock.unlock();
                    svc->doReleaseOutput(data->mIO, data->mStream, data->mSession);
                    mLock.lock();
                    }break;
                case CREATE_AUDIO_PATCH: {
                    CreateAudioPatchData *data = (CreateAudioPatchData *)command->mParam.get();
                    ALOGV("AudioCommandThread() processing create audio patch");
                    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
                    if (af == 0) {
                        command->mStatus = PERMISSION_DENIED;
                    } else {
                        command->mStatus = af->createAudioPatch(&data->mPatch, &data->mHandle);
                    }
                    } break;
                case RELEASE_AUDIO_PATCH: {
                    ReleaseAudioPatchData *data = (ReleaseAudioPatchData *)command->mParam.get();
                    ALOGV("AudioCommandThread() processing release audio patch");
                    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
                    if (af == 0) {
                        command->mStatus = PERMISSION_DENIED;
                    } else {
                        command->mStatus = af->releaseAudioPatch(data->mHandle);
                    }
                    } break;
                case UPDATE_AUDIOPORT_LIST: {
                    ALOGV("AudioCommandThread() processing update audio port list");
                    svc = mService.promote();
                    if (svc == 0) {
                        break;
                    }
                    mLock.unlock();
                    svc->doOnAudioPortListUpdate();
                    mLock.lock();
                    }break;
                case UPDATE_AUDIOPATCH_LIST: {
                    ALOGV("AudioCommandThread() processing update audio patch list");
                    svc = mService.promote();
                    if (svc == 0) {
                        break;
                    }
                    mLock.unlock();
                    svc->doOnAudioPatchListUpdate();
                    mLock.lock();
                    }break;
                case SET_AUDIOPORT_CONFIG: {
                    SetAudioPortConfigData *data = (SetAudioPortConfigData *)command->mParam.get();
                    ALOGV("AudioCommandThread() processing set port config");
                    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
                    if (af == 0) {
                        command->mStatus = PERMISSION_DENIED;
                    } else {
                        command->mStatus = af->setAudioPortConfig(&data->mConfig);
                    }
                    } break;
                default:
                    ALOGW("AudioCommandThread() unknown command %d", command->mCommand);
                }
                {
                    Mutex::Autolock _l(command->mLock);
                    if (command->mWaitStatus) {
                        command->mWaitStatus = false;
                        command->mCond.signal();
                    }
                }
                waitTime = INT64_MAX;
            } else {
                waitTime = mAudioCommands[0]->mTime - curTime;
                break;
            }
        }
        // release mLock before releasing strong reference on the service as
        // AudioPolicyService destructor calls AudioCommandThread::exit() which acquires mLock.
        mLock.unlock();
        svc.clear();
        mLock.lock();
        if (!exitPending() && mAudioCommands.isEmpty()) {
            // release delayed commands wake lock
            release_wake_lock(mName.string());
            ALOGV("AudioCommandThread() going to sleep");
            mWaitWorkCV.waitRelative(mLock, waitTime);
            ALOGV("AudioCommandThread() waking up");
        }
    }
    // release delayed commands wake lock before quitting
    if (!mAudioCommands.isEmpty()) {
        release_wake_lock(mName.string());
    }
    mLock.unlock();
    return false;
}
Here, look directly at the CREATE_AUDIO_PATCH branch, which calls af->createAudioPatch on the AudioFlinger side; note that the same loop also contains the UPDATE_AUDIOPATCH_LIST branch we will need later.
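The AudioCommandThread pattern itself (callers enqueue a command, signal the worker, then block on the command's own condition until the worker fills in mStatus) is worth a small self-contained model, with Java threads standing in for the binder caller and the command thread:

```java
import java.util.ArrayDeque;

// Models AudioPolicyService::AudioCommandThread: commands carry their own
// monitor; the caller enqueues and waits on the command, the worker
// dequeues, executes, stores the status, and signals the waiter.
class CommandThreadModel {
    static class Command {
        final String what;
        Integer status;                    // filled in by the worker
        Command(String what) { this.what = what; }
    }

    private final ArrayDeque<Command> queue = new ArrayDeque<>();

    synchronized void send(Command c) {
        queue.add(c);
        notifyAll();                       // mWaitWorkCV.signal()
    }

    synchronized Command take() throws InterruptedException {
        while (queue.isEmpty()) wait();
        return queue.poll();
    }

    public static void main(String[] args) throws Exception {
        CommandThreadModel model = new CommandThreadModel();
        Thread worker = new Thread(() -> {
            try {
                Command c = model.take();
                synchronized (c) {         // execute, then wake the sender
                    c.status = 0;          // e.g. af->createAudioPatch(...) result
                    c.notifyAll();
                }
            } catch (InterruptedException ignored) { }
        });
        worker.start();

        Command cmd = new Command("CREATE_AUDIO_PATCH");
        synchronized (cmd) {
            model.send(cmd);
            while (cmd.status == null) cmd.wait();  // command->mCond.waitRelative(...)
        }
        worker.join();
        System.out.println(cmd.status);
    }
}
```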
frameworks\av\services\audioflinger\PatchPanel.cpp
status_t AudioFlinger::createAudioPatch(const struct audio_patch *patch,
                                        audio_patch_handle_t *handle)
{
    Mutex::Autolock _l(mLock);
    if (mPatchPanel != 0) {
        return mPatchPanel->createAudioPatch(patch, handle);
    }
    return NO_INIT;
}
Let's keep going (honestly, I'm getting tired of all this back-and-forth...)
status_t AudioFlinger::PatchPanel::createAudioPatch(const struct audio_patch *patch,
                                                    audio_patch_handle_t *handle)
{
    ALOGV("createAudioPatch() num_sources %d num_sinks %d handle %d",
          patch->num_sources, patch->num_sinks, *handle);
    status_t status = NO_ERROR;
    audio_patch_handle_t halHandle = AUDIO_PATCH_HANDLE_NONE;
    sp<AudioFlinger> audioflinger = mAudioFlinger.promote();
    if (audioflinger == 0) {
        return NO_INIT;
    }

    if (handle == NULL || patch == NULL) {
        return BAD_VALUE;
    }
    if (patch->num_sources == 0 || patch->num_sources > AUDIO_PATCH_PORTS_MAX ||
            patch->num_sinks == 0 || patch->num_sinks > AUDIO_PATCH_PORTS_MAX) {
        return BAD_VALUE;
    }
    // limit number of sources to 1 for now or 2 sources for special cross hw module case.
    // only the audio policy manager can request a patch creation with 2 sources.
    if (patch->num_sources > 2) {
        return INVALID_OPERATION;
    }

    if (*handle != AUDIO_PATCH_HANDLE_NONE) {
        for (size_t index = 0; *handle != 0 && index < mPatches.size(); index++) {
            if (*handle == mPatches[index]->mHandle) {
                ALOGV("createAudioPatch() removing patch handle %d", *handle);
                halHandle = mPatches[index]->mHalHandle;
                Patch *removedPatch = mPatches[index];
                mPatches.removeAt(index);
                delete removedPatch;
                break;
            }
        }
    }

    Patch *newPatch = new Patch(patch);

    switch (patch->sources[0].type) {
        case AUDIO_PORT_TYPE_DEVICE: {
            audio_module_handle_t srcModule = patch->sources[0].ext.device.hw_module;
            ssize_t index = audioflinger->mAudioHwDevs.indexOfKey(srcModule);
            if (index < 0) {
                ALOGW("createAudioPatch() bad src hw module %d", srcModule);
                status = BAD_VALUE;
                goto exit;
            }
            AudioHwDevice *audioHwDevice = audioflinger->mAudioHwDevs.valueAt(index);
            for (unsigned int i = 0; i < patch->num_sinks; i++) {
                // support only one sink if connection to a mix or across HW modules
                if ((patch->sinks[i].type == AUDIO_PORT_TYPE_MIX ||
                        patch->sinks[i].ext.mix.hw_module != srcModule) &&
                        patch->num_sinks > 1) {
                    status = INVALID_OPERATION;
                    goto exit;
                }
                // reject connection to different sink types
                if (patch->sinks[i].type != patch->sinks[0].type) {
                    ALOGW("createAudioPatch() different sink types in same patch not supported");
                    status = BAD_VALUE;
                    goto exit;
                }
                // limit to connections between devices and input streams for HAL before 3.0
                if (patch->sinks[i].ext.mix.hw_module == srcModule &&
                        (audioHwDevice->version() < AUDIO_DEVICE_API_VERSION_3_0) &&
                        (patch->sinks[i].type != AUDIO_PORT_TYPE_MIX)) {
                    ALOGW("createAudioPatch() invalid sink type %d for device source",
                          patch->sinks[i].type);
                    status = BAD_VALUE;
                    goto exit;
                }
            }

            if (patch->sinks[0].ext.device.hw_module != srcModule) {
                // limit to device to device connection if not on same hw module
                if (patch->sinks[0].type != AUDIO_PORT_TYPE_DEVICE) {
                    ALOGW("createAudioPatch() invalid sink type for cross hw module");
                    status = INVALID_OPERATION;
                    goto exit;
                }
                // special case num sources == 2 -=> reuse an exiting output mix to connect to the
                // sink
                if (patch->num_sources == 2) {
                    if (patch->sources[1].type != AUDIO_PORT_TYPE_MIX ||
                            patch->sinks[0].ext.device.hw_module !=
                                    patch->sources[1].ext.mix.hw_module) {
                        ALOGW("createAudioPatch() invalid source combination");
                        status = INVALID_OPERATION;
                        goto exit;
                    }
                    sp<ThreadBase> thread =
                            audioflinger->checkPlaybackThread_l(patch->sources[1].ext.mix.handle);
                    newPatch->mPlaybackThread = (MixerThread *)thread.get();
                    if (thread == 0) {
                        ALOGW("createAudioPatch() cannot get playback thread");
                        status = INVALID_OPERATION;
                        goto exit;
                    }
                } else {
                    audio_config_t config = AUDIO_CONFIG_INITIALIZER;
                    audio_devices_t device = patch->sinks[0].ext.device.type;
                    String8 address = String8(patch->sinks[0].ext.device.address);
                    audio_io_handle_t output = AUDIO_IO_HANDLE_NONE;
                    newPatch->mPlaybackThread = audioflinger->openOutput_l(
                                                             patch->sinks[0].ext.device.hw_module,
                                                             &output,
                                                             &config,
                                                             device,
                                                             address,
                                                             AUDIO_OUTPUT_FLAG_NONE);
                    ALOGV("audioflinger->openOutput_l() returned %p",
                                          newPatch->mPlaybackThread.get());
                    if (newPatch->mPlaybackThread == 0) {
                        status = NO_MEMORY;
                        goto exit;
                    }
                }
                uint32_t channelCount = newPatch->mPlaybackThread->channelCount();
                audio_devices_t device = patch->sources[0].ext.device.type;
                String8 address = String8(patch->sources[0].ext.device.address);
                audio_config_t config = AUDIO_CONFIG_INITIALIZER;
                audio_channel_mask_t inChannelMask = audio_channel_in_mask_from_count(channelCount);
                config.sample_rate = newPatch->mPlaybackThread->sampleRate();
                config.channel_mask = inChannelMask;
                config.format = newPatch->mPlaybackThread->format();
                audio_io_handle_t input = AUDIO_IO_HANDLE_NONE;
                newPatch->mRecordThread = audioflinger->openInput_l(srcModule,
                                                                    &input,
                                                                    &config,
                                                                    device,
                                                                    address,
                                                                    AUDIO_SOURCE_MIC,
                                                                    AUDIO_INPUT_FLAG_NONE);
                ALOGV("audioflinger->openInput_l() returned %p inChannelMask %08x",
                      newPatch->mRecordThread.get(), inChannelMask);
                if (newPatch->mRecordThread == 0) {
                    status = NO_MEMORY;
                    goto exit;
                }
                status = createPatchConnections(newPatch, patch);
                if (status != NO_ERROR) {
                    goto exit;
                }
            } else {
                if (audioHwDevice->version() >= AUDIO_DEVICE_API_VERSION_3_0) {
                    if (patch->sinks[0].type == AUDIO_PORT_TYPE_MIX) {
                        sp<ThreadBase> thread = audioflinger->checkRecordThread_l(
                                                                  patch->sinks[0].ext.mix.handle);
                        if (thread == 0) {
                            ALOGW("createAudioPatch() bad capture I/O handle %d",
                                                                  patch->sinks[0].ext.mix.handle);
                            status = BAD_VALUE;
                            goto exit;
                        }
                        status = thread->sendCreateAudioPatchConfigEvent(patch, &halHandle);
                    } else {
                        audio_hw_device_t *hwDevice = audioHwDevice->hwDevice();
                        status = hwDevice->create_audio_patch(hwDevice,
                                                              patch->num_sources,
                                                              patch->sources,
                                                              patch->num_sinks,
                                                              patch->sinks,
                                                              &halHandle);
                    }
                } else {
                    sp<ThreadBase> thread = audioflinger->checkRecordThread_l(
                                                                  patch->sinks[0].ext.mix.handle);
                    if (thread == 0) {
                        ALOGW("createAudioPatch() bad capture I/O handle %d",
                                                                  patch->sinks[0].ext.mix.handle);
                        status = BAD_VALUE;
                        goto exit;
                    }
                    char *address;
                    if (strcmp(patch->sources[0].ext.device.address, "") != 0) {
                        address = audio_device_address_to_parameter(
                                                            patch->sources[0].ext.device.type,
                                                            patch->sources[0].ext.device.address);
                    } else {
                        address = (char *)calloc(1, 1);
                    }
                    AudioParameter param = AudioParameter(String8(address));
                    free(address);
                    param.addInt(String8(AUDIO_PARAMETER_STREAM_ROUTING),
                                 (int)patch->sources[0].ext.device.type);
                    param.addInt(String8(AUDIO_PARAMETER_STREAM_INPUT_SOURCE),
                                 (int)patch->sinks[0].ext.mix.usecase.source);
                    ALOGV("createAudioPatch() AUDIO_PORT_TYPE_DEVICE setParameters %s",
                                                                      param.toString().string());
                    status = thread->setParameters(param.toString());
                }
            }
        } break;
        case AUDIO_PORT_TYPE_MIX: {
            audio_module_handle_t srcModule = patch->sources[0].ext.mix.hw_module;
            ssize_t index = audioflinger->mAudioHwDevs.indexOfKey(srcModule);
            if (index < 0) {
                ALOGW("createAudioPatch() bad src hw module %d", srcModule);
                status = BAD_VALUE;
                goto exit;
            }
            // limit to connections between devices and output streams
            for (unsigned int i = 0; i < patch->num_sinks; i++) {
                if (patch->sinks[i].type != AUDIO_PORT_TYPE_DEVICE) {
                    ALOGW("createAudioPatch() invalid sink type %d for mix source",
                          patch->sinks[i].type);
                    status = BAD_VALUE;
                    goto exit;
                }
                // limit to connections between sinks and sources on same HW module
                if (patch->sinks[i].ext.device.hw_module != srcModule) {
                    status = BAD_VALUE;
                    goto exit;
                }
            }
            AudioHwDevice *audioHwDevice = audioflinger->mAudioHwDevs.valueAt(index);
            sp<ThreadBase> thread =
                            audioflinger->checkPlaybackThread_l(patch->sources[0].ext.mix.handle);
            if (thread == 0) {
                ALOGW("createAudioPatch() bad playback I/O handle %d",
                          patch->sources[0].ext.mix.handle);
                status = BAD_VALUE;
                goto exit;
            }
            if (audioHwDevice->version() >= AUDIO_DEVICE_API_VERSION_3_0) {
                status = thread->sendCreateAudioPatchConfigEvent(patch, &halHandle);
            } else {
                audio_devices_t type = AUDIO_DEVICE_NONE;
                for (unsigned int i = 0; i < patch->num_sinks; i++) {
                    type |= patch->sinks[i].ext.device.type;
                }
                char *address;
                if (strcmp(patch->sinks[0].ext.device.address, "") != 0) {
                    //FIXME: we only support address on first sink with HAL version < 3.0
                    address = audio_device_address_to_parameter(
                                                                patch->sinks[0].ext.device.type,
                                                                patch->sinks[0].ext.device.address);
                } else {
                    address = (char *)calloc(1, 1);
                }
                AudioParameter param = AudioParameter(String8(address));
                free(address);
                param.addInt(String8(AUDIO_PARAMETER_STREAM_ROUTING), (int)type);
                status = thread->setParameters(param.toString());
            }
        } break;
        default:
            status = BAD_VALUE;
            goto exit;
    }
exit:
    ALOGV("createAudioPatch() status %d", status);
    if (status == NO_ERROR) {
        *handle = audioflinger->nextUniqueId();
        newPatch->mHandle = *handle;
        newPatch->mHalHandle = halHandle;
        mPatches.add(newPatch);
        ALOGV("createAudioPatch() added new patch handle %d halHandle %d", *handle, halHandle);
    } else {
        clearPatchConnections(newPatch);
        delete newPatch;
    }
    return status;
}
The main work done in this function is as follows:
1. In AudioPolicyManager::setInputDevice(), both num_sources and num_sinks are set to 1;
2. If halHandle were not AUDIO_PATCH_HANDLE_NONE, the corresponding patch would be looked up in the mPatches collection and removed; here, however, halHandle is AUDIO_PATCH_HANDLE_NONE, so nothing is removed;
3. The source type here is AUDIO_PORT_TYPE_DEVICE, so the audio_module_handle_t is taken from the patch and the matching AudioHwDevice is fetched on the AudioFlinger side; with the parameters set up earlier, none of the checks in the subsequent for loop take the if branch;
4. The audio_module_handle_t of the source is checked against that of the sink; they are of course identical;
5. The HAL version is then checked; in hardware\aw\audio\tulip\audio_hw.c we find adev->hw_device.common.version = AUDIO_DEVICE_API_VERSION_2_0, so the pre-3.0 branch is taken;
6. checkRecordThread_l is called on the AudioFlinger side, which looks up the RecordThread in mRecordThreads by its audio_io_handle_t;
7. An AudioParameter object is built from the device address, and the routing device type (sources[0].ext.device.type) together with the input source (sinks[0].ext.mix.usecase.source) is added to it;
8. thread->setParameters is called to hand the AudioParameter object over to the record thread.
Let us now follow step 8 in detail: thread->setParameters
frameworks\av\services\audioflinger\Threads.cpp
status_t AudioFlinger::ThreadBase::setParameters(const String8& keyValuePairs)
{
    status_t status;

    Mutex::Autolock _l(mLock);

    return sendSetParameterConfigEvent_l(keyValuePairs);
}
Hmm, it looks like we are in for another long detour.
status_t AudioFlinger::ThreadBase::sendSetParameterConfigEvent_l(const String8& keyValuePair)
{
    sp<ConfigEvent> configEvent = (ConfigEvent *)new SetParameterConfigEvent(keyValuePair);
    return sendConfigEvent_l(configEvent);
}
The AudioParameter content (as a key/value string) is wrapped into a ConfigEvent object, and the call chain continues:
status_t AudioFlinger::ThreadBase::sendConfigEvent_l(sp<ConfigEvent>& event)
{
    status_t status = NO_ERROR;

    mConfigEvents.add(event);
    mWaitWorkCV.signal();
    mLock.unlock();
    {
        Mutex::Autolock _l(event->mLock);
        while (event->mWaitStatus) {
            if (event->mCond.waitRelative(event->mLock, kConfigEventTimeoutNs) != NO_ERROR) {
                event->mStatus = TIMED_OUT;
                event->mWaitStatus = false;
            }
        }
        status = event->mStatus;
    }
    mLock.lock();
    return status;
}
The ConfigEvent is added to mConfigEvents, and mWaitWorkCV.signal() is then called to notify the RecordThread that it can proceed. When the thread, blocked in mWaitWorkCV.wait(mLock), receives the signal, it returns to the reacquire_wakelock label and continues; at that point processConfigEvents_l is invoked, which is the function responsible for handling ConfigEvent events.
void AudioFlinger::ThreadBase::processConfigEvents_l()
{
    bool configChanged = false;

    while (!mConfigEvents.isEmpty()) {
        ALOGV("processConfigEvents_l() remaining events %d", mConfigEvents.size());
        sp<ConfigEvent> event = mConfigEvents[0];
        mConfigEvents.removeAt(0);
        switch (event->mType) {
        case CFG_EVENT_PRIO: {
            PrioConfigEventData *data = (PrioConfigEventData *)event->mData.get();
            // FIXME Need to understand why this has to be done asynchronously
            int err = requestPriority(data->mPid, data->mTid, data->mPrio,
                    true /*asynchronous*/);
            if (err != 0) {
                ALOGW("Policy SCHED_FIFO priority %d is unavailable for pid %d tid %d; error %d",
                      data->mPrio, data->mPid, data->mTid, err);
            }
        } break;
        case CFG_EVENT_IO: {
            IoConfigEventData *data = (IoConfigEventData *)event->mData.get();
            audioConfigChanged(data->mEvent, data->mParam);
        } break;
        case CFG_EVENT_SET_PARAMETER: {
            SetParameterConfigEventData *data = (SetParameterConfigEventData *)event->mData.get();
            if (checkForNewParameter_l(data->mKeyValuePairs, event->mStatus)) {
                configChanged = true;
            }
        } break;
        case CFG_EVENT_CREATE_AUDIO_PATCH: {
            CreateAudioPatchConfigEventData *data =
                    (CreateAudioPatchConfigEventData *)event->mData.get();
            event->mStatus = createAudioPatch_l(&data->mPatch, &data->mHandle);
        } break;
        case CFG_EVENT_RELEASE_AUDIO_PATCH: {
            ReleaseAudioPatchConfigEventData *data =
                    (ReleaseAudioPatchConfigEventData *)event->mData.get();
            event->mStatus = releaseAudioPatch_l(data->mHandle);
        } break;
        default:
            ALOG_ASSERT(false, "processConfigEvents_l() unknown event type %d", event->mType);
            break;
        }
        {
            Mutex::Autolock _l(event->mLock);
            if (event->mWaitStatus) {
                event->mWaitStatus = false;
                event->mCond.signal();
            }
        }
        ALOGV_IF(mConfigEvents.isEmpty(), "processConfigEvents_l() DONE thread %p", this);
    }

    if (configChanged) {
        cacheParameters_l();
    }
}
As expected, this loop simply keeps draining the events in mConfigEvents. Our event's mType is CFG_EVENT_SET_PARAMETER, so checkForNewParameter_l is called next; since this is a RecordThread, it is naturally RecordThread's implementation that runs. That really was quite a detour.
bool AudioFlinger::RecordThread::checkForNewParameter_l(const String8& keyValuePair,
                                                        status_t& status)
{
    bool reconfig = false;

    status = NO_ERROR;

    audio_format_t reqFormat = mFormat;
    uint32_t samplingRate = mSampleRate;
    audio_channel_mask_t channelMask = audio_channel_in_mask_from_count(mChannelCount);

    AudioParameter param = AudioParameter(keyValuePair);
    int value;
    if (param.getInt(String8(AudioParameter::keySamplingRate), value) == NO_ERROR) {
        samplingRate = value;
        reconfig = true;
    }
    if (param.getInt(String8(AudioParameter::keyFormat), value) == NO_ERROR) {
        if ((audio_format_t) value != AUDIO_FORMAT_PCM_16_BIT) {
            status = BAD_VALUE;
        } else {
            reqFormat = (audio_format_t) value;
            reconfig = true;
        }
    }
    if (param.getInt(String8(AudioParameter::keyChannels), value) == NO_ERROR) {
        audio_channel_mask_t mask = (audio_channel_mask_t) value;
        if (mask != AUDIO_CHANNEL_IN_MONO && mask != AUDIO_CHANNEL_IN_STEREO) {
            status = BAD_VALUE;
        } else {
            channelMask = mask;
            reconfig = true;
        }
    }
    if (param.getInt(String8(AudioParameter::keyFrameCount), value) == NO_ERROR) {
        // do not accept frame count changes if tracks are open as the track buffer
        // size depends on frame count and correct behavior would not be guaranteed
        // if frame count is changed after track creation
        if (mActiveTracks.size() > 0) {
            status = INVALID_OPERATION;
        } else {
            reconfig = true;
        }
    }
    if (param.getInt(String8(AudioParameter::keyRouting), value) == NO_ERROR) {
        // forward device change to effects that have requested to be
        // aware of attached audio device.
        for (size_t i = 0; i < mEffectChains.size(); i++) {
            mEffectChains[i]->setDevice_l(value);
        }