Adding a New Audio Stream Type and Implementing Dual Audio Output on Android
January 18, 2016, 18:08:44 · 這歌聲無聊可是輝煌
Android defines many audio stream types. The complete definition lives at the native layer, in system/core/include/system/audio.h:
```c
/* Audio stream types */
typedef enum {
    /* These values must kept in sync with
     * frameworks/base/media/java/android/media/AudioSystem.java */
    AUDIO_STREAM_DEFAULT          = -1,
    AUDIO_STREAM_MIN              = 0,
    AUDIO_STREAM_VOICE_CALL       = 0,
    AUDIO_STREAM_SYSTEM           = 1,
    AUDIO_STREAM_RING             = 2,
    AUDIO_STREAM_MUSIC            = 3,
    AUDIO_STREAM_ALARM            = 4,
    AUDIO_STREAM_NOTIFICATION     = 5,
    AUDIO_STREAM_BLUETOOTH_SCO    = 6,
    AUDIO_STREAM_ENFORCED_AUDIBLE = 7,  /* Sounds that cannot be muted by user
                                         * and must be routed to speaker */
    AUDIO_STREAM_DTMF             = 8,
    AUDIO_STREAM_TTS              = 9,  /* Transmitted Through Speaker.
                                         * Plays over speaker only, silent on other devices. */
    AUDIO_STREAM_USB_HEADSET      = 10, /* For accessibility talk back prompts */
    AUDIO_STREAM_REROUTING        = 11, /* For dynamic policy output mixes */
    AUDIO_STREAM_PATCH            = 12, /* For internal audio flinger tracks. Fixed volume */
    AUDIO_STREAM_USB_MIC          = 13,
    AUDIO_STREAM_ACCESSIBILITY    = 14,
    AUDIO_STREAM_PUBLIC_CNT       = AUDIO_STREAM_USB_MIC + 1,
    AUDIO_STREAM_CNT              = AUDIO_STREAM_ACCESSIBILITY + 1,
} audio_stream_type_t;
```
Android assigns a different route to each audio stream type and selects the output device according to that route; this is Android's audio management policy.
For example, when the application layer plays a stream of type STREAM_MUSIC and headphones are plugged in, that sound switches from the speaker to the headphones; a STREAM_RING stream, by contrast, comes out of the headphones and the speaker at the same time.
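To see why plugging in headphones wins over the speaker, here is a simplified sketch of how the stock STRATEGY_MEDIA branch of getDeviceForStrategy() cascades through the available device types (a sketch of the idea, not verbatim AOSP code):

```cpp
// Simplified fallback order for STRATEGY_MEDIA: the first available device wins.
uint32_t device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_WIRED_HEADPHONE;
if (device2 == AUDIO_DEVICE_NONE) {
    device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_WIRED_HEADSET;
}
if (device2 == AUDIO_DEVICE_NONE) {
    device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_USB_DEVICE;
}
if (device2 == AUDIO_DEVICE_NONE) {
    device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_SPEAKER;
}
device |= device2;  // headphones, if present, shadow the speaker
```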
AudioPolicyManager.h defines the following routing strategies:
```cpp
enum routing_strategy {
    STRATEGY_MEDIA,
    STRATEGY_PHONE,
    STRATEGY_SONIFICATION,
    STRATEGY_SONIFICATION_RESPECTFUL,
    STRATEGY_DTMF,
    STRATEGY_ENFORCED_AUDIBLE,
    STRATEGY_TRANSMITTED_THROUGH_SPEAKER,
    STRATEGY_ACCESSIBILITY,
    STRATEGY_REROUTING,
    STRATEGY_USB_HEADST,
    NUM_STRATEGIES
};
```
The output device for each strategy is chosen mainly in AudioPolicyManager::getDeviceForStrategy(). So by adding a custom audio stream type and adapting the policy inside getDeviceForStrategy(), we can customize Android's audio management policy, as sketched below.
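The custom stream type also has to be mapped onto the custom strategy. A minimal sketch of that mapping inside AudioPolicyManager::getStrategy(), with the stock cases abbreviated (not verbatim AOSP):

```cpp
// Sketch: route the custom stream type to the custom strategy.
AudioPolicyManager::routing_strategy AudioPolicyManager::getStrategy(
        audio_stream_type_t stream)
{
    switch (stream) {
    case AUDIO_STREAM_USB_HEADSET:     // our new stream type
        return STRATEGY_USB_HEADST;    // our new strategy
    case AUDIO_STREAM_MUSIC:
        return STRATEGY_MEDIA;
    case AUDIO_STREAM_RING:
    case AUDIO_STREAM_ALARM:
        return STRATEGY_SONIFICATION;
    // ... remaining stock cases unchanged ...
    default:
        return STRATEGY_MEDIA;
    }
}
```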
For example, suppose we want to build dual audio output on an Android smart TV, working together with an app: the user can listen to music while watching TV, with the TV sound coming out of the speaker and the music coming out of a headset. Here we chose a USB headset as the second device.
The implementation principle is to add a new audio stream type for the music app to use. When dual output is enabled, the app plays with our custom type, and we route that type to the USB audio device. Meanwhile, the regular TV and third-party apps still use STREAM_MUSIC, which maps to the STRATEGY_MEDIA routing strategy; while dual output is enabled, we force that strategy onto the speaker. Together this yields the dual audio output feature.
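"audio.output.double_output" is a system property we introduce ourselves as the on/off switch. A hedged sketch of how a privileged native component might flip it (the helper name is hypothetical; a Java caller would typically go through android.os.SystemProperties from a system app instead, since setting properties requires the right SELinux/permission setup):

```cpp
#include <cutils/properties.h>

// Hypothetical helper: toggle dual-output mode for the policy code below.
static void setDoubleOutputEnabled(bool on)
{
    property_set("audio.output.double_output", on ? "1" : "0");
}
```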
<pre name="code" class="csharp"> case STRATEGY_USB_HEADST: case STRATEGY_MEDIA: { char propDoubOutput[PROPERTY_VALUE_MAX]; property_get("audio.output.double_output",propDoubOutput,"null"); if ((strcmp(propDoubOutput,"1") == 0) && strategy == STRATEGY_USB_HEADST) { device = mAvailableOutputDevices.types() & AUDIO_DEVICE_OUT_USB_DEVICE; if (device != AUDIO_DEVICE_NONE) { device = AUDIO_DEVICE_OUT_USB_DEVICE; }else{ ALOGE("getDeviceForStrategy() no device found for STRATEGY_USB_HEADST"); } } else { uint32_t device2 = AUDIO_DEVICE_NONE; if (strategy != STRATEGY_SONIFICATION) { // no sonification on remote submix (e.g. WFD) if (mAvailableOutputDevices.getDevice(AUDIO_DEVICE_OUT_REMOTE_SUBMIX, String8("0")) != 0) { device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_REMOTE_SUBMIX; } }
STRATEGY_USB_HEADST is our custom strategy type, and "audio.output.double_output" is the property we added so the native layer can tell whether the upper layer enabled dual audio. Notice that outside dual-output mode, STRATEGY_USB_HEADST behaves exactly like STRATEGY_MEDIA. When dual output is enabled, we give STRATEGY_USB_HEADST the USB audio device (device = AUDIO_DEVICE_OUT_USB_DEVICE), and at the same time we must pin STRATEGY_MEDIA to the speaker outputs:

```cpp
property_get("audio.output.double_output", propDoubOutput, "null");
if (strcmp(propDoubOutput, "1") == 0) {
    device = AUDIO_DEVICE_OUT_AUX_DIGITAL | AUDIO_DEVICE_OUT_SPEAKER;
} else {
    device |= device2;
}
```
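Since both branches parse the same property, a small shared helper would keep the switch readable. This is a hypothetical refactor, not part of the original patch:

```cpp
#include <string.h>
#include <cutils/properties.h>

// Hypothetical helper: true when the upper layer has enabled dual output.
static bool isDoubleOutputEnabled()
{
    char prop[PROPERTY_VALUE_MAX];
    property_get("audio.output.double_output", prop, "0");
    return strcmp(prop, "1") == 0;
}
```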
That largely completes device selection, but it only works once the new stream type is plumbed through from the Java layer down to the framework. In practice this is easy to do by following an existing stream type as a template: just trace how an AudioTrack call travels from the Java layer to the native layer. After adding our custom stream type to AudioManager and AudioSystem in the Java layer, look at the AudioTrack constructor. Compared with 4.4, Android 5.1 adds an AudioAttributes object that wraps the streamType passed down from above, which actually makes our extension easier: the upper-layer stream_type is converted into the two fields private int mUsage = USAGE_UNKNOWN; and private int mContentType = CONTENT_TYPE_UNKNOWN;. Then, at the native layer, in the set() function of AudioTrack.cpp:
```cpp
status_t AudioTrack::set(
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t frameCount,
        audio_output_flags_t flags,
        callback_t cbf,
        void* user,
        uint32_t notificationFrames,
        const sp<IMemory>& sharedBuffer,
        bool threadCanCallJava,
        int sessionId,
        transfer_type transferType,
        const audio_offload_info_t *offloadInfo,
        int uid,
        pid_t pid,
        const audio_attributes_t* pAttributes)
{
    ALOGI("set(): %p streamType %d, sampleRate %u, format %#x, channelMask %#x, frameCount %zu, "
          "flags #%x, notificationFrames %u, sessionId %d, transferType %d",
          this, streamType, sampleRate, format, channelMask, frameCount, flags,
          notificationFrames, sessionId, transferType);

    switch (transferType) {
    case TRANSFER_DEFAULT:
        if (sharedBuffer != 0) {
            transferType = TRANSFER_SHARED;
        } else if (cbf == NULL || threadCanCallJava) {
            transferType = TRANSFER_SYNC;
        } else {
            transferType = TRANSFER_CALLBACK;
        }
        break;
    case TRANSFER_CALLBACK:
        if (cbf == NULL || sharedBuffer != 0) {
            ALOGE("Transfer type TRANSFER_CALLBACK but cbf == NULL || sharedBuffer != 0");
            return BAD_VALUE;
        }
        break;
    case TRANSFER_OBTAIN:
    case TRANSFER_SYNC:
        if (sharedBuffer != 0) {
            ALOGE("Transfer type TRANSFER_OBTAIN but sharedBuffer != 0");
            return BAD_VALUE;
        }
        break;
    case TRANSFER_SHARED:
        if (sharedBuffer == 0) {
            ALOGE("Transfer type TRANSFER_SHARED but sharedBuffer == 0");
            return BAD_VALUE;
        }
        break;
    default:
        ALOGE("Invalid transfer type %d", transferType);
        return BAD_VALUE;
    }
    mSharedBuffer = sharedBuffer;
    mTransfer = transferType;

    ALOGV_IF(sharedBuffer != 0, "sharedBuffer: %p, size: %d",
             sharedBuffer->pointer(), sharedBuffer->size());

    ALOGV("set() streamType %d frameCount %zu flags %04x", streamType, frameCount, flags);

    AutoMutex lock(mLock);

    // invariant that mAudioTrack != 0 is true only after set() returns successfully
    if (mAudioTrack != 0) {
        ALOGE("Track already in use");
        return INVALID_OPERATION;
    }

    // handle default values first.
    if (streamType == AUDIO_STREAM_DEFAULT) {
        streamType = AUDIO_STREAM_MUSIC;
    }

    if (pAttributes == NULL) {
        if (uint32_t(streamType) >= AUDIO_STREAM_PUBLIC_CNT) {
            ALOGE("Invalid stream type %d", streamType);
            return BAD_VALUE;
        }
        mStreamType = streamType;
    } else {
        // stream type shouldn't be looked at, this track has audio attributes
        memcpy(&mAttributes, pAttributes, sizeof(audio_attributes_t));
        ALOGV("Building AudioTrack with attributes: usage=%d content=%d flags=0x%x tags=[%s]",
              mAttributes.usage, mAttributes.content_type, mAttributes.flags, mAttributes.tags);
        mStreamType = AUDIO_STREAM_DEFAULT;
    }

    // these below should probably come from the audioFlinger too...
    if (format == AUDIO_FORMAT_DEFAULT) {
        format = AUDIO_FORMAT_PCM_16_BIT;
    }
    ......
```
Note the line mStreamType = AUDIO_STREAM_DEFAULT;: when attributes are supplied, the stream type is set to -1 and later device selection no longer looks at it; the choice is driven by the audio_attributes_t structure instead. Here is its definition:
```c
typedef struct {
    audio_content_type_t content_type;
    audio_usage_t        usage;
    audio_source_t       source;
    audio_flags_mask_t   flags;
    char                 tags[AUDIO_ATTRIBUTES_TAGS_MAX_SIZE]; /* UTF8 */
} audio_attributes_t;
```
Its usage and content_type members are exactly the mUsage and mContentType mentioned above.
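To see the attribute-driven path in isolation, here is a hedged sketch of a native client handing attributes to AudioTrack::set() so that mStreamType stays AUDIO_STREAM_DEFAULT; the parameter values are illustrative only, matched to the set() signature quoted above:

```cpp
audio_attributes_t attr = {};
attr.usage        = AUDIO_USAGE_MEDIA;        // or a custom usage for dual output
attr.content_type = AUDIO_CONTENT_TYPE_MUSIC;

sp<AudioTrack> track = new AudioTrack();
// With pAttributes non-NULL, set() ignores the stream type and keeps
// mStreamType = AUDIO_STREAM_DEFAULT, as the code above shows.
track->set(AUDIO_STREAM_DEFAULT, 44100, AUDIO_FORMAT_PCM_16_BIT,
           AUDIO_CHANNEL_OUT_STEREO, 0 /*frameCount*/, AUDIO_OUTPUT_FLAG_NONE,
           NULL /*cbf*/, NULL /*user*/, 0 /*notificationFrames*/,
           sp<IMemory>() /*sharedBuffer*/, false /*threadCanCallJava*/,
           AUDIO_SESSION_ALLOCATE, AudioTrack::TRANSFER_DEFAULT,
           NULL /*offloadInfo*/, -1 /*uid*/, -1 /*pid*/, &attr);
```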
Back in AudioPolicyManager, look at the getOutputForAttr() interface; it calls the getDeviceForStrategy() we modified earlier to obtain the device:
```cpp
......
ALOGV("getOutputForAttr() usage=%d, content=%d, tag=%s flags=%08x",
        attributes.usage, attributes.content_type, attributes.tags, attributes.flags);
routing_strategy strategy = (routing_strategy) getStrategyForAttr(&attributes);
audio_devices_t device = getDeviceForStrategy(strategy, false /*fromCache*/);
......
```
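getStrategyForAttr() is where attributes are folded back into a routing strategy, so it also needs to learn about the new route. A minimal sketch, assuming a custom usage value was added alongside the stream type (AUDIO_USAGE_USB_HEADSET here is hypothetical; stock cases abbreviated):

```cpp
uint32_t AudioPolicyManager::getStrategyForAttr(const audio_attributes_t *attr)
{
    switch (attr->usage) {
    case AUDIO_USAGE_USB_HEADSET:               // hypothetical custom usage
        return (uint32_t) STRATEGY_USB_HEADST;  // route to the USB headset strategy
    case AUDIO_USAGE_MEDIA:
    case AUDIO_USAGE_GAME:
        return (uint32_t) STRATEGY_MEDIA;
    // ... remaining stock cases unchanged ...
    default:
        return (uint32_t) STRATEGY_MEDIA;
    }
}
```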
So once the upper layer handles the conversion from stream_type to AudioAttributes properly, the path is fully plumbed through and the dual audio output feature is complete.