
Audio Subsystem (Part 1): AudioRecord.getMinBufferSize

 

The article "基於Allwinner的Audio子系統分析(Android-5.1)" (an analysis of the Allwinner-based Audio subsystem on Android 5.1) already covered the overall Audio architecture and the call flow from the application layer. Next, we continue by analyzing the implementation of AudioRecord.getMinBufferSize.

  

  Prototype:

    public static int getMinBufferSize (int sampleRateInHz, int channelConfig, int audioFormat)

  Purpose:

    Returns the minimum buffer size required to successfully create an AudioRecord object.

  Parameters:

    sampleRateInHz: the sample rate in Hz. Here it is set to 44100, since 44100 Hz is currently the only rate guaranteed to work on all devices.

    channelConfig: describes the audio channel configuration. Here it is set to AudioFormat.CHANNEL_CONFIGURATION_MONO (the deprecated equivalent of CHANNEL_IN_MONO), which is guaranteed to work on all devices;

    audioFormat: the sample format (precision) of the audio data. Here it is set to AudioFormat.ENCODING_PCM_16BIT;

  Return value:

    The minimum buffer size, in bytes, required to successfully create an AudioRecord object. Note: this size does not guarantee glitch-free recording under load; a higher value should be chosen depending on the expected frequency at which the AudioRecord instance will be polled for new data.

    ERROR_BAD_VALUE (-2) is returned if the recording parameters are not supported by the hardware or an invalid parameter was passed in; ERROR (-1) is returned if the implementation was unable to query the hardware for its input properties.
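
Before tracing the implementation, here is a minimal usage sketch in application-side Java (assuming the RECORD_AUDIO permission is granted; the class name, helper name, and the 2x buffer multiplier are illustrative choices, not part of the framework API):

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

class RecorderFactory {
    // Query the minimum buffer size, then create an AudioRecord with a comfortably larger buffer.
    static AudioRecord createRecorder() {
        int sampleRate = 44100;                            // the only rate guaranteed on all devices
        int channelConfig = AudioFormat.CHANNEL_IN_MONO;   // mono input
        int audioFormat = AudioFormat.ENCODING_PCM_16BIT;  // 16-bit PCM samples

        int minBufSize = AudioRecord.getMinBufferSize(sampleRate, channelConfig, audioFormat);
        if (minBufSize == AudioRecord.ERROR_BAD_VALUE || minBufSize == AudioRecord.ERROR) {
            return null;  // unsupported parameters, or the hardware could not be queried
        }

        // A buffer larger than the minimum (e.g. 2x) helps keep recording smooth under load.
        return new AudioRecord(MediaRecorder.AudioSource.MIC,
                sampleRate, channelConfig, audioFormat, minBufSize * 2);
    }
}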

 

Now let's dig into the framework and look at the concrete implementation.

  frameworks/base/media/java/android/media/AudioRecord.java

 static public int getMinBufferSize(int sampleRateInHz, int channelConfig, int audioFormat) {
        int channelCount = 0;
        switch (channelConfig) {
        case AudioFormat.CHANNEL_IN_DEFAULT: // AudioFormat.CHANNEL_CONFIGURATION_DEFAULT //1
        case AudioFormat.CHANNEL_IN_MONO: //16
        case AudioFormat.CHANNEL_CONFIGURATION_MONO://2
            channelCount = 1;
            break;
        case AudioFormat.CHANNEL_IN_STEREO: //12
        case AudioFormat.CHANNEL_CONFIGURATION_STEREO://3
        case (AudioFormat.CHANNEL_IN_FRONT | AudioFormat.CHANNEL_IN_BACK): // 16 | 32 = 48
            channelCount = 2;
            break;
        case AudioFormat.CHANNEL_INVALID://0
        default:
            loge("getMinBufferSize(): Invalid channel configuration.");
            return ERROR_BAD_VALUE;
        }

        // PCM_8BIT is not supported at the moment
        if (audioFormat != AudioFormat.ENCODING_PCM_16BIT) {
            loge("getMinBufferSize(): Invalid audio format.");
            return ERROR_BAD_VALUE;
        }
		
        int size = native_get_min_buff_size(sampleRateInHz, channelCount, audioFormat);

        if (size == 0) {
            return ERROR_BAD_VALUE;
        }
        else if (size == -1) {
            return ERROR;
        }
        else {
            return size;
        }
    }

     Here the channel configuration and the sample format are validated: channelCount is 1 for mono (MONO) and 2 for stereo (STEREO), and on the A64 only PCM_16BIT sampling is supported (the constant's value is 2). The method then calls down into the native function.

        frameworks/base/core/jni/android_media_AudioRecord.cpp

static jint android_media_AudioRecord_get_min_buff_size(JNIEnv *env,  jobject thiz,
    jint sampleRateInHertz, jint channelCount, jint audioFormat) {

    ALOGV(">> android_media_AudioRecord_get_min_buff_size(%d, %d, %d)",
          sampleRateInHertz, channelCount, audioFormat);

    size_t frameCount = 0;
    // convert the Java audio format constant into the native audio_format_t
    audio_format_t format = audioFormatToNative(audioFormat);//AUDIO_FORMAT_PCM_16_BIT=0x1

    // get the minimum frameCount and check whether the hardware supports this configuration
    status_t result = AudioRecord::getMinFrameCount(&frameCount,
            sampleRateInHertz,
            format,
            audio_channel_in_mask_from_count(channelCount));

    if (result == BAD_VALUE) {
        return 0;
    }
    if (result != NO_ERROR) {
        return -1;
    }
    return frameCount * channelCount * audio_bytes_per_sample(format);
}

    The JNI code calls AudioRecord::getMinFrameCount() to obtain frameCount, then returns frameCount * channel count * bytes per sample, where frameCount is the minimum number of sample frames. Let's move on to how frameCount is computed.
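
    For a sense of the units: one frame contains one sample per channel, so a mono 16-bit frame takes 2 bytes and a stereo 16-bit frame takes 4 bytes; the value handed back to Java is therefore just the minimum frame count converted into bytes.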

        frameworks/av/media/libmedia/AudioRecord.cpp

status_t AudioRecord::getMinFrameCount(
        size_t* frameCount,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask)
{
    if (frameCount == NULL) {
        return BAD_VALUE;
    }
	
    size_t size;
    status_t status = AudioSystem::getInputBufferSize(sampleRate, format, channelMask, &size);
    if (status != NO_ERROR) {
        ALOGE("AudioSystem could not query the input buffer size for sampleRate %u, format %#x, "
              "channelMask %#x; status %d", sampleRate, format, channelMask, status);
        return status;
    }
    // compute the minimum frame count
    // We double the size of input buffer for ping pong use of record buffer.
    // Assumes audio_is_linear_pcm(format)
    if ((*frameCount = (size * 2) / (audio_channel_count_from_in_mask(channelMask) *
            audio_bytes_per_sample(format))) == 0) {
        ALOGE("Unsupported configuration: sampleRate %u, format %#x, channelMask %#x",
            sampleRate, format, channelMask);
        return BAD_VALUE;
    }

    return NO_ERROR;
}

    At this point frameCount = size * 2 / (channel count * bytes per sample). Note the doubling here ("for ping pong use of record buffer": one half can be filled while the other is read). The size itself comes from the HAL layer; AudioSystem::getInputBufferSize() eventually calls down into the HAL.
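
    For example, for the mono / 16-bit / 44100 Hz request followed throughout this article, the HAL (traced below) reports size = 2048 bytes, so frameCount = (2048 * 2) / (1 * 2) = 2048 frames.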

        frameworks/av/media/libmedia/AudioSystem.cpp

status_t AudioSystem::getInputBufferSize(uint32_t sampleRate, audio_format_t format,
        audio_channel_mask_t channelMask, size_t* buffSize)
{
    const sp<IAudioFlinger>& af = AudioSystem::get_audio_flinger();
    if (af == 0) {
        return PERMISSION_DENIED;
    }
    Mutex::Autolock _l(gLockCache);
    // Do we have a stale gInBufferSize or are we requesting the input buffer size for new values
    size_t inBuffSize = gInBuffSize;
    if ((inBuffSize == 0) || (sampleRate != gPrevInSamplingRate) || (format != gPrevInFormat)
        || (channelMask != gPrevInChannelMask)) {
        gLockCache.unlock();
        inBuffSize = af->getInputBufferSize(sampleRate, format, channelMask);
        gLockCache.lock();
        if (inBuffSize == 0) {
            ALOGE("AudioSystem::getInputBufferSize failed sampleRate %d format %#x channelMask %x",
                    sampleRate, format, channelMask);
            return BAD_VALUE;
        }
        // A benign race is possible here: we could overwrite a fresher cache entry
        // save the request params
        gPrevInSamplingRate = sampleRate;
        gPrevInFormat = format;
        gPrevInChannelMask = channelMask;

        gInBuffSize = inBuffSize;
    }
    *buffSize = inBuffSize;

    return NO_ERROR;
}

Here a handle to the AudioFlinger service (an IAudioFlinger proxy) is obtained via get_audio_flinger():

const sp<IAudioFlinger> AudioSystem::get_audio_flinger()
{
    sp<IAudioFlinger> af;
    sp<AudioFlingerClient> afc;
    {
        Mutex::Autolock _l(gLock);
        if (gAudioFlinger == 0) {
            sp<IServiceManager> sm = defaultServiceManager();
            sp<IBinder> binder;
            do {
                binder = sm->getService(String16("media.audio_flinger"));
                if (binder != 0)
                    break;
                ALOGW("AudioFlinger not published, waiting...");
                usleep(500000); // 0.5 s
            } while (true);
            if (gAudioFlingerClient == NULL) {
                gAudioFlingerClient = new AudioFlingerClient();
            } else {
                if (gAudioErrorCallback) {
                    gAudioErrorCallback(NO_ERROR);
                }
            }
            binder->linkToDeath(gAudioFlingerClient);
            gAudioFlinger = interface_cast<IAudioFlinger>(binder);
            LOG_ALWAYS_FATAL_IF(gAudioFlinger == 0);
            afc = gAudioFlingerClient;
        }
        af = gAudioFlinger;
    }
    if (afc != 0) {
        af->registerClient(afc);
    }
    return af;
}

Next it checks whether the parameters match the previously cached request. This caching prevents repeated calls to getMinBufferSize from hitting the hardware every time, so only on the first call, or when the parameters change, does it call AudioFlinger's getInputBufferSize() to obtain the buffer size. Since af is a smart pointer to IAudioFlinger, the call actually goes through binder into AudioFlinger.

frameworks/av/services/audioflinger/AudioFlinger.cpp

size_t AudioFlinger::getInputBufferSize(uint32_t sampleRate, audio_format_t format,
        audio_channel_mask_t channelMask) const
{
    status_t ret = initCheck();
    if (ret != NO_ERROR) {
        return 0;
    }

    AutoMutex lock(mHardwareLock);
    mHardwareStatus = AUDIO_HW_GET_INPUT_BUFFER_SIZE;
    audio_config_t config;
    memset(&config, 0, sizeof(config));
    config.sample_rate = sampleRate;
    config.channel_mask = channelMask;
    config.format = format;

    audio_hw_device_t *dev = mPrimaryHardwareDev->hwDevice();
    size_t size = dev->get_input_buffer_size(dev, &config);
    mHardwareStatus = AUDIO_HW_IDLE;
    return size;
}

The parameters are then passed down to the HAL layer to obtain the buffer size:

hardware/aw/audio/tulip/audio_hw.c

static size_t adev_get_input_buffer_size(const struct audio_hw_device *dev,
                                         const struct audio_config *config)
{
    size_t size;
    int channel_count = popcount(config->channel_mask);
    if (check_input_parameters(config->sample_rate, config->format, channel_count) != 0)
        return 0;
    return get_input_buffer_size(config->sample_rate, config->format, channel_count);
}

The parameters are checked yet again. Why do so many of these functions repeat the check? Because they may also be called from other places, so it is safest for each one to validate its own inputs.

static size_t get_input_buffer_size(uint32_t sample_rate, int format, int channel_count)
{
    size_t size;
    size_t device_rate;

    if (check_input_parameters(sample_rate, format, channel_count) != 0)
        return 0;

    /* take resampling into account and return the closest majoring
    multiple of 16 frames, as audioflinger expects audio buffers to
    be a multiple of 16 frames */
    size = (pcm_config_mm_in.period_size * sample_rate) / pcm_config_mm_in.rate;
    size = ((size + 15) / 16) * 16;

    return size * channel_count * sizeof(short);
}

This uses a struct pcm_config that defines how many sample frames one period contains. The size is first computed with a resampling adjustment based on the struct's rate field, which is MM_SAMPLING_RATE, i.e. 44100 Hz, with 1024 sample frames per period; this gives the size (in frames) after resampling.

AudioFlinger also expects audio buffers to be a multiple of 16 frames, so the result is rounded up to the closest multiple of 16. Finally the function returns size * channel count * the number of bytes per sample. The pcm_config instance used here is:

struct pcm_config pcm_config_mm_in = {
    .channels = 2,
    .rate = MM_SAMPLING_RATE,
    .period_size = 1024,
    .period_count = CAPTURE_PERIOD_COUNT,
    .format = PCM_FORMAT_S16_LE,
};

Summary:

minBuffSize = ((((((((pcm_config_mm_in.period_size * sample_rate) / pcm_config_mm_in.rate) + 15) / 16) * 16) * channel_count * sizeof(short)) * 2) / (audio_channel_count_from_in_mask(channelMask) * audio_bytes_per_sample(format))) * channelCount * audio_bytes_per_sample(format);

      =(((((((pcm_config_mm_in.period_size * sample_rate) / pcm_config_mm_in.rate) + 15) / 16) * 16) * channel_count * sizeof(short)) * 2)

  Where pcm_config_mm_in.period_size = 1024 and pcm_config_mm_in.rate = 44100. Notice that the expression divides by (channelCount * bytes per sample) and then multiplies it right back; that round trip exists because AudioRecord.cpp validates frameCount in between to check whether the requested configuration is supported.

Taking getMinBufferSize(44100, MONO, 16BIT) as an example, i.e. sample_rate = 44100, channel_count = 1, and a 16-bit format (2 bytes per sample), we get

BufferSize = (((1024*sample_rate/44100)+15)/16)*16*channel_count*sizeof(short)*2 = 4096

In other words, the minimum buffer size is: period size (adjusted for resampling and rounded to a multiple of 16 frames) * number of channels * 2 * bytes per sample. The factor of 2 is explained by the comment "We double the size of input buffer for ping pong use of record buffer". Bytes per sample depends on the format: PCM_8_BIT is an unsigned char, PCM_16_BIT a short, and PCM_32_BIT an int.
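
To double-check the arithmetic, here is a small standalone sketch (plain Java with hypothetical names; it only mirrors the computations traced above and is not framework code):

public class MinBufferSizeCheck {
    // Values taken from pcm_config_mm_in in audio_hw.c
    static final int HAL_PERIOD_SIZE = 1024;   // sample frames per period
    static final int HAL_RATE = 44100;         // MM_SAMPLING_RATE

    // Mirrors get_input_buffer_size(): HAL input buffer size in bytes
    static int halInputBufferSize(int sampleRate, int channelCount, int bytesPerSample) {
        int frames = (HAL_PERIOD_SIZE * sampleRate) / HAL_RATE;  // resampling adjustment
        frames = ((frames + 15) / 16) * 16;                      // round up to a multiple of 16 frames
        return frames * channelCount * bytesPerSample;
    }

    // Mirrors AudioRecord::getMinFrameCount(): double for ping-pong use, convert bytes to frames
    static int minFrameCount(int halBytes, int channelCount, int bytesPerSample) {
        return (halBytes * 2) / (channelCount * bytesPerSample);
    }

    public static void main(String[] args) {
        int sampleRate = 44100, channelCount = 1, bytesPerSample = 2;  // mono, 16-bit PCM
        int halBytes = halInputBufferSize(sampleRate, channelCount, bytesPerSample);  // 2048
        int frames = minFrameCount(halBytes, channelCount, bytesPerSample);           // 2048
        // Mirrors the JNI return value: frameCount * channelCount * bytes per sample
        System.out.println(frames * channelCount * bytesPerSample);  // prints 4096
    }
}

The printed value, 4096 bytes, matches the result computed above for getMinBufferSize(44100, MONO, 16BIT).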

 

The author's knowledge is limited, so if there are any mistakes or omissions in this article, please point them out; it would be greatly appreciated!