
Building a Cross-Platform, Multi-Function Player with Qt and ffmpeg

Overview

This program uses Qt's QWidget for video rendering, the Qt Multimedia module for audio rendering, ffmpeg as the audio/video decoding core, and CMake for cross-platform builds.

Build parameters:
DepsPath: path to the ffmpeg CMake package
QT_Dir: path to the Qt CMake package
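
A configure step might look like this (a hypothetical invocation, assuming both variables are passed with -D; the actual paths depend on where ffmpeg and Qt are installed):

cmake -B build -DDepsPath=/opt/ffmpeg/lib/cmake -DQT_Dir=/opt/Qt/5.15.2/gcc_64/lib/cmake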

The program has two parts: input and rendering. The input part opens the stream, decodes audio and video frames out of it, and stores them in separate data queues. The rendering part pulls data from the queues, converts it to the target image/audio format, and finally renders it through Qt.

The program flow is shown in the following diagram:

[flow chart]

Data Module

The data module wraps the data queues as well as image and audio format conversion. It consists of the AudioConvert, ImageConvert, and DataContext classes, plus a thin wrapper around AVFrame data (AvFrameContext).
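
The queue inside DataContext is not listed in the article; a minimal sketch of what such a thread-safe, blocking frame queue could look like (hypothetical class and member names, built on Qt primitives) is:

#include <QMutex>
#include <QQueue>
#include <QWaitCondition>

// Hypothetical blocking queue in the spirit of DataContext::getFrame().
template <typename T>
class FrameQueue
{
public:
    void put(T *item)
    {
        QMutexLocker lock(&mutex);
        queue.enqueue(item);
        notEmpty.wakeOne();                 // wake a waiting consumer
    }

    T *take()                               // blocks until an item arrives
    {
        QMutexLocker lock(&mutex);
        while (queue.isEmpty())
            notEmpty.wait(&mutex);
        return queue.dequeue();
    }

private:
    QMutex         mutex;
    QWaitCondition notEmpty;
    QQueue<T *>    queue;
};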

Image Conversion

ImageConvert::ImageConvert(AVPixelFormat in_pixelFormat, int in_width, int in_height,
                           AVPixelFormat out_pixelFormat, int out_width, int out_height)
{
    this->in_pixelFormat    = in_pixelFormat;
    this->out_pixelFormat   = out_pixelFormat;
    this->in_width          = in_width;
    this->in_height         = in_height;
    this->out_width         = out_width;
    this->out_height        = out_height;

    // SWS_POINT (nearest neighbour) is the cheapest scaler; SWS_BILINEAR or
    // SWS_BICUBIC trade speed for quality.
    this->swsContext        = sws_getContext(in_width, in_height, in_pixelFormat,
                                             out_width, out_height, out_pixelFormat,
                                             SWS_POINT, nullptr, nullptr, nullptr);

    // Allocate the destination frame and point it at a buffer of our own.
    // avpicture_get_size()/avpicture_fill() are deprecated; newer ffmpeg
    // replaces them with av_image_get_buffer_size()/av_image_fill_arrays().
    this->frame  = av_frame_alloc();
    this->buffer = (uint8_t *)av_malloc(avpicture_get_size(out_pixelFormat, out_width, out_height));
    avpicture_fill((AVPicture *)this->frame, this->buffer, out_pixelFormat, out_width, out_height);
}

ImageConvert::~ImageConvert()
{
    sws_freeContext(this->swsContext);
    av_frame_free(&this->frame);
    av_freep(&this->buffer);   // the buffer is ours, not the frame's; freeing it here fixes a leak
}

void ImageConvert::convertImage(AVFrame *frame)
{
    // Scale/convert the source frame into this->frame (and thus this->buffer).
    sws_scale(this->swsContext, (const uint8_t *const *)frame->data, frame->linesize,
              0, this->in_height, this->frame->data, this->frame->linesize);

    this->frame->width  = this->out_width;
    this->frame->height = this->out_height;
    this->frame->format = this->out_pixelFormat;
    this->frame->pts    = frame->pts;
}
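
For example (hypothetical sizes; the player derives them from the stream and the widget), a decoded 1080p YUV frame could be converted for display like this:

// Hypothetical usage: one converter per (source format, target size) pair.
ImageConvert cvt(AV_PIX_FMT_YUV420P, 1920, 1080,
                 AV_PIX_FMT_RGB32,   1280,  720);
cvt.convertImage(decodedFrame);                 // decodedFrame: AVFrame* from the decoder
QImage img(cvt.buffer, 1280, 720, QImage::Format_RGB32);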

Audio Conversion

AudioConvert::AudioConvert(AVSampleFormat in_sample_fmt, int in_sample_rate, int in_channels,
                           AVSampleFormat out_sample_fmt, int out_sample_rate, int out_channels)
{
    this->in_sample_fmt     = in_sample_fmt;
    this->out_sample_fmt    = out_sample_fmt;
    this->in_channels       = in_channels;
    this->out_channels      = out_channels;
    this->in_sample_rate    = in_sample_rate;
    this->out_sample_rate   = out_sample_rate;

    // swr_alloc_set_opts()/av_get_default_channel_layout() are deprecated in
    // ffmpeg 5.1+ (AVChannelLayout API), but match the ffmpeg 4.x era of this code.
    this->swrContext        = swr_alloc_set_opts(nullptr,
                                                 av_get_default_channel_layout(out_channels),
                                                 out_sample_fmt,
                                                 out_sample_rate,
                                                 av_get_default_channel_layout(in_channels),
                                                 in_sample_fmt,
                                                 in_sample_rate, 0, nullptr);

    this->invalidated       = false;

    swr_init(this->swrContext);

    // One data pointer per output channel; the sample buffers themselves are
    // allocated lazily in allocContextSamples(). Note sizeof(*this->buffer)
    // (a uint8_t*), not sizeof(**this->buffer) (a single byte) as in the
    // original, which under-allocated the pointer array.
    this->buffer = (uint8_t **)calloc(out_channels, sizeof(*this->buffer));
}

AudioConvert::~AudioConvert()
{
    swr_free(&this->swrContext);

    av_freep(&this->buffer[0]);   // frees the sample data (one joint allocation)
    free(this->buffer);           // frees the per-channel pointer array itself
}

int AudioConvert::allocContextSamples(int nb_samples)
{
    // Allocate the output sample buffers once, sized for the first frame.
    // This assumes later frames never need more output samples, which holds
    // when the input and output sample rates match.
    if(!this->invalidated)
    {
        this->invalidated = true;

        return av_samples_alloc(this->buffer, nullptr, this->out_channels,
                                nb_samples, this->out_sample_fmt, 0);
    }

    return 0;
}

int AudioConvert::convertAudio(AVFrame *frame)
{
    // Convert one frame; swr_convert returns the number of samples actually
    // written per channel (or a negative error code).
    int len = swr_convert(this->swrContext, this->buffer, frame->nb_samples,
                          (const uint8_t **) frame->extended_data, frame->nb_samples);

    this->bufferLen = this->out_channels * len * av_get_bytes_per_sample(this->out_sample_fmt);

    return this->bufferLen;
}
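
A hypothetical use, converting decoded audio into the 16-bit stereo PCM the renderer expects (names assumed; error handling omitted):

AudioConvert cvt(AV_SAMPLE_FMT_FLTP, 48000, 2,    // e.g. planar float from an AAC decoder
                 AV_SAMPLE_FMT_S16,  44100, 2);   // packed S16 for QAudioOutput
cvt.allocContextSamples(audioFrame->nb_samples);  // lazily sized on the first frame
int bytes = cvt.convertAudio(audioFrame);         // interleaved S16 lands in cvt.buffer[0]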

Input

The input part consists of a single decoding thread, which decodes the audio/video data and stores it in the corresponding queues. It is built from two classes, InputThread and InputFormat: InputFormat wraps ffmpeg's demuxing and decoding, while InputThread instantiates an InputFormat, reads frames from it, and pushes them into the audio and video queues.
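
InputFormat's internals are not listed in the article; its core loop presumably follows the standard ffmpeg send/receive pattern, sketched here under that assumption (formatContext, videoIndex/audioIndex, the codec contexts, and the queues are all hypothetical names):

// Hypothetical core of the decode loop: demux, decode, route frames to a queue.
AVPacket *packet = av_packet_alloc();
while (av_read_frame(formatContext, packet) >= 0)
{
    AVCodecContext *codecCtx = nullptr;
    if (packet->stream_index == videoIndex)      codecCtx = videoCodecCtx;
    else if (packet->stream_index == audioIndex) codecCtx = audioCodecCtx;

    if (codecCtx && avcodec_send_packet(codecCtx, packet) == 0)
    {
        AVFrame *frame = av_frame_alloc();
        while (avcodec_receive_frame(codecCtx, frame) == 0)
        {
            if (codecCtx == videoCodecCtx)
                videoQueue->put(frame);       // the render threads take ownership
            else
                audioQueue->put(frame);
            frame = av_frame_alloc();         // the queue kept the old one
        }
        av_frame_free(&frame);                // the last, unfilled frame
    }
    av_packet_unref(packet);
}
av_packet_free(&packet);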


Video Rendering

As is well known, a video is really just a sequence of images played on screen against a time base. To render video with Qt, each captured AVFrame only needs to be converted into an image Qt can display. A QWidget can draw an image (QImage/QPixmap) with a QPainter inside its paintEvent, so rendering video in a QWidget comes down to taking an AVFrame from the queue, converting it into a QImage, and painting that QImage onto the widget. The relevant code follows:

Rendering the QImage

void VideoRender::paintEvent(QPaintEvent *event)
{
    QPainter painter(this);

    painter.setRenderHint(QPainter::Antialiasing, true);

    // Grey background, visible while no frame is available yet.
    painter.setBrush(QColor(0xcccccc));
    painter.drawRect(0, 0, this->width(), this->height());

    if(!frame.isNull())
    {
        painter.drawImage(QPoint(0,0), frame);
    }
}
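
How the frame reaches the widget is not shown in the article; a plausible receiving slot (hypothetical name setFrame, connected to VideoThread's onFrame signal with a queued connection so it runs on the GUI thread) would be:

// Hypothetical slot on the render widget.
void VideoRender::setFrame(const QImage &img)
{
    frame = img;   // QImage is implicitly shared, so this assignment is cheap
    update();      // schedule a repaint; paintEvent() draws the new frame
}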

Image Conversion and Synchronization

void VideoThread::run()
{
    AvFrameContext  *videoFrame     = nullptr;
    ImageConvert    *imageContext   = nullptr;
    int64_t         realTime        = 0;
    int64_t         lastPts         = 0;
    int64_t         delay           = 0;
    int64_t         lastDelay       = 0;

    while (!isInterruptionRequested())
    {
        videoFrame = dataContext->getFrame();

        if(videoFrame == nullptr)
            break;

        // Rebuild the converter whenever the source or target size changes.
        if(imageContext != nullptr && (imageContext->in_width != videoFrame->frame->width ||
                                       imageContext->in_height != videoFrame->frame->height ||
                                       imageContext->out_width != size.width() ||
                                       imageContext->out_height != size.height()))
        {
            delete imageContext;
            imageContext = nullptr;
        }

        if(imageContext == nullptr)
            imageContext = new ImageConvert(videoFrame->pixelFormat,
                                            videoFrame->frame->width,
                                            videoFrame->frame->height,
                                            AV_PIX_FMT_RGB32,
                                            size.width(),
                                            size.height());

        imageContext->convertImage(videoFrame->frame);

        if(audioRender != nullptr)
        {
            // Sync against the audio clock: delay is the pts gap between
            // consecutive video frames.
            realTime = audioRender->getCurAudioTime();

            if(lastPts == 0)
                lastPts = videoFrame->pts;

            lastDelay   = delay;
            delay       = videoFrame->pts - lastPts;

            lastPts = videoFrame->pts;

            // Discard absurd gaps (seek, discontinuity, > 1 s) and fall back
            // to the previous delay.
            if(delay < 0 || delay > 1000000)
            {
                delay = lastDelay != 0 ? lastDelay : 0;
            }

            if(delay != 0)
            {
                // Video is ahead of the audio clock: wait a little longer.
                if(videoFrame->pts > realTime)
                    QThread::usleep(delay * 1.5);
//                else
//                    QThread::usleep(delay / 1.5);
            }
        }

        // Wrap the converter's buffer in a QImage. copy() detaches the image
        // from that buffer, which the next convertImage() call would otherwise
        // overwrite while the GUI thread is still painting it.
        QImage img(imageContext->buffer, size.width(), size.height(), QImage::Format_RGB32);

        emit onFrame(img.copy());

        delete videoFrame;
    }

    delete imageContext;
}

Audio Rendering

The audio renderer uses QAudioOutput from the Qt Multimedia module. It opens an audio output; the audio conversion thread then takes data from the queue, converts its format, and writes it into the output's buffer. The current playback time is the pts of the data at write time minus the duration of the audio still sitting in the buffer. This time serves as the synchronization clock: the video render thread syncs audio and video against the audio renderer's current time.
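
The article does not show AudioRender's setup; a minimal sketch of what it presumably looks like, assuming Qt 5's push mode (audioOutput and outputBuffer match the members used in getCurAudioTime() below):

QAudioFormat fmt;
fmt.setSampleRate(44100);
fmt.setChannelCount(2);
fmt.setSampleSize(16);
fmt.setCodec("audio/pcm");
fmt.setByteOrder(QAudioFormat::LittleEndian);
fmt.setSampleType(QAudioFormat::SignedInt);

audioOutput  = new QAudioOutput(fmt, this);
outputBuffer = audioOutput->start();   // push mode: returns the QIODevice to write into

// ... later, in the conversion thread, after AudioConvert::convertAudio():
outputBuffer->write((const char *)converter->buffer[0], converter->bufferLen);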

The current time is computed as follows:

int64_t AudioRender::getCurAudioTime()
{
    // Amount of written audio data still queued in the output buffer.
    int64_t size = audioOutput->bufferSize() - outputBuffer->bytesAvailable();

    // Hard-coded for 44.1 kHz, 2 channels, 16-bit (2-byte) samples.
    int bytes_per_sec = 44100 * 2 * 2;

    // Clock = pts at write time minus the queued duration, in microseconds.
    int64_t pts = this->curPts - static_cast<double>(size) / bytes_per_sec * 1000000;

    return pts;
}
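
For example, at 44.1 kHz stereo 16-bit, bytes_per_sec is 44100 × 2 × 2 = 176,400. If 17,640 bytes are still queued, that is 0.1 s of audio not yet heard, so the clock reads curPts − 100000 µs.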