How live555 obtains the SPS and PPS
阿新 • Published: 2019-02-18
This question bothered me for a long time. Getting hold of these two items involves a lot of classes, so the first job is to work out which classes are involved, and the second is to sort out how they relate to one another. Let's work through it step by step:
The SPS and PPS are obtained after the server receives the DESCRIBE command. Before reading further, please first read the previous article: http://www.shouyanwang.org/thread-704-1-1.html
In RtspServerMediaSubSession::sdpLines() we find:
FramedSource* inputSource = createNewStreamSource(0, estBitrate);
RTPSink* dummyRTPSink = createNewRTPSink(&dummyGroupsock, rtpPayloadType, inputSource);
These two calls essentially reveal the class hierarchies involved when an H264 stream is handled.
A first look at the class hierarchies of FramedSource and RTPSink suggests that FramedSource is responsible for extracting H264 frames one by one from the file, while RTPSink is responsible for splitting and packing them into RTP/RTCP packets and sending them to the client.
This gives rise to two main inheritance chains:
FramedSource* inputSource = createNewStreamSource(0, estBitrate);
leads to the inheritance chain:
H264VideoStreamFramer -- MPEGVideoStreamFramer -- FramedFilter -- FramedSource -- Medium
So whenever inputSource appears later on, it is really an H264VideoStreamFramer.
RTPSink* dummyRTPSink = createNewRTPSink(&dummyGroupsock, rtpPayloadType, inputSource);
leads to the inheritance chain:
H264VideoRTPSink -- VideoRTPSink -- MultiFramedRTPSink -- RTPSink -- MediaSink
So whenever this RTPSink appears later on, it is really an H264VideoRTPSink.
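Where do these two objects actually come from? As a hedged sketch (reconstructed from memory of the stock H264VideoFileServerMediaSubsession sources, so details may differ between versions), the two factory methods look roughly like this: a ByteStreamFileSource wrapped in an H264VideoStreamFramer on the source side, and an H264VideoRTPSink on the sink side:

FramedSource* H264VideoFileServerMediaSubsession::createNewStreamSource(unsigned /*clientSessionId*/, unsigned& estBitrate) {
  estBitrate = 500; // kbps, estimate

  // Create the video source (a plain byte-stream reader over the .264 file):
  ByteStreamFileSource* fileSource = ByteStreamFileSource::createNew(envir(), fFileName);
  if (fileSource == NULL) return NULL;

  // Wrap it in a framer that splits the byte stream into NAL units:
  return H264VideoStreamFramer::createNew(envir(), fileSource);
}

RTPSink* H264VideoFileServerMediaSubsession::createNewRTPSink(Groupsock* rtpGroupsock,
                                                              unsigned char rtpPayloadTypeIfDynamic,
                                                              FramedSource* /*inputSource*/) {
  return H264VideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic);
}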
sdpLines() then hands these two objects to getAuxSDPLine(), and that is where the work of obtaining the SPS and PPS starts:

char const* H264VideoFileServerMediaSubsession::getAuxSDPLine(RTPSink* rtpSink, FramedSource* inputSource) {
  if (fAuxSDPLine != NULL) return fAuxSDPLine; // it's already been set up (for a previous client)
  printf("H264VideoFileServerMediaSubsession getAuxSDPLine\r\n");
  if (fDummyRTPSink == NULL) { // we're not already setting it up for another, concurrent stream
    // Note: For H264 video files, the 'config' information ("profile-level-id" and "sprop-parameter-sets") isn't known
    // until we start reading the file. This means that "rtpSink"s "auxSDPLine()" will be NULL initially,
    // and we need to start reading data from our file until this changes.
    fDummyRTPSink = rtpSink; // does the "dummy" prefix just mean it is only a reference?
    // Start reading the file: the whole point here is to obtain the SPS and PPS
    fDummyRTPSink->startPlaying(*inputSource, afterPlayingDummy, this); // declared in MediaSink; for H264 this inputSource is guaranteed to be non-NULL
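The excerpt above stops at the startPlaying() call. For completeness, a hedged reconstruction of the rest of this function as it appears in the stock live555 sources (not the author's instrumented copy): it keeps polling until the sink's auxSDPLine() string, which carries "profile-level-id" and "sprop-parameter-sets" (i.e. the SPS/PPS), becomes available, and only then unblocks:

char const* H264VideoFileServerMediaSubsession::getAuxSDPLine(RTPSink* rtpSink, FramedSource* inputSource) {
  if (fAuxSDPLine != NULL) return fAuxSDPLine;

  if (fDummyRTPSink == NULL) {
    fDummyRTPSink = rtpSink;

    // Start reading the file; this drives the whole call chain analysed below:
    fDummyRTPSink->startPlaying(*inputSource, afterPlayingDummy, this);

    // Poll (via a delayed task) until rtpSink->auxSDPLine() is non-NULL:
    checkForAuxSDPLine(this);
  }

  envir().taskScheduler().doEventLoop(&fDoneFlag); // block here until setDoneFlag() is called

  return fAuxSDPLine;
}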
fDummyRTPSink->startPlaying() actually resolves to MediaSink::startPlaying():
Boolean MediaSink::startPlaying(MediaSource& source,
                                afterPlayingFunc* afterFunc,
                                void* afterClientData) {
  printf("MediaSink startPlaying....\r\n");
  // Make sure we're not already being played:
  if (fSource != NULL) { // note: this is fSource, not source
    printf("MediaSink is already being played\r\n");
    envir().setResultMsg("This sink is already being played");
    return False;
  }

  // Make sure our source is compatible:
  if (!sourceIsCompatibleWithUs(source)) {
    envir().setResultMsg("MediaSink::startPlaying(): source is not compatible!");
    return False;
  }
  fSource = (FramedSource*)&source;

  fAfterFunc = afterFunc;             // function pointer defined in MediaSink; here it points at afterPlayingDummy in H264VideoFileServerMediaSubsession
  fAfterClientData = afterClientData; // actually points at the H264VideoFileServerMediaSubsession object
  return continuePlaying();           // calls the H264 sink's continuePlaying()
}
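Before following continuePlaying(), a quick aside on the afterPlayingDummy callback registered above. In the stock sources it looks roughly like this (again a reconstruction from memory, not the author's copy); it simply cancels the pending polling task and sets the done flag so that the doEventLoop() call in getAuxSDPLine() can return:

static void afterPlayingDummy(void* clientData) {
  H264VideoFileServerMediaSubsession* subsess = (H264VideoFileServerMediaSubsession*)clientData;
  subsess->afterPlayingDummy1();
}

void H264VideoFileServerMediaSubsession::afterPlayingDummy1() {
  // Unschedule any pending 'checking' task:
  envir().taskScheduler().unscheduleDelayedTask(nextTask());
  // Signal the event loop that we're done:
  setDoneFlag();
}

Back to the main path.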
continuePlaying() here is H264VideoRTPSink::continuePlaying():
Boolean H264VideoRTPSink::continuePlaying() {
  // First, check whether we have a 'fragmenter' class set up yet.
  // If not, create it now:
  if (fOurFragmenter == NULL) {
    printf("H264VideoRTPSink init H264FUAFragmenter\r\n");
    fOurFragmenter = new H264FUAFragmenter(envir(), fSource, OutPacketBuffer::maxSize, // 100K
                                           ourMaxPacketSize() - 12/*RTP hdr size*/);
    fSource = fOurFragmenter; // fOurFragmenter is what implements the RTP fragmentation (splitting NAL units into packets)

The newly created fragmenter has its own inheritance chain:
H264FUAFragmenter -- FramedFilter -- FramedSource -- MediaSource
Pay close attention to those two lines: fSource is first passed into the construction of fOurFragmenter, and is then re-pointed at the fOurFragmenter that was just created.
During fOurFragmenter's construction, its member FramedSource* fInputSource is set to the source that was passed in, i.e. the original fSource, the H264VideoStreamFramer.
I stress this here because it matters later.
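A hedged sketch of the fragmenter's constructor (reconstructed; the member names are as I remember them from H264VideoRTPSink.cpp and may differ slightly) shows where fInputSource gets set: it happens in the FramedFilter base-class constructor:

H264FUAFragmenter::H264FUAFragmenter(UsageEnvironment& env, FramedSource* inputSource,
                                     unsigned inputBufferMax, unsigned maxOutputPacketSize)
  : FramedFilter(env, inputSource),   // FramedFilter stores inputSource in fInputSource
    fInputBufferSize(inputBufferMax + 1), fMaxOutputPacketSize(maxOutputPacketSize),
    fNumValidDataBytes(1), fCurDataOffset(1), fSaveNumTruncatedBytes(0),
    fLastFragmentCompletedNALUnit(True) {
  fInputBuffer = new unsigned char[fInputBufferSize];
}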
continuePlaying() then falls through to MultiFramedRTPSink::continuePlaying(), whose core is buildAndSendPacket(). buildAndSendPacket() covers the RTP packetisation described in RFC 3984, and the key call it makes is:
packFrame();
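For orientation, here is a hedged sketch (from memory of the stock MultiFramedRTPSink sources, so details may differ) of what buildAndSendPacket() does before it reaches that call; it lays down the RTP header fields and then hands over to packFrame():

void MultiFramedRTPSink::buildAndSendPacket(Boolean isFirstPacket) {
  fIsFirstPacket = isFirstPacket;

  // Set up the RTP header:
  unsigned rtpHdr = 0x80000000;        // RTP version 2
  rtpHdr |= (fRTPPayloadType << 16);   // payload type
  rtpHdr |= fSeqNo;                    // sequence number
  fOutBuf->enqueueWord(rtpHdr);

  // Remember where the timestamp goes; it can only be filled in
  // once we start packing payload frames:
  fTimestampPosition = fOutBuf->curPacketSize();
  fOutBuf->skipBytes(4);

  fOutBuf->enqueueWord(SSRC());

  // Leave room for a payload-format-specific header, then start packing frames:
  fSpecialHeaderPosition = fOutBuf->curPacketSize();
  fSpecialHeaderSize = specialHeaderSize();
  fOutBuf->skipBytes(fSpecialHeaderSize);

  fNumFramesUsedSoFar = 0;
  packFrame();
}

packFrame() itself then looks like this: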
void MultiFramedRTPSink::packFrame() {
  // Get the next frame.

  // First, see if we have an overflow frame that was too big for the last pkt
  if (fOutBuf->haveOverflowData()) {
    printf("MultiFramedRTPSink packFrame Over flow data---\r\n");
    // Use this frame before reading a new one from the source
    unsigned frameSize = fOutBuf->overflowDataSize(); //? where does fOutBuf get initialised?
    struct timeval presentationTime = fOutBuf->overflowPresentationTime();
    unsigned durationInMicroseconds = fOutBuf->overflowDurationInMicroseconds();
    fOutBuf->useOverflowData();

    afterGettingFrame1(frameSize, 0, presentationTime, durationInMicroseconds);
  } else {
    printf("MultiFrameRTPSink packFrame read a new frame from the source--\r\n");
    // Normal case: we need to read a new frame from the source
    if (fSource == NULL) return;

    fCurFrameSpecificHeaderPosition = fOutBuf->curPacketSize();
    fCurFrameSpecificHeaderSize = frameSpecificHeaderSize();
    fOutBuf->skipBytes(fCurFrameSpecificHeaderSize);
    fTotalFrameSpecificHeaderSizes += fCurFrameSpecificHeaderSize;

    // printf("MultiFrameRTPSink packFrame curptr:%d,totalBytesAvailable:%d--\r\n", fOutBuf->curPtr(), fOutBuf->totalBytesAvailable());
    fSource->getNextFrame(fOutBuf->curPtr(), fOutBuf->totalBytesAvailable(),
                          afterGettingFrame, this, ourHandleClosure, this); // it seems everything funnels into this one call
  }
}
At this point fSource->getNextFrame() runs. fSource is now the H264FUAFragmenter, but since H264FUAFragmenter does not override getNextFrame(), what actually executes is FramedSource::getNextFrame():
void FramedSource::getNextFrame(unsigned char* to, unsigned maxSize,
                                afterGettingFunc* afterGettingFunc,
                                void* afterGettingClientData,
                                onCloseFunc* onCloseFunc,
                                void* onCloseClientData) {
  printf("FrameSource getNextFrame ...\r\n");
  // Make sure we're not already being read:
  if (fIsCurrentlyAwaitingData) {
    envir() << "FramedSource[" << this << "]::getNextFrame(): attempting to read more than once at the same time!\n";
    envir().internalError();
  }

  fTo = to;
  fMaxSize = maxSize;
  fNumTruncatedBytes = 0; // by default; could be changed by doGetNextFrame()
  fDurationInMicroseconds = 0; // by default; could be changed by doGetNextFrame()
  fAfterGettingFunc = afterGettingFunc;
  fAfterGettingClientData = afterGettingClientData;
  fOnCloseFunc = onCloseFunc;
  fOnCloseClientData = onCloseClientData;
  fIsCurrentlyAwaitingData = True;

  // what actually gets called here is H264FUAFragmenter::doGetNextFrame()
  doGetNextFrame();
}
H264FUAFragmenter's doGetNextFrame():
void H264FUAFragmenter::doGetNextFrame() {
  if (fNumValidDataBytes == 1) {
    // in the normal case this is the branch that runs
    printf("H264FUAFragmenter doGetNextFrame validDataBytes..\r\n");
    // We have no NAL unit data currently in the buffer. Read a new one:
    // fInputSource here is actually the H264VideoStreamFramer
    fInputSource->getNextFrame(&fInputBuffer[1], fInputBufferSize - 1,
                               afterGettingFrame, this,
                               FramedSource::handleClosure, this);
  } else {
    // split the NAL unit data up (or pack it) and send it
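Before following the if branch further, a quick illustration of what that else branch's "splitting" amounts to. This is not the live555 code itself, just a self-contained sketch (the helper name makeFuAHeaders is made up for illustration) of how the two RFC 3984 FU-A header bytes are derived from the original NAL unit header when a NAL unit is too big for one RTP packet:

#include <cstdint>

// Build the FU indicator and FU header bytes for one fragment of a NAL unit.
// nalHeader: the first byte of the original NAL unit (F | NRI | type).
// isStart / isEnd: whether this fragment is the first / last one of the NAL unit.
static void makeFuAHeaders(uint8_t nalHeader, bool isStart, bool isEnd,
                           uint8_t& fuIndicator, uint8_t& fuHeader) {
  fuIndicator = (nalHeader & 0xE0) | 28; // keep F and NRI bits, set type = 28 (FU-A)
  fuHeader    = (nalHeader & 0x1F);      // original NAL unit type
  if (isStart) fuHeader |= 0x80;         // S bit: first fragment
  if (isEnd)   fuHeader |= 0x40;         // E bit: last fragment
}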
Back to the if branch above: the core is fInputSource->getNextFrame(). fInputSource is of type H264VideoStreamFramer, but H264VideoStreamFramer does not override this method either, so once again what executes is FramedSource::getNextFrame():
void FramedSource::getNextFrame(unsigned char* to, unsigned maxSize,
                                afterGettingFunc* afterGettingFunc,
                                void* afterGettingClientData,
                                onCloseFunc* onCloseFunc,
                                void* onCloseClientData) {
  printf("FrameSource getNextFrame ...\r\n");
  // Make sure we're not already being read:
  if (fIsCurrentlyAwaitingData) {
    envir() << "FramedSource[" << this << "]::getNextFrame(): attempting to read more than once at the same time!\n";
    envir().internalError();
  }

  fTo = to;
  fMaxSize = maxSize;
  fNumTruncatedBytes = 0; // by default; could be changed by doGetNextFrame()
  fDurationInMicroseconds = 0; // by default; could be changed by doGetNextFrame()
  fAfterGettingFunc = afterGettingFunc;
  fAfterGettingClientData = afterGettingClientData;
  fOnCloseFunc = onCloseFunc;
  fOnCloseClientData = onCloseClientData;
  fIsCurrentlyAwaitingData = True;

  // NOT H264FUAFragmenter::doGetNextFrame() this time (my earlier note was wrong for this call,
  // because the FramedSource here is not the H264FUAFragmenter object)
  doGetNextFrame(); // so whose doGetNextFrame() actually runs here?
}
From H264VideoStreamFramer's inheritance chain we can deduce that doGetNextFrame() here is actually the parent class MPEGVideoStreamFramer's doGetNextFrame():
void MPEGVideoStreamFramer::doGetNextFrame() {
  printf("MPEGVideoStreamFrame doGetNextFrame ....\r\n");
  fParser->registerReadInterest(fTo, fMaxSize);
  continueReadProcessing();
}
I'll pick up the rest of the analysis tomorrow; even this much took nearly an hour and a half... tired now, taking a break.