Android MediaCodec: Google's official demo
Official source:
https://github.com/PhilLab/Android-MediaCodec-Examples/blob/master/ExtractMpegFramesTest.java
Overview articles:
https://juejin.im/entry/586289ea1b69e6006cea8f41
https://bigflake.com/mediacodec/
Android MediaCodec: encoding and decoding explained, with demos
2016-12-27
Original link: www.jianshu.com
This article is about the MediaCodec family of classes, which encode and decode audio and video data. It includes a collection of sample source code and answers to frequently asked questions.
As of API 23 the official documentation is quite thorough. The information here should help you understand codecs in general; for compatibility, most of the code here runs on API 18 and above. If you are targeting Lollipop and later you have more options available, which are not covered here.
Overview
MediaCodec first became available in Android 4.1 (API 16). It provides direct access to the device's media codecs through a fairly raw interface. The MediaCodec class exists in both the Java and C++ layers, but only the former is public API.
In Android 4.3 (API 18), MediaCodec was extended to accept input through a Surface (via createInputSurface()). Android 4.3 also introduced MediaMuxer, which converts the output of an AVC codec (a raw H.264 elementary stream) into .MP4 format, with or without an accompanying audio stream.
Android 5.0 (API 21) introduced an "asynchronous mode", which lets the application supply a callback that runs as buffers become available. None of the code linked from this article uses it, because compatibility is kept at API 18+.
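For reference, a minimal sketch of that asynchronous mode (API 21+); the surrounding setup (creating the codec and the MediaFormat) is omitted:

codec.setCallback(new MediaCodec.Callback() {
    @Override
    public void onInputBufferAvailable(MediaCodec mc, int index) {
        ByteBuffer inBuf = mc.getInputBuffer(index);
        // Fill inBuf with one access unit, then queue it:
        // mc.queueInputBuffer(index, 0, size, presentationTimeUs, 0);
    }
    @Override
    public void onOutputBufferAvailable(MediaCodec mc, int index, MediaCodec.BufferInfo info) {
        ByteBuffer outBuf = mc.getOutputBuffer(index);
        // Consume info.size bytes from outBuf, then:
        mc.releaseOutputBuffer(index, false);
    }
    @Override
    public void onOutputFormatChanged(MediaCodec mc, MediaFormat format) { }
    @Override
    public void onError(MediaCodec mc, MediaCodec.CodecException e) { }
});
codec.configure(format, null, null, 0);   // setCallback() must come before configure()
codec.start();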
Basic usage
All of the synchronous-mode MediaCodec APIs follow the same pattern:
- Create and configure a MediaCodec object.
- Loop until done:
  - If an input buffer is ready, read a chunk of input and copy it into the input buffer.
  - If an output buffer is ready, copy the data out of the output buffer.
- Release the MediaCodec object.
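A minimal sketch of that pattern for a decoder, using the pre-API-21 buffer-array calls that the samples in this article rely on. Here readAccessUnit() is a hypothetical helper that copies one access unit into the buffer and returns its size (or -1 at end of stream), and error handling is omitted:

static void syncLoop(MediaCodec codec) {
    final int TIMEOUT_USEC = 10000;
    ByteBuffer[] inputBuffers = codec.getInputBuffers();
    ByteBuffer[] outputBuffers = codec.getOutputBuffers();
    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    boolean inputDone = false;
    boolean outputDone = false;
    long presentationTimeUs = 0;        // a real source supplies per-frame timestamps
    while (!outputDone) {
        if (!inputDone) {
            int inIndex = codec.dequeueInputBuffer(TIMEOUT_USEC);
            if (inIndex >= 0) {
                ByteBuffer inBuf = inputBuffers[inIndex];
                inBuf.clear();                      // reset position/limit before copying in
                int size = readAccessUnit(inBuf);   // hypothetical data source
                if (size < 0) {
                    codec.queueInputBuffer(inIndex, 0, 0, 0L,
                            MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                    inputDone = true;
                } else {
                    codec.queueInputBuffer(inIndex, 0, size, presentationTimeUs, 0);
                }
            }
        }
        int outIndex = codec.dequeueOutputBuffer(info, TIMEOUT_USEC);
        if (outIndex >= 0) {
            ByteBuffer outBuf = outputBuffers[outIndex];
            outBuf.position(info.offset);           // see Q8 below
            outBuf.limit(info.offset + info.size);
            // ... consume info.size bytes from outBuf ...
            codec.releaseOutputBuffer(outIndex, false);
            if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                outputDone = true;
            }
        } else if (outIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
            outputBuffers = codec.getOutputBuffers();   // buffer set changed; re-fetch
        }
    }
}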
A single instance of MediaCodec handles one specific type of data (e.g. MP3 audio or H.264 video) and either encodes or decodes it. It operates on raw data, so any file headers, such as ID3 tags (the bytes, usually at the start or end of an MP3 file, that carry the artist, title, album, year, genre, and similar metadata), must be stripped off first. It doesn't talk to any higher-level system components: it won't play audio through the speaker or fetch a video stream over the network. It is just a middle layer that takes data in from buffers and passes data back out.
Some codecs are particular about their buffers: they may need memory with specific alignment, or impose certain minimum and maximum sizes. To accommodate this wide range of possibilities, buffer allocation is done by the codec itself rather than by the application. You don't hand MediaCodec a buffer that already holds your data; you ask it for one of its buffers and copy your data into it.
This seems contrary to the "zero copy" principle, but in most cases the extra copy is avoidable: when the codec doesn't need to copy or adjust the data to satisfy its requirements, you can usually use its buffer directly, for example by reading data from disk or the network straight into it.
Input to MediaCodec must be done in "access units". When encoding H.264 video that means one frame; when decoding it means one NAL unit. However, it behaves more like a stream: you can't submit a single chunk and expect it to come out shortly after. In practice, the codec may queue up several buffers before producing any output.
It is strongly recommended that you start from the sample code below rather than trying to work directly from the official documentation.
Samples
EncodeAndMuxTest.java (requires 4.3, API 18)
Generates a video using OpenGL ES, encodes it as H.264 with MediaCodec, and muxes the stream into a .MP4 file with MediaMuxer. It's written as a CTS test, but can be adapted to run in other environments.
CameraToMpegTest.java (requires 4.3, API 18)
Records video from the camera preview and encodes it into an MP4 file, again using MediaCodec for the H.264 encoding and MediaMuxer to produce the .MP4. As an added twist, it modifies the video with a GLES fragment shader while it is being recorded. Also a CTS test, and adaptable to other environments.
Android Breakout game recorder patch (requires 4.3, API 18)
A patch for Android Breakout v1.0.2 that adds game recording. While the game runs at 60fps at the full screen resolution, it records video at 30fps 720p with the AVC codec, saving the file into the app's private space, e.g. /data/data/com.faddensoft.breakout/files/video.mp4. This is essentially the same as EncodeAndMuxTest.java, but it runs in a real application rather than as a CTS test. One key difference is in the EGL setup, which here allows the display and video contexts to share textures.
EncodeDecodeTest.java (requires 4.3, API 18)
A CTS test. There are three tests that do the same thing, but in different ways. Each one generates video frames, encodes them with AVC, decodes the stream, and checks that it matches the original.
The generation, encoding, decoding, and checking run roughly in lockstep: frames are generated and fed to the encoder, the encoder's output is fed to the decoder, and the decoded frames are then verified. The three variants are:
- Buffer-to-buffer. The buffers are software-generated YUV frames. This is the slowest approach, but it allows the application to examine and modify the YUV data.
- Buffer-to-surface. Encoding is the same, but decoding goes to a surface, and verification is done with OpenGL ES glReadPixels().
- Surface-to-surface. Frames are generated with OpenGL ES and decoded onto a Surface. This is the fastest approach, but it involves conversions between YUV and RGB.
DecodeEditEncodeTest.java (requires 4.3, API 18)
A CTS test. It generates a series of video frames, encodes them with AVC, and keeps the encoded stream in memory. It then decodes the stream with MediaCodec, edits each frame with an OpenGL ES fragment shader (swapping the green and blue color channels), and re-encodes it; finally it decodes the edited stream and verifies the output.
ExtractMpegFramesTest.java (requires 4.1, API 16)
ExtractMpegFramesTest_egl14.java (requires 4.2, API 17)
Extracts the first 10 frames from a .mp4 video file and saves them as PNG files on the SD card. Uses MediaExtractor to extract the CSD data and feed individual access units into a MediaCodec decoder. Frames are decoded to a Surface backed by a SurfaceTexture, rendered off-screen, read back with glReadPixels(), and finally saved as PNG files with Bitmap#compress(). (The full source is reproduced at the end of this article.)
FAQ
Q1: How do I play the "video/avc" video stream that MediaCodec creates?
A1: The stream is raw H.264 elementary stream data. Linux's Totem Movie Player can play it, but most other players can't. You can use MediaMuxer to convert it into an MP4 file; see the EncodeAndMuxTest example above.
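A minimal sketch of that MediaMuxer conversion; outputPath is a placeholder, and the track format should be the one the encoder reports via INFO_OUTPUT_FORMAT_CHANGED:

MediaMuxer muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
int videoTrack = muxer.addTrack(encoderOutputFormat);
muxer.start();
// For each encoded buffer dequeued from the codec:
//     muxer.writeSampleData(videoTrack, encodedData, bufferInfo);
muxer.stop();
muxer.release();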
Q2: Calling MediaCodec's configure() fails with an IllegalStateException when I create an encoder. Why?
A2: This usually means you didn't specify all of the keys the encoder requires; see this stackoverflow item for an example.
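A sketch of an AVC encoder configuration with the keys that are typically mandatory; the resolution, bit rate, frame rate, and keyframe interval here are arbitrary:

MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);  // or a YUV format, see Q5
format.setInteger(MediaFormat.KEY_BIT_RATE, 4000000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);         // one keyframe every 5 seconds
MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);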
Q3: My video decoder is configured, but it won't accept data. What's wrong?
A3: A common mistake is neglecting to set the Codec-Specific Data (CSD), mentioned briefly in the docs through two keys, "csd-0" and "csd-1". This is essentially a serialized set of metadata parameters (for H.264, the sequence and picture parameter sets); all you need to know is that the MediaCodec encoder generates it and the MediaCodec decoder requires it.
If you feed the encoder output straight to the decoder, you will find that the first packet has the BUFFER_FLAG_CODEC_CONFIG flag set. Make sure that packet gets through to the decoder, since the decoder won't start accepting data until then. Alternatively, you can set the CSD data on the MediaFormat yourself and pass it to the decoder through configure(); see the EncodeDecodeTest sample for an example.
Failing that, you can use MediaExtractor, which takes care of all of this for you.
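A sketch of setting the CSD by hand; sps and pps are placeholder byte arrays you would have captured from the encoder's BUFFER_FLAG_CODEC_CONFIG packet:

MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
format.setByteBuffer("csd-0", ByteBuffer.wrap(sps));   // Sequence Parameter Set
format.setByteBuffer("csd-1", ByteBuffer.wrap(pps));   // Picture Parameter Set
decoder.configure(format, surface, null, 0);
decoder.start();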
Q4: Can I stream data straight into the decoder?
A4: Not necessarily. The decoder wants the stream in "access units", which are not always the same as a plain byte stream. For a video decoder, this means preserving the "packet boundaries" created by the encoder (e.g. NAL units for H.264). See how the DecodeEditEncodeTest sample does this; you generally can't read arbitrary chunks of a file and pass them to the decoder.
Q5: When I encode YUV data from the camera preview, why do the colors look wrong?
A5: The color formats the camera outputs and the color formats MediaCodec accepts as encoder input don't line up. The camera supports YV12 (planar YUV 4:2:0) and NV21 (semi-planar YUV 4:2:0), while MediaCodec supports one or more of:
- #19 COLOR_FormatYUV420Planar (I420)
- #20 COLOR_FormatYUV420PackedPlanar (also I420)
- #21 COLOR_FormatYUV420SemiPlanar (NV12)
- #39 COLOR_FormatYUV420PackedSemiPlanar (also NV12)
- #0x7f000100 COLOR_TI_FormatYUV420PackedSemiPlanar (also also NV12)
I420 has the same general data layout as YV12, but with the Cr and Cb planes swapped; the same is true of NV12 versus NV21. So if you hand the camera's YV12 data to an encoder expecting a different layout, you will see odd color tints, like in these images. As of Android 4.4, there is still no common input format: the Nexus 7 (2012) and Nexus 10 use COLOR_FormatYUV420Planar, the Nexus 4, Nexus 5, and Nexus 7 (2013) use COLOR_FormatYUV420SemiPlanar, and the Galaxy Nexus uses COLOR_TI_FormatYUV420PackedSemiPlanar.
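A sketch of checking which formats a given device's AVC encoder actually accepts, using the API 16 MediaCodecList enumeration:

for (int i = 0; i < MediaCodecList.getCodecCount(); i++) {
    MediaCodecInfo codecInfo = MediaCodecList.getCodecInfoAt(i);
    if (!codecInfo.isEncoder()) {
        continue;
    }
    for (String type : codecInfo.getSupportedTypes()) {
        if (type.equals("video/avc")) {
            MediaCodecInfo.CodecCapabilities caps = codecInfo.getCapabilitiesForType(type);
            for (int colorFormat : caps.colorFormats) {
                Log.d(TAG, codecInfo.getName() + " supports color format " + colorFormat);
            }
        }
    }
}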
A more portable, and more efficient, approach is the API 18 Surface input API, demonstrated in the CameraToMpegTest sample. The drawback is that you have to operate on RGB rather than YUV data, which is a problem for image processing. If you can implement the image manipulation in a fragment shader, though, you can take advantage of the GPU for those conversions and calculations.
Q6: What is the EGL_RECORDABLE_ANDROID flag for?
A6: It tells EGL that the surface it creates must be compatible with the video codec. Without this flag, EGL might use formats that MediaCodec can't understand.
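A sketch of requesting it in an EGLConfig attribute list; the constant (0x3142) is missing from older EGL headers, so it is typically defined by hand:

private static final int EGL_RECORDABLE_ANDROID = 0x3142;

int[] attribList = {
        EGL14.EGL_RED_SIZE, 8,
        EGL14.EGL_GREEN_SIZE, 8,
        EGL14.EGL_BLUE_SIZE, 8,
        EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
        EGL_RECORDABLE_ANDROID, 1,          // surface must be codec-compatible
        EGL14.EGL_NONE
};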
Q7: Do I have to set a presentation time stamp (pts) when encoding?
A7: Yes. Some devices will drop frames or encode them at low quality if the presentation times aren't set to reasonable values.
Note that MediaCodec works with times in microseconds, while most Java code uses milliseconds or nanoseconds.
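For example, converting a System.nanoTime() reading to the microseconds that queueInputBuffer() expects:

long ptsUs = System.nanoTime() / 1000;   // nanoseconds -> microseconds
encoder.queueInputBuffer(inputIndex, 0, size, ptsUs, 0);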
Q8: Why does my output sometimes look broken (e.g. all zeroes, or too short)?
A8: The most common mistake is failing to adjust the ByteBuffer position and limit values; MediaCodec doesn't do this for you automatically. You need to add something like:
int bufIndex = codec.dequeueOutputBuffer(info, TIMEOUT);
ByteBuffer outputData = outputBuffers[bufIndex];
if (info.size != 0) {
    outputData.position(info.offset);
    outputData.limit(info.offset + info.size);
}
On the input side, you want to call clear() on the buffer before copying data into it.
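A matching sketch for the input side; frameData and ptsUs are placeholders, and a real app would also check that inIndex >= 0:

int inIndex = codec.dequeueInputBuffer(TIMEOUT);
ByteBuffer inputData = inputBuffers[inIndex];
inputData.clear();                       // reset position to 0 and limit to capacity
inputData.put(frameData);
codec.queueInputBuffer(inIndex, 0, frameData.length, ptsUs, 0);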
Q9: Sometimes storeMetaDataInBuffers prints errors to the log. What's going on?
A9: Yes, that happens. On a Nexus 5, for example, they look like this:
E OMXNodeInstance: OMX_SetParameter() failed for StoreMetaDataInBuffers: 0x8000101a
E ACodec : [OMX.qcom.video.encoder.avc] storeMetaDataInBuffers (output) failed w/ err -2147483648
These can be ignored; nothing bad comes of it.
Appendix: Android-MediaCodec-Examples/ExtractMpegFramesTest.java (full source)
/*
 * Copyright 2013 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package android.media.cts;

import android.graphics.Bitmap;
import android.graphics.SurfaceTexture;
import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaFormat;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.opengl.Matrix;
import android.os.Environment;
import android.test.AndroidTestCase;
import android.util.Log;
import android.view.Surface;

import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.egl.EGLContext;
import javax.microedition.khronos.egl.EGLDisplay;
import javax.microedition.khronos.egl.EGLSurface;

//20131122: minor tweaks to saveFrame() I/O
//20131205: add alpha to EGLConfig (huge glReadPixels speedup); pre-allocate pixel buffers;
//          log time to run saveFrame()
//20131210: switch from EGL14 to EGL10 for API 16 compatibility
//20140123: correct error checks on glGet*Location() and program creation (they don't set error)
//20140212: eliminate byte swap

/**
 * Extract frames from an MP4 using MediaExtractor, MediaCodec, and GLES. Put a .mp4 file
 * in "/sdcard/source.mp4" and look for output files named "/sdcard/frame-XX.png".
 * <p>
 * This uses various features first available in Android "Jellybean" 4.1 (API 16).
 * <p>
 * (This was derived from bits and pieces of CTS tests, and is packaged as such, but is not
 * currently part of CTS.)
 */
public class ExtractMpegFramesTest extends AndroidTestCase {
    private static final String TAG = "ExtractMpegFramesTest";
    private static final boolean VERBOSE = false;           // lots of logging

    // where to find files (note: requires WRITE_EXTERNAL_STORAGE permission)
    private static final File FILES_DIR = Environment.getExternalStorageDirectory();
    private static final String INPUT_FILE = "source.mp4";
    private static final int MAX_FRAMES = 10;       // stop extracting after this many

    /** test entry point */
    public void testExtractMpegFrames() throws Throwable {
        ExtractMpegFramesWrapper.runTest(this);
    }

    /**
     * Wraps extractMpegFrames(). This is necessary because SurfaceTexture will try to use
     * the looper in the current thread if one exists, and the CTS tests create one on the
     * test thread.
     *
     * The wrapper propagates exceptions thrown by the worker thread back to the caller.
     */
    private static class ExtractMpegFramesWrapper implements Runnable {
        private Throwable mThrowable;
        private ExtractMpegFramesTest mTest;

        private ExtractMpegFramesWrapper(ExtractMpegFramesTest test) {
            mTest = test;
        }

        @Override
        public void run() {
            try {
                mTest.extractMpegFrames();
            } catch (Throwable th) {
                mThrowable = th;
            }
        }

        /** Entry point. */
        public static void runTest(ExtractMpegFramesTest obj) throws Throwable {
            ExtractMpegFramesWrapper wrapper = new ExtractMpegFramesWrapper(obj);
            Thread th = new Thread(wrapper, "codec test");
            th.start();
            th.join();
            if (wrapper.mThrowable != null) {
                throw wrapper.mThrowable;
            }
        }
    }
    /**
     * Tests extraction from an MP4 to a series of PNG files.
     * <p>
     * We scale the video to 640x480 for the PNG just to demonstrate that we can scale the
     * video with the GPU. If the input video has a different aspect ratio, we could preserve
     * it by adjusting the GL viewport to get letterboxing or pillarboxing, but generally if
     * you're extracting frames you don't want black bars.
     */
    private void extractMpegFrames() throws IOException {
        MediaCodec decoder = null;
        CodecOutputSurface outputSurface = null;
        MediaExtractor extractor = null;
        int saveWidth = 640;
        int saveHeight = 480;

        try {
            File inputFile = new File(FILES_DIR, INPUT_FILE);   // must be an absolute path
            // The MediaExtractor error messages aren't very useful. Check to see if the input
            // file exists so we can throw a better one if it's not there.
            if (!inputFile.canRead()) {
                throw new FileNotFoundException("Unable to read " + inputFile);
            }

            extractor = new MediaExtractor();
            extractor.setDataSource(inputFile.toString());
            int trackIndex = selectTrack(extractor);
            if (trackIndex < 0) {
                throw new RuntimeException("No video track found in " + inputFile);
            }
            extractor.selectTrack(trackIndex);

            MediaFormat format = extractor.getTrackFormat(trackIndex);
            if (VERBOSE) {
                Log.d(TAG, "Video size is " + format.getInteger(MediaFormat.KEY_WIDTH) + "x" +
                        format.getInteger(MediaFormat.KEY_HEIGHT));
            }

            // Could use width/height from the MediaFormat to get full-size frames.
            outputSurface = new CodecOutputSurface(saveWidth, saveHeight);

            // Create a MediaCodec decoder, and configure it with the MediaFormat from the
            // extractor. It's very important to use the format from the extractor because
            // it contains a copy of the CSD-0/CSD-1 codec-specific data chunks.
            String mime = format.getString(MediaFormat.KEY_MIME);
            decoder = MediaCodec.createDecoderByType(mime);
            decoder.configure(format, outputSurface.getSurface(), null, 0);
            decoder.start();

            doExtract(extractor, trackIndex, decoder, outputSurface);
        } finally {
            // release everything we grabbed
            if (outputSurface != null) {
                outputSurface.release();
                outputSurface = null;
            }
            if (decoder != null) {
                decoder.stop();
                decoder.release();
                decoder = null;
            }
            if (extractor != null) {
                extractor.release();
                extractor = null;
            }
        }
    }
    /**
     * Selects the video track, if any.
     *
     * @return the track index, or -1 if no video track is found.
     */
    private int selectTrack(MediaExtractor extractor) {
        // Select the first video track we find, ignore the rest.
        int numTracks = extractor.getTrackCount();
        for (int i = 0; i < numTracks; i++) {
            MediaFormat format = extractor.getTrackFormat(i);
            String mime = format.getString(MediaFormat.KEY_MIME);
            if (mime.startsWith("video/")) {
                if (VERBOSE) {
                    Log.d(TAG, "Extractor selected track " + i + " (" + mime + "): " + format);
                }
                return i;
            }
        }
        return -1;
    }

    /**
     * Work loop.
     */
    static void doExtract(MediaExtractor extractor, int trackIndex, MediaCodec decoder,
            CodecOutputSurface outputSurface) throws IOException {
        final int TIMEOUT_USEC = 10000;
        ByteBuffer[] decoderInputBuffers = decoder.getInputBuffers();
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int inputChunk = 0;
        int decodeCount = 0;
        long frameSaveTime = 0;

        boolean outputDone = false;
        boolean inputDone = false;
        while (!outputDone) {
            if (VERBOSE) Log.d(TAG, "loop");

            // Feed more data to the decoder.
            if (!inputDone) {
                int inputBufIndex = decoder.dequeueInputBuffer(TIMEOUT_USEC);
                if (inputBufIndex >= 0) {
                    ByteBuffer inputBuf = decoderInputBuffers[inputBufIndex];
                    // Read the sample data into the ByteBuffer. This neither respects nor
                    // updates inputBuf's position, limit, etc.
                    int chunkSize = extractor.readSampleData(inputBuf, 0);
                    if (chunkSize < 0) {
                        // End of stream -- send empty frame with EOS flag set.
                        decoder.queueInputBuffer(inputBufIndex, 0, 0, 0L,
                                MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                        inputDone = true;
                        if (VERBOSE) Log.d(TAG, "sent input EOS");
                    } else {
                        if (extractor.getSampleTrackIndex() != trackIndex) {
                            Log.w(TAG, "WEIRD: got sample from track " +
                                    extractor.getSampleTrackIndex() + ", expected " + trackIndex);
                        }
                        long presentationTimeUs = extractor.getSampleTime();
                        decoder.queueInputBuffer(inputBufIndex, 0, chunkSize,
                                presentationTimeUs, 0 /*flags*/);
                        if (VERBOSE) {
                            Log.d(TAG, "submitted frame " + inputChunk + " to dec, size=" +
                                    chunkSize);
                        }
                        inputChunk++;
                        extractor.advance();
                    }
                } else {
                    if (VERBOSE) Log.d(TAG, "input buffer not available");
                }
            }

            if (!outputDone) {
                int decoderStatus = decoder.dequeueOutputBuffer(info, TIMEOUT_USEC);
                if (decoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
                    // no output available yet
                    if (VERBOSE) Log.d(TAG, "no output from decoder available");
                } else if (decoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
                    // not important for us, since we're using Surface
                    if (VERBOSE) Log.d(TAG, "decoder output buffers changed");
                } else if (decoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                    MediaFormat newFormat = decoder.getOutputFormat();
                    if (VERBOSE) Log.d(TAG, "decoder output format changed: " + newFormat);
                } else if (decoderStatus < 0) {
                    fail("unexpected result from decoder.dequeueOutputBuffer: " + decoderStatus);
                } else { // decoderStatus >= 0
                    if (VERBOSE) Log.d(TAG, "surface decoder given buffer " + decoderStatus +
                            " (size=" + info.size + ")");
                    if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                        if (VERBOSE) Log.d(TAG, "output EOS");
                        outputDone = true;
                    }

                    boolean doRender = (info.size != 0);

                    // As soon as we call releaseOutputBuffer, the buffer will be forwarded
                    // to SurfaceTexture to convert to a texture. The API doesn't guarantee
                    // that the texture will be available before the call returns, so we
                    // need to wait for the onFrameAvailable callback to fire.
                    decoder.releaseOutputBuffer(decoderStatus, doRender);
                    if (doRender) {
                        if (VERBOSE) Log.d(TAG, "awaiting decode of frame " + decodeCount);
                        outputSurface.awaitNewImage();
                        outputSurface.drawImage(true);

                        if (decodeCount < MAX_FRAMES) {
                            File outputFile = new File(FILES_DIR,
                                    String.format("frame-%02d.png", decodeCount));
                            long startWhen = System.nanoTime();
                            outputSurface.saveFrame(outputFile.toString());
                            frameSaveTime += System.nanoTime() - startWhen;
                        }
                        decodeCount++;
                    }
                }
            }
        }

        int numSaved = (MAX_FRAMES < decodeCount) ? MAX_FRAMES : decodeCount;
        Log.d(TAG, "Saving " + numSaved + " frames took " +
                (frameSaveTime / numSaved / 1000) + " us per frame");
    }
    /**
     * Holds state associated with a Surface used for MediaCodec decoder output.
     * <p>
     * The constructor for this class will prepare GL, create a SurfaceTexture,
     * and then create a Surface for that SurfaceTexture. The Surface can be passed to
     * MediaCodec.configure() to receive decoder output. When a frame arrives, we latch the
     * texture with updateTexImage(), then render the texture with GL to a pbuffer.
     * <p>
     * By default, the Surface will be using a BufferQueue in asynchronous mode, so we
     * can potentially drop frames.
     */
    private static class CodecOutputSurface
            implements SurfaceTexture.OnFrameAvailableListener {
        private ExtractMpegFramesTest.STextureRender mTextureRender;
        private SurfaceTexture mSurfaceTexture;
        private Surface mSurface;

        private EGL10 mEgl;
        private EGLDisplay mEGLDisplay = EGL10.EGL_NO_DISPLAY;
        private EGLContext mEGLContext = EGL10.EGL_NO_CONTEXT;
        private EGLSurface mEGLSurface = EGL10.EGL_NO_SURFACE;
        int mWidth;
        int mHeight;

        private Object mFrameSyncObject = new Object();     // guards mFrameAvailable
        private boolean mFrameAvailable;

        private ByteBuffer mPixelBuf;                       // used by saveFrame()

        /**
         * Creates a CodecOutputSurface backed by a pbuffer with the specified dimensions. The
         * new EGL context and surface will be made current. Creates a Surface that can be passed
         * to MediaCodec.configure().
         */
        public CodecOutputSurface(int width, int height) {
            if (width <= 0 || height <= 0) {
                throw new IllegalArgumentException();
            }
            mEgl = (EGL10) EGLContext.getEGL();
            mWidth = width;
            mHeight = height;

            eglSetup();
            makeCurrent();
            setup();
        }

        /**
         * Creates interconnected instances of TextureRender, SurfaceTexture, and Surface.
         */
        private void setup() {
            mTextureRender = new ExtractMpegFramesTest.STextureRender();
            mTextureRender.surfaceCreated();

            if (VERBOSE) Log.d(TAG, "textureID=" + mTextureRender.getTextureId());
            mSurfaceTexture = new SurfaceTexture(mTextureRender.getTextureId());

            // This doesn't work if this object is created on the thread that CTS started for
            // these test cases.
            //
            // The CTS-created thread has a Looper, and the SurfaceTexture constructor will
            // create a Handler that uses it. The "frame available" message is delivered
            // there, but since we're not a Looper-based thread we'll never see it. For
            // this to do anything useful, CodecOutputSurface must be created on a thread without
            // a Looper, so that SurfaceTexture uses the main application Looper instead.
            //
            // Java language note: passing "this" out of a constructor is generally unwise,
            // but we should be able to get away with it here.
            mSurfaceTexture.setOnFrameAvailableListener(this);

            mSurface = new Surface(mSurfaceTexture);

            mPixelBuf = ByteBuffer.allocateDirect(mWidth * mHeight * 4);
            mPixelBuf.order(ByteOrder.LITTLE_ENDIAN);
        }

        /**
         * Prepares EGL. We want a GLES 2.0 context and a surface that supports pbuffer.
         */
        private void eglSetup() {
            final int EGL_OPENGL_ES2_BIT = 0x0004;
            final int EGL_CONTEXT_CLIENT_VERSION = 0x3098;

            mEGLDisplay = mEgl.eglGetDisplay(EGL10.EGL_DEFAULT_DISPLAY);
            if (mEGLDisplay == EGL10.EGL_NO_DISPLAY) {
                throw new RuntimeException("unable to get EGL14 display");
            }
            int[] version = new int[2];
            if (!mEgl.eglInitialize(mEGLDisplay, version)) {
                mEGLDisplay = null;
                throw new RuntimeException("unable to initialize EGL14");
            }

            // Configure EGL for pbuffer and OpenGL ES 2.0, 24-bit RGB.
            int[] attribList = {
                    EGL10.EGL_RED_SIZE, 8,
                    EGL10.EGL_GREEN_SIZE, 8,
                    EGL10.EGL_BLUE_SIZE, 8,
                    EGL10.EGL_ALPHA_SIZE, 8,
                    EGL10.EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
                    EGL10.EGL_SURFACE_TYPE, EGL10.EGL_PBUFFER_BIT,
                    EGL10.EGL_NONE
            };
            EGLConfig[] configs = new EGLConfig[1];
            int[] numConfigs = new int[1];
            if (!mEgl.eglChooseConfig(mEGLDisplay, attribList, configs, configs.length,
                    numConfigs)) {