Camera Preview with SurfaceView, TextureView, and GLSurfaceView (Part 3)
Today I'll show how to use GLSurfaceView. Its package is android.opengl, which tells you it is an OpenGL class. It can also preview the camera, and for camera preview it has a distinct advantage over SurfaceView: it separates the data from the display. For example, a device with no screen can still open the preview and live-stream it. The example below obtains camera preview data for encoding into a video stream; this article, however, only covers how to preview with GLSurfaceView. Encoding the preview data into a video stream will be covered in a later article.
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.content.Context;
import android.graphics.SurfaceTexture;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.opengl.GLSurfaceView.Renderer;
import android.util.AttributeSet;
import android.util.Log;
public class CameraGLSurfaceView extends GLSurfaceView implements
        Renderer, SurfaceTexture.OnFrameAvailableListener {
    private static final String TAG = "CameraGLSurfaceView";
    private Context mContext;
    private SurfaceTexture mSurface;
    private int mTextureID = -1;
    private CameraDrawer mCameraDrawer;

    public CameraGLSurfaceView(Context context, AttributeSet attrs) {
        super(context, attrs);
        mContext = context;
        setEGLContextClientVersion(2);
        setRenderer(this);
        setRenderMode(RENDERMODE_WHEN_DIRTY);
    }

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        Log.i(TAG, "onSurfaceCreated...");
        mTextureID = createTextureID();
        mSurface = new SurfaceTexture(mTextureID);
        mSurface.setOnFrameAvailableListener(this);
        mCameraDrawer = new CameraDrawer(mTextureID);
        CameraWrapper.getInstance().doOpenCamera(null);
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        Log.i(TAG, "onSurfaceChanged..." + width + "/" + height);
        GLES20.glViewport(0, 0, width, height);
        if (!CameraWrapper.getInstance().isPreviewing()) {
            CameraWrapper.getInstance().doStartPreview(mSurface);
        }
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        Log.i(TAG, "onDrawFrame...");
        GLES20.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
        // Latch the most recent camera frame into the external OES texture.
        mSurface.updateTexImage();
        float[] mtx = new float[16];
        mSurface.getTransformMatrix(mtx);
        mCameraDrawer.drawSelf(mtx);
    }

    @Override
    public void onPause() {
        super.onPause();
        CameraWrapper.getInstance().doStopCamera();
    }

    private int createTextureID() {
        int[] texture = new int[1];
        GLES20.glGenTextures(1, texture, 0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, texture[0]);
        GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
        GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
        return texture[0];
    }

    public SurfaceTexture getSurfaceTexture() {
        return mSurface;
    }

    @Override
    public void onFrameAvailable(SurfaceTexture surfaceTexture) {
        Log.i(TAG, "onFrameAvailable...");
        // A new camera frame is ready; schedule a render pass on the GL thread.
        this.requestRender();
    }
}
CameraGLSurfaceView extends GLSurfaceView and implements the Renderer interface. Let's look at what the constructor does:
1. setEGLContextClientVersion sets the OpenGL ES version. This must be set; otherwise the system does not know which API version to render with, and nothing shows up on screen.
2. setRenderer installs a renderer.
3. setRenderMode sets the render mode. Two modes are supported: RENDERMODE_CONTINUOUSLY renders continuously, which wastes resources; RENDERMODE_WHEN_DIRTY renders only when the surface is created or when there is new data, i.e. when requestRender is called. The second mode is the better fit for camera preview.
Once the constructor finishes, the canvas is ready and the renderer is initialized. So how does the data actually get rendered? That is where the Renderer interface comes in.
It implements three callbacks: onSurfaceCreated(), onSurfaceChanged(), and onDrawFrame().
1. onSurfaceCreated creates a texture and binds it to an ID, then sets an OnFrameAvailableListener on the SurfaceTexture so the Renderer is notified when there is data to render. It also constructs mCameraDrawer; CameraDrawer is important because it is the class that does the actual drawing, introduced below.
2. onSurfaceChanged starts the camera preview when the surface changes.
3. onDrawFrame calls updateTexImage and draws the current frame onto the canvas.
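CameraWrapper itself is not listed in this article (it belongs to the earlier installments of the series), so to make the snippet above self-contained, here is a hypothetical minimal version. The class name and method signatures (doOpenCamera, isPreviewing, doStartPreview, doStopCamera) are taken from the calls above, but the body is my assumption, sketched on the legacy android.hardware.Camera API; the real class may differ.

```java
import java.io.IOException;

import android.graphics.SurfaceTexture;
import android.hardware.Camera;

// Hypothetical minimal CameraWrapper sketch; the real class from the
// original series may differ.
public class CameraWrapper {
    private static CameraWrapper sInstance;
    private Camera mCamera;
    private boolean mIsPreviewing = false;

    public static synchronized CameraWrapper getInstance() {
        if (sInstance == null) sInstance = new CameraWrapper();
        return sInstance;
    }

    public void doOpenCamera(Object callback) {
        mCamera = Camera.open(); // opens the default (back-facing) camera
    }

    public boolean isPreviewing() {
        return mIsPreviewing;
    }

    public void doStartPreview(SurfaceTexture surface) {
        try {
            // Frames are delivered into the SurfaceTexture; each
            // updateTexImage() call then latches the newest frame
            // into the external OES texture.
            mCamera.setPreviewTexture(surface);
            mCamera.startPreview();
            mIsPreviewing = true;
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public void doStopCamera() {
        if (mCamera != null) {
            mCamera.stopPreview();
            mCamera.release();
            mCamera = null;
            mIsPreviewing = false;
        }
    }
}
```

The key line is setPreviewTexture(surface): this is what wires the camera output to the GL texture created in onSurfaceCreated, instead of to a visible SurfaceHolder as in the SurfaceView approach.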
Now our drawing master, CameraDrawer, takes the stage.
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.nio.ShortBuffer;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.opengl.Matrix;
public class CameraDrawer {
    private final String vertexShaderCode =
            "attribute vec4 vPosition;" +
            "attribute vec2 inputTextureCoordinate;" +
            "varying vec2 textureCoordinate;" +
            "void main()" +
            "{" +
            "  gl_Position = vPosition;" +
            "  textureCoordinate = inputTextureCoordinate;" +
            "}";

    private final String fragmentShaderCode =
            "#extension GL_OES_EGL_image_external : require\n" +
            "precision mediump float;\n" +
            "varying vec2 textureCoordinate;\n" +
            "uniform samplerExternalOES s_texture;\n" +
            "void main() {\n" +
            "  gl_FragColor = texture2D(s_texture, textureCoordinate);\n" +
            "}";

    private FloatBuffer vertexBuffer, textureVerticesBuffer;
    private ShortBuffer drawListBuffer;
    private final int mProgram;
    private int mPositionHandle;
    private int mTextureCoordHandle;
    private short drawOrder[] = { 0, 1, 2, 0, 2, 3 }; // order to draw vertices

    // number of coordinates per vertex in this array
    private static final int COORDS_PER_VERTEX = 2;
    private final int vertexStride = COORDS_PER_VERTEX * 4; // 4 bytes per float

    static float squareCoords[] = {
            -1.0f,  1.0f,
            -1.0f, -1.0f,
             1.0f, -1.0f,
             1.0f,  1.0f,
    };

    static float textureVertices[] = {
            0.0f, 1.0f,
            1.0f, 1.0f,
            1.0f, 0.0f,
            0.0f, 0.0f,
    };

    private int texture;

    public CameraDrawer(int texture) {
        this.texture = texture;
        // initialize vertex byte buffer for shape coordinates
        ByteBuffer bb = ByteBuffer.allocateDirect(squareCoords.length * 4);
        bb.order(ByteOrder.nativeOrder());
        vertexBuffer = bb.asFloatBuffer();
        vertexBuffer.put(squareCoords);
        vertexBuffer.position(0);

        // initialize byte buffer for the draw list
        ByteBuffer dlb = ByteBuffer.allocateDirect(drawOrder.length * 2);
        dlb.order(ByteOrder.nativeOrder());
        drawListBuffer = dlb.asShortBuffer();
        drawListBuffer.put(drawOrder);
        drawListBuffer.position(0);

        // initialize byte buffer for the texture coordinates
        ByteBuffer bb2 = ByteBuffer.allocateDirect(textureVertices.length * 4);
        bb2.order(ByteOrder.nativeOrder());
        textureVerticesBuffer = bb2.asFloatBuffer();
        textureVerticesBuffer.put(textureVertices);
        textureVerticesBuffer.position(0);

        int vertexShader = loadShader(GLES20.GL_VERTEX_SHADER, vertexShaderCode);
        int fragmentShader = loadShader(GLES20.GL_FRAGMENT_SHADER, fragmentShaderCode);

        mProgram = GLES20.glCreateProgram();             // create empty OpenGL ES Program
        GLES20.glAttachShader(mProgram, vertexShader);   // add the vertex shader to program
        GLES20.glAttachShader(mProgram, fragmentShader); // add the fragment shader to program
        GLES20.glLinkProgram(mProgram);                  // creates OpenGL ES program executables

        // check the link status so shader errors do not fail silently
        int[] linkStatus = new int[1];
        GLES20.glGetProgramiv(mProgram, GLES20.GL_LINK_STATUS, linkStatus, 0);
        if (linkStatus[0] != GLES20.GL_TRUE) {
            throw new RuntimeException("Could not link program: "
                    + GLES20.glGetProgramInfoLog(mProgram));
        }
    }

    public void drawSelf(float[] mtx) {
        // Note: mtx (the SurfaceTexture transform matrix) is received but not
        // applied here; this shader samples with the raw texture coordinates.
        GLES20.glUseProgram(mProgram);
        GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, texture);

        // get handle to vertex shader's vPosition member
        mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");
        // Enable a handle to the quad vertices
        GLES20.glEnableVertexAttribArray(mPositionHandle);
        // Prepare the quad coordinate data
        GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX,
                GLES20.GL_FLOAT, false, vertexStride, vertexBuffer);

        mTextureCoordHandle = GLES20.glGetAttribLocation(mProgram, "inputTextureCoordinate");
        GLES20.glEnableVertexAttribArray(mTextureCoordHandle);
        GLES20.glVertexAttribPointer(mTextureCoordHandle, COORDS_PER_VERTEX,
                GLES20.GL_FLOAT, false, vertexStride, textureVerticesBuffer);

        GLES20.glDrawElements(GLES20.GL_TRIANGLES, drawOrder.length,
                GLES20.GL_UNSIGNED_SHORT, drawListBuffer);

        // Disable vertex arrays
        GLES20.glDisableVertexAttribArray(mPositionHandle);
        GLES20.glDisableVertexAttribArray(mTextureCoordHandle);
    }

    private int loadShader(int type, String shaderCode) {
        // create a vertex shader type (GLES20.GL_VERTEX_SHADER)
        // or a fragment shader type (GLES20.GL_FRAGMENT_SHADER)
        int shader = GLES20.glCreateShader(type);
        // add the source code to the shader and compile it
        GLES20.glShaderSource(shader, shaderCode);
        GLES20.glCompileShader(shader);
        // check the compile status so shader errors do not fail silently
        int[] compiled = new int[1];
        GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, compiled, 0);
        if (compiled[0] == 0) {
            throw new RuntimeException("Could not compile shader " + type + ": "
                    + GLES20.glGetShaderInfoLog(shader));
        }
        return shader;
    }

    // Maps texture coordinates through the SurfaceTexture transform matrix.
    // Currently unused by drawSelf, but useful if you want to honor mtx.
    private float[] transformTextureCoordinates(float[] coords, float[] matrix) {
        float[] result = new float[coords.length];
        float[] vt = new float[4];
        for (int i = 0; i < coords.length; i += 2) {
            float[] v = { coords[i], coords[i + 1], 0, 1 };
            Matrix.multiplyMV(vt, 0, matrix, 0, v, 0);
            result[i] = vt[0];
            result[i + 1] = vt[1];
        }
        return result;
    }
}
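One subtlety worth noting: drawSelf receives the matrix from SurfaceTexture.getTransformMatrix, but the shader never applies it and transformTextureCoordinates is never called, so on some devices the preview can come out flipped or slightly mis-cropped. A simple fix is to run the texture coordinates through the matrix on the CPU, which is exactly what transformTextureCoordinates does. The sketch below reproduces that math in plain Java so it can run off-device: the hand-rolled multiplyMV mimics the column-major android.opengl.Matrix.multiplyMV, and the vertical-flip matrix is only an illustrative example of the kind of matrix SurfaceTexture reports, not one captured from a real device.

```java
public class TexCoordTransform {
    // Column-major 4x4 matrix times vec4, matching the convention of
    // android.opengl.Matrix.multiplyMV: m[col * 4 + row].
    static float[] multiplyMV(float[] m, float[] v) {
        float[] r = new float[4];
        for (int row = 0; row < 4; row++) {
            r[row] = m[row] * v[0] + m[4 + row] * v[1]
                   + m[8 + row] * v[2] + m[12 + row] * v[3];
        }
        return r;
    }

    // Same logic as CameraDrawer.transformTextureCoordinates: each (s, t)
    // pair is extended to (s, t, 0, 1) and multiplied by the matrix.
    static float[] transform(float[] coords, float[] matrix) {
        float[] result = new float[coords.length];
        for (int i = 0; i < coords.length; i += 2) {
            float[] vt = multiplyMV(matrix, new float[]{ coords[i], coords[i + 1], 0f, 1f });
            result[i] = vt[0];
            result[i + 1] = vt[1];
        }
        return result;
    }

    public static void main(String[] args) {
        // Example transform: a vertical flip, t' = 1 - t (column-major).
        float[] flip = {
                1,  0, 0, 0,   // column 0
                0, -1, 0, 0,   // column 1
                0,  0, 1, 0,   // column 2
                0,  1, 0, 1,   // column 3 (translation)
        };
        float[] tex = { 0f, 1f,  1f, 1f,  1f, 0f,  0f, 0f };
        float[] out = transform(tex, flip);
        // (0,1)->(0,0), (1,1)->(1,0), (1,0)->(1,1), (0,0)->(0,1)
        System.out.println(java.util.Arrays.toString(out));
        // prints [0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0]
    }
}
```

In CameraDrawer you would call transformTextureCoordinates(textureVertices, mtx) at the top of drawSelf and upload the result in place of textureVerticesBuffer; alternatively, pass mtx into the vertex shader as a uniform and multiply there.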
Through the onFrameAvailable callback, the preview data is drawn frame by frame. Next, let's look at how the Activity uses this view. The difference from the earlier articles is that the camera open and preview calls, previously done in the Activity, now live inside CameraGLSurfaceView.
public class CameraActivity extends Activity {
    private static final String TAG = "CameraActivity";
    // shutterBtn, BtnListeners and DisplayUtil come from the earlier
    // installments of this series and are omitted here.
    CameraGLSurfaceView glSurfaceView = null;
    float previewRate = -1f;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_camera);
        initUI();
        initViewParams();
        shutterBtn.setOnClickListener(new BtnListeners());
    }

    private void initUI() {
        glSurfaceView = (CameraGLSurfaceView) findViewById(R.id.camera_textureview);
    }

    private void initViewParams() {
        LayoutParams params = glSurfaceView.getLayoutParams();
        Point p = DisplayUtil.getScreenMetrics(this);
        params.width = p.x;
        params.height = p.y;
        previewRate = DisplayUtil.getScreenRate(this); // default to a full-screen preview ratio
        glSurfaceView.setLayoutParams(params);
    }

    @Override
    protected void onResume() {
        super.onResume();
        glSurfaceView.bringToFront();
    }

    @Override
    protected void onPause() {
        super.onPause();
        // forward the lifecycle event so the GL thread pauses and the camera is released
        glSurfaceView.onPause();
    }
}
Below is a screenshot of the preview:
That wraps up the analysis of the three camera preview approaches. In a follow-up I will go deeper into the framework and show how the camera data actually gets rendered onto the GLSurfaceView, since the code above never manipulates the frame buffers in any obvious way.
Original writing takes effort; if you found this useful, please share this official account with more people.