Rendering camera into multiple surfaces - on and off screen


Question

I want to render the camera output into a view and once in a while save the camera output frame to a file, with the constraint being - the saved frame should be the same resolution as the camera is configured, while the view is smaller than the camera output (maintaining the aspect ratio).

Based on the ContinuousCaptureActivity example in grafika, I thought the best approach would be to send the camera into a SurfaceTexture, normally render its output downscaled into a SurfaceView, and, when needed, render the full frame into a separate Surface that has no view, in order to retrieve a byte buffer from it in parallel with the regular SurfaceView rendering.

The example is very similar to my situation - the preview is rendered to a view of smaller size and can be recorded and saved at the full resolution via a VideoEncoder.

I replaced the VideoEncoder logic with my own and got stuck trying to provide a Surface, like the encoder does, for the full resolution rendering. How do I create such a Surface? Am I approaching this correctly?


Some code ideas based on the example:


Inside the surfaceCreated(SurfaceHolder holder) method (line 350):

@Override   // SurfaceHolder.Callback
public void surfaceCreated(SurfaceHolder holder) {
    Log.d(TAG, "surfaceCreated holder=" + holder);

    mEglCore = new EglCore(null, EglCore.FLAG_RECORDABLE);
    mDisplaySurface = new WindowSurface(mEglCore, holder.getSurface(), false);
    mDisplaySurface.makeCurrent();

    mFullFrameBlit = new FullFrameRect(
            new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_EXT));
    mTextureId = mFullFrameBlit.createTextureObject();
    mCameraTexture = new SurfaceTexture(mTextureId);
    mCameraTexture.setOnFrameAvailableListener(this);

    Log.d(TAG, "starting camera preview");
    try {
        mCamera.setPreviewTexture(mCameraTexture);
    } catch (IOException ioe) {
        throw new RuntimeException(ioe);
    }
    mCamera.startPreview();


    // *** MY EDIT START ***

    // Encoder creation no longer needed
    //  try {
    //    mCircEncoder = new CircularEncoder(VIDEO_WIDTH, VIDEO_HEIGHT, 6000000,
    //            mCameraPreviewThousandFps / 1000, 7, mHandler);
    //  } catch (IOException ioe) {
    //      throw new RuntimeException(ioe);
    //  }

    mEncoderSurface = new WindowSurface(mEglCore, mCameraTexture); // <-- Crashes with EGL error 0x3003

    // *** MY EDIT END ***

    updateControls();
}

The drawFrame() method (line 420):

private void drawFrame() {
    //Log.d(TAG, "drawFrame");
    if (mEglCore == null) {
        Log.d(TAG, "Skipping drawFrame after shutdown");
        return;
    }

    // Latch the next frame from the camera.
    mDisplaySurface.makeCurrent();
    mCameraTexture.updateTexImage();
    mCameraTexture.getTransformMatrix(mTmpMatrix);

    // Fill the SurfaceView with it.
    SurfaceView sv = (SurfaceView) findViewById(R.id.continuousCapture_surfaceView);
    int viewWidth = sv.getWidth();
    int viewHeight = sv.getHeight();
    GLES20.glViewport(0, 0, viewWidth, viewHeight);
    mFullFrameBlit.drawFrame(mTextureId, mTmpMatrix);
    mDisplaySurface.swapBuffers();

    // *** MY EDIT START ***

    // Send it to the video encoder.
    if (someCondition) {
        mEncoderSurface.makeCurrent();
        GLES20.glViewport(0, 0, VIDEO_WIDTH, VIDEO_HEIGHT);
        mFullFrameBlit.drawFrame(mTextureId, mTmpMatrix);
        mEncoderSurface.swapBuffers();
        try {
            mEncoderSurface.saveFrame(new File(getExternalFilesDir(null), String.valueOf(System.currentTimeMillis()) + ".png"));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // *** MY EDIT END ***

}

Solution

You're on the right track. The SurfaceTexture just does a quick bit of wrapping around the original YUV frame from the camera, so the "external" texture is the original image, with no changes. You can't read the pixels straight out of an external texture, so you have to render it somewhere first.

The easiest way to do this is to create an off-screen pbuffer surface. Grafika's gles/OffscreenSurface class does exactly this (with a call to eglCreatePbufferSurface()). Make that EGLSurface current, render the texture onto a FullFrameRect, then read the framebuffer with glReadPixels() (see EglSurfaceBase#saveFrame() for code). Don't call eglSwapBuffers().

Note that you're not creating an Android Surface for the output, just an EGLSurface. (They're different.)
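Putting that suggestion together with the edits above, the `WindowSurface` lines could be replaced roughly as follows. This is only a sketch based on the answer, assuming Grafika's `OffscreenSurface` class and its inherited `EglSurfaceBase.saveFrame()`; `VIDEO_WIDTH`/`VIDEO_HEIGHT` stand in for the configured camera preview size:

```java
// In surfaceCreated(): create a pbuffer-backed off-screen surface at the
// full camera resolution, instead of a WindowSurface around mCameraTexture.
mOffscreenSurface = new OffscreenSurface(mEglCore, VIDEO_WIDTH, VIDEO_HEIGHT);

// In drawFrame(): render the external texture into it and read the pixels back.
if (someCondition) {
    mOffscreenSurface.makeCurrent();
    GLES20.glViewport(0, 0, VIDEO_WIDTH, VIDEO_HEIGHT);
    mFullFrameBlit.drawFrame(mTextureId, mTmpMatrix);
    // No swapBuffers() here: a pbuffer is single-buffered, so
    // glReadPixels() (called inside saveFrame()) reads the frame directly.
    try {
        mOffscreenSurface.saveFrame(new File(getExternalFilesDir(null),
                System.currentTimeMillis() + ".png"));
    } catch (IOException ioe) {
        ioe.printStackTrace();
    }
}
```

Since the pbuffer never leaves the GPU except through `glReadPixels()`, the on-screen `SurfaceView` path above it is unaffected.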
