How can I draw on a video while recording it in Android, and save the video and the drawing?

Question

I am trying to develop an app that lets me draw on a video while recording it, and then save both the recording and the drawing in one mp4 file for later use. I also want to use the camera2 library, in particular because I need my app to run on devices at API 21 and above, and I always avoid deprecated libraries.

I tried many ways to do it, including FFmpeg, where I overlaid the TextureView.getBitmap() output (from the camera) with a bitmap taken from the drawing canvas. It worked, but because getBitmap() is a slow call, the video couldn't capture enough frames (not even 25 fps) and playback ran too fast. I also want audio to be included.
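For reference, here is a minimal sketch of the per-frame CPU compositing this approach amounts to; textureView and drawingBitmap are hypothetical names for the preview view and the user's drawing:

// Copy the current camera frame out of the TextureView (slow, CPU-side).
Bitmap frame = textureView.getBitmap();
// Draw the user's strokes on top of the frame.
Canvas canvas = new Canvas(frame);
canvas.drawBitmap(drawingBitmap, 0, 0, null);
// "frame" would then be handed to FFmpeg as one video frame. The per-frame
// copy and composite run on the CPU, which is why this path cannot keep up
// with 25+ fps.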

I thought about the MediaProjection library, but I am not sure it can capture, inside its VirtualDisplay, only the layout containing the camera and the drawing: the app user may add text on the video as well, and I don't want the keyboard to appear in the recording.
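For what it's worth, a VirtualDisplay does not have to mirror the screen: a dedicated layout can be rendered onto it through an android.app.Presentation, which would keep the soft keyboard on the main display out of the capture. A rough sketch, assuming projection (a MediaProjection), recorderSurface, and the size variables already exist, and R.layout.capture_layout is a hypothetical layout containing only the preview and drawing views:

// Render a specific layout onto the VirtualDisplay instead of mirroring
// the screen, so the keyboard never appears in the recording.
VirtualDisplay vd = projection.createVirtualDisplay(
        "overlay-capture", width, height, dpi,
        DisplayManager.VIRTUAL_DISPLAY_FLAG_PRESENTATION,
        recorderSurface, null, null);

// A Presentation is a dialog that renders onto a specific Display;
// only what it shows ends up in recorderSurface.
Presentation presentation = new Presentation(context, vd.getDisplay());
presentation.setContentView(R.layout.capture_layout);
presentation.show();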

Please help; it's been a week of research and I have found nothing that works for me.

PS: I have no problem with a bit of processing time after the user presses the "stop recording" button.

EDIT

Now, after Eddy's answer, I am using the shadercam app to draw on the camera surface, since it does the video rendering. The workaround is to render my canvas into a bitmap and then into a GL texture, but I am not able to do it successfully. I need your help, guys; I need to finish the app :S

I am using the shadercam library (https://github.com/googlecreativelab/shadercam), and I replaced the "ExampleRenderer" file with the following code:

public class WriteDrawRenderer extends CameraRenderer
{
    private float offsetR = 1f;
    private float offsetG = 1f;
    private float offsetB = 1f;

    // Sentinel coordinates meaning "no touch pending"; setTouchPoint() sets
    // real values and setUniformsAndAttribs() resets these after drawing.
    private float touchX = 1000000000;
    private float touchY = 1000000000;

    private Bitmap textBitmap;

    private int textureId;

    private boolean isFirstTime = true;

    //creates a new canvas that will draw into a bitmap instead of rendering into the screen
    private Canvas bitmapCanvas;

    /**
     * If nothing is modified here, the default shaders in shadercam's assets folder are used.
     *
     * Base all shaders off those, since some default uniforms/textures for the camera
     * coordinates and texture coordinates are passed in on every frame.
     */
    public WriteDrawRenderer(Context context, SurfaceTexture previewSurface, int width, int height)
    {
        super(context, previewSurface, width, height, "touchcolor.frag.glsl", "touchcolor.vert.glsl");
        //other setup if need be done here


    }

    /**
     * we override {@link #setUniformsAndAttribs()} and make sure to call the super so we can add
     * our own uniforms to our shaders here. CameraRenderer handles the rest for us automatically
     */
    @Override
    protected void setUniformsAndAttribs()
    {
        super.setUniformsAndAttribs();

        int offsetRLoc = GLES20.glGetUniformLocation(mCameraShaderProgram, "offsetR");
        int offsetGLoc = GLES20.glGetUniformLocation(mCameraShaderProgram, "offsetG");
        int offsetBLoc = GLES20.glGetUniformLocation(mCameraShaderProgram, "offsetB");

        GLES20.glUniform1f(offsetRLoc, offsetR);
        GLES20.glUniform1f(offsetGLoc, offsetG);
        GLES20.glUniform1f(offsetBLoc, offsetB);

        if (touchX < 1000000000 && touchY < 1000000000)
        {
            //creates a Paint object
            Paint yellowPaint = new Paint();
            //makes it yellow
            yellowPaint.setColor(Color.YELLOW);
            //sets the anti-aliasing for texts
            yellowPaint.setAntiAlias(true);
            yellowPaint.setTextSize(70);

            if (isFirstTime)
            {
                textBitmap = Bitmap.createBitmap(mSurfaceWidth, mSurfaceHeight, Bitmap.Config.ARGB_8888);
                bitmapCanvas = new Canvas(textBitmap);
            }

            bitmapCanvas.drawText("Test Text", touchX, touchY, yellowPaint);

            if (isFirstTime)
            {
                textureId = addTexture(textBitmap, "textBitmap");
                isFirstTime = false;
            }
            else
            {
                updateTexture(textureId, textBitmap);
            }

            touchX = 1000000000;
            touchY = 1000000000;
        }
    }

    /**
     * take touch points on that textureview and turn them into multipliers for the color channels
     * of our shader, simple, yet effective way to illustrate how easy it is to integrate app
     * interaction into our glsl shaders
     * @param rawX raw x on screen
     * @param rawY raw y on screen
     */
    public void setTouchPoint(float rawX, float rawY)
    {
        this.touchX = rawX;
        this.touchY = rawY;
    }
}

Please help, guys; it's been a month and I am still stuck on the same app :( and I have no idea about OpenGL. I've spent two weeks trying to use this project for my app, and nothing is rendered onto the video.
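To help pin down the failing step, here is a minimal sketch of uploading a canvas-backed Bitmap into a GL texture with android.opengl.GLUtils. It must run on the GL thread with a current context; the class and method names are illustrative, not shadercam API:

import android.graphics.Bitmap;
import android.opengl.GLES20;
import android.opengl.GLUtils;

public final class BitmapTextureHelper {
    // Allocates a texture and uploads the bitmap into it once.
    public static int createTexture(Bitmap bitmap) {
        int[] ids = new int[1];
        GLES20.glGenTextures(1, ids, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, ids[0]);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
        return ids[0];
    }

    // Re-uploads a same-sized bitmap into an existing texture (per-frame update).
    public static void updateTexture(int textureId, Bitmap bitmap) {
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
        GLUtils.texSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, bitmap);
    }
}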

Thanks in advance!

Answer

Here's a rough outline that should work, but it's quite a bit of work:

  1. Set up an android.media.MediaRecorder for recording the video and audio.
  2. Get a Surface from the MediaRecorder and create an EGL window surface (an EGLSurface) from it via eglCreateWindowSurface (https://developer.android.com/reference/android/opengl/EGL14.html#eglCreateWindowSurface(android.opengl.EGLDisplay, android.opengl.EGLConfig, java.lang.Object, int[], int)); you'll need a whole OpenGL context and setup for this, and then you'll need to set that EGL surface as your render target (see the sketch after this list).
  3. Create a SurfaceTexture within that GL context.
  4. Configure the camera to send data to that SurfaceTexture.
  5. Start the MediaRecorder.
  6. On each frame received from the camera, convert the drawing done by the user into a GL texture, and composite the camera texture and the user drawing.
  7. Finally, call glSwapBuffers to send the composited frame to the video recorder.
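A minimal sketch of steps 1 and 2, assuming API 21+ and an already-initialized EGL display, config, and context (eglDisplay, eglConfig, eglContext); outputPath is a placeholder and error handling is omitted:

// Step 1: a MediaRecorder that takes its video from a Surface.
MediaRecorder recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setOutputFile(outputPath);            // placeholder path
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
recorder.setVideoSize(1920, 1080);
recorder.setVideoFrameRate(30);
recorder.setVideoEncodingBitRate(10_000_000);
recorder.prepare();                            // getSurface() is only valid after prepare()

// Step 2: the recorder's input Surface becomes the EGL render target.
Surface recorderSurface = recorder.getSurface();
EGLSurface eglSurface = EGL14.eglCreateWindowSurface(
        eglDisplay, eglConfig, recorderSurface,
        new int[] { EGL14.EGL_NONE }, 0);
EGL14.eglMakeCurrent(eglDisplay, eglSurface, eglSurface, eglContext);
// Everything rendered with this context current and pushed out via
// EGL14.eglSwapBuffers(eglDisplay, eglSurface) now lands in the recording
// (this is the swap referred to in step 7).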
