MediaRecorder Surface Input with OpenGL - issue if audio recording is enabled


Question

I want to use MediaRecorder for recording videos instead of MediaCodec, because, as we know, it's very easy to use.

I also want to use OpenGL to process frames while recording, that's why I use

recorderSurface = MediaCodec.createPersistentInputSurface()

mediaRecorder.setInputSurface(recorderSurface)
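
For context, a minimal sketch of the surrounding MediaRecorder setup (the output path, size, and bitrate are placeholders, not values from the question). Note the call order: sources before setOutputFormat(), and setInputSurface() after setOutputFormat() but before prepare():

```kotlin
// Sketch of the recorder configuration being described; outputPath,
// the video size, and the bitrate are assumptions for illustration.
val recorderSurface = MediaCodec.createPersistentInputSurface()

val mediaRecorder = MediaRecorder().apply {
    setAudioSource(MediaRecorder.AudioSource.MIC)      // enabling audio triggers the issue below
    setVideoSource(MediaRecorder.VideoSource.SURFACE)  // required for surface input
    setOutputFormat(MediaRecorder.OutputFormat.MPEG_4)
    setOutputFile(outputPath)                          // placeholder path
    setVideoEncoder(MediaRecorder.VideoEncoder.H264)
    setAudioEncoder(MediaRecorder.AudioEncoder.AAC)
    setVideoSize(1920, 1080)
    setVideoEncodingBitRate(10_000_000)
    // Must come after setOutputFormat() and before prepare():
    setInputSurface(recorderSurface)
    prepare()
}
```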

Then I use example code from Grafika's ContinuousCaptureActivity sample to init the EGL rendering context, create the cameraTexture, and pass it to the Camera2 API as a Surface https://github.com/google/grafika/blob/master/app/src/main/java/com/android/grafika/ContinuousCaptureActivity.java#L392 and create an encoder surface from our recorderSurface https://github.com/google/grafika/blob/master/app/src/main/java/com/android/grafika/ContinuousCaptureActivity.java#L418

and so on (processing frames as in the Grafika sample; everything else is the same as in the Grafika example code).

Then when I start recording (MediaRecorder.start()), the video records fine if no audio source was set.

But if audio recording is also enabled:

mediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC)
...
mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC)

Then the final video has a huge duration (length) and isn't really playable. So the MediaRecorder audio encoder ruins everything when a Surface is used as input and GLES is used for adding and processing frames.

I have no idea how to fix it.

Here's my code to process frames (based on the Grafika sample; it's almost the same):

import android.graphics.SurfaceTexture
import android.graphics.SurfaceTexture.OnFrameAvailableListener
import android.opengl.GLES20
import android.view.Surface
import com.android.grafika.gles.EglCore
import com.android.grafika.gles.FullFrameRect
import com.android.grafika.gles.Texture2dProgram
import com.android.grafika.gles.WindowSurface

class GLCameraFramesRender(
    private val width: Int,
    private val height: Int,
    private val callback: Callback,
    recorderSurface: Surface,
    private val eglCore: EglCore  // stored as a property so release() can clean it up
) : OnFrameAvailableListener {
    private val fullFrameBlit: FullFrameRect
    private val textureId: Int
    private val encoderSurface: WindowSurface
    private val tmpMatrix = FloatArray(16)
    private val cameraTexture: SurfaceTexture
    val cameraSurface: Surface

    init {
        // Wrap MediaRecorder's persistent input surface in an EGL window surface.
        encoderSurface = WindowSurface(eglCore, recorderSurface, true)
        encoderSurface.makeCurrent()

        fullFrameBlit = FullFrameRect(Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_EXT))

        // External texture that the camera renders into.
        textureId = fullFrameBlit.createTextureObject()

        cameraTexture = SurfaceTexture(textureId)
        cameraSurface = Surface(cameraTexture)
        cameraTexture.setOnFrameAvailableListener(this)
    }

    fun release() {
        cameraTexture.setOnFrameAvailableListener(null)
        cameraTexture.release()
        cameraSurface.release()
        fullFrameBlit.release(false)
        eglCore.release()
    }

    override fun onFrameAvailable(surfaceTexture: SurfaceTexture) {
        if (callback.isRecording()) {
            drawFrame()
        } else {
            // Consume the frame anyway so the camera doesn't stall while not recording.
            cameraTexture.updateTexImage()
        }
    }

    private fun drawFrame() {
        cameraTexture.updateTexImage()
        cameraTexture.getTransformMatrix(tmpMatrix)

        GLES20.glViewport(0, 0, width, height)
        fullFrameBlit.drawFrame(textureId, tmpMatrix)

        // The camera's timestamp is forwarded as the encoder's presentation time.
        encoderSurface.setPresentationTime(cameraTexture.timestamp)
        encoderSurface.swapBuffers()
    }

    interface Callback {
        fun isRecording(): Boolean
    }
}

Answer

It's very likely your timestamps aren't in the same timebase. The media recording system generally wants timestamps in the uptimeMillis timebase, but many camera devices produce data in the elapsedRealtime timebase. One counts time when the device is in deep sleep, and the other doesn't; the longer it's been since you rebooted your device, the bigger the discrepancy becomes.

It wouldn't matter until you add in the audio, since MediaRecorder's internal audio timestamps will be in uptimeMillis, while the camera frame timestamps will come in as elapsedRealtime. A discrepancy of more than a few fractions of a second would probably be noticeable as a bad A/V sync; a few minutes or more will just mess everything up.

When the camera talks to the media recording stack directly, it adjusts timestamps automatically; since you've placed the GPU in the middle, that doesn't happen (since the camera doesn't know that's where your frames are going eventually).
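
You can query which timebase a given camera reports through its characteristics. A minimal sketch, assuming a `cameraManager` and `cameraId` are already in scope:

```kotlin
// Sketch: check whether this camera stamps frames in the elapsedRealtime
// timebase (SENSOR_INFO_TIMESTAMP_SOURCE_REALTIME) or an unspecified one.
val characteristics = cameraManager.getCameraCharacteristics(cameraId)
val source = characteristics.get(CameraCharacteristics.SENSOR_INFO_TIMESTAMP_SOURCE)
val usesElapsedRealtime =
    source == CameraMetadata.SENSOR_INFO_TIMESTAMP_SOURCE_REALTIME
```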

You can check if the camera is using elapsedRealtime as the timebase via SENSOR_INFO_TIMESTAMP_SOURCE. But in any case, you have a few choices:

  1. If the camera uses TIMESTAMP_SOURCE_REALTIME, measure the difference between the two timestamps at the start of recording, and adjust the timestamps you feed into setPresentationTime accordingly (delta = elapsedRealtime - uptimeMillis; timestamp = timestamp - delta;)
  2. Just use uptimeMillis() * 1000000 as the time for setPresentationTime. This may cause too much A/V skew, but it's easy to try.
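
Option 1 can be sketched as plain arithmetic (the helper names are hypothetical; on-device you would take the two clock readings from SystemClock.elapsedRealtimeNanos() and SystemClock.uptimeMillis() * 1_000_000 at the moment recording starts):

```kotlin
// Shift camera timestamps from the elapsedRealtime timebase into the
// uptimeMillis timebase. Capture the delta once, when recording starts.
fun timebaseDeltaNs(elapsedRealtimeNs: Long, uptimeNs: Long): Long =
    elapsedRealtimeNs - uptimeNs

fun adjustTimestampNs(cameraTimestampNs: Long, deltaNs: Long): Long =
    cameraTimestampNs - deltaNs

fun main() {
    // Example: device has been up 100 s but spent 40 s in deep sleep,
    // so elapsedRealtime is 40 s ahead of uptime.
    val elapsedRealtimeNs = 100_000_000_000L
    val uptimeNs = 60_000_000_000L
    val deltaNs = timebaseDeltaNs(elapsedRealtimeNs, uptimeNs)

    // A camera frame stamped in the elapsedRealtime timebase...
    val cameraTimestampNs = 100_500_000_000L
    // ...lands in the uptime timebase after adjustment.
    println(adjustTimestampNs(cameraTimestampNs, deltaNs)) // 60500000000
}
```

The adjusted value is what you would pass to encoderSurface.setPresentationTime() instead of the raw cameraTexture.timestamp.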

