How to drop frames while recording with MediaCodec and InputSurface?


Problem Description

In my Android app I want to record a time-lapse video. I have an InputSurface -> MediaCodec (encoder) -> MediaMuxer pipeline.

But if I want to speed up the video (for example: x3), the resulting video has a very high frame rate. For example: at normal speed I get a 30fps video; if I speed it up (x3), I get a 90fps video.

Since the frame rate of the video is so high, my phone's video player cannot play it normally (a computer's video player plays it without any problem). So I think I have to drop some frames to keep the frame rate below 60fps.

But I don't know how to drop the frames. Because an AVC stream has I, B, and P frames, and they may depend on other frames, we can't drop them arbitrarily. Can anybody help me?

Solution

You have to decode and re-encode the stream, dropping frames as you go. Simply halving the time stamps in a 60fps video will leave you with a 120fps video.
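One way to see why timestamp rescaling alone is not enough: dividing every presentation timestamp by the speed factor just compresses the same number of frames into less time, raising the frame rate. You also have to drop frames in proportion. A minimal sketch of that bookkeeping (the class and method names here are illustrative, not part of any Android API):

```java
// Sketch: for an x3 speed-up of a 30fps source, keep one frame out of every
// three and rescale its presentation timestamp, so the output stays ~30fps.
public class TimelapsePts {
    private final int speedFactor;  // e.g. 3 for x3
    private long frameIndex = 0;

    public TimelapsePts(int speedFactor) {
        this.speedFactor = speedFactor;
    }

    // Keep only every `speedFactor`-th frame; drop the rest.
    public boolean shouldKeepFrame() {
        return (frameIndex++ % speedFactor) == 0;
    }

    // Rescaling alone would turn 30fps into 90fps; combined with dropping
    // 2 of every 3 frames, the output frame rate stays at 30fps.
    public long rescalePtsUs(long inputPtsUs) {
        return inputPtsUs / speedFactor;
    }
}
```

The kept frame's rescaled timestamp would then be passed along with the frame through the re-encode path.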

Bear in mind that the raw H.264 video stream does not have any timestamps embedded in it. The .mp4 wrapper parsed by MediaExtractor and added by MediaMuxer holds the timing information. The MediaCodec interfaces appear to accept and produce the presentation time stamp, but it's mostly just passing it through to help you keep the timestamp associated with the correct frame -- frames can be reordered by the encoder. (Some encoders do look at the timestamps to try to meet the bit rate target, so you can't pass bogus values through.)

You can do something like the DecodeEditEncode example. When the decoder calls releaseOutputBuffer(), you just pass "false" for the render argument on every other frame.
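A minimal sketch of that "render every other frame" decision, kept separate from the Android-only codec plumbing (the `FrameGate` class is illustrative; `releaseOutputBuffer()` is the real MediaCodec call):

```java
// Sketch: decide which decoded frames reach the encoder's input Surface.
// Passing render=false to releaseOutputBuffer() discards the frame without
// ever sending it to the Surface, which is what drops it from the output.
public class FrameGate {
    private long outputCount = 0;

    // Returns the value to pass as the render argument: true for even
    // frames, false for odd ones, i.e. drop every other frame.
    public boolean renderThisFrame() {
        return (outputCount++ % 2) == 0;
    }
}

// Usage inside the decoder drain loop (Android; sketch only):
//   boolean render = gate.renderThisFrame();
//   decoder.releaseOutputBuffer(outputBufferIndex, render);
```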

If you're accepting video frames from some other source, such as a virtual display for screen recording, you can't hand the encoder's Surface directly to the display. You would have to create a SurfaceTexture, create a Surface from that, and then process the frames as they arrive. The DecodeEditEncode example does exactly this, modifying each frame with a GLES shader as it does so.

Screen recording does present an additional difficulty though. Frames from virtual displays arrive as they are produced, not at a fixed frame rate, yielding variable-frame-rate video. For example, you might have a sequence of frames like this:

[1] [2] <10 seconds pass> [3] [4] [5] ...

While most of the frames are arriving 16.7ms apart (60fps), there are gaps when the display isn't updating. If your recording grabs every other frame, you will get:

[1] <10+ seconds pass> [3] [5] ...

You end up paused for 10 seconds on the wrong frame, which can be glaring if there was a lot of movement between 1 and 2. Making this work correctly requires some intelligence in the frame-dropping, e.g. repeating the previous frame as needed to produce constant-frame-rate 30fps video.
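One hedged sketch of that frame-repeating logic: given the previous and the newly arrived frame's timestamps, compute how many extra copies of the previous frame to emit so the output stays at a constant frame rate (the `CfrFiller` class and `repeatCount` method are illustrative names, not any Android API):

```java
// Sketch: fill gaps in variable-frame-rate input by repeating the previous
// frame, producing constant-frame-rate output (e.g. ~30fps -> 33333us).
public class CfrFiller {
    private final long frameIntervalUs;  // e.g. 33333 for ~30fps

    public CfrFiller(long frameIntervalUs) {
        this.frameIntervalUs = frameIntervalUs;
    }

    // Number of extra copies of the *previous* frame to emit before the new
    // one, so a 10-second gap is filled with frame [2], not frame [3].
    public long repeatCount(long prevPtsUs, long newPtsUs) {
        long gapUs = newPtsUs - prevPtsUs;
        return Math.max(0, gapUs / frameIntervalUs - 1);
    }
}
```

For the example above, a frame arriving 10 seconds after the previous one at ~30fps would call for roughly 300 repeats of the previous frame before the new frame is shown.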
