Editing frames and encoding with MediaCodec
Question
I was able to decode an MP4 video. If I configure the decoder using a Surface, I can see the video on screen. Now I want to edit the frames (adding a yellow line, or even better, overlaying a tiny image) and encode the video as a new video. It is not necessary to show the video, and I don't care about performance for now (if I show the frames while editing, there could be a gap whenever the editing function takes a long time). So, what do you recommend: configure the decoder with a GL Surface anyway and use OpenGL ES, or configure it with null and somehow convert the ByteBuffer to a Bitmap, modify it, and encode the bitmap as a byte array? I also saw on the Grafika page that you can use a Surface with a custom Renderer and OpenGL ES. Thanks.
Answer
You will have to use OpenGL ES. The ByteBuffer/Bitmap approach cannot give realistic performance or features.
Now that you've been able to decode the video (using MediaExtractor and MediaCodec) to a Surface, you need to use the SurfaceTexture that was used to create that Surface as an external texture, and render with GLES to another Surface retrieved from a MediaCodec configured as an encoder.
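The wiring between the two codecs might look roughly like the sketch below. This is Android-framework code and only a sketch under assumptions: the MIME type, resolution, bitrate, and the `oesTexId`/`inputFormat` parameters are illustrative placeholders, and the EGL setup around the encoder's input Surface is elided.

```java
import android.graphics.SurfaceTexture;
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;

public final class PipelineWiring {
    // 'inputFormat' comes from MediaExtractor.getTrackFormat(); 'oesTexId' is a
    // texture id generated with glGenTextures and bound as GL_TEXTURE_EXTERNAL_OES.
    public static void wire(MediaFormat inputFormat, int oesTexId) throws java.io.IOException {
        // 1) Configure the encoder first: its input Surface is the GLES render target.
        MediaFormat outFormat = MediaFormat.createVideoFormat("video/avc", 1280, 720);
        outFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        outFormat.setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000);
        outFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        outFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(outFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        Surface encoderInputSurface = encoder.createInputSurface();
        // ... create an EGL window surface from encoderInputSurface here ...

        // 2) The decoder renders into a SurfaceTexture bound to the external texture.
        SurfaceTexture surfaceTexture = new SurfaceTexture(oesTexId);
        Surface decoderOutputSurface = new Surface(surfaceTexture);
        MediaCodec decoder = MediaCodec.createDecoderByType(
                inputFormat.getString(MediaFormat.KEY_MIME));
        decoder.configure(inputFormat, decoderOutputSurface, null, 0);

        decoder.start();
        encoder.start();
    }
}
```

Configuring the encoder with `COLOR_FormatSurface` and `createInputSurface()` is what lets GLES, rather than ByteBuffers, deliver the frames to it.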
Though Grafika doesn't have an exactly similar complete project, you can start with your existing project and then try either of the following Grafika subprojects: Continuous Camera or Show + capture camera, which currently render Camera frames (fed to a SurfaceTexture) to a video (and to the display).
So essentially, the only change is that MediaCodec feeds frames to the SurfaceTexture instead of the Camera.
Google's CTS DecodeEditEncodeTest does exactly the same and can be used as a reference to make the learning curve smoother.
Using this approach, you can certainly do all sorts of things, like manipulating the playback speed of the video (fast-forward and slow-motion), adding all sorts of overlays on the scene, playing with colors/pixels in the video using shaders, etc.
Check out the filters in Show + capture camera for an illustration of the same.
Decode - Edit - Encode flow
When using OpenGL ES, 'editing' of the frame happens via rendering with GLES to the encoder's input surface.
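For instance, the edit itself can live in the fragment shader. The sketch below is a plain Java constant holding a GLES fragment shader that samples the decoded frame as an external texture and paints a yellow horizontal line over it; the class name, the varying/uniform names, and the line's position and thickness are illustrative assumptions, not taken from any particular project.

```java
// Illustrative only: a fragment shader (held as a Java string, as is common in
// Android GLES code) that draws the decoded frame and overlays a yellow line.
public final class ShaderSketch {
    public static final String FRAGMENT_SHADER =
            "#extension GL_OES_EGL_image_external : require\n"
            + "precision mediump float;\n"
            + "varying vec2 vTexCoord;\n"
            + "uniform samplerExternalOES sTexture;\n" // decoded frame via SurfaceTexture
            + "void main() {\n"
            + "    vec4 color = texture2D(sTexture, vTexCoord);\n"
            // Paint a thin yellow band around the vertical center of the frame.
            + "    if (abs(vTexCoord.y - 0.5) < 0.01) {\n"
            + "        color = vec4(1.0, 1.0, 0.0, 1.0);\n"
            + "    }\n"
            + "    gl_FragColor = color;\n"
            + "}\n";

    private ShaderSketch() {}
}
```

Note the `samplerExternalOES` type and the `GL_OES_EGL_image_external` extension directive: both are required because a SurfaceTexture-backed frame is an external texture, not a regular 2D texture. Overlaying a small image would instead bind a second (regular) texture and mix the two samples.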
If decoding and rendering+encoding are separated into different threads, you're bound to skip a few frames, unless you implement some sort of synchronization between the two threads that keeps the decoder waiting until the render+encode for the current frame has happened on the other thread.
Although modern hardware codecs support simultaneous video encoding and decoding, I'd suggest doing the decoding, rendering and encoding in the same thread, especially in your case, since performance is not a major concern right now. That will help you avoid having to handle synchronization on your own and/or frame jumps.