Render Bitmap frames to Surface for encoding
Question
My goal is to take in an M4V video file, decode a segment of the video as PNG frames, modify these frames, and re-encode the trimmed video (also to M4V).
The workflow is like so: [Input Video] -> Export Frames -> Modify Frames -> Encode Frames -> [Output Video].
For the decode process, I have been referencing the bigflake examples. Using the ExtractMpegFramesTest example code I was able to generate Bitmap frames from an .m4v file and export the frames as PNG files.
Now I am attempting the re-encoding process, using the EncodeAndMuxTest example in an attempt to create another set of classes for encoding.
The issue I am running into is that the example code seems to generate raw frames in OpenGL. I have a series of Bitmaps that I want to encode/render to the CodecInputSurface object; pretty much the reverse of what the decoding process does.
The majority of the example code is just fine; it seems I only need to modify generateSurfaceFrame() to render the Bitmap to the Surface with OpenGL.
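As a sketch of what a Bitmap-drawing generateSurfaceFrame() would need: the Bitmap is uploaded as a GL_TEXTURE_2D and drawn as a full-screen triangle strip. One detail worth noting is that Android Bitmaps are stored top-down while GL texture coordinates run bottom-up, so the V coordinates need to be flipped when drawing. The class below is a hypothetical helper (not code from either bigflake example) holding that geometry:

```java
// Full-screen quad for drawing a Bitmap texture to the encoder Surface.
// Hypothetical helper; the actual GL draw call would live in a renderer
// class alongside CodecInputSurface.
public final class FullScreenQuad {
    // Triangle-strip positions in clip space: BL, BR, TL, TR.
    public static final float[] POSITIONS = {
            -1f, -1f,   // bottom-left
             1f, -1f,   // bottom-right
            -1f,  1f,   // top-left
             1f,  1f,   // top-right
    };

    // Matching texture coordinates. Android Bitmaps are top-down, so V is
    // flipped: the bottom-left vertex samples the bottom of the image (v=1).
    public static final float[] TEX_COORDS = {
            0f, 1f,     // bottom-left  -> bottom row of the Bitmap
            1f, 1f,     // bottom-right
            0f, 0f,     // top-left     -> top row of the Bitmap
            1f, 0f,     // top-right
    };
}
```

With this layout the quad can be drawn with a single glDrawArrays(GL_TRIANGLE_STRIP, 0, 4) after binding the texture loaded by loadTexture() below.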
Here is the code that I have thus far:
// Member variables (see EncodeAndMuxTest example)
private MediaCodec encoder;
private CodecInputSurface inputSurface;
private MediaMuxer muxer;
private int trackIndex;
private boolean hasMuxerStarted;
private MediaCodec.BufferInfo bufferInfo;
// This is called for each frame to be rendered into the video file
private void encodeFrame(Bitmap bitmap)
{
    int textureId = 0;
    try
    {
        textureId = loadTexture(bitmap);
        // render the texture here?
    }
    finally
    {
        unloadTexture(textureId);
    }
}
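Whatever ends up inside encodeFrame(), each frame will also need a presentation timestamp before the buffer is swapped to the encoder (EncodeAndMuxTest sets it via eglPresentationTimeANDROID, computed from the frame index and frame rate, in nanoseconds). A minimal version for the 30 fps case described below, mirroring the sample's computePresentationTimeNsec():

```java
// Computes the presentation time for frame N in nanoseconds, assuming a
// fixed 30 fps frame rate (as in the bigflake EncodeAndMuxTest sample).
public final class PresentationTime {
    private static final long ONE_SECOND_NS = 1_000_000_000L;
    private static final int FRAME_RATE = 30;

    public static long forFrame(int frameIndex) {
        // Integer division truncates, matching the sample's behavior.
        return frameIndex * ONE_SECOND_NS / FRAME_RATE;
    }
}
```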
// Loads a texture into OpenGL
private int loadTexture(Bitmap bitmap)
{
    final int[] textures = new int[1];
    GLES20.glGenTextures(1, textures, 0);

    int textureWidth = bitmap.getWidth();
    int textureHeight = bitmap.getHeight();

    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S,
            GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T,
            GLES20.GL_CLAMP_TO_EDGE);

    return textures[0];
}
// Unloads a texture from OpenGL
private void unloadTexture(int textureId)
{
    final int[] textures = new int[1];
    textures[0] = textureId;
    GLES20.glDeleteTextures(1, textures, 0);
}
I feel like I should be able to use the STextureRender from the ExtractMpegFramesTest example to achieve something similar, but it's just not clicking for me.
The other concern is performance; I really want the encoding to be efficient. I will be encoding 90-450 frames of video (3-15 seconds @ 30 fps), so hopefully this should only take several seconds.
Answer
You can try the Intel INDE Media Pack; it allows you to modify frames, cut segments, join files, and much more. There are several sample effects for frame modification: color modification, text overlays, and so on, and you can easily modify them or add new effects. It has a nice set of samples and tutorials on how to build and run the apps: https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials-running-samples
The frame modifications are based on GL shaders, like this one, for example, for Sepia:
@Override
protected String getFragmentShader() {
    return "#extension GL_OES_EGL_image_external : require\n" +
           "precision mediump float;\n" +
           "varying vec2 vTextureCoord;\n" +
           "uniform mat3 uWeightsMatrix;\n" +
           "uniform samplerExternalOES sTexture;\n" +
           "void main() {\n" +
           "  vec4 color = texture2D(sTexture, vTextureCoord);\n" +
           "  vec3 color_new = min(uWeightsMatrix * color.rgb, 1.0);\n" +
           "  gl_FragColor = vec4(color_new.rgb, color.a);\n" +
           "}\n";
}
where uWeightsMatrix is passed to the shader via glGetUniformLocation and glUniformMatrix3fv:
protected float[] getWeights() {
    return new float[]{
            805.0f / 2048.0f,  715.0f / 2048.0f,  557.0f / 2048.0f,
            1575.0f / 2048.0f, 1405.0f / 2048.0f, 1097.0f / 2048.0f,
            387.0f / 2048.0f,  344.0f / 2048.0f,  268.0f / 2048.0f
    };
}
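For intuition: these are the classic sepia coefficients expressed as n/2048, laid out column-major as glUniformMatrix3fv interprets the array (on GLES 2.0 the transpose argument must be false). A quick CPU re-implementation of the shader's min(uWeightsMatrix * color.rgb, 1.0) line, as an illustration only (this class is not part of the Media Pack API):

```java
// CPU version of the shader math: result = min(W * rgb, 1.0), with the
// float[9] interpreted column-major, as glUniformMatrix3fv does.
public final class Sepia {
    private static final float[] W = {
            805f / 2048f,  715f / 2048f,  557f / 2048f,   // column 0
            1575f / 2048f, 1405f / 2048f, 1097f / 2048f,  // column 1
            387f / 2048f,  344f / 2048f,  268f / 2048f,   // column 2
    };

    public static float[] apply(float r, float g, float b) {
        float[] out = new float[3];
        for (int row = 0; row < 3; row++) {
            float v = W[row] * r + W[3 + row] * g + W[6 + row] * b;
            out[row] = Math.min(v, 1f);  // same clamp as the shader
        }
        return out;
    }
}
```

Read column-major, the first output row is 805/2048, 1575/2048, 387/2048, i.e. roughly the familiar 0.393/0.769/0.189 sepia weights for the red channel; white input clamps to a warm near-white.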