Need to create a webm video from RGB frames


Question

I have an app that generates a bunch of JPGs that I need to turn into a webm video. I'm trying to get my RGB data from the JPEGs into the vpxenc sample. I can see the basic shapes from the original JPGs in the output video, but everything is tinted green (even pixels that should be black are about halfway green) and every other scanline has some garbage in it.

I'm trying to feed it VPX_IMG_FMT_YV12 data, which I'm assuming is structured like so:

for each frame:
    8-bit Y data
    8-bit averages of each 2x2 V block
    8-bit averages of each 2x2 U block
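To make that layout concrete, here is a rough sketch of how the plane sizes and offsets work out for a packed YV12 buffer. This is my own illustration under the assumption of even frame dimensions; the struct and function names are hypothetical, not part of libvpx.

#include <cstddef>

// Hypothetical helper (not libvpx API): plane sizes/offsets for a packed
// YV12 buffer of width x height pixels, both assumed to be even.
// Layout: full-resolution Y plane, then quarter-resolution V, then U.
struct Yv12Layout
{
    size_t ySize;       // width * height bytes of luma
    size_t chromaSize;  // (width/2) * (height/2) bytes per chroma plane
    size_t vOffset;     // V plane starts immediately after Y
    size_t uOffset;     // U plane starts immediately after V
    size_t totalSize;   // 1.5 bytes per pixel overall
};

Yv12Layout ComputeYv12Layout( int width, int height )
{
    Yv12Layout layout;
    layout.ySize      = (size_t)width * height;
    layout.chromaSize = (size_t)(width / 2) * (height / 2);
    layout.vOffset    = layout.ySize;
    layout.uOffset    = layout.ySize + layout.chromaSize;
    layout.totalSize  = layout.ySize + 2 * layout.chromaSize;
    return layout;
}

For a 640x480 frame, that works out to 307200 bytes of Y followed by two 76800-byte chroma planes, 460800 bytes in total.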

Here is a source image and a screenshot of the video that is coming out:

[screenshots: source image and output video]

It's entirely possible that I'm doing the RGB->YV12 conversion incorrectly, but even if I only encode the 8-bit Y data and set the U and V blocks to 0, the video looks about the same. I'm basically running my RGB data through this equation:

// (R, G, and B are 0-255)
float y = 0.299f*R + 0.587f*G + 0.114f*B;
float v = (R-y)*0.713f;
float u = (B-v)*0.565f;

.. and then to produce the 2x2 filtered values for U and V that I write into vpxenc, I just do (a + b + c + d) / 4, where a,b,c,d are the U or V values of each 2x2 pixel block.
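In code, that downsampling step looks roughly like this. It is a sketch of what I just described, assuming pFullPlane is a full-resolution U or V plane that has already been computed per pixel; the function name is my own.

// Rough sketch of the 2x2 chroma downsample described above.
// pFullPlane is a full-resolution (width x height) U or V plane;
// pHalfPlane receives the averaged (width/2 x height/2) plane.
// Width and height are assumed to be even.
void DownsampleChroma2x2( const unsigned char *pFullPlane,
                          unsigned char *pHalfPlane,
                          int width, int height )
{
    for ( int y = 0; y < height / 2; y++ )
    {
        for ( int x = 0; x < width / 2; x++ )
        {
            // The four samples of the 2x2 block.
            int a = pFullPlane[(y*2)   * width + (x*2)];
            int b = pFullPlane[(y*2)   * width + (x*2 + 1)];
            int c = pFullPlane[(y*2+1) * width + (x*2)];
            int d = pFullPlane[(y*2+1) * width + (x*2 + 1)];

            pHalfPlane[y * (width / 2) + x] = (unsigned char)((a + b + c + d) / 4);
        }
    }
}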

So I'm wondering:

  1. Is there an easier way (in code) to take RGB data and feed it to vpx_codec_encode to get a nice webm video?

  2. Is my RGB->YV12 conversion wrong somewhere?

Any help would be appreciated.

Answer

freefallr: Sure. Here is the code. Note that it converts the RGB data to YUV in place, writing the YV12 output into pFullYPlane/pDownsampledUPlane/pDownsampledVPlane. This code produced nice-looking WebM videos when I modified the vpxenc sample to use this data.

void RGB_To_YV12( unsigned char *pRGBData, int nFrameWidth, int nFrameHeight, void *pFullYPlane, void *pDownsampledUPlane, void *pDownsampledVPlane )
{
    int nRGBBytes = nFrameWidth * nFrameHeight * 3;

    // Convert RGB -> YV12. We do this in-place to avoid allocating any more memory.
    unsigned char *pYPlaneOut = (unsigned char*)pFullYPlane;
    int nYPlaneOut = 0;

    for ( int i=0; i < nRGBBytes; i += 3 )
    {
        unsigned char B = pRGBData[i+0];
        unsigned char G = pRGBData[i+1];
        unsigned char R = pRGBData[i+2];

        float y = (float)( R*66 + G*129 + B*25 + 128 ) / 256 + 16;
        float u = (float)( R*-38 + G*-74 + B*112 + 128 ) / 256 + 128;
        float v = (float)( R*112 + G*-94 + B*-18 + 128 ) / 256 + 128;

        // NOTE: We're converting pRGBData to YUV in-place here as well as writing out YUV to pFullYPlane/pDownsampledUPlane/pDownsampledVPlane.
        pRGBData[i+0] = (unsigned char)y;
        pRGBData[i+1] = (unsigned char)u;
        pRGBData[i+2] = (unsigned char)v;

        // Write out the Y plane directly here rather than in another loop.
        pYPlaneOut[nYPlaneOut++] = pRGBData[i+0];
    }

    // Downsample to U and V. Note: this takes the U/V values of the top-left
    // pixel of each 2x2 block rather than averaging all four samples.
    int halfHeight = nFrameHeight >> 1;
    int halfWidth = nFrameWidth >> 1;

    unsigned char *pVPlaneOut = (unsigned char*)pDownsampledVPlane;
    unsigned char *pUPlaneOut = (unsigned char*)pDownsampledUPlane;

    for ( int yPixel=0; yPixel < halfHeight; yPixel++ )
    {
        int iBaseSrc = ( (yPixel*2) * nFrameWidth * 3 );

        for ( int xPixel=0; xPixel < halfWidth; xPixel++ )
        {
            pVPlaneOut[yPixel * halfWidth + xPixel] = pRGBData[iBaseSrc + 2];
            pUPlaneOut[yPixel * halfWidth + xPixel] = pRGBData[iBaseSrc + 1];

            iBaseSrc += 6;
        }
    }
}
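For completeness, here is roughly how those three planes can then be handed to the encoder. This is only a sketch built on the public libvpx calls the vpxenc sample uses (vpx_img_alloc, vpx_codec_encode, vpx_img_free); encoder initialisation, error handling and the WebM muxing are omitted, and the function and variable names are my own.

#include <string.h>
#include "vpx/vpx_image.h"
#include "vpx/vpx_encoder.h"

// Sketch: copy the three YV12 planes produced by RGB_To_YV12 into a
// vpx_image_t and submit one frame to an already-initialised encoder.
// "codec" is assumed to have been set up with vpx_codec_enc_init()
// exactly as in the vpxenc sample; frameIndex is the running PTS.
void EncodeYV12Frame( vpx_codec_ctx_t *codec, int width, int height,
                      const unsigned char *pYPlane,
                      const unsigned char *pUPlane,
                      const unsigned char *pVPlane,
                      int frameIndex )
{
    vpx_image_t img;
    vpx_img_alloc( &img, VPX_IMG_FMT_YV12, width, height, 1 );

    // Copy row by row so any stride padding added by vpx_img_alloc is handled.
    for ( int y = 0; y < height; y++ )
        memcpy( img.planes[VPX_PLANE_Y] + y * img.stride[VPX_PLANE_Y],
                pYPlane + y * width, width );

    for ( int y = 0; y < height / 2; y++ )
    {
        memcpy( img.planes[VPX_PLANE_U] + y * img.stride[VPX_PLANE_U],
                pUPlane + y * (width / 2), width / 2 );
        memcpy( img.planes[VPX_PLANE_V] + y * img.stride[VPX_PLANE_V],
                pVPlane + y * (width / 2), width / 2 );
    }

    // One frame, duration 1 (in timebase units), default flags.
    vpx_codec_encode( codec, &img, frameIndex, 1, 0, VPX_DL_GOOD_QUALITY );

    // The compressed packets would then be pulled with
    // vpx_codec_get_cx_data() and written into the WebM file, as vpxenc does.

    vpx_img_free( &img );
}

Copying row by row rather than with a single memcpy keeps the code correct even when the image strides are wider than the frame.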

