Rotating YUV image data for Portrait Mode Using RenderScript


Question

For a video image processing project, I have to rotate the incoming YUV image data so that it is shown vertically rather than horizontally. I used this project, which gave me tremendous insight into how to convert YUV image data to ARGB for real-time processing. The only drawback of that project is that it only works in landscape; there is no option for portrait mode (I do not know why the folks at Google present an example that handles only the landscape orientation). I wanted to change that.

So, I decided to use a custom YUV to RGB script which rotates the data so that it appears in portrait mode. The following GIF demonstrates how the app shows the data BEFORE I apply any rotation.

You must know that in Android, the YUV image data is presented in landscape orientation even if the device is in portrait mode (I did NOT know this before I started this project. Again, I do not understand why there is no method available that rotates the frames with one call). That means the starting point is at the bottom-left corner even when the device is in portrait mode. But in portrait mode, the starting point of each frame should be at the top-left corner. I use matrix notation for the fields (e.g. (0,0), (0,1), etc.). Note: I took the sketch from here:

To rotate the landscape-oriented frame, we have to reorganize the fields. Here are the mappings I made from the sketch (see above), which shows a single yuv_420 frame in landscape mode. The mappings should rotate the frame by 90 degrees:

first column starting from the bottom-left corner and going upwards:
(0,0) -> (0,5)       // (0,0) should be at (0,5)
(0,1) -> (1,5)       // (0,1) should be at (1,5)
(0,2) -> (2,5)       // and so on ..
(0,3) -> (3,5)
(0,4) -> (4,5)
(0,5) -> (5,5)

2nd column starting at (1,0) and going upwards:
(1,0) -> (0,4)
(1,1) -> (1,4)
(1,2) -> (2,4)
(1,3) -> (3,4)
(1,4) -> (4,4)
(1,5) -> (5,4)

and so on...

In fact, what happens is that the first column becomes the new first row, the second column becomes the new second row, and so on. As you can see from the mappings, we can make the following observations:

  • The x coordinate of the result is always equal to the y coordinate of the source, so we can say x = y.
  • For the y coordinate of the result, the following equation must hold: y = width - 1 - x. (I tested this against all coordinates in the sketch; it was always true.)
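The two observations can be sanity-checked outside RenderScript with a small plain-Java sketch (the class and method names below are mine, for illustration only). It applies the forward mapping read off the sketch: input cell (r, c) of a frame with H rows lands at output cell (c, H - 1 - r); for the square 6x6 sketch, width and height coincide, so this agrees with the equations above:

```java
// RotationMappingDemo.java
// Illustrative check (not the RenderScript kernel itself) of the 90-degree
// rotation mapping from the sketch: input cell (r, c) of a frame with
// `inputRows` rows moves to output cell (c, inputRows - 1 - r).
public class RotationMappingDemo {

    // Forward mapping: where does input (r, c) land after the rotation?
    static int[] map(int r, int c, int inputRows) {
        return new int[]{ c, inputRows - 1 - r };
    }

    // Rotate a rows x cols matrix by 90 degrees using the mapping above.
    static int[][] rotate90(int[][] in) {
        int rows = in.length, cols = in[0].length;
        int[][] out = new int[cols][rows];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                int[] p = map(r, c, rows);
                out[p[0]][p[1]] = in[r][c];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Mappings listed in the text, for the 6x6 frame from the sketch:
        // (0,0) -> (0,5), (0,1) -> (1,5), (1,0) -> (0,4), (1,5) -> (5,4)
        System.out.println(java.util.Arrays.toString(map(0, 0, 6))); // [0, 5]
        System.out.println(java.util.Arrays.toString(map(0, 1, 6))); // [1, 5]
        System.out.println(java.util.Arrays.toString(map(1, 0, 6))); // [0, 4]
        System.out.println(java.util.Arrays.toString(map(1, 5, 6))); // [5, 4]
    }
}
```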

So, I wrote the following renderscript kernel function:

#pragma version(1)
#pragma rs java_package_name(com.jon.condino.testing.renderscript)
#pragma rs_fp_relaxed

rs_allocation gCurrentFrame;
int width;

uchar4 __attribute__((kernel)) yuv2rgbFrames(uint32_t x,uint32_t y)
{

    uint32_t inX = y;             // 1st observation: set x=y
    uint32_t inY = width - 1 - x; // 2nd observation: the equation mentioned above

    // the remaining lines are just methods to retrieve the YUV pixel elements, converting them to RGB and outputting them as result 

    // Read in pixel values from latest frame - YUV color space
    // The functions rsGetElementAtYuv_uchar_? require API 18
    uchar4 curPixel;
    curPixel.r = rsGetElementAtYuv_uchar_Y(gCurrentFrame, inX, inY);
    curPixel.g = rsGetElementAtYuv_uchar_U(gCurrentFrame, inX, inY);
    curPixel.b = rsGetElementAtYuv_uchar_V(gCurrentFrame, inX, inY);

    // uchar4 rsYuvToRGBA_uchar4(uchar y, uchar u, uchar v);
    // This function uses the NTSC formulae to convert YUV to RBG
    uchar4 out = rsYuvToRGBA_uchar4(curPixel.r, curPixel.g, curPixel.b);

    return out;
}

The approach seems to work, but it has a little bug, as you can see in the following image. The camera preview is in portrait mode, as we can see. BUT there are these very weird colored lines at the left side of my camera preview. Why is this happening? (Note that I use the back-facing camera):

Any advice for solving the problem would be great. I have been dealing with this problem (rotating YUV from landscape to portrait) for 2 weeks, and this is by far the best solution I could come up with on my own. I hope someone can help improve the code so that the weird colored lines at the left side disappear as well.

UPDATE:

The Allocations I create in the code are the following:

// yuvInAlloc will be the Allocation that will get the YUV image data
// from the camera
yuvInAlloc = createYuvIoInputAlloc(rs, x, y, ImageFormat.YUV_420_888);
yuvInAlloc.setOnBufferAvailableListener(this);

// here the createYuvIoInputAlloc() method
public Allocation createYuvIoInputAlloc(RenderScript rs, int x, int y, int yuvFormat) {
    return Allocation.createTyped(rs, createYuvType(rs, x, y, yuvFormat),
            Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);
}

// the custom script will convert the YUV to RGBA and put it to this Allocation
rgbInAlloc = RsUtil.createRgbAlloc(rs, x, y);

// here the createRgbAlloc() method
public Allocation createRgbAlloc(RenderScript rs, int x, int y) {
    return Allocation.createTyped(rs, createType(rs, Element.RGBA_8888(rs), x, y));
}



// the allocation to which we put all the processed image data
rgbOutAlloc = RsUtil.createRgbIoOutputAlloc(rs, x, y);

// here the createRgbIoOutputAlloc() method
public Allocation createRgbIoOutputAlloc(RenderScript rs, int x, int y) {
    return Allocation.createTyped(rs, createType(rs, Element.RGBA_8888(rs), x, y),
                Allocation.USAGE_IO_OUTPUT | Allocation.USAGE_SCRIPT);
}

Some other helper functions:

public Type createType(RenderScript rs, Element e, int x, int y) {
    if (Build.VERSION.SDK_INT >= 21) {
        return Type.createXY(rs, e, x, y);
    } else {
        return new Type.Builder(rs, e).setX(x).setY(y).create();
    }
}

@RequiresApi(18)
public Type createYuvType(RenderScript rs, int x, int y, int yuvFormat) {
    boolean supported = yuvFormat == ImageFormat.NV21 || yuvFormat == ImageFormat.YV12;
    if (Build.VERSION.SDK_INT >= 19) {
        supported |= yuvFormat == ImageFormat.YUV_420_888;
    }
    if (!supported) {
        throw new IllegalArgumentException("invalid yuv format: " + yuvFormat);
    }
    return new Type.Builder(rs, createYuvElement(rs)).setX(x).setY(y).setYuvFormat(yuvFormat)
            .create();
}

public Element createYuvElement(RenderScript rs) {
    if (Build.VERSION.SDK_INT >= 19) {
        return Element.YUV(rs);
    } else {
        return Element.createPixel(rs, Element.DataType.UNSIGNED_8, Element.DataKind.PIXEL_YUV);
    }
}

Calls on the custom renderscript and allocations:

// see below how the input size is determined
customYUVToRGBAConverter.invoke_setInputImageSize(x, y);
customYUVToRGBAConverter.set_inputAllocation(yuvInAlloc);

// receive some frames
yuvInAlloc.ioReceive();


// performs the conversion from the YUV to RGB
customYUVToRGBAConverter.forEach_convert(rgbInAlloc);

// this just do the frame manipulation , e.g. applying a particular filter
renderer.renderFrame(mRs, rgbInAlloc, rgbOutAlloc);


// send manipulated data to output stream
rgbOutAlloc.ioSend();

Last but not least, the size of the input image. The x and y coordinates of the methods you have seen above are based on the preview size, denoted here as mPreviewSize:

int deviceOrientation = getWindowManager().getDefaultDisplay().getRotation();
int totalRotation = sensorToDeviceRotation(cameraCharacteristics, deviceOrientation);
// determine if we are in portrait mode
boolean swapRotation = totalRotation == 90 || totalRotation == 270;
int rotatedWidth = width;
int rotatedHeight = height;

// are we in portrait mode? If yes, then swap the values
if (swapRotation) {
    rotatedWidth = height;
    rotatedHeight = width;
}

// determine the preview size
mPreviewSize = chooseOptimalSize(
                  map.getOutputSizes(SurfaceTexture.class),
                  rotatedWidth,
                  rotatedHeight);

So, based on that, x would be mPreviewSize.getWidth() and y would be mPreviewSize.getHeight() in my case.
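The sensorToDeviceRotation() helper used above is not shown in the question. A common camera2 implementation combines the fixed sensor mounting angle (CameraCharacteristics.SENSOR_ORIENTATION) with the current display rotation; the sketch below is an assumption about that helper, written with plain int parameters instead of the CameraCharacteristics object used in the question:

```java
// SensorRotationDemo.java
// Hypothetical sketch of sensorToDeviceRotation(): sum the sensor mounting
// angle (CameraCharacteristics.SENSOR_ORIENTATION, in degrees) and the
// display rotation (Surface.ROTATION_0..ROTATION_270), normalized to 0..359.
public class SensorRotationDemo {

    static int sensorToDeviceRotation(int sensorOrientation, int deviceRotation) {
        int deviceDegrees = deviceRotation * 90; // ROTATION_n means n quarter-turns
        return (sensorOrientation + deviceDegrees + 360) % 360;
    }

    public static void main(String[] args) {
        // Typical back camera mounted at 90 degrees, device held upright
        // (ROTATION_0): total rotation is 90, so width/height get swapped.
        System.out.println(sensorToDeviceRotation(90, 0)); // 90
        System.out.println(sensorToDeviceRotation(90, 2)); // 270
    }
}
```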

Solution

See my YuvConverter. It was inspired by android - Renderscript to convert NV12 yuv to RGB.

Its rs part is very simple:

#pragma version(1)
#pragma rs java_package_name(whatever)
#pragma rs_fp_relaxed

rs_allocation Yplane;
uint32_t Yline;
uint32_t UVline;
rs_allocation Uplane;
rs_allocation Vplane;
rs_allocation NV21;
uint32_t Width;
uint32_t Height;

uchar4 __attribute__((kernel)) YUV420toRGB(uint32_t x, uint32_t y)
{
    uchar Y = rsGetElementAt_uchar(Yplane, x + y * Yline);
    uchar V = rsGetElementAt_uchar(Vplane, (x & ~1) + y/2 * UVline);
    uchar U = rsGetElementAt_uchar(Uplane, (x & ~1) + y/2 * UVline);
    // https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion
    short R = Y + (512 + 1436 * (V - 128)) / 1024;                  //  1.402
    short G = Y + (512 - 352 * (U - 128) - 731 * (V - 128)) / 1024; // -0.344136 -0.714136
    short B = Y + (512 + 1815 * (U - 128)) / 1024;                  //  1.772
    if (R < 0) R = 0; else if (R > 255) R = 255;
    if (G < 0) G = 0; else if (G > 255) G = 255;
    if (B < 0) B = 0; else if (B > 255) B = 255;
    return (uchar4){R, G, B, 255};
}

uchar4 __attribute__((kernel)) YUV420toRGB_180(uint32_t x, uint32_t y)
{
    return YUV420toRGB(Width - 1 - x, Height - 1 - y);
}

uchar4 __attribute__((kernel)) YUV420toRGB_90(uint32_t x, uint32_t y)
{
    return YUV420toRGB(y, Width - x - 1);
}

uchar4 __attribute__((kernel)) YUV420toRGB_270(uint32_t x, uint32_t y)
{
    return YUV420toRGB(Height - 1 - y, x);
}
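The integer math in the kernel is a fixed-point version (scale 1024, with +512 as a rounding term) of the JPEG YCbCr formulas linked in its comment, with chroma centered at 128. A quick plain-Java comparison of those fixed-point coefficients against the floating-point constants (illustrative code, class and method names are mine):

```java
// YuvCoefficientCheck.java
// Compares fixed-point YUV->RGB arithmetic (scale 1024, +512 rounding term)
// against the floating-point JPEG YCbCr conversion constants.
public class YuvCoefficientCheck {

    static int clamp(int v) { return Math.max(0, Math.min(255, v)); }

    // Integer path, mirroring the kernel's coefficients.
    static int[] rgbFixed(int y, int u, int v) {
        int r = y + (512 + 1436 * (v - 128)) / 1024;
        int g = y + (512 - 352 * (u - 128) - 731 * (v - 128)) / 1024;
        int b = y + (512 + 1815 * (u - 128)) / 1024;
        return new int[]{ clamp(r), clamp(g), clamp(b) };
    }

    // Floating-point JPEG formulas for comparison.
    static int[] rgbFloat(int y, int u, int v) {
        int r = (int) Math.round(y + 1.402 * (v - 128));
        int g = (int) Math.round(y - 0.344136 * (u - 128) - 0.714136 * (v - 128));
        int b = (int) Math.round(y + 1.772 * (u - 128));
        return new int[]{ clamp(r), clamp(g), clamp(b) };
    }

    public static void main(String[] args) {
        int maxDiff = 0;
        for (int y = 0; y <= 255; y += 5)
            for (int u = 0; u <= 255; u += 5)
                for (int v = 0; v <= 255; v += 5) {
                    int[] a = rgbFixed(y, u, v), b = rgbFloat(y, u, v);
                    for (int i = 0; i < 3; i++)
                        maxDiff = Math.max(maxDiff, Math.abs(a[i] - b[i]));
                }
        // The two paths differ by at most a couple of levels per channel.
        System.out.println("max per-channel difference: " + maxDiff);
    }
}
```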

My Java wrapper was used in Flutter, but this does not really matter:

public class YuvConverter implements AutoCloseable {

    private RenderScript rs;
    private ScriptC_yuv2rgb scriptC_yuv2rgb;
    private Bitmap bmp;

    YuvConverter(Context ctx, int ySize, int uvSize, int width, int height) {
        rs = RenderScript.create(ctx);
        scriptC_yuv2rgb = new ScriptC_yuv2rgb(rs);
        init(ySize, uvSize, width, height);
    }

    private Allocation allocY, allocU, allocV, allocOut;

    @Override
    public void close() {
        if (allocY != null) allocY.destroy();
        if (allocU != null) allocU.destroy();
        if (allocV != null) allocV.destroy();
        if (allocOut != null) allocOut.destroy();
        bmp = null;
        allocY = null;
        allocU = null;
        allocV = null;
        allocOut = null;
        scriptC_yuv2rgb.destroy();
        scriptC_yuv2rgb = null;
        rs = null;
    }

    private void init(int ySize, int uvSize, int width, int height) {
        if (bmp == null || bmp.getWidth() != width || bmp.getHeight() != height) {
            bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
            if (allocOut != null) allocOut.destroy();
            allocOut = null;
        }
        if (allocY == null || allocY.getBytesSize() != ySize) {
            if (allocY != null) allocY.destroy();
            Type.Builder yBuilder = new Type.Builder(rs, Element.U8(rs)).setX(ySize);
            allocY = Allocation.createTyped(rs, yBuilder.create(), Allocation.USAGE_SCRIPT);
        }
        if (allocU == null || allocU.getBytesSize() != uvSize || allocV == null || allocV.getBytesSize() != uvSize ) {
            if (allocU != null) allocU.destroy();
            if (allocV != null) allocV.destroy();
            Type.Builder uvBuilder = new Type.Builder(rs, Element.U8(rs)).setX(uvSize);
            allocU = Allocation.createTyped(rs, uvBuilder.create(), Allocation.USAGE_SCRIPT);
            allocV = Allocation.createTyped(rs, uvBuilder.create(), Allocation.USAGE_SCRIPT);
        }
        if (allocOut == null || allocOut.getBytesSize() != width*height*4) {
            Type rgbType = Type.createXY(rs, Element.RGBA_8888(rs), width, height);
            if (allocOut != null) allocOut.destroy();
            allocOut = Allocation.createTyped(rs, rgbType, Allocation.USAGE_SCRIPT);
        }
    }

    @Retention(RetentionPolicy.SOURCE)
    // Enumerate valid values for this interface
    @IntDef({Surface.ROTATION_0, Surface.ROTATION_90, Surface.ROTATION_180, Surface.ROTATION_270})
    // Create an interface for validating int types
    public @interface Rotation {}

    /**
     * Converts a YUV_420 image into a Bitmap.
     * @param yPlane  byte[] of Y, with pixel stride 1
     * @param uPlane  byte[] of U, with pixel stride 2
     * @param vPlane  byte[] of V, with pixel stride 2
     * @param yLine   line stride of Y
     * @param uvLine  line stride of U and V
     * @param width   width of the output image (note that it is swapped with height for portrait rotation)
     * @param height  height of the output image
     * @param rotation  rotation to apply. ROTATION_90 is for portrait back-facing camera.
     * @return RGBA_8888 Bitmap image.
     */
    public Bitmap YUV420toRGB(byte[] yPlane, byte[] uPlane, byte[] vPlane,
                              int yLine, int uvLine, int width, int height,
                              @Rotation int rotation) {
        init(yPlane.length, uPlane.length, width, height);

        allocY.copyFrom(yPlane);
        allocU.copyFrom(uPlane);
        allocV.copyFrom(vPlane);
        scriptC_yuv2rgb.set_Width(width);
        scriptC_yuv2rgb.set_Height(height);
        scriptC_yuv2rgb.set_Yline(yLine);
        scriptC_yuv2rgb.set_UVline(uvLine);
        scriptC_yuv2rgb.set_Yplane(allocY);
        scriptC_yuv2rgb.set_Uplane(allocU);
        scriptC_yuv2rgb.set_Vplane(allocV);

        switch (rotation) {
            case Surface.ROTATION_0:
                scriptC_yuv2rgb.forEach_YUV420toRGB(allocOut);
                break;
            case Surface.ROTATION_90:
                scriptC_yuv2rgb.forEach_YUV420toRGB_90(allocOut);
                break;
            case Surface.ROTATION_180:
                scriptC_yuv2rgb.forEach_YUV420toRGB_180(allocOut);
                break;
            case Surface.ROTATION_270:
                scriptC_yuv2rgb.forEach_YUV420toRGB_270(allocOut);
                break;
        }

        allocOut.copyTo(bmp);
        return bmp;
    }
}

The key to performance is that the RenderScript setup (the script and the allocations) is done once, in init(), and reused across frames, so the subsequent conversion calls are very fast.
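The chroma addressing in the kernel, (x & ~1) + y/2 * UVline, assumes U and V planes with pixel stride 2 and one chroma sample per 2x2 pixel block, which is how camera2 commonly exposes YUV_420_888. A small plain-Java sketch (names are mine) shows how all four pixels of a 2x2 block resolve to the same plane index:

```java
// ChromaIndexDemo.java
// Illustrative: how the index (x & ~1) + y/2 * uvLineStride addresses a
// U or V plane that has pixel stride 2. `x & ~1` clears the lowest bit
// (pairs of columns share a sample), `y/2` folds pairs of rows together.
public class ChromaIndexDemo {

    static int chromaIndex(int x, int y, int uvLineStride) {
        return (x & ~1) + (y / 2) * uvLineStride;
    }

    public static void main(String[] args) {
        int uvLine = 8; // hypothetical line stride for an 8-pixel-wide frame
        // The 2x2 block of pixels (2,4), (3,4), (2,5), (3,5) shares one sample:
        System.out.println(chromaIndex(2, 4, uvLine)); // 18
        System.out.println(chromaIndex(3, 4, uvLine)); // 18
        System.out.println(chromaIndex(2, 5, uvLine)); // 18
        System.out.println(chromaIndex(3, 5, uvLine)); // 18
    }
}
```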
