Rotating YUV image data for Portrait Mode Using RenderScript


Question


For a video image processing project, I have to rotate the incoming YUV image data so that the data is not shown horizontally but vertically. I used this project, which gave me tremendous insight into how to convert YUV image data to ARGB for processing in real time. The only drawback of that project is that it works only in landscape; there is no option for portrait mode (I do not know why the folks at Google provide a sample that handles only the landscape orientation). I wanted to change that.

So, I decided to use a custom YUV to RGB script which rotates the data so that it appears in portrait mode. The following GIF demonstrates how the app shows the data BEFORE I apply any rotation.

You must know that in Android, the YUV image data is presented in landscape orientation even if the device is in portrait mode (I did NOT know this before I started this project; again, I do not understand why there is no method available to rotate the frames with one call). That means the starting point is at the bottom-left corner even if the device is in portrait mode, whereas in portrait mode the starting point of each frame should be at the top-left corner. I use matrix notation for the fields (e.g. (0,0), (0,1), etc.). Note: I took the sketch from here:

To rotate the landscape-oriented frame, we have to reorganize the fields. Here are the mappings I derived from the sketch (see above), which shows a single yuv_420 frame in landscape mode. The mappings rotate the frame by 90 degrees:

first column starting from the bottom-left corner and going upwards:
(0,0) -> (0,5)       // (0,0) should be at (0,5)
(0,1) -> (1,5)       // (0,1) should be at (1,5)
(0,2) -> (2,5)       // and so on ..
(0,3) -> (3,5)
(0,4) -> (4,5)
(0,5) -> (5,5)

2nd column starting at (1,0) and going upwards:
(1,0) -> (0,4)
(1,1) -> (1,4)
(1,2) -> (2,4)
(1,3) -> (3,4)
(1,4) -> (4,4)
(1,5) -> (5,4)

and so on...

In fact, what happens is that the first column becomes the new first row, the 2nd column becomes the new 2nd row, and so on. As you can see from the mappings, we can make the following observations:

  • the x coordinate of the result is always equal to the y coordinate on the left-hand side of the mapping. So we can say that x = y.
  • For the y coordinate of the result, the following equation must always hold: y = width - 1 - x. (I tested this for all coordinates from the sketch; it was always true.)
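These two observations can be sanity-checked on the CPU before touching RenderScript. The following is a toy plain-Java sketch (my own, not part of the original pipeline) that rotates a small matrix 90 degrees so that each source column, read bottom-to-top, becomes a destination row:

```java
import java.util.Arrays;

public class RotationCheck {

    /**
     * Rotates a landscape frame (h rows x w cols) 90 degrees so that the
     * leftmost column, read bottom-to-top, becomes the first row.
     * The result is w rows x h cols.
     */
    static int[][] rotate90(int[][] src) {
        int h = src.length;
        int w = src[0].length;
        int[][] dst = new int[w][h];
        for (int r = 0; r < w; r++) {        // output row
            for (int c = 0; c < h; c++) {    // output column
                // mirrors the two observations: read index follows the
                // output's other coordinate, flipped against the width
                dst[r][c] = src[h - 1 - c][r];
            }
        }
        return dst;
    }

    public static void main(String[] args) {
        int[][] src = {
            {1, 2, 3},
            {4, 5, 6},
        };
        // Leftmost column bottom-to-top (4, 1) becomes the top row.
        System.out.println(Arrays.deepToString(rotate90(src))); // [[4, 1], [5, 2], [6, 3]]
    }
}
```

Running this on a 2x3 grid reproduces exactly the pairs listed in the mapping above, which is a quick way to convince yourself the two equations are consistent.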

So, I wrote the following renderscript kernel function:

#pragma version(1)
#pragma rs java_package_name(com.jon.condino.testing.renderscript)
#pragma rs_fp_relaxed

rs_allocation gCurrentFrame;
int width;

uchar4 __attribute__((kernel)) yuv2rgbFrames(uint32_t x,uint32_t y)
{

    uint32_t inX = y;             // 1st observation: set x=y
    uint32_t inY = width - 1 - x; // 2nd observation: the equation mentioned above

    // the remaining lines are just methods to retrieve the YUV pixel elements, converting them to RGB and outputting them as result 

    // Read in pixel values from latest frame - YUV color space
    // The functions rsGetElementAtYuv_uchar_? require API 18
    uchar4 curPixel;
    curPixel.r = rsGetElementAtYuv_uchar_Y(gCurrentFrame, inX, inY);
    curPixel.g = rsGetElementAtYuv_uchar_U(gCurrentFrame, inX, inY);
    curPixel.b = rsGetElementAtYuv_uchar_V(gCurrentFrame, inX, inY);

    // uchar4 rsYuvToRGBA_uchar4(uchar y, uchar u, uchar v);
    // This function uses the NTSC formulae to convert YUV to RGB
    uchar4 out = rsYuvToRGBA_uchar4(curPixel.r, curPixel.g, curPixel.b);

    return out;
}

The approach seems to work, but it has a little bug, as you can see in the following image. The camera preview is in portrait mode, as expected. BUT there are these very weird color lines on the left side of my camera preview. Why is this happening? (Note that I use the back-facing camera):

Any advice for solving the problem would be great. I have been dealing with this problem (rotating YUV from landscape to portrait) for two weeks, and this is by far the best solution I could get on my own. I hope someone can help improve the code so that the weird color lines on the left side disappear as well.

UPDATE:

The Allocations I create in the code are the following:

// yuvInAlloc will be the Allocation that will get the YUV image data
// from the camera
yuvInAlloc = createYuvIoInputAlloc(rs, x, y, ImageFormat.YUV_420_888);
yuvInAlloc.setOnBufferAvailableListener(this);

// here the createYuvIoInputAlloc() method
public Allocation createYuvIoInputAlloc(RenderScript rs, int x, int y, int yuvFormat) {
    return Allocation.createTyped(rs, createYuvType(rs, x, y, yuvFormat),
            Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);
}

// the custom script will convert the YUV to RGBA and put it to this Allocation
rgbInAlloc = RsUtil.createRgbAlloc(rs, x, y);

// here the createRgbAlloc() method
public Allocation createRgbAlloc(RenderScript rs, int x, int y) {
    return Allocation.createTyped(rs, createType(rs, Element.RGBA_8888(rs), x, y));
}



// the allocation to which we put all the processed image data
rgbOutAlloc = RsUtil.createRgbIoOutputAlloc(rs, x, y);

// here the createRgbIoOutputAlloc() method
public Allocation createRgbIoOutputAlloc(RenderScript rs, int x, int y) {
    return Allocation.createTyped(rs, createType(rs, Element.RGBA_8888(rs), x, y),
                Allocation.USAGE_IO_OUTPUT | Allocation.USAGE_SCRIPT);
}

Some other helper functions:

public Type createType(RenderScript rs, Element e, int x, int y) {
    if (Build.VERSION.SDK_INT >= 21) {
        return Type.createXY(rs, e, x, y);
    } else {
        return new Type.Builder(rs, e).setX(x).setY(y).create();
    }
}

@RequiresApi(18)
public Type createYuvType(RenderScript rs, int x, int y, int yuvFormat) {
    boolean supported = yuvFormat == ImageFormat.NV21 || yuvFormat == ImageFormat.YV12;
    if (Build.VERSION.SDK_INT >= 19) {
        supported |= yuvFormat == ImageFormat.YUV_420_888;
    }
    if (!supported) {
        throw new IllegalArgumentException("invalid yuv format: " + yuvFormat);
    }
    return new Type.Builder(rs, createYuvElement(rs)).setX(x).setY(y).setYuvFormat(yuvFormat)
            .create();
}

public Element createYuvElement(RenderScript rs) {
    if (Build.VERSION.SDK_INT >= 19) {
        return Element.YUV(rs);
    } else {
        return Element.createPixel(rs, Element.DataType.UNSIGNED_8, Element.DataKind.PIXEL_YUV);
    }
}

Calls on the custom RenderScript and the allocations:

// see below how the input size is determined
customYUVToRGBAConverter.invoke_setInputImageSize(x, y);
customYUVToRGBAConverter.set_inputAllocation(yuvInAlloc);

// receive some frames
yuvInAlloc.ioReceive();


// performs the conversion from the YUV to RGB
customYUVToRGBAConverter.forEach_convert(rgbInAlloc);

// this just does the frame manipulation, e.g. applying a particular filter
renderer.renderFrame(mRs, rgbInAlloc, rgbOutAlloc);


// send manipulated data to output stream
rgbOutAlloc.ioSend();

Last but not least, the size of the input image: the x and y coordinates of the methods you have seen above are based on the preview size, denoted here as mPreviewSize:

int deviceOrientation = getWindowManager().getDefaultDisplay().getRotation();
int totalRotation = sensorToDeviceRotation(cameraCharacteristics, deviceOrientation);
// determine if we are in portrait mode
boolean swapRotation = totalRotation == 90 || totalRotation == 270;
int rotatedWidth = width;
int rotatedHeight = height;

// are we in portrait mode? If yes, then swap the values
if (swapRotation) {
    rotatedWidth = height;
    rotatedHeight = width;
}

// determine the preview size
mPreviewSize = chooseOptimalSize(
                  map.getOutputSizes(SurfaceTexture.class),
                  rotatedWidth,
                  rotatedHeight);

So, based on that, x would be mPreviewSize.getWidth() and y would be mPreviewSize.getHeight() in my case.
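The swap logic above is easy to verify in isolation if it is factored into a small helper (a sketch with my own naming, not taken from the original code):

```java
import java.util.Arrays;

public class PreviewDims {

    /**
     * Returns {width, height} to request from the camera, swapping the two
     * when the total rotation puts the device in portrait mode.
     */
    static int[] rotatedDims(int width, int height, int totalRotation) {
        boolean swapRotation = totalRotation == 90 || totalRotation == 270;
        return swapRotation ? new int[]{height, width} : new int[]{width, height};
    }

    public static void main(String[] args) {
        // Landscape (0 or 180 degrees): dimensions pass through unchanged.
        System.out.println(Arrays.toString(rotatedDims(1920, 1080, 0)));  // [1920, 1080]
        // Portrait (90 or 270 degrees): width and height are swapped.
        System.out.println(Arrays.toString(rotatedDims(1920, 1080, 90))); // [1080, 1920]
    }
}
```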

Solution

See my YuvConverter. It was inspired by android - Renderscript to convert NV12 yuv to RGB.

Its rs part is very simple:

#pragma version(1)
#pragma rs java_package_name(whatever)
#pragma rs_fp_relaxed

rs_allocation Yplane;
uint32_t Yline;
uint32_t UVline;
rs_allocation Uplane;
rs_allocation Vplane;
rs_allocation NV21;
uint32_t Width;
uint32_t Height;

uchar4 __attribute__((kernel)) YUV420toRGB(uint32_t x, uint32_t y)
{
    uchar Y = rsGetElementAt_uchar(Yplane, x + y * Yline);
    uchar V = rsGetElementAt_uchar(Vplane, (x & ~1) + y/2 * UVline);
    uchar U = rsGetElementAt_uchar(Uplane, (x & ~1) + y/2 * UVline);
    // https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion
    short R = Y + (512 + 1436 * (V - 128)) / 1024;                   //             1.402
    short G = Y + (512 - 352 * (U - 128) - 731 * (V - 128)) / 1024;  // -0.344136  -0.714136
    short B = Y + (512 + 1815 * (U - 128)) / 1024;                   //  1.772
    if (R < 0) R = 0; else if (R > 255) R = 255;
    if (G < 0) G = 0; else if (G > 255) G = 255;
    if (B < 0) B = 0; else if (B > 255) B = 255;
    return (uchar4){R, G, B, 255};
}

uchar4 __attribute__((kernel)) YUV420toRGB_180(uint32_t x, uint32_t y)
{
    return YUV420toRGB(Width - 1 - x, Height - 1 - y);
}

uchar4 __attribute__((kernel)) YUV420toRGB_90(uint32_t x, uint32_t y)
{
    return YUV420toRGB(y, Width - x - 1);
}

uchar4 __attribute__((kernel)) YUV420toRGB_270(uint32_t x, uint32_t y)
{
    return YUV420toRGB(Height - 1 - y, x);
}
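The fixed-point /1024 arithmetic in the kernel can be reproduced in plain Java and checked against known values. This is a sketch of the same math (my own class and names), with U and V centered around 128 as the JPEG conversion formula requires:

```java
import java.util.Arrays;

public class YuvMath {

    /**
     * Fixed-point JPEG-style YUV -> RGB, mirroring the kernel's integer
     * arithmetic: coefficients are scaled by 1024 and 512 provides rounding.
     */
    static int[] yuvToRgb(int y, int u, int v) {
        int r = y + (512 + 1436 * (v - 128)) / 1024;                 //  1.402
        int g = y + (512 - 352 * (u - 128) - 731 * (v - 128)) / 1024; // -0.344, -0.714
        int b = y + (512 + 1815 * (u - 128)) / 1024;                 //  1.772
        // clamp to the valid byte range, as the kernel does
        r = Math.max(0, Math.min(255, r));
        g = Math.max(0, Math.min(255, g));
        b = Math.max(0, Math.min(255, b));
        return new int[]{r, g, b};
    }

    public static void main(String[] args) {
        // Neutral chroma (U = V = 128) must reproduce the luma unchanged.
        System.out.println(Arrays.toString(yuvToRgb(128, 128, 128))); // [128, 128, 128]
        System.out.println(Arrays.toString(yuvToRgb(0, 128, 128)));   // [0, 0, 0]
        System.out.println(Arrays.toString(yuvToRgb(255, 128, 128))); // [255, 255, 255]
    }
}
```

The neutral-chroma cases are a useful regression check: if gray inputs do not come out gray, the centering or the coefficients are off.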

My Java wrapper was used in Flutter, but this does not really matter:

public class YuvConverter implements AutoCloseable {

    private RenderScript rs;
    private ScriptC_yuv2rgb scriptC_yuv2rgb;
    private Bitmap bmp;

    YuvConverter(Context ctx, int ySize, int uvSize, int width, int height) {
        rs = RenderScript.create(ctx);
        scriptC_yuv2rgb = new ScriptC_yuv2rgb(rs);
        init(ySize, uvSize, width, height);
    }

    private Allocation allocY, allocU, allocV, allocOut;

    @Override
    public void close() {
        if (allocY != null) allocY.destroy();
        if (allocU != null) allocU.destroy();
        if (allocV != null) allocV.destroy();
        if (allocOut != null) allocOut.destroy();
        bmp = null;
        allocY = null;
        allocU = null;
        allocV = null;
        allocOut = null;
        scriptC_yuv2rgb.destroy();
        scriptC_yuv2rgb = null;
        rs = null;
    }

    private void init(int ySize, int uvSize, int width, int height) {
        if (bmp == null || bmp.getWidth() != width || bmp.getHeight() != height) {
            bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
            if (allocOut != null) allocOut.destroy();
            allocOut = null;
        }
        if (allocY == null || allocY.getBytesSize() != ySize) {
            if (allocY != null) allocY.destroy();
            Type.Builder yBuilder = new Type.Builder(rs, Element.U8(rs)).setX(ySize);
            allocY = Allocation.createTyped(rs, yBuilder.create(), Allocation.USAGE_SCRIPT);
        }
        if (allocU == null || allocU.getBytesSize() != uvSize || allocV == null || allocV.getBytesSize() != uvSize ) {
            if (allocU != null) allocU.destroy();
            if (allocV != null) allocV.destroy();
            Type.Builder uvBuilder = new Type.Builder(rs, Element.U8(rs)).setX(uvSize);
            allocU = Allocation.createTyped(rs, uvBuilder.create(), Allocation.USAGE_SCRIPT);
            allocV = Allocation.createTyped(rs, uvBuilder.create(), Allocation.USAGE_SCRIPT);
        }
        if (allocOut == null || allocOut.getBytesSize() != width*height*4) {
            Type rgbType = Type.createXY(rs, Element.RGBA_8888(rs), width, height);
            if (allocOut != null) allocOut.destroy();
            allocOut = Allocation.createTyped(rs, rgbType, Allocation.USAGE_SCRIPT);
        }
    }

    @Retention(RetentionPolicy.SOURCE)
    // Enumerate valid values for this interface
    @IntDef({Surface.ROTATION_0, Surface.ROTATION_90, Surface.ROTATION_180, Surface.ROTATION_270})
    // Create an interface for validating int types
    public @interface Rotation {}

    /**
     * Converts a YUV_420 image into a Bitmap.
     * @param yPlane  byte[] of Y, with pixel stride 1
     * @param uPlane  byte[] of U, with pixel stride 2
     * @param vPlane  byte[] of V, with pixel stride 2
     * @param yLine   line stride of Y
     * @param uvLine  line stride of U and V
     * @param width   width of the output image (note that it is swapped with height for portrait rotation)
     * @param height  height of the output image
     * @param rotation  rotation to apply. ROTATION_90 is for portrait back-facing camera.
     * @return RGBA_8888 Bitmap image.
     */

    public Bitmap YUV420toRGB(byte[] yPlane, byte[] uPlane, byte[] vPlane,
                              int yLine, int uvLine, int width, int height,
                              @Rotation int rotation) {
        init(yPlane.length, uPlane.length, width, height);

        allocY.copyFrom(yPlane);
        allocU.copyFrom(uPlane);
        allocV.copyFrom(vPlane);
        scriptC_yuv2rgb.set_Width(width);
        scriptC_yuv2rgb.set_Height(height);
        scriptC_yuv2rgb.set_Yline(yLine);
        scriptC_yuv2rgb.set_UVline(uvLine);
        scriptC_yuv2rgb.set_Yplane(allocY);
        scriptC_yuv2rgb.set_Uplane(allocU);
        scriptC_yuv2rgb.set_Vplane(allocV);

        switch (rotation) {
            case Surface.ROTATION_0:
                scriptC_yuv2rgb.forEach_YUV420toRGB(allocOut);
                break;
            case Surface.ROTATION_90:
                scriptC_yuv2rgb.forEach_YUV420toRGB_90(allocOut);
                break;
            case Surface.ROTATION_180:
                scriptC_yuv2rgb.forEach_YUV420toRGB_180(allocOut);
                break;
            case Surface.ROTATION_270:
                scriptC_yuv2rgb.forEach_YUV420toRGB_270(allocOut);
                break;
        }

        allocOut.copyTo(bmp);
        return bmp;
    }
}

The key to performance is that RenderScript is initialized once, and init() reuses the existing allocations on subsequent calls, so those calls are very fast.
