Incorrect image converting YUV_420_888 into Bitmaps under Android camera2


Question

I'm trying to convert YUV_420_888 images, coming from the camera2 preview, into bitmaps, but the output image has incorrect colors.
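
For context, here is a minimal sketch of the kind of ImageReader setup these frames are assumed to come from; the resolution, the maxImages count, and backgroundHandler are illustrative placeholders, not values from my actual setup:

// Illustrative only: an ImageReader producing YUV_420_888 frames for the camera2 preview.
// 'backgroundHandler' is a hypothetical Handler backed by a camera background thread.
ImageReader reader = ImageReader.newInstance(1280, 720, ImageFormat.YUV_420_888, 2);
reader.setOnImageAvailableListener(this, backgroundHandler);

// The reader's surface is then added as a target of the preview request, e.g.
// previewRequestBuilder.addTarget(reader.getSurface());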

Next is the test code I'm running to generate the bitmap. It is test code only, so please don't review irrelevant factors such as the bitmap being recycled or the RenderScript being created over and over. This code is just to test the conversion from YUV to RGB and nothing more.

Another consideration: the code is meant to run on API 22 and above, so using the RenderScript-specific ScriptIntrinsicYuvToRGB should be sufficient, without falling back to the old manual conversions that were only necessary in earlier Android versions due to the lack of proper YUV_420_888 support.

Since RenderScript already offers a dedicated ScriptIntrinsicYuvToRGB that is meant to handle all types of YUV conversion, I think the problem is in how I get the YUV byte data from the Image object, but I can't figure out where the issue is.

To view the output bitmap in Android Studio, place a breakpoint on bitmap.recycle(); before it gets recycled you can look at it in the Variables debug window using the "View Bitmap" option.

Please let me know if anyone can spot what's wrong with the conversion:

@Override
public void onImageAvailable(ImageReader reader)
{
    RenderScript rs = RenderScript.create(this.mContext);

    final Image image = reader.acquireLatestImage();

    final Image.Plane[] planes = image.getPlanes();
    final ByteBuffer planeY = planes[0].getBuffer();
    final ByteBuffer planeU = planes[1].getBuffer();
    final ByteBuffer planeV = planes[2].getBuffer();

    // Get the YUV planes data

    final int Yb = planeY.rewind().remaining();
    final int Ub = planeU.rewind().remaining();
    final int Vb = planeV.rewind().remaining();

    final ByteBuffer yuvData = ByteBuffer.allocateDirect(Yb + Ub + Vb);

    planeY.get(yuvData.array(), 0, Yb);
    planeU.get(yuvData.array(), Yb, Vb);
    planeV.get(yuvData.array(), Yb + Vb, Ub);

    // Initialize Renderscript

    Type.Builder yuvType = new Type.Builder(rs, Element.YUV(rs))
            .setX(image.getWidth())
            .setY(image.getHeight())
            .setYuvFormat(ImageFormat.YUV_420_888);

    final Type.Builder rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs))
            .setX(image.getWidth())
            .setY(image.getHeight());

    Allocation yuvAllocation = Allocation.createTyped(rs, yuvType.create(), Allocation.USAGE_SCRIPT);
    Allocation rgbAllocation = Allocation.createTyped(rs, rgbaType.create(), Allocation.USAGE_SCRIPT);

    // Convert

    yuvAllocation.copyFromUnchecked(yuvData.array());

    ScriptIntrinsicYuvToRGB scriptYuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.YUV(rs));
    scriptYuvToRgb.setInput(yuvAllocation);
    scriptYuvToRgb.forEach(rgbAllocation);

    // Get the bitmap

    Bitmap bitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    rgbAllocation.copyTo(bitmap);

    // Release

    bitmap.recycle();

    yuvAllocation.destroy();
    rgbAllocation.destroy();
    rs.destroy();

    image.close();
}

Answer

Answering my own question: the actual problem was, as I suspected, in how the Image planes were being transformed into the ByteBuffer. The solution below should work for both NV21 and YV12. Since the YUV data already comes in separate planes, it is just a matter of reading each plane the correct way, based on its row stride and pixel stride. Some minor modifications were also needed in how the data is passed to the RenderScript intrinsic.

NOTE: For an optimized, uninterrupted onImageAvailable() flow in production, the Image byte data should instead be copied into a separate buffer and the conversion executed on a separate thread, depending on your requirements (a minimal sketch of that approach is included after the helper method below). But since this isn't part of the question, in the following code the conversion is placed directly inside onImageAvailable() to keep the answer simple. If anyone needs to know how to copy the Image data, please create a new question and let me know so I can share my code.

@Override
public void onImageAvailable(ImageReader reader)
{
    // Get the YUV data

    final Image image = reader.acquireLatestImage();
    final ByteBuffer yuvBytes = this.imageToByteBuffer(image);

    // Convert YUV to RGB

    final RenderScript rs = RenderScript.create(this.mContext);

    final Bitmap        bitmap     = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    final Allocation allocationRgb = Allocation.createFromBitmap(rs, bitmap);

    // Unlike the first attempt, the YUV bytes are fed in through a plain U8 allocation
    // sized to the NV21 buffer, and the intrinsic is created with Element.U8_4.
    final Allocation allocationYuv = Allocation.createSized(rs, Element.U8(rs), yuvBytes.array().length);
    allocationYuv.copyFrom(yuvBytes.array());

    ScriptIntrinsicYuvToRGB scriptYuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
    scriptYuvToRgb.setInput(allocationYuv);
    scriptYuvToRgb.forEach(allocationRgb);

    allocationRgb.copyTo(bitmap);

    // Release

    bitmap.recycle();

    allocationYuv.destroy();
    allocationRgb.destroy();
    rs.destroy();

    image.close();
}

private ByteBuffer imageToByteBuffer(final Image image)
{
    final Rect crop   = image.getCropRect();
    final int  width  = crop.width();
    final int  height = crop.height();

    final Image.Plane[] planes     = image.getPlanes();
    final byte[]        rowData    = new byte[planes[0].getRowStride()];
    final int           bufferSize = width * height * ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8;
    final ByteBuffer    output     = ByteBuffer.allocateDirect(bufferSize);

    // The output buffer is filled as NV21: the full-resolution Y plane first, followed by
    // the interleaved chroma samples in V-U order.
    int channelOffset = 0;
    int outputStride = 0;

    for (int planeIndex = 0; planeIndex < 3; planeIndex++)
    {
        if (planeIndex == 0)
        {
            // Y plane: packed at the start of the output
            channelOffset = 0;
            outputStride = 1;
        }
        else if (planeIndex == 1)
        {
            // U plane: every second byte, starting one byte after the first V sample
            channelOffset = width * height + 1;
            outputStride = 2;
        }
        else if (planeIndex == 2)
        {
            // V plane: every second byte, starting right after the Y plane
            channelOffset = width * height;
            outputStride = 2;
        }

        final ByteBuffer buffer      = planes[planeIndex].getBuffer();
        final int        rowStride   = planes[planeIndex].getRowStride();
        final int        pixelStride = planes[planeIndex].getPixelStride();

        // Chroma planes are subsampled by 2 in each dimension
        final int shift         = (planeIndex == 0) ? 0 : 1;
        final int widthShifted  = width >> shift;
        final int heightShifted = height >> shift;

        // Start reading at the top-left corner of the crop rectangle within this plane
        buffer.position(rowStride * (crop.top >> shift) + pixelStride * (crop.left >> shift));

        for (int row = 0; row < heightShifted; row++)
        {
            final int length;

            if (pixelStride == 1 && outputStride == 1)
            {
                // Packed source going to a packed destination: copy the whole row at once
                length = widthShifted;
                buffer.get(output.array(), channelOffset, length);
                channelOffset += length;
            }
            else
            {
                // Strided source or destination: copy the row into a scratch buffer,
                // then pick out every pixelStride-th byte
                length = (widthShifted - 1) * pixelStride + 1;
                buffer.get(rowData, 0, length);

                for (int col = 0; col < widthShifted; col++)
                {
                    output.array()[channelOffset] = rowData[col * pixelStride];
                    channelOffset += outputStride;
                }
            }

            // Skip any row padding (rowStride may be larger than the copied length),
            // except after the last row, where the plane buffer may end early
            if (row < heightShifted - 1)
            {
                buffer.position(buffer.position() + rowStride - length);
            }
        }
    }

    return output;
}
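
As mentioned in the note above, in production the conversion itself should be moved off the camera callback thread once the bytes have been copied out of the Image. The following is only a minimal sketch of that approach, assuming a single-threaded ExecutorService field (mConversionExecutor) and reusing the imageToByteBuffer() helper above; adapt it to your own threading requirements:

// Assumed field, for illustration only:
// private final ExecutorService mConversionExecutor = Executors.newSingleThreadExecutor();

@Override
public void onImageAvailable(ImageReader reader)
{
    final Image image = reader.acquireLatestImage();
    if (image == null) return;

    // Copy the YUV bytes while the Image is still valid, then close it right away
    // so the camera can reuse the ImageReader buffer.
    final ByteBuffer yuvBytes = this.imageToByteBuffer(image);
    final int width  = image.getWidth();
    final int height = image.getHeight();
    image.close();

    mConversionExecutor.execute(new Runnable()
    {
        @Override
        public void run()
        {
            // Same conversion as above, now running off the camera callback thread.
            // (Ideally the RenderScript instance would be created once and reused.)
            final RenderScript rs = RenderScript.create(mContext);

            final Bitmap     bitmap        = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
            final Allocation allocationRgb = Allocation.createFromBitmap(rs, bitmap);
            final Allocation allocationYuv = Allocation.createSized(rs, Element.U8(rs), yuvBytes.array().length);
            allocationYuv.copyFrom(yuvBytes.array());

            final ScriptIntrinsicYuvToRGB scriptYuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
            scriptYuvToRgb.setInput(allocationYuv);
            scriptYuvToRgb.forEach(allocationRgb);
            allocationRgb.copyTo(bitmap);

            // ... hand the bitmap to whatever needs it here ...

            allocationYuv.destroy();
            allocationRgb.destroy();
            rs.destroy();
        }
    });
}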
