camera2 captured picture - conversion from YUV_420_888 to NV21


Problem description

Via the camera2 API we are receiving an Image object in the YUV_420_888 format. We then use the following function to convert it to NV21:

import android.media.Image;
import java.nio.ByteBuffer;

private static byte[] YUV_420_888toNV21(Image image) {
    byte[] nv21;
    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
    ByteBuffer uBuffer = image.getPlanes()[1].getBuffer();
    ByteBuffer vBuffer = image.getPlanes()[2].getBuffer();

    int ySize = yBuffer.remaining();
    int uSize = uBuffer.remaining();
    int vSize = vBuffer.remaining();

    nv21 = new byte[ySize + uSize + vSize];

    //U and V are swapped
    yBuffer.get(nv21, 0, ySize);
    vBuffer.get(nv21, ySize, vSize);
    uBuffer.get(nv21, ySize + vSize, uSize);

    return nv21;
}

While this function works fine with cameraCaptureSessions.setRepeatingRequest, we get a segmentation fault in further processing (on the JNI side) when calling cameraCaptureSessions.capture. Both requests use the YUV_420_888 format via an ImageReader.
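For context, a minimal sketch of this kind of setup (previewSize, captureSize, cameraDevice, cameraCaptureSessions and backgroundHandler are placeholders; session creation and exception handling are omitted): both requests target an ImageReader configured for YUV_420_888, but the still-capture reader is typically configured at a much larger resolution.

import android.graphics.ImageFormat;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CaptureRequest;
import android.media.ImageReader;

// One reader per request type; both deliver YUV_420_888 frames.
ImageReader previewReader = ImageReader.newInstance(
        previewSize.getWidth(), previewSize.getHeight(),
        ImageFormat.YUV_420_888, 2);
ImageReader captureReader = ImageReader.newInstance(
        captureSize.getWidth(), captureSize.getHeight(),
        ImageFormat.YUV_420_888, 2);

// Repeating preview stream.
CaptureRequest.Builder previewBuilder =
        cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
previewBuilder.addTarget(previewReader.getSurface());
cameraCaptureSessions.setRepeatingRequest(previewBuilder.build(), null, backgroundHandler);

// One-shot still capture.
CaptureRequest.Builder captureBuilder =
        cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
captureBuilder.addTarget(captureReader.getSurface());
cameraCaptureSessions.capture(captureBuilder.build(), null, backgroundHandler);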

How come the result differs between the two calls while the requested format is the same?

Update: As mentioned in the comments, I get this behaviour because of different image sizes (the capture request uses much larger dimensions). But our further processing operations on the JNI side are the same for both requests and don't depend on the image dimensions (only on the aspect ratio, which is the same in both cases).

Recommended answer

Your code will only return correct NV21 if there is no padding at all and the U and V planes overlap, actually representing interleaved VU values. This happens quite often for preview, but in such a case you allocate an extra w*h/4 bytes for your array (which presumably is not a problem). Maybe for a captured image you need a more robust implementation, e.g.

import android.media.Image;
import java.nio.ByteBuffer;
import java.nio.ReadOnlyBufferException;

private static byte[] YUV_420_888toNV21(Image image) {

    int width = image.getWidth();
    int height = image.getHeight(); 
    int ySize = width*height;
    int uvSize = width*height/4;

    byte[] nv21 = new byte[ySize + uvSize*2];

    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer(); // Y
    ByteBuffer uBuffer = image.getPlanes()[1].getBuffer(); // U
    ByteBuffer vBuffer = image.getPlanes()[2].getBuffer(); // V

    int rowStride = image.getPlanes()[0].getRowStride();
    assert(image.getPlanes()[0].getPixelStride() == 1);

    int pos = 0;

    if (rowStride == width) { // likely
        yBuffer.get(nv21, 0, ySize);
        pos += ySize;
    }
    else {
        int yBufferPos = -rowStride; // not an actual position
        for (; pos<ySize; pos+=width) {
            yBufferPos += rowStride;
            yBuffer.position(yBufferPos);
            yBuffer.get(nv21, pos, width);
        }
    }

    rowStride = image.getPlanes()[2].getRowStride();
    int pixelStride = image.getPlanes()[2].getPixelStride();

    assert(rowStride == image.getPlanes()[1].getRowStride());
    assert(pixelStride == image.getPlanes()[1].getPixelStride());
    
    if (pixelStride == 2 && rowStride == width && uBuffer.get(0) == vBuffer.get(1)) {
        // maybe V an U planes overlap as per NV21, which means vBuffer[1] is alias of uBuffer[0]
        byte savePixel = vBuffer.get(1);
        try {
            vBuffer.put(1, (byte)~savePixel);
            if (uBuffer.get(0) == (byte)~savePixel) {
                vBuffer.put(1, savePixel);
                vBuffer.position(0);
                uBuffer.position(0);
                vBuffer.get(nv21, ySize, 1);
                uBuffer.get(nv21, ySize + 1, uBuffer.remaining());

                return nv21; // shortcut
            }
        }
        catch (ReadOnlyBufferException ex) {
            // unfortunately, we cannot check if vBuffer and uBuffer overlap
        }

        // unfortunately, the check failed. We must save U and V pixel by pixel
        vBuffer.put(1, savePixel);
    }

    // other optimizations could check if (pixelStride == 1) or (pixelStride == 2), 
    // but performance gain would be less significant

    for (int row=0; row<height/2; row++) {
        for (int col=0; col<width/2; col++) {
            int vuPos = col*pixelStride + row*rowStride;
            nv21[pos++] = vBuffer.get(vuPos);
            nv21[pos++] = uBuffer.get(vuPos);
        }
    }

    return nv21;
}
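For completeness, a sketch of how this conversion might be wired to an ImageReader callback (yuvReader and backgroundHandler are placeholders from the surrounding setup, and the JNI hand-off is elided):

import android.media.Image;

yuvReader.setOnImageAvailableListener(imageReader -> {
    Image image = imageReader.acquireLatestImage();
    if (image == null) return;
    try {
        byte[] nv21 = YUV_420_888toNV21(image);
        // hand nv21 off to the JNI processing here
    } finally {
        image.close(); // always close the Image, or the reader runs out of buffers
    }
}, backgroundHandler);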

If you anyway intend to pass the resulting array to C++, you can take advantage of the fact that

the buffer returned will always have isDirect return true, so the underlying data could be mapped as a pointer in JNI without doing any copies with GetDirectBufferAddress.

This means that the same conversion can be done in C++ with minimal overhead. In C++, you may even find that the actual pixel arrangement is already NV21!
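As a sketch of that route (processYuvNative is a hypothetical native method, not an existing API), the direct plane buffers can be handed to JNI together with the strides needed to interpret them; the C++ side would obtain raw pointers via GetDirectBufferAddress without copying:

import android.media.Image;
import java.nio.ByteBuffer;

// Hypothetical native entry point, implemented in C++ by calling
// GetDirectBufferAddress on each direct ByteBuffer to get raw plane pointers.
private static native void processYuvNative(
        ByteBuffer y, int yRowStride,
        ByteBuffer u, ByteBuffer v, int uvRowStride, int uvPixelStride,
        int width, int height);

private static void processImage(Image image) {
    Image.Plane[] planes = image.getPlanes();
    processYuvNative(
            planes[0].getBuffer(), planes[0].getRowStride(),
            planes[1].getBuffer(), planes[2].getBuffer(),
            planes[2].getRowStride(), planes[2].getPixelStride(),
            image.getWidth(), image.getHeight());
}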

PS Actually, this can be done in Java with negligible overhead; see the line if (pixelStride == 2 && … above. So we can bulk-copy all chroma bytes to the resulting byte array, which is much faster than running the loops, but still slower than what can be achieved for such a case in C++. For a full implementation, see Image.toByteArray().
