Using Camera2 API with ImageReader
Question
I'm trying to capture image data using the Camera2 API on a Galaxy S4. ImageReader is being used as the surface provider. The image format used has been tried with both ImageFormat.YV12 and ImageFormat.YUV_420_888 and produces the same results.
The setup seems fine, and I get an Image from the ImageReader via acquireNextImage(). The Image has 3 planes. The buffers are the expected sizes: Width*Height for the Y plane and (Width*Height)/4 for each of the other two planes.
The issue is that I'm not getting data properly, in two ways. The first issue is that the Y plane data is mirrored. This can be dealt with, though it is strange, so I am curious whether it is expected.
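For what it's worth, undoing the mirroring doesn't need any Android classes. The sketch below is a hypothetical helper (not from the original post) that assumes the mirror is horizontal and the luma buffer is tightly packed; it reverses each row of a plain byte[] in place, so it can be sanity-checked off-device:

```java
// Hypothetical helper (assumes a horizontal mirror and a tightly packed
// width*height luma buffer): reverse each row in place to restore the
// expected orientation of the Y plane.
public class YPlaneFlip {
    public static void unmirrorRows(byte[] plane, int width, int height) {
        for (int row = 0; row < height; row++) {
            int left = row * width;        // first byte of this row
            int right = left + width - 1;  // last byte of this row
            while (left < right) {
                byte tmp = plane[left];
                plane[left] = plane[right];
                plane[right] = tmp;
                left++;
                right--;
            }
        }
    }

    public static void main(String[] args) {
        byte[] plane = {1, 2, 3, 4, 5, 6}; // toy 3x2 "Y plane"
        unmirrorRows(plane, 3, 2);
        System.out.println(java.util.Arrays.toString(plane)); // [3, 2, 1, 6, 5, 4]
    }
}
```

If the image instead turns out to be rotated rather than mirrored, the sensor orientation reported in CameraCharacteristics is the first thing to check.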
The more serious issue is that the other two planes don't seem to be delivering data correctly at all. For instance, with an image size of 640x480, which results in U and V buffer sizes of 76800 bytes each, only the first 320 bytes of each buffer are non-zero. This count varies and does not seem to follow a set ratio across image sizes, but it is consistent between images of the same size.
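One thing worth ruling out before blaming the device: Image.Plane buffers are not guaranteed to be tightly packed. Each plane reports a rowStride and pixelStride, so a chroma sample lives at row * rowStride + col * pixelStride, and reading the buffer as a flat width*height/4 array mixes padding into the data. The sketch below uses hypothetical names and operates on a plain byte[] (rather than the Plane's ByteBuffer) so it can run off-device:

```java
// Sketch of a stride-aware chroma copy. Assumptions: the plane's bytes have
// already been pulled into planeData, and chromaWidth/chromaHeight are the
// plane's own sample dimensions (half the frame size for 4:2:0 formats).
public class ChromaCopy {
    public static byte[] tightCopy(byte[] planeData, int rowStride, int pixelStride,
                                   int chromaWidth, int chromaHeight) {
        byte[] out = new byte[chromaWidth * chromaHeight];
        int outIdx = 0;
        for (int row = 0; row < chromaHeight; row++) {
            int rowStart = row * rowStride; // skip per-row padding
            for (int col = 0; col < chromaWidth; col++) {
                out[outIdx++] = planeData[rowStart + col * pixelStride];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Toy 3x2 chroma plane with rowStride 8 and pixelStride 2:
        // samples a..f interleaved with padding zeros.
        byte[] padded = {
            'a', 0, 'b', 0, 'c', 0, 0, 0,
            'd', 0, 'e', 0, 'f', 0, 0, 0,
        };
        byte[] tight = tightCopy(padded, 8, 2, 3, 2);
        System.out.println(new String(tight)); // abcdef
    }
}
```

On a real Image, rowStride and pixelStride come from planes[i].getRowStride() and planes[i].getPixelStride(); a pixelStride of 2 on the chroma planes is common and would explain data that looks half-missing when read flat.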
I wonder if there is something that I am missing in using this API. Code is below.
public class OnboardCamera {
    private final String TAG = "OnboardCamera";

    int mWidth = 1280;
    int mHeight = 720;
    int mYSize = mWidth * mHeight;
    int mUVSize = mYSize / 4;
    int mFrameSize = mYSize + (mUVSize * 2);

    // Handler for the camera.
    private HandlerThread mCameraHandlerThread;
    private Handler mCameraHandler;

    // The size of the ImageReader determines the output from the camera.
    private ImageReader mImageReader = ImageReader.newInstance(mWidth, mHeight, ImageFormat.YV12, 30);
    private Surface mCameraRecieverSurface = mImageReader.getSurface();

    {
        mImageReader.setOnImageAvailableListener(mImageAvailListener, mCameraHandler);
    }

    private byte[] tempYbuffer = new byte[mYSize];
    private byte[] tempUbuffer = new byte[mUVSize];
    private byte[] tempVbuffer = new byte[mUVSize];

    ImageReader.OnImageAvailableListener mImageAvailListener = new ImageReader.OnImageAvailableListener() {
        @Override
        public void onImageAvailable(ImageReader reader) {
            // When a buffer is available from the camera, get the image.
            Image image = reader.acquireNextImage();
            Image.Plane[] planes = image.getPlanes();

            // Copy it into a byte[].
            byte[] outFrame = new byte[mFrameSize];
            int outFrameNextIndex = 0;

            ByteBuffer yByteBuf = planes[0].getBuffer();
            yByteBuf.get(tempYbuffer, 0, tempYbuffer.length);

            ByteBuffer vByteBuf = planes[1].getBuffer();
            vByteBuf.get(tempVbuffer);

            ByteBuffer uByteBuf = planes[2].getBuffer();
            uByteBuf.get(tempUbuffer);

            // Free the Image.
            image.close();
        }
    };

    OnboardCamera() {
        mCameraHandlerThread = new HandlerThread("mCameraHandlerThread");
        mCameraHandlerThread.start();
        mCameraHandler = new Handler(mCameraHandlerThread.getLooper());
    }

    @Override
    public boolean startProducing() {
        CameraManager cm = (CameraManager) Ten8Application.getAppContext().getSystemService(Context.CAMERA_SERVICE);
        try {
            String[] cameraList = cm.getCameraIdList();
            for (String cd : cameraList) {
                // Get camera characteristics.
                CameraCharacteristics mCameraCharacteristics = cm.getCameraCharacteristics(cd);

                // Check if the camera is in the back - if not, continue to the next one.
                if (mCameraCharacteristics.get(CameraCharacteristics.LENS_FACING) != CameraCharacteristics.LENS_FACING_BACK) {
                    continue;
                }

                // Get StreamConfigurationMap - supported image formats.
                StreamConfigurationMap scm = mCameraCharacteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
                android.util.Size[] sizes = scm.getOutputSizes(ImageFormat.YV12);

                cm.openCamera(cd, mDeviceStateCallback, mCameraHandler);
            }
        } catch (CameraAccessException e) {
            Log.e(TAG, "CameraAccessException detected", e);
        }
        return false;
    }

    private final CameraDevice.StateCallback mDeviceStateCallback = new CameraDevice.StateCallback() {
        @Override
        public void onOpened(CameraDevice camera) {
            // Make a list of surfaces to give to the camera.
            List<Surface> surfaceList = new ArrayList<>();
            surfaceList.add(mCameraRecieverSurface);
            try {
                camera.createCaptureSession(surfaceList, mCaptureSessionStateCallback, mCameraHandler);
            } catch (CameraAccessException e) {
                Log.e(TAG, "createCaptureSession threw CameraAccessException.", e);
            }
        }

        @Override
        public void onDisconnected(CameraDevice camera) {
        }

        @Override
        public void onError(CameraDevice camera, int error) {
        }
    };

    private final CameraCaptureSession.StateCallback mCaptureSessionStateCallback = new CameraCaptureSession.StateCallback() {
        @Override
        public void onConfigured(CameraCaptureSession session) {
            try {
                CaptureRequest.Builder requestBuilder = session.getDevice().createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
                requestBuilder.addTarget(mCameraRecieverSurface);

                // Listener set to null - image data will be produced but no metadata will be received.
                session.setRepeatingRequest(requestBuilder.build(), null, mCameraHandler);
            } catch (CameraAccessException e) {
                Log.e(TAG, "setRepeatingRequest threw CameraAccessException.", e);
            }
        }

        @Override
        public void onConfigureFailed(CameraCaptureSession session) {
        }
    };
}
Answer
I had the same issue; I believe the problem was in Android API 21. I upgraded to API 23 and the same code worked fine. I also tested on API 22, and it worked there as well.
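If you are stuck on the older API level, one defensive check is to compare a plane's ByteBuffer capacity against the minimum extent implied by its reported strides; a gross mismatch (or zero strides) is a hint the HAL is handing back unusable data rather than your code misreading it. A minimal sketch, using a hypothetical helper name:

```java
// Hypothetical sanity check, not from the answer above: the last meaningful
// byte of a plane sits at rowStride*(height-1) + pixelStride*(width-1),
// where width/height are the plane's own sample dimensions (halved for the
// chroma planes of a 4:2:0 format). The buffer must hold at least this many
// bytes for the strides to be believable.
public class PlaneCheck {
    public static int minPlaneBytes(int rowStride, int pixelStride, int width, int height) {
        return rowStride * (height - 1) + pixelStride * (width - 1) + 1;
    }

    public static void main(String[] args) {
        // Tightly packed 640x480 Y plane: rowStride 640, pixelStride 1.
        System.out.println(minPlaneBytes(640, 1, 640, 480)); // 307200
        // Tightly packed 320x240 chroma plane for the same frame.
        System.out.println(minPlaneBytes(320, 1, 320, 240)); // 76800
    }
}
```

On-device, the inputs would come from planes[i].getRowStride(), planes[i].getPixelStride(), and the plane dimensions, with the result compared against planes[i].getBuffer().remaining().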