Take a screenshot using MediaProjection


Problem description


With the MediaProjection APIs available in Android L it's possible to

capture the contents of the main screen (the default display) into a Surface object, which your app can then send across the network

I have managed to get the VirtualDisplay working, and my SurfaceView is correctly displaying the content of the screen.

What I want to do is to capture a frame displayed in the Surface, and print it to file. I have tried the following, but all I get is a black file:

Bitmap bitmap = Bitmap.createBitmap
    (surfaceView.getWidth(), surfaceView.getHeight(), Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(bitmap);
surfaceView.draw(canvas);
printBitmapToFile(bitmap);

Any idea on how to retrieve the displayed data from the Surface?

EDIT

So as @j__m suggested I'm now setting up the VirtualDisplay using the Surface of an ImageReader:

Display display = getWindowManager().getDefaultDisplay();
Point size = new Point();
display.getSize(size);
displayWidth = size.x;
displayHeight = size.y;

imageReader = ImageReader.newInstance(displayWidth, displayHeight, ImageFormat.JPEG, 5);

Then I create the virtual display passing the Surface to the MediaProjection:

int flags = DisplayManager.VIRTUAL_DISPLAY_FLAG_OWN_CONTENT_ONLY | DisplayManager.VIRTUAL_DISPLAY_FLAG_PUBLIC;

DisplayMetrics metrics = getResources().getDisplayMetrics();
int density = metrics.densityDpi;

mediaProjection.createVirtualDisplay("test", displayWidth, displayHeight, density, flags, 
      imageReader.getSurface(), null, projectionHandler);

Finally, in order to get a "screenshot" I acquire an Image from the ImageReader and read the data from it:

Image image = imageReader.acquireLatestImage();
byte[] data = getDataFromImage(image);
Bitmap bitmap = BitmapFactory.decodeByteArray(data, 0, data.length);

The problem is that the resulting bitmap is null.

This is the getDataFromImage method:

public static byte[] getDataFromImage(Image image) {
   Image.Plane[] planes = image.getPlanes();
   ByteBuffer buffer = planes[0].getBuffer();
   byte[] data = new byte[buffer.capacity()];
   buffer.get(data);

   return data;
}

The Image returned from acquireLatestImage always has data with a default size of 7672320, and decoding it returns null.

More specifically, when the ImageReader tries to acquire an image, the status ACQUIRE_NO_BUFS is returned.

Solution

After spending some time and learning a bit more about the Android graphics architecture than I would have liked, I have got it working. All the necessary pieces are well documented, but can cause headaches if you aren't already familiar with OpenGL, so here is a nice summary "for dummies".

I am assuming that you

  • Know about Grafika, an unofficial Android media API test-suite, written by Google's work-loving employees in their spare time;
  • Can read through Khronos GL ES docs to fill gaps in OpenGL ES knowledge, when necessary;
  • Have read this document and understood most of what is written there (at least the parts about hardware composers and BufferQueue).

The BufferQueue is what ImageReader is about. That class was poorly named to begin with – it would be better to call it "ImageReceiver" – a dumb wrapper around the receiving end of a BufferQueue (inaccessible via any other public API). Don't be fooled: it does not perform any conversions. It does not allow querying the formats supported by the producer, even though the C++ BufferQueue exposes that information internally. It may fail in simple situations, for example if the producer uses a custom, obscure format (such as BGRA).

The above-listed issues are why I recommend using OpenGL ES glReadPixels as a generic fallback, while still attempting to use ImageReader if available, since it potentially allows retrieving the image with minimal copies/transformations.


To get a better idea of how to use OpenGL for the task, let's look at the Surface returned by ImageReader/MediaCodec. It is nothing special, just a normal Surface on top of a SurfaceTexture, with two gotchas: OES_EGL_image_external and EGL_ANDROID_recordable.

OES_EGL_image_external

Simply put, OES_EGL_image_external is a flag that must be passed to glBindTexture to make the texture work with a BufferQueue. Rather than defining a specific color format etc., it is an opaque container for whatever is received from the producer. The actual contents may be in a YUV colorspace (mandatory for the Camera API), RGBA/BGRA (often used by video drivers) or some other, possibly vendor-specific format. The producer may offer some niceties, such as a JPEG or RGB565 representation, but don't hold your hopes high.

The only producer covered by CTS tests as of Android 6.0 is the Camera API (AFAIK only its Java facade). The reason there are many MediaProjection + RGBA8888 ImageReader examples flying around is that RGBA8888 is a frequently encountered common denominator, and the only format mandated by the OpenGL ES spec for glReadPixels. Still, don't be surprised if the display composer decides to use a completely unreadable format, or simply one unsupported by the ImageReader class (such as BGRA8888), and you have to deal with it.

EGL_ANDROID_recordable

As evident from reading the specification, it is a flag passed to eglChooseConfig in order to gently push the producer towards generating YUV images. Or to optimize the pipeline for reading from video memory. Or something. I am not aware of any CTS tests ensuring its correct treatment (and even the specification itself suggests that individual producers may be hard-coded to give it special treatment), so don't be surprised if it happens to be unsupported (see the Android 5.0 emulator) or silently ignored. There is no definition in the Java classes; just define the constant yourself, like Grafika does.
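To make "define the constant yourself" concrete, here is a sketch of an eglChooseConfig attribute list with the recordable flag appended. The class and method names are my own illustration, but the constant values (EGL_RECORDABLE_ANDROID = 0x3142, same as Grafika's, and the standard EGL attribute constants) come from the Khronos headers:

```java
// Sketch: EGL_ANDROID_recordable has no constant in the Java EGL14 class,
// so define it yourself (0x3142 per the Khronos extension spec) and append
// the attribute pair to the eglChooseConfig attribute list.
public class EglConfigAttribs {
    public static final int EGL_RECORDABLE_ANDROID = 0x3142;

    // Builds an attribute list for eglChooseConfig; the list is a flat
    // sequence of (attribute, value) pairs terminated by EGL_NONE (0x3038).
    public static int[] configAttribs(boolean recordable) {
        final int EGL_RED_SIZE = 0x3024, EGL_GREEN_SIZE = 0x3023,
                  EGL_BLUE_SIZE = 0x3022, EGL_ALPHA_SIZE = 0x3021,
                  EGL_NONE = 0x3038;
        if (recordable) {
            return new int[] {
                EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
                EGL_ALPHA_SIZE, 8, EGL_RECORDABLE_ANDROID, 1, EGL_NONE
            };
        }
        return new int[] {
            EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
            EGL_ALPHA_SIZE, 8, EGL_NONE
        };
    }
}
```

You would pass the resulting array to EGL14.eglChooseConfig; as noted above, be prepared for the flag to be ignored on some devices.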

Getting to the hard part

So what is one supposed to do to read from a VirtualDisplay in the background "the right way"?

  1. Create EGL context and EGL display, possibly with "recordable" flag, but not necessarily.
  2. Create an offscreen buffer for storing image data before it is read from video memory.
  3. Create a GL_TEXTURE_EXTERNAL_OES texture.
  4. Create a GL shader for drawing the texture from step 3 into the buffer from step 2. The video driver will (hopefully) ensure that anything contained in the "external" texture is safely converted to conventional RGBA (see the spec).
  5. Create a Surface + SurfaceTexture, using the "external" texture.
  6. Install an OnFrameAvailableListener on said SurfaceTexture (this must be done before the next step, or else the BufferQueue will be screwed up!)
  7. Supply the Surface from step 5 to the VirtualDisplay.

Your OnFrameAvailableListener callback will contain the following steps:

  • Make the context current (e.g. by making your offscreen buffer current);
  • updateTexImage to request an image from producer;
  • getTransformMatrix to retrieve the transformation matrix of the texture, fixing whatever madness may be plaguing the producer's output. Note that this matrix fixes the OpenGL upside-down coordinate system, but we will reintroduce the upside-downness in the next step.
  • Draw the "external" texture onto our offscreen buffer, using the previously created shader. The shader needs to additionally flip its Y coordinate, unless you want to end up with a flipped image.
  • Use glReadPixels to read from your offscreen video buffer into a ByteBuffer.

Most of the above steps are performed internally when reading video memory with ImageReader, but some differ. The alignment of rows in the created buffer can be defined with glPixelStorei (and defaults to 4, so you don't have to account for it when using 4-byte RGBA8888).
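The alignment arithmetic is easy to get wrong, so here is a plain-Java illustration of it (a standalone helper of my own, not an API call): with GL_PACK_ALIGNMENT at its default of 4, each row written by glReadPixels is padded up to a multiple of 4 bytes, which is why 4-byte RGBA8888 rows never need padding but 3-byte RGB rows usually do.

```java
// Computes the padded row size glReadPixels will use for a given
// GL_PACK_ALIGNMENT value: the tight row size, rounded up to the
// nearest multiple of the alignment.
public class PackAlignment {
    public static int paddedRowBytes(int width, int bytesPerPixel, int alignment) {
        int row = width * bytesPerPixel;
        return ((row + alignment - 1) / alignment) * alignment;
    }
}
```

For example, a width-5 RGB row is 15 bytes tight but 16 bytes with the default alignment of 4, while a width-5 RGBA8888 row is 20 bytes either way. If you do read a non-4-byte format and want tightly packed rows, call GLES20.glPixelStorei(GLES20.GL_PACK_ALIGNMENT, 1) before glReadPixels.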

Note that, aside from processing a texture with shaders, GL ES does no automatic conversion between formats (unlike desktop OpenGL). If you want RGBA8888 data, make sure to allocate the offscreen buffer in that format and request it from glReadPixels.

EglCore eglCore;

Surface producerSide;
SurfaceTexture texture;
int textureId;

OffscreenSurface consumerSide;
ByteBuffer buf;

Texture2dProgram shader;
FullFrameRect screen;

...

// dimensions of the Display, or whatever you wanted to read from
int w, h = ...

// feel free to try FLAG_RECORDABLE if you want
eglCore = new EglCore(null, EglCore.FLAG_TRY_GLES3);

consumerSide = new OffscreenSurface(eglCore, w, h);
consumerSide.makeCurrent();

shader = new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_EXT);
screen = new FullFrameRect(shader);

texture = new SurfaceTexture(textureId = screen.createTextureObject(), false);
texture.setDefaultBufferSize(w, h);
producerSide = new Surface(texture);
texture.setOnFrameAvailableListener(this);

buf = ByteBuffer.allocateDirect(w * h * 4);
buf.order(ByteOrder.nativeOrder());

currentBitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);

Only after doing all of the above can you initialize your VirtualDisplay with the producerSide Surface.

The frame callback code:

float[] matrix = new float[16];

boolean closed;

public void onFrameAvailable(SurfaceTexture surfaceTexture) {
  // there may still be pending callbacks after shutting down EGL
  if (closed) return;

  consumerSide.makeCurrent();

  texture.updateTexImage();
  texture.getTransformMatrix(matrix);

  consumerSide.makeCurrent();

  // draw the image to framebuffer object
  screen.drawFrame(textureId, matrix);
  consumerSide.swapBuffers();

  buf.rewind();
  GLES20.glReadPixels(0, 0, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);

  buf.rewind();
  currentBitmap.copyPixelsFromBuffer(buf);

  // congrats, you should have your image in the Bitmap
  // you can release the resources or continue to obtain
  // frames for whatever poor-man's video recorder you are writing
}

The code above is a greatly simplified version of the approach found in this Github project, but all the referenced classes come directly from Grafika.

Depending on your hardware you may have to jump through a few extra hoops to get things done: using setSwapInterval, calling glFlush before taking the screenshot, etc. Most of these can be figured out on your own from the contents of LogCat.

In order to avoid the Y coordinate reversal, replace the vertex shader used by Grafika with the following one:

String VERTEX_SHADER_FLIPPED =
        "uniform mat4 uMVPMatrix;\n" +
        "uniform mat4 uTexMatrix;\n" +
        "attribute vec4 aPosition;\n" +
        "attribute vec4 aTextureCoord;\n" +
        "varying vec2 vTextureCoord;\n" +
        "void main() {\n" +
        "    gl_Position = uMVPMatrix * aPosition;\n" +
        "    vec2 coordInterm = (uTexMatrix * aTextureCoord).xy;\n" +
        // "OpenGL ES: how to flip the Y-coordinate: 6542nd edition"
        "    vTextureCoord = vec2(coordInterm.x, 1.0 - coordInterm.y);\n" +
        "}\n";

Parting words

The above-described approach can be used when ImageReader does not work for you, or when you want to perform some shader processing on Surface contents before moving images off the GPU.

Its speed may be harmed by the extra copy to the offscreen buffer, but the impact of running the shader will be minimal if you know the exact format of the received buffer (e.g. from ImageReader) and use the same format for glReadPixels.

For example, if your video driver uses BGRA as its internal format, you would check whether EXT_texture_format_BGRA8888 is supported (it likely is), allocate the offscreen buffer in that format and retrieve the image in the same format with glReadPixels.

If you want to perform a complete zero-copy, or to employ formats not supported by OpenGL (e.g. JPEG), you are still better off using ImageReader.
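If you do go the ImageReader route, note that the virtual display's producer will generally not render into ImageFormat.JPEG (which matches the ACQUIRE_NO_BUFS symptom in the question); the working examples use PixelFormat.RGBA_8888, together with row-stride-aware copying, because each Image.Plane row may be padded beyond width * pixelStride bytes. A minimal sketch of that packing step (the class and method names are my own, not part of any Android API):

```java
// Packs a row-padded RGBA_8888 buffer, as obtained from an Android
// Image.Plane, into a tight width*height*pixelStride array suitable
// for Bitmap.copyPixelsFromBuffer.
public class TightPacker {
    public static byte[] packTightly(byte[] src, int width, int height,
                                     int pixelStride, int rowStride) {
        byte[] dst = new byte[width * height * pixelStride];
        for (int y = 0; y < height; y++) {
            // each source row occupies rowStride bytes, of which only
            // width * pixelStride carry pixel data
            System.arraycopy(src, y * rowStride,
                             dst, y * width * pixelStride,
                             width * pixelStride);
        }
        return dst;
    }
}
```

On Android you would obtain pixelStride and rowStride from image.getPlanes()[0].getPixelStride() and getRowStride(), then wrap the result with Bitmap.copyPixelsFromBuffer rather than BitmapFactory.decodeByteArray (the latter expects a compressed format such as JPEG, which is why it returned null in the question).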

