DirectX Partial Screen Capture


Question

I am trying to create a program that will capture a full-screen DirectX application, look for a specific set of pixels on the screen, and if it finds them, draw an image on the screen.

I have been able to set up the application to capture the screen with the DirectX libraries, using the code from the answer to this question: Capture screen using DirectX.

In that example the code saves the capture to the hard drive using the WIC libraries. I would rather manipulate the pixels instead of saving them.

After I have captured the screen and have an LPBYTE of the entire screen's pixels, I am unsure how to crop it to the region I want and then manipulate the pixel array. Is it just a multi-dimensional byte array?

The way I think I should do it is (a rough sketch follows the list):

  1. Capture screen to an IWIC bitmap (done).
  2. Convert the IWIC bitmap to an ID2D1Bitmap using ID2D1RenderTarget::CreateBitmapFromWicBitmap.
  3. Create a new ID2D1Bitmap to store the partial image.
  4. Copy the region of the ID2D1Bitmap to the new bitmap using ID2D1Bitmap::CopyFromBitmap.
  5. Render it back onto the screen using ID2D1.
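
Roughly, I imagine steps 2-5 looking something like this (untested; pRenderTarget, pWicBitmap, and the region coordinates are placeholders I made up for illustration):

// Untested sketch of steps 2-5. Needs <d2d1.h> and <wincodec.h>, link d2d1.lib.
// pRenderTarget (ID2D1RenderTarget*) and pWicBitmap (IWICBitmapSource*) are assumed
// to already exist, and the region of interest is 100x100 starting at (10,10).
ID2D1Bitmap *pFullBitmap = nullptr;
ID2D1Bitmap *pPartialBitmap = nullptr;

// 2. wrap the captured WIC bitmap in a Direct2D bitmap
HRESULT hr = pRenderTarget->CreateBitmapFromWicBitmap(pWicBitmap, nullptr, &pFullBitmap);

// 3. create an empty bitmap sized to the region of interest
//    (the pixel format must match the source for CopyFromBitmap to succeed)
if (SUCCEEDED(hr))
{
  D2D1_BITMAP_PROPERTIES props = D2D1::BitmapProperties(
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));
  hr = pRenderTarget->CreateBitmap(D2D1::SizeU(100, 100), props, &pPartialBitmap);
}

// 4. copy the (10,10)-(110,110) region of the full bitmap into the new bitmap
if (SUCCEEDED(hr))
{
  D2D1_POINT_2U destPoint = D2D1::Point2U(0, 0);
  D2D1_RECT_U srcRect = D2D1::RectU(10, 10, 110, 110);
  hr = pPartialBitmap->CopyFromBitmap(&destPoint, pFullBitmap, &srcRect);
}

// 5. draw the partial bitmap back with the render target
if (SUCCEEDED(hr))
{
  pRenderTarget->BeginDraw();
  pRenderTarget->DrawBitmap(pPartialBitmap, D2D1::RectF(0.0f, 0.0f, 100.0f, 100.0f));
  hr = pRenderTarget->EndDraw();
}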

Any help with any of this would be much appreciated.

Solution

Here is a modified version of the code from the original answer that only captures a portion of the screen into a buffer, and also gives back the stride. It then browses all the pixels and dumps their colors, as a sample usage of the returned buffer.

In this sample, the buffer is allocated by the function, so you must free it once you're done with it:

// sample usage
int main()
{
  LONG left = 10;
  LONG top = 10;
  LONG width = 100;
  LONG height = 100;
  LPBYTE buffer;
  UINT stride;
  RECT rc = { left, top, left + width, top + height };
  Direct3D9TakeScreenshot(D3DADAPTER_DEFAULT, &buffer, &stride, &rc);

  // In 32bppPBGRA format, each pixel is represented by 4 bytes
  // with one byte each for blue, green, red, and the alpha channel, in that order.
  // But don't forget this is all modulo endianness ...
  // So, on Intel architecture, if we read a pixel from memory
  // as a DWORD, it's reversed (ARGB). The macros below handle that.

  // browse every pixel by line
  for (int h = 0; h < height; h++)
  {
    LPDWORD pixels = (LPDWORD)(buffer + h * stride);
    for (int w = 0; w < width; w++)
    {
      DWORD pixel = pixels[w];
      wprintf(L"#%02X#%02X#%02X#%02X\n", GetBGRAPixelAlpha(pixel), GetBGRAPixelRed(pixel), GetBGRAPixelGreen(pixel), GetBGRAPixelBlue(pixel));
    }
  }

  // get pixel at 50, 50 in the buffer, as #ARGB
  DWORD pixel = GetBGRAPixel(buffer, stride, 50, 50);
  wprintf(L"#%02X#%02X#%02X#%02X\n", GetBGRAPixelAlpha(pixel), GetBGRAPixelRed(pixel), GetBGRAPixelGreen(pixel), GetBGRAPixelBlue(pixel));

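  // SavePixelsToFile32bppPBGRA is the WIC save helper from the linked answer's code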
  SavePixelsToFile32bppPBGRA(width, height, stride, buffer, L"test.png", GUID_ContainerFormatPng);
  LocalFree(buffer);
  return 0;
}

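// pixel-channel helpers for the 32bppPBGRA buffer
// (in a real source file, define these above main so they are visible where used)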
#define GetBGRAPixelBlue(p)         (LOBYTE(p))
#define GetBGRAPixelGreen(p)        (HIBYTE(p))
#define GetBGRAPixelRed(p)          (LOBYTE(HIWORD(p)))
#define GetBGRAPixelAlpha(p)        (HIBYTE(HIWORD(p)))
#define GetBGRAPixel(b,s,x,y)       (((LPDWORD)(((LPBYTE)b) + y * s))[x])

And here is the modified capture function itself:

HRESULT Direct3D9TakeScreenshot(UINT adapter, LPBYTE *pBuffer, UINT *pStride, const RECT *pInputRc = nullptr)
{
  if (!pBuffer || !pStride) return E_INVALIDARG;

  HRESULT hr = S_OK;
  IDirect3D9 *d3d = nullptr;
  IDirect3DDevice9 *device = nullptr;
  IDirect3DSurface9 *surface = nullptr;
  D3DPRESENT_PARAMETERS parameters = { 0 };
  D3DDISPLAYMODE mode;
  D3DLOCKED_RECT rc;

  *pBuffer = NULL;
  *pStride = 0;

  // init D3D and get screen size
  d3d = Direct3DCreate9(D3D_SDK_VERSION);
  HRCHECK(d3d->GetAdapterDisplayMode(adapter, &mode));

  LONG width = pInputRc ? (pInputRc->right - pInputRc->left) : mode.Width;
  LONG height = pInputRc ? (pInputRc->bottom - pInputRc->top) : mode.Height;

  parameters.Windowed = TRUE;
  parameters.BackBufferCount = 1;
  parameters.BackBufferHeight = height;
  parameters.BackBufferWidth = width;
  parameters.SwapEffect = D3DSWAPEFFECT_DISCARD;
  parameters.hDeviceWindow = NULL;

  // create device & capture surface (note it needs desktop size, not our capture size)
  HRCHECK(d3d->CreateDevice(adapter, D3DDEVTYPE_HAL, NULL, D3DCREATE_SOFTWARE_VERTEXPROCESSING, &parameters, &device));
  HRCHECK(device->CreateOffscreenPlainSurface(mode.Width, mode.Height, D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &surface, nullptr));

  // get pitch/stride to compute the required buffer size
  HRCHECK(surface->LockRect(&rc, pInputRc, 0));
  *pStride = rc.Pitch;
  HRCHECK(surface->UnlockRect());

  // allocate buffer
  *pBuffer = (LPBYTE)LocalAlloc(0, *pStride * height);
  if (!*pBuffer)
  {
    hr = E_OUTOFMEMORY;
    goto cleanup;
  }

  // get the data
  HRCHECK(device->GetFrontBufferData(0, surface));

  // copy it into our buffer
  HRCHECK(surface->LockRect(&rc, pInputRc, 0));
  CopyMemory(*pBuffer, rc.pBits, rc.Pitch * height);
  HRCHECK(surface->UnlockRect());

cleanup:
  if (FAILED(hr))
  {
    if (*pBuffer)
    {
      LocalFree(*pBuffer);
      *pBuffer = NULL;
    }
    *pStride = 0;
  }
  RELEASE(surface);
  RELEASE(device);
  RELEASE(d3d);
  return hr;
}
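
Note: the HRCHECK and RELEASE macros used above come from the code in the linked answer. If you don't have that code handy, minimal stand-ins along these lines will work (a sketch; the original versions also log the failing expression):

// Minimal stand-ins for the error-handling helpers used above. HRCHECK assumes
// an HRESULT named hr and a cleanup: label in the enclosing function, which is
// how Direct3D9TakeScreenshot is written.
#define HRCHECK(__expr) { hr = (__expr); if (FAILED(hr)) goto cleanup; }
#define RELEASE(__p)    { if ((__p) != nullptr) { (__p)->Release(); (__p) = nullptr; } }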
