How to take CPU memory (UCHAR buffer) into GPU memory (ID3D11Texture2D resource)


Problem description

The code below captures the Windows screen on the GPU, giving us an ID3D11Texture2D resource. Using ID3D11DeviceContext::Map, I copy that GPU resource into a BYTE buffer and from there into CPU memory, g_iMageBuffer, a UCHAR array.

Now I want to do the reverse: take the g_iMageBuffer buffer (CPU memory) back into an ID3D11Texture2D (GPU memory). Please, someone help me with how to do this; I am new to the graphics part.

//Variable Declaration
IDXGIOutputDuplication* lDeskDupl;
IDXGIResource*          lDesktopResource = nullptr;
DXGI_OUTDUPL_FRAME_INFO lFrameInfo;
ID3D11Texture2D*        lAcquiredDesktopImage;
ID3D11Texture2D*        lDestImage;   // staging texture: D3D11_USAGE_STAGING, D3D11_CPU_ACCESS_READ
ID3D11DeviceContext*    lImmediateContext;
UCHAR*                  g_iMageBuffer = nullptr;

//Screen capture start here
hr = lDeskDupl->AcquireNextFrame(20, &lFrameInfo, &lDesktopResource);

// QueryInterface for ID3D11Texture2D
hr = lDesktopResource->QueryInterface(IID_PPV_ARGS(&lAcquiredDesktopImage));
lDesktopResource->Release();

// Copy image into GDI drawing texture
lImmediateContext->CopyResource(lDestImage,lAcquiredDesktopImage);
lAcquiredDesktopImage->Release();
lDeskDupl->ReleaseFrame();  

// Copy GPU Resource to CPU
D3D11_TEXTURE2D_DESC desc;
lDestImage->GetDesc(&desc);
D3D11_MAPPED_SUBRESOURCE resource;
UINT subresource = D3D11CalcSubresource(0, 0, 0);
lImmediateContext->Map(lDestImage, subresource, D3D11_MAP_READ_WRITE, 0, &resource);

std::unique_ptr<BYTE[]> pBuf(new BYTE[resource.RowPitch * desc.Height]);
UINT lBmpRowPitch = lOutputDuplDesc.ModeDesc.Width * 4;
BYTE* sptr = reinterpret_cast<BYTE*>(resource.pData);
BYTE* dptr = pBuf.get() + resource.RowPitch*desc.Height - lBmpRowPitch;
UINT lRowPitch = std::min<UINT>(lBmpRowPitch, resource.RowPitch);

for (size_t h = 0; h < lOutputDuplDesc.ModeDesc.Height; ++h)
{
    memcpy_s(dptr, lBmpRowPitch, sptr, lRowPitch);
    sptr += resource.RowPitch;
    dptr -= lBmpRowPitch;
}

lImmediateContext->Unmap(lDestImage, subresource);
long g_captureSize = lRowPitch * desc.Height;
g_iMageBuffer = new UCHAR[g_captureSize];

//Copying to UCHAR buffer 
memcpy(g_iMageBuffer, pBuf.get(), g_captureSize);
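The subtle part of the Map/memcpy loop above is that resource.RowPitch is usually larger than width * 4, so each row must be copied individually rather than in one block. A self-contained sketch of that pitch-aware copy (plain C++, no D3D, the helper name is illustrative; note the question's loop additionally reverses row order to produce a bottom-up bitmap, which this sketch omits for clarity):

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// Copy a tightly packed image out of a padded (pitched) source buffer,
// one row at a time -- the same pattern the Map/memcpy_s loop uses.
std::vector<uint8_t> CopyPitchedToTight(const uint8_t* src,
                                        size_t srcRowPitch,
                                        size_t width, size_t height,
                                        size_t bytesPerPixel = 4)
{
    const size_t tightPitch = width * bytesPerPixel;
    std::vector<uint8_t> dst(tightPitch * height);
    const size_t copyBytes = std::min(tightPitch, srcRowPitch);
    for (size_t row = 0; row < height; ++row)
        std::memcpy(dst.data() + row * tightPitch,   // packed destination
                    src + row * srcRowPitch,         // padded source
                    copyBytes);
    return dst;
}
```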


Recommended answer

You don't need reverse engineering. What you describe is called "loading a texture".

How To: Initialize a Texture Programmatically

How To: Initialize a Texture From a File
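Concretely, initializing a texture programmatically means describing the texture and handing ID3D11Device::CreateTexture2D the CPU pixels via a D3D11_SUBRESOURCE_DATA. A minimal Windows-only sketch (the helper name, and the assumption of tightly packed B8G8R8A8 pixels matching the capture in the question, are illustrative, not from the original answer):

```cpp
#include <d3d11.h>

// Sketch: create a GPU texture initialized from a CPU pixel buffer.
// Assumes `pixels` holds tightly packed B8G8R8A8 data, width * 4 bytes
// per row, as produced by the capture loop in the question.
HRESULT CreateTextureFromBuffer(ID3D11Device* device,
                                const UCHAR* pixels,
                                UINT width, UINT height,
                                ID3D11Texture2D** outTex)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = width;
    desc.Height           = height;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DEFAULT;
    desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

    D3D11_SUBRESOURCE_DATA initData = {};
    initData.pSysMem     = pixels;      // CPU-side pixels
    initData.SysMemPitch = width * 4;   // bytes per source row

    return device->CreateTexture2D(&desc, &initData, outTex);
}
```

For a texture you refill every frame, create it once and push new pixels with ID3D11DeviceContext::UpdateSubresource (DEFAULT usage), or Map it with D3D11_MAP_WRITE_DISCARD (DYNAMIC usage), rather than recreating it each time.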


As you appear to be new to DirectX programming, consider working through the DirectX Tool Kit for DX11 tutorials. In particular, make sure you read the sections on ComPtr and ThrowIfFailed.
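The ThrowIfFailed pattern mentioned above is a small helper that converts failed HRESULTs into exceptions instead of letting errors pass silently. A portable sketch (the HRESULT/S_OK/E_FAIL/FAILED definitions below are stand-ins so it compiles without Windows headers; in real code they come from the Windows SDK, and the DirectX Tool Kit ships its own ThrowIfFailed):

```cpp
#include <cstdint>
#include <cstdio>
#include <stdexcept>

// Stand-ins for the Windows SDK definitions (illustrative only).
using HRESULT = std::int32_t;
constexpr HRESULT S_OK   = 0;
constexpr HRESULT E_FAIL = static_cast<HRESULT>(0x80004005u);
constexpr bool FAILED(HRESULT hr) { return hr < 0; }

// The ThrowIfFailed idiom: turn any failed HRESULT into an exception.
inline void ThrowIfFailed(HRESULT hr)
{
    if (FAILED(hr))
    {
        char msg[64];
        std::snprintf(msg, sizeof(msg), "HRESULT failure: 0x%08lX",
                      static_cast<unsigned long>(static_cast<std::uint32_t>(hr)));
        throw std::runtime_error(msg);
    }
}
```

Wrapping every D3D call in ThrowIfFailed (and holding interfaces in Microsoft::WRL::ComPtr so Release happens automatically) removes most of the manual error and lifetime handling seen in the question's code.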

