How to apply image effects like edge detection on camera stream in Windows 8 app?


Problem Description

I am trying to apply image manipulation effects directly to the camera feed in a Windows 8 app. I have tried an approach using a canvas: grabbing frames from the webcam, applying an effect, and redrawing the image. This works fine for basic effects, but for effects like edge detection the canvas approach creates a large lag and flickering.

The other way is to create an MFT (Media Foundation Transform), but that has to be implemented in C/C++, which I have no experience with.

Can anyone tell me how I can achieve my goal of applying effects to the webcam stream directly in a Windows 8 Metro style app, either by improving the canvas approach so that heavy effects like edge detection do not have these issues, or by using an MFT from C# (since C# is the language I have worked in), or by some other approach?

Solution

I have just played quite a bit in this area the last week and even considered writing a blog post about it. I guess this answer can be just as good.

You can go the MFT way, which needs to be done in C++, but the code you would need to write would not be much different between C# and C++. The only thing of note is that I believe the MFT works in the YUV color space, so your typical convolution filters/effects might behave a bit differently or require conversion to RGB. If you decide to go that route, the only thing you need to do on the C# application side is call MediaCapture.AddEffectAsync(). Well, that and editing your Package.appxmanifest, but let's take first things first.

If you look at the Media capture using webcam sample - it already does what you need: it applies a grayscale effect to your camera feed. The sample includes a C++ MFT project that is used by the C# version of the application. I had to apply the effect to a MediaElement, which might not be what you need, but it is just as simple - call MediaElement.AddVideoEffect() and your video file playback now has the grayscale effect applied. To be able to use the MFT - you simply need to add a reference to the GrayscaleTransform project and add the following lines to your appxmanifest:

<Extensions>
  <Extension Category="windows.activatableClass.inProcessServer">
    <InProcessServer>
      <Path>GrayscaleTransform.dll</Path>
      <ActivatableClass ActivatableClassId="GrayscaleTransform.GrayscaleEffect" ThreadingModel="both" />
    </InProcessServer>
  </Extension>
</Extensions>

How the MFT code works:

The following lines create a pixel color transformation matrix

float scale = (float)MFGetAttributeDouble(m_pAttributes, MFT_GRAYSCALE_SATURATION, 0.0f);
float angle = (float)MFGetAttributeDouble(m_pAttributes, MFT_GRAYSCALE_CHROMA_ROTATION, 0.0f);
m_transform = D2D1::Matrix3x2F::Scale(scale, scale) * D2D1::Matrix3x2F::Rotation(angle);

Depending on the pixel format of the video feed - a different transformation method is selected to scan the pixels. Look for these lines:

m_pTransformFn = TransformImage_YUY2;
m_pTransformFn = TransformImage_UYVY;
m_pTransformFn = TransformImage_NV12;

For my sample m4v file - the format is detected as NV12, so it is calling TransformImage_NV12.

For pixels within the specified range (m_rcDest), or within the entire frame if no range was specified, the TransformImage_~ methods call TransformChroma(mat, &u, &v). For the other pixels, the values from the original frame are copied.

TransformChroma transforms the pixels using m_transform. If you want to change the effect - you can simply change the m_transform matrix or if you need access to neighboring pixels as in an edge detection filter - modify the TransformImage_ methods to process these pixels.

This is one way to do it. I think it is quite CPU intensive, so personally I prefer to write a pixel shader for such operations. How do you apply a pixel shader to a video stream, though? Well, I am not quite there yet, but I believe you can transfer video frames to a DirectX surface fairly easily and call a pixel shader on them later. So far I have been able to transfer the video frames, and I am hoping to apply the shaders next week. I might write a blog post about it.

I took the meplayer class from the Media engine native C++ playback sample and moved it into a template C++ DirectX project converted to a WinRT component library, then used it from a C#/XAML application, associating the swapchain the meplayer class creates with the SwapChainBackgroundPanel that I use in the C# project to display the video. I had to make a few changes to the meplayer class. First, I had to move it to a public namespace to make it available to the other assembly. Then I had to modify the swapchain it creates to a format accepted for use with a SwapChainBackgroundPanel:

        DXGI_SWAP_CHAIN_DESC1 swapChainDesc = {0};
        swapChainDesc.Width = m_rcTarget.right;
        swapChainDesc.Height = m_rcTarget.bottom;
        // Most common swapchain format is DXGI_FORMAT_R8G8B8A8_UNORM
        swapChainDesc.Format = m_d3dFormat;
        swapChainDesc.Stereo = false;

        // Don't use Multi-sampling
        swapChainDesc.SampleDesc.Count = 1;
        swapChainDesc.SampleDesc.Quality = 0;

        //swapChainDesc.BufferUsage = DXGI_USAGE_BACK_BUFFER | DXGI_USAGE_RENDER_TARGET_OUTPUT;
        swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; // Allow it to be used as a render target.
        // Use more than 1 buffer to enable Flip effect.
        //swapChainDesc.BufferCount = 4;
        swapChainDesc.BufferCount = 2;
        //swapChainDesc.Scaling = DXGI_SCALING_NONE;
        swapChainDesc.Scaling = DXGI_SCALING_STRETCH;
        swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;
        swapChainDesc.Flags = 0;

Finally - instead of calling CreateSwapChainForCoreWindow - I am calling CreateSwapChainForComposition and associating the swapchain with my SwapChainBackgroundPanel:

        // Create the swap chain and then associate it with the SwapChainBackgroundPanel.
        DX::ThrowIfFailed(
            spDXGIFactory.Get()->CreateSwapChainForComposition(
                spDevice.Get(),
                &swapChainDesc,
                nullptr,                                // allow on all displays
                &m_spDX11SwapChain)
            );

        ComPtr<ISwapChainBackgroundPanelNative> dxRootPanelAsSwapChainBackgroundPanel;

        // Set the swap chain on the SwapChainBackgroundPanel.
        reinterpret_cast<IUnknown*>(m_swapChainPanel)->QueryInterface(
            IID_PPV_ARGS(&dxRootPanelAsSwapChainBackgroundPanel)
            );

        DX::ThrowIfFailed(
            dxRootPanelAsSwapChainBackgroundPanel->SetSwapChain(m_spDX11SwapChain.Get())
            );

*EDIT follows

Forgot about one more thing. If your goal is to stay in pure C# - if you figure out how to capture frames to a WriteableBitmap (maybe by calling MediaCapture.CapturePhotoToStreamAsync() with a MemoryStream and then calling WriteableBitmap.SetSource() on the stream) - you can use WriteableBitmapEx to process your images. It might not be top performance, but if your resolution is not too high or your frame-rate requirements are not high - it might just be enough. The project on CodePlex does not officially support WinRT yet, but I have a version that should work that you can try here (Dropbox).

