How to apply image effects like edge detection on camera stream in Windows 8 app?


Question

I am trying to apply image manipulation effects directly to the camera feed in a Windows 8 app. I have tried an approach using a canvas: grabbing frames from the webcam, applying the effect, and redrawing the image. That works fine for basic effects, but for effects like edge detection the canvas approach produces a large lag and flickering.

The other way is to create an MFT (Media Foundation Transform), but that has to be implemented in C++, which I know nothing about.

Can anyone tell me how to apply effects to the webcam stream directly in a Windows 8 Metro style app, either by improving the canvas approach so that heavy effects like edge detection no longer have these issues, by using an MFT from C# (the language I have experience with), or by some other approach?

Answer

I have played around quite a bit in this area over the last week and even considered writing a blog post about it. I guess this answer can be just as good.

You can go the MFT way, which needs to be done in C++, but the things you would need to write would not be much different between C# and C++. The only thing of note is that I think the MFT works in the YUV color space, so your typical convolution filters/effects might behave a bit differently or require conversion to RGB. If you decide to go that route, on the C# application side the only thing you need to do is call MediaCapture.AddEffectAsync(). Well, that and editing your Package.appxmanifest etc., but first things first.

If you look at the Media capture using webcam sample - it already does what you need. It applies a grayscale effect to your camera feed. It includes a C++ MFT project that is used in an application which is also available in a C# version. I had to apply the effect to a MediaElement, which might not be what you need, but it is just as simple - call MediaElement.AddVideoEffect() and your video file playback now applies the grayscale effect. To be able to use the MFT - you simply need to add a reference to the GrayscaleTransform project and add the following lines to your appxmanifest:

<Extensions>
  <Extension Category="windows.activatableClass.inProcessServer">
    <InProcessServer>
      <Path>GrayscaleTransform.dll</Path>
      <ActivatableClass ActivatableClassId="GrayscaleTransform.GrayscaleEffect" ThreadingModel="both" />
    </InProcessServer>
  </Extension>
</Extensions>
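
With the effect registered, the C# side really is a one-liner on top of the usual MediaCapture setup. Here is a minimal sketch, assuming a CaptureElement named previewElement in your XAML and the GrayscaleTransform project referenced as described above (the class and method names here are placeholders, not part of the sample):

using System.Threading.Tasks;
using Windows.Media.Capture;
using Windows.UI.Xaml.Controls;

public sealed class CameraPreview
{
    private MediaCapture _mediaCapture;

    // Initialize the webcam, start the preview and attach the grayscale MFT to it.
    public async Task StartWithEffectAsync(CaptureElement previewElement)
    {
        _mediaCapture = new MediaCapture();
        await _mediaCapture.InitializeAsync();

        previewElement.Source = _mediaCapture;
        await _mediaCapture.StartPreviewAsync();

        // The activatable class id must match the one declared in Package.appxmanifest above.
        await _mediaCapture.AddEffectAsync(
            MediaStreamType.VideoPreview,
            "GrayscaleTransform.GrayscaleEffect",
            null);
    }
}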

How the MFT code works:

The following lines create a pixel color transformation matrix:

float scale = (float)MFGetAttributeDouble(m_pAttributes, MFT_GRAYSCALE_SATURATION, 0.0f);
float angle = (float)MFGetAttributeDouble(m_pAttributes, MFT_GRAYSCALE_CHROMA_ROTATION, 0.0f);
m_transform = D2D1::Matrix3x2F::Scale(scale, scale) * D2D1::Matrix3x2F::Rotation(angle);

Depending on the pixel format of the video feed - a different transformation method is selected to scan the pixels. Look for these lines:

m_pTransformFn = TransformImage_YUY2;
m_pTransformFn = TransformImage_UYVY;
m_pTransformFn = TransformImage_NV12;

For my sample m4v file - the format is detected as NV12, so it is calling TransformImage_NV12.

For pixels within the specified range (m_rcDest), or within the entire frame if no range was specified - the TransformImage_~ methods call TransformChroma(mat, &u, &v). For other pixels - the values from the original frame are copied.

TransformChroma transforms the pixels using m_transform. If you want to change the effect - you can simply change the m_transform matrix, or, if you need access to neighboring pixels as in an edge detection filter - modify the TransformImage_ methods to process those pixels.
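
For illustration only - this is not code from the sample - here is the kind of neighborhood computation an edge detection filter needs, written as a standalone C# method over a tightly packed 8-bit luma buffer so the kernel itself is easy to read. Inside the MFT you would port the same logic to C++ in a TransformImage_-style function and run it on the Y plane of the NV12 frame (taking the real stride into account):

using System;

static class EdgeFilter
{
    // Sobel edge detection over a packed grayscale buffer (one byte per pixel).
    // Border pixels are left black for simplicity.
    public static byte[] Sobel(byte[] luma, int width, int height)
    {
        var result = new byte[luma.Length];
        for (int y = 1; y < height - 1; y++)
        {
            for (int x = 1; x < width - 1; x++)
            {
                int i = y * width + x;

                // Horizontal and vertical gradients from the 3x3 neighborhood.
                int gx = -luma[i - width - 1] + luma[i - width + 1]
                         - 2 * luma[i - 1] + 2 * luma[i + 1]
                         - luma[i + width - 1] + luma[i + width + 1];

                int gy = -luma[i - width - 1] - 2 * luma[i - width] - luma[i - width + 1]
                         + luma[i + width - 1] + 2 * luma[i + width] + luma[i + width + 1];

                result[i] = (byte)Math.Min(255, Math.Abs(gx) + Math.Abs(gy));
            }
        }
        return result;
    }
}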

This is one way to do it. I think it is quite CPU intensive, so personally I prefer to write a pixel shader for such operations. How do you apply a pixel shader to a video stream though? Well, I am not quite there yet, but I believe you can transfer video frames to a DirectX surface fairly easily and call a pixel shader on them later. So far - I was able to transfer the video frames and I am hoping to apply the shaders next week. I might write a blog post about it.

I took the meplayer class from the Media engine native C++ playback sample and moved it to a template C++ DirectX project converted to a WinRTComponent library, then used it with a C#/XAML application, associating the swapchain the meplayer class creates with the SwapChainBackgroundPanel that I use in the C# project to display the video. I had to make a few changes to the meplayer class. First - I had to move it to a public namespace that would make it available to other assemblies. Then I had to modify the swapchain it creates to a format accepted for use with a SwapChainBackgroundPanel:

        DXGI_SWAP_CHAIN_DESC1 swapChainDesc = {0};
        swapChainDesc.Width = m_rcTarget.right;
        swapChainDesc.Height = m_rcTarget.bottom;
        // Most common swapchain format is DXGI_FORMAT_R8G8B8A8_UNORM
        swapChainDesc.Format = m_d3dFormat;
        swapChainDesc.Stereo = false;

        // Don't use Multi-sampling
        swapChainDesc.SampleDesc.Count = 1;
        swapChainDesc.SampleDesc.Quality = 0;

        //swapChainDesc.BufferUsage = DXGI_USAGE_BACK_BUFFER | DXGI_USAGE_RENDER_TARGET_OUTPUT;
        swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; // Allow it to be used as a render target.
        // Use more than 1 buffer to enable Flip effect.
        //swapChainDesc.BufferCount = 4;
        swapChainDesc.BufferCount = 2;
        //swapChainDesc.Scaling = DXGI_SCALING_NONE;
        swapChainDesc.Scaling = DXGI_SCALING_STRETCH;
        swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;
        swapChainDesc.Flags = 0;

Finally - instead of calling CreateSwapChainForCoreWindow - I am calling CreateSwapChainForComposition and associating the swapchain with my SwapChainBackgroundPanel:

        // Create the swap chain and then associate it with the SwapChainBackgroundPanel.
        DX::ThrowIfFailed(
            spDXGIFactory.Get()->CreateSwapChainForComposition(
                spDevice.Get(),
                &swapChainDesc,
                nullptr,                                // allow on all displays
                &m_spDX11SwapChain)
            );

        ComPtr<ISwapChainBackgroundPanelNative> dxRootPanelAsSwapChainBackgroundPanel;

        // Set the swap chain on the SwapChainBackgroundPanel.
        reinterpret_cast<IUnknown*>(m_swapChainPanel)->QueryInterface(
            IID_PPV_ARGS(&dxRootPanelAsSwapChainBackgroundPanel)
            );

        DX::ThrowIfFailed(
            dxRootPanelAsSwapChainBackgroundPanel->SetSwapChain(m_spDX11SwapChain.Get())
            );

*EDIT follows

Forgot about one more thing. If your goal is to stay in pure C# - if you figure out how to capture frames to a WriteableBitmap (maybe by calling MediaCapture.CapturePhotoToStreamAsync() with a MemoryStream and then calling WriteableBitmap.SetSource() on the stream) - you can use WriteableBitmapEx to process your images. It might not be top performance, but if your resolution is not too high or your frame-rate requirements are not high - it might just be enough. The project on CodePlex does not officially support WinRT yet, but I have a version that should work that you can try here (Dropbox).
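
A rough sketch of that pure C# route, assuming you already have an initialized MediaCapture instance (the helper class name and the JPEG intermediate are my assumptions, not taken from any sample):

using System.Threading.Tasks;
using Windows.Media.Capture;
using Windows.Media.MediaProperties;
using Windows.Storage.Streams;
using Windows.UI.Xaml.Media.Imaging;

static class FrameGrabber
{
    // Capture a single frame from the camera and decode it into a WriteableBitmap
    // so it can be processed in C# (e.g. with WriteableBitmapEx, or by running an
    // edge detection pass like the Sobel sketch above over the decoded pixels).
    public static async Task<WriteableBitmap> CaptureFrameAsync(MediaCapture mediaCapture)
    {
        using (var stream = new InMemoryRandomAccessStream())
        {
            // Encode the current camera frame as a JPEG into the in-memory stream.
            await mediaCapture.CapturePhotoToStreamAsync(
                ImageEncodingProperties.CreateJpeg(), stream);

            // Rewind and decode the stream into the bitmap; the 1x1 size is a
            // placeholder that SetSource replaces with the decoded image size.
            stream.Seek(0);
            var bitmap = new WriteableBitmap(1, 1);
            bitmap.SetSource(stream);
            return bitmap;
        }
    }
}

From there, WriteableBitmap.PixelBuffer gives you the raw BGRA bytes to run your filter on before assigning the bitmap to an Image element.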
