UWP - Streaming WebCam over Socket to MediaElement - Broken Picture?


Problem Description

Background

The code I've written records video clips from a webcam, writes them to a memory stream, then transmits the data across a Socket connection where it's re-assembled into video and played back on a Media Element.

The ultimate goal is to create a baby monitor system, with the server/camera running on a Windows IOT Raspberry Pi, and a UWP app that my girlfriend and I can view on our mobile phones, or the laptop. As well as viewing the camera from another part of the house, we'll also be able to log in when one of us is away from home and in time I'll wire up a PIR motion sensor and alerting system also, but first things first.

The system as a whole works fairly well; there's a 5 second delay in the video, which is acceptable to me (for now), and using a MediaPlaybackList the video is streamed at a fairly constant rate with seamless (as seamless as this can get for now) transitions between videos. The MediaPlaybackList removes items as they've been played, keeping the memory footprint relatively constant.

The Problem

When the video plays back on the client end, it gets frequent but random sections of broken picture. There's no pattern to it, not one that I can find anyway, and the only way I can describe it is that part of the picture is split in half horizontally and the two halves are swapped around, the right side of the picture being displayed on the left, and vice versa. It's like a flicker, as in the broken bit is only displayed for a fraction of a second, because another one appears a second or so later somewhere else on the picture.

Here's an example:

Now, here's a couple of interesting points..

1) Before I started using a MediaPlaybackList to queue up streams of data, I was using a method of extracting each video from the incoming socket stream, saving it to the local disk as a StorageFile, then queueing up these StorageFiles, playing them in order and deleting them afterwards (I still have a version of this code in source control which I can dig out, but I don't like the idea of creating and destroying StorageFiles as it seems horrendously inefficient). However, using this method did not result in the broken pictures that I'm now seeing... this leads me to believe that the video itself is fine, and that perhaps it's an issue with the way it's being put back together and streamed to the Media Element?

2) My girlfriend's cat knocked the webcam (a Microsoft Lifecam HD-3000) onto its side without me realising, and I didn't realise until I ran the server and noticed the picture was at a 90 degree angle.. the interesting (and puzzling) thing about this was that the picture delivered to the client didn't break up as I've been describing above. The only difference here that I can see is that the picture then came through as 480 x 640 (from the camera sitting on its side), rather than the standard 640 x 480. What this means, I'm not sure...

Thoughts on the Problem

  • Something to do with the size/dimensions of the video (it played back fine on its side, so maybe it's related to that)?
  • Something to do with the bitrate?
  • Something to do with the way the bytes are re-assembled on the client?
  • Something to do with the encoding of the stream?

Source

Here are a few snippets of code that I think are probably relevant; the full solution source can be found on GitHub, here: Video Socket Server.

Server

while (true)
{
    try
    {
        //record a 5 second video to stream
        Debug.WriteLine($"Recording started");
        var memoryStream = new InMemoryRandomAccessStream();
        await _mediaCap.StartRecordToStreamAsync(MediaEncodingProfile.CreateMp4(VideoEncodingQuality.Vga), memoryStream);
        await Task.Delay(TimeSpan.FromSeconds(5));
        await _mediaCap.StopRecordAsync();
        Debug.WriteLine($"Recording finished, {memoryStream.Size} bytes");

        //create a CurrentVideo object to hold stream data and give it a unique id
        //which the client app can use to ensure they only request each video once
        memoryStream.Seek(0);
        CurrentVideo.Id = Guid.NewGuid();
        CurrentVideo.Data = new byte[memoryStream.Size];

        //read the stream data into the CurrentVideo  
        await memoryStream.ReadAsync(CurrentVideo.Data.AsBuffer(), (uint)memoryStream.Size, InputStreamOptions.None);
        Debug.WriteLine($"Bytes written to stream");

        //signal to waiting connections that there's a new video
        _signal.Set();
        _signal.Reset();
    }
    catch (Exception ex)
    {
        Debug.WriteLine($"StartRecording -> {ex.Message}");
        break;
    }
}

Connection

//use the guid to either get the current video, or wait for the 
//next new one that's added by the server
Guid guid = Guid.Empty;
Guid.TryParse(command, out guid);
byte[] data = _server.GetCurrentVideoDataAsync(guid);
if (data != null)
    await _socket.OutputStream.WriteAsync(data.AsBuffer());
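
The snippet above writes the Guid-plus-video payload straight to the socket with no indication of where one message ends and the next begins, so the client has to guess at message boundaries. A common fix is to length-prefix each message. This is a minimal sketch, not part of the original code; it reuses the `_socket` and `data` names from the snippet above and assumes the client is updated to read the 4-byte header first:

```csharp
// Sketch: prefix each video with its length so the client knows exactly
// how many bytes to read. DataWriter wraps the socket's output stream.
var writer = new DataWriter(_socket.OutputStream);
writer.WriteUInt32((uint)data.Length);   // 4-byte big-endian length header
writer.WriteBytes(data);                 // Guid (16 bytes) + video payload
await writer.StoreAsync();               // flush the buffered bytes to the socket
writer.DetachStream();                   // keep the socket open after the writer is disposed
writer.Dispose();
```

Calling `DetachStream` matters here: disposing a `DataWriter` otherwise closes the underlying stream, which would tear down the socket after the first clip.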

Client App

byte[] inbuffer = new byte[10000000];

//block on the input stream until we've received the full packet,
//but use the Partial option so that we don't have to fill the entire buffer before we continue.
//this is important, because the idea is to set the buffer big enough to handle any packet we'll receive,
//meaning we'll never fill the entire buffer... and we don't want to block here indefinitely
IBuffer result = await socket.InputStream.ReadAsync(inbuffer.AsBuffer(), inbuffer.AsBuffer().Capacity, InputStreamOptions.Partial);

//strip off the Guid, leaving just the video data
byte[] guid = result.ToArray().Take(16).ToArray();
byte[] data = result.ToArray().Skip(16).ToArray();
_guid = new Guid(guid);

//wrap the data in a stream, create a MediaSource from it,
//then use that to create a MediaPlaybackItem which gets added 
//to the back of the playlist...
var stream = new MemoryStream(data);
var source = MediaSource.CreateFromStream(stream.AsRandomAccessStream(), "video/mp4");
var item = new MediaPlaybackItem(source);
_playlist.Items.Add(item);
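
Note that a single `ReadAsync` with `InputStreamOptions.Partial` returns whatever bytes happen to have arrived, which for a multi-megabyte MP4 is rarely the whole clip; a truncated or merged clip would corrupt the reassembled file. A minimal sketch of an exact-length read loop, assuming the server sends a 4-byte length header before each clip (an assumption, not something the original code does):

```csharp
// Sketch: read the 4-byte length header, then keep loading until exactly
// that many bytes have arrived. With InputStreamOptions.Partial, LoadAsync
// may return early, so we loop until the reader holds the full payload.
var reader = new DataReader(socket.InputStream)
{
    InputStreamOptions = InputStreamOptions.Partial
};

uint loaded = 0;
while (loaded < sizeof(uint))                     // wait for the length header
    loaded += await reader.LoadAsync(sizeof(uint) - loaded);
uint length = reader.ReadUInt32();

uint received = 0;
while (received < length)                         // accumulate the full payload
    received += await reader.LoadAsync(length - received);

byte[] payload = new byte[length];                // Guid (16 bytes) + video data
reader.ReadBytes(payload);
```

This keeps each clip intact regardless of how TCP fragments the stream, and removes the need for the oversized 10,000,000-byte buffer.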

Recommended Answer

I'm looking to do something similar (stream video/audio from a UWP app on a Raspberry Pi), but I have been using the simple-communications sample (Simple Real Time Communication) from the Windows 10 SDK, which after a bit of tweaking I have been able to get working reliably (there are thread sync issues with the sample code). However, the SDK sample uses a proprietary protocol using media extensions, and it isn't easy to redirect the stream over the internet, which is my use-case, so I had a look at your code and got it working (with the same bugs).

A couple of comments on your approach:

1) The RPi can't process video on Win10 very well as it can't use the hardware video encoders so does everything in software. This will cause glitches and I see the CPU performance increasing significantly with over 50% utilisation which means at least one of the CPU cores is working close to max, possibly the one handling the video compression to MP4. However I ran up the SDK sample and got glitch free viewing and about 70% CPU so your problem is likely elsewhere.

2) 5 seconds of latency is significant. I get less than 100 ms latency with the real-time sample; however, when I adjusted your streaming timer down to 1 second, the breakup was significant and unworkable. Have you thought about changing the design so it streams during capture? I'm not sure the InMemoryRandomAccessStream will let you do that, though. Another alternative is to capture the preview stream and write a custom media sink to buffer (harder to do, as it's not managed code, and likely not able to compress as easily), like the Simple Communication sample does.

3) MP4 is a container not a compression format, and isn't built for streaming as the whole file has to be downloaded before it starts unless the moov metadata record is placed at the beginning of the file. Not sure how UWP handles this, likely your approach of closing off the stream before sending is required to ensure the other end can play it properly.
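
This point can be checked directly: an MP4's top-level boxes each begin with a 4-byte big-endian size followed by a 4-character type code, so walking the headers shows whether `moov` lands before or after `mdat`. A minimal sketch in plain .NET (the method name is hypothetical, and 64-bit/zero box sizes are deliberately not handled):

```csharp
using System.Collections.Generic;
using System.Text;

static class Mp4Inspector
{
    // Sketch: enumerate the top-level MP4 box types in order. If "moov"
    // appears after "mdat", the clip can't start playing until it has been
    // fully received, which matters for streaming.
    public static IEnumerable<string> TopLevelBoxes(byte[] data)
    {
        int pos = 0;
        while (pos + 8 <= data.Length)
        {
            // each box: 4-byte big-endian size, then a 4-character type code
            uint size = (uint)((data[pos] << 24) | (data[pos + 1] << 16)
                             | (data[pos + 2] << 8) | data[pos + 3]);
            yield return Encoding.ASCII.GetString(data, pos + 4, 4);
            if (size < 8) yield break;   // size 0/1 need special handling; stop here
            pos += (int)size;
        }
    }
}
```

Running this over one of the recorded clips would show something like `ftyp, mdat, moov` if the metadata is at the end, in which case closing off each clip before sending (as the question's code does) is indeed necessary for the receiver to play it.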

So not a complete answer but hopefully the above helps.
