What technologies should I use to produce a WebM live stream from a series of in-memory bitmaps?


Question

Boss handed me a challenge that is a bit out of my usual ballpark, and I am having trouble identifying which technologies/projects I should use. (I don't mind, I asked for something 'new' :)

Job: Build a .NET server-side process that can pick up a bitmap from a buffer 10 times per second and produce/serve a 10fps video stream for display in a modern HTML5 enabled browser.

What Lego blocks should I be looking for here?

Dave

Answer

You'll want to use FFmpeg. Here's the basic flow:

Your App -> FFmpeg STDIN -> VP8 or VP9 video wrapped in WebM

If you're streaming in these images, probably the easiest thing to do is decode each bitmap into raw RGB or RGBA pixels and write each frame to FFmpeg's STDIN. You will have to read the first bitmap to determine the size and color information, then launch the FFmpeg child process with the correct parameters. When you're done, close the pipe and FFmpeg will finish up your output file. If you want, you can even redirect FFmpeg's STDOUT to somewhere like object storage on S3.
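To make the pipe pattern concrete, here is a minimal sketch in Python (the same approach works from .NET via `System.Diagnostics.Process` with `RedirectStandardInput`). The frame size, frame count, pixel format, and output file name are illustrative assumptions, not requirements:

```python
import shutil
import subprocess

def ffmpeg_args(width, height, fps=10):
    # Read raw RGBA frames from STDIN and encode VP8 video in a WebM container.
    return [
        "ffmpeg",
        "-f", "rawvideo",            # input is raw, headerless pixel data
        "-pix_fmt", "rgba",          # 4 bytes per pixel, matching our buffers
        "-s", f"{width}x{height}",   # frame size must be declared up front
        "-r", str(fps),              # input frame rate
        "-i", "-",                   # read frames from STDIN
        "-c:v", "libvpx",            # VP8 encoder
        "-f", "webm",
        "out.webm",
    ]

args = ffmpeg_args(640, 480)

# If ffmpeg is on PATH, pipe 30 synthetic frames (3 seconds at 10fps) through it.
if shutil.which("ffmpeg"):
    proc = subprocess.Popen(args, stdin=subprocess.PIPE,
                            stderr=subprocess.DEVNULL)
    frame = bytes(640 * 480 * 4)     # one solid black RGBA frame
    try:
        for _ in range(30):
            proc.stdin.write(frame)
        proc.stdin.close()           # closing the pipe lets ffmpeg finalize the file
    except BrokenPipeError:
        pass                         # e.g. this ffmpeg build lacks libvpx
    proc.wait()
```

Note that closing STDIN is what signals end-of-stream, which matches the "close the pipe and FFmpeg will finish up" step above.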

If all the images are uploaded at once and then you create the video, it's even easier. Just make a list of the files in-order and execute FFmpeg. When FFmpeg is done, you should have a video.
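For that batch case, FFmpeg can read a numbered image sequence directly via a printf-style pattern, so no pipe is needed. A sketch of the command (the file-name pattern and output name are assumptions):

```python
def sequence_args(pattern, fps=10):
    # Encode an on-disk numbered image sequence (frame_0001.png, ...) as VP8/WebM.
    return [
        "ffmpeg",
        "-framerate", str(fps),   # rate at which the stills are read in
        "-i", pattern,            # printf-style pattern matching the file names
        "-c:v", "libvpx",         # VP8 encoder
        "out.webm",
    ]

cmd = sequence_args("frame_%04d.png")
```

Run the resulting command with `subprocess.run(cmd)` (or `Process.Start` in .NET) once the files are on disk.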

One additional bit of information that will help you understand how to build an FFmpeg command line: WebM is a container format. It doesn't do anything except keep track of what's inside it: how many video streams, how many audio streams, which codecs those streams use, subtitle streams, metadata (like thumbnail images), and so on. WebM is basically Matroska (.mkv) with some features disabled, which made adopting the WebM standard easier for browser makers. Inside WebM you'll want at least one video stream; VP8 and VP9 are the widely supported codecs there. If you want to add audio, Opus is a standard codec you can use.
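That container/stream split maps directly onto FFmpeg flags: one `-i` per input, `-c:v` to pick the video codec, `-c:a` to pick the audio codec. A sketch muxing both stream types into one WebM file (the input file names are placeholders):

```python
def mux_args(video_src, audio_src):
    # One WebM container holding two streams: VP9 video and Opus audio.
    return [
        "ffmpeg",
        "-i", video_src,          # first input  -> the video stream
        "-i", audio_src,          # second input -> the audio stream
        "-c:v", "libvpx-vp9",     # VP9 encoder
        "-c:a", "libopus",        # Opus encoder
        "-f", "webm",             # WebM (restricted Matroska) container
        "out.webm",
    ]

mux_cmd = mux_args("video.webm", "audio.wav")
```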

Some resources to get you started:

  • FFmpeg Documentation (https://ffmpeg.org/documentation.html)
  • Converting raw images to video (https://superuser.com/a/469517/48624)
  • VP8 Encoding (http://trac.ffmpeg.org/wiki/Encode/VP8)
  • FFmpeg Binaries for Windows (https://ffmpeg.zeranoe.com/builds/)
