Converting raw data to displayable video for iOS


Question

I have an interesting problem I need to research related to very low level video streaming.

Has anyone had any experience converting a raw stream of bytes (separated into per pixel information, but not a standard format of video) into a low resolution video stream? I believe that I can map the data into RGB value per pixel bytes, as the color values that correspond to the value in the raw data will be determined by us. I'm not sure where to go from there, or what the RGB format needs to be per pixel.

I've looked at FFmpeg, but its documentation is massive and I don't know where to start.

Specific questions I have include: is it possible to create a CVPixelBuffer with that pixel data? If I were to do that, what sort of format would the per pixel data need to be converted to?

Also, should I be looking deeper into OpenGL, and if so, where would be the best place to look for information on this topic?

What about CGBitmapContextCreate? For example, if I went with something like this, what would a typical pixel byte need to look like? Would this be fast enough to keep the frame rate above 20fps?

Edit

I think with the excellent help of you two, and some more research on my own, I've put together a plan for how to construct the raw RGBA data, then construct a CGImage from that data, and in turn create a CVPixelBuffer from that CGImage, following CVPixelBuffer from CGImage.

However, to then play that live as the data comes in, I'm not sure what kind of FPS I would be looking at. Do I paint them to a CALayer, or is there some class similar to AVAssetWriter that I could use to play it as I append CVPixelBuffers? The experience I have is using AVAssetWriter to export constructed CoreAnimation hierarchies to video, so the videos are always constructed before they begin playing, and not displayed as live video.

Answer

I've done this before, and I know that you found my GPUImage project a little while ago. As I replied on the issues there, the GPUImageRawDataInput is what you want for this, because it does a fast upload of RGBA, BGRA, or RGB data directly into an OpenGL ES texture. From there, the frame data can be filtered, displayed to the screen, or recorded into a movie file.

Your proposed path of going through a CGImage to a CVPixelBuffer is not going to yield very good performance, based on my personal experience. There's too much overhead when passing through Core Graphics for realtime video. You want to go directly to OpenGL ES for the fastest display speed here.

I might even be able to improve my code to make it faster than it is right now. I currently use glTexImage2D() to update texture data from local bytes, but it would probably be even faster to use the texture caches introduced in iOS 5.0 to speed up refreshing data within a texture that maintains its size. There's some overhead in setting up the caches that makes them a little slower for one-off uploads, but rapidly updating data should be faster with them.
