Saving the openGL context as a video output
Problem description
I am currently trying to save an animation made in openGL to a video file. I have tried using openCV's videowriter, but to no avail. I have successfully been able to generate a snapshot and save it as a bmp using the SDL library. But if I save all the snapshots and then generate the video using ffmpeg, that means collecting 4 GB worth of images. Not practical.

How can I write video frames directly during rendering?

Here is the code I use to take snapshots when required:
void snapshot(){
    // Target surface: 24-bit RGB with little-endian channel masks
    SDL_Surface* snap = SDL_CreateRGBSurface(SDL_SWSURFACE, WIDTH, HEIGHT, 24,
                                             0x000000FF, 0x0000FF00, 0x00FF0000, 0);
    char* pixels = new char[3 * WIDTH * HEIGHT];
    glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    // glReadPixels returns rows bottom-up; copy them flipped into the surface
    for (int i = 0; i < HEIGHT; i++)
        std::memcpy((char*)snap->pixels + snap->pitch * i,
                    pixels + 3 * WIDTH * (HEIGHT - i - 1), WIDTH * 3);
    delete[] pixels;
    SDL_SaveBMP(snap, "snapshot.bmp");
    SDL_FreeSurface(snap);
}
I need the video output. I have discovered that ffmpeg can be used to create videos from C++ code, but I have not been able to figure out the process. Please help!
EDIT: I have tried using the openCV CvVideoWriter class, but the program crashes ("segmentation fault") the moment it is declared. Compilation shows no errors, of course. Any suggestions?
SOLUTION FOR PYTHON USERS (requires Python 2.7, python-imaging, python-opengl, python-opencv, and codecs for the format you want to write to; I am on Ubuntu 14.04 64-bit):
def snap():
    # Read the current framebuffer (W x H window, RGBA)
    screenshot = glReadPixels(0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE)
    snapshot = Image.frombuffer("RGBA", (W, H), screenshot, "raw", "RGBA", 0, 0)
    snapshot.save(os.path.dirname(videoPath) + "/temp.jpg")
    load = cv2.cv.LoadImage(os.path.dirname(videoPath) + "/temp.jpg")
    cv2.cv.WriteFrame(videoWriter, load)
Here W and H are the window dimensions (width, height). What is happening is that I am using PIL to convert the raw pixels read by the glReadPixels command into a JPEG image, loading that JPEG into an openCV image, and writing it to the videowriter. I was having certain issues using the PIL image directly with the videowriter (which would save millions of clock cycles of I/O), but right now I am not working on that. Image is the PIL module; cv2 is the python-opencv module.
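One way around the temp.jpg roundtrip is to repack the raw glReadPixels buffer yourself: the rows come back bottom-up in RGBA order, while an OpenCV video writer expects top-down BGR. A minimal sketch (the function name is my own invention, and in practice numpy slicing would be much faster than this pure-Python loop):

```python
def rgba_to_bgr_topdown(raw, width, height):
    """Repack a bottom-up RGBA buffer (as returned by glReadPixels with
    GL_RGBA) into the top-down BGR byte order an OpenCV video writer
    expects, using plain byte slicing (no temp file, no PIL)."""
    row_len = width * 4
    out = bytearray(width * height * 3)
    o = 0
    for row in range(height - 1, -1, -1):  # walk source rows bottom-up
        line = raw[row * row_len:(row + 1) * row_len]
        for x in range(0, row_len, 4):
            out[o] = line[x + 2]      # B
            out[o + 1] = line[x + 1]  # G
            out[o + 2] = line[x]      # R
            o += 3
    return bytes(out)
```

The resulting bytes can then be wrapped in an image of shape (H, W, 3) and handed to the video writer without touching the disk.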
Recommended answer
It sounds as though you are using the command-line utility ffmpeg. Rather than using the command line to encode video from a collection of still images, you should use libavcodec and libavformat. These are the libraries upon which ffmpeg is actually built, and they will allow you to encode video and store it in a standard stream/interchange format (e.g. RIFF/AVI) without using a separate program.
You probably will not find a lot of tutorials on implementing this, because it has traditionally been the case that people wanted to use ffmpeg to go the other way; that is, to decode various video formats for display in OpenGL. I think this is going to change very soon with the introduction of gameplay video encoding on the PS4 and Xbox One consoles; demand for this functionality will suddenly skyrocket.
The general process goes something like this:
- Pick a container format and CODEC
  - Often one will decide the other (e.g. MPEG-2 + MPEG Program Stream)
- You will do this (encode your buffer of still frames) either when the buffer becomes full or every n milliseconds; you might prefer one over the other depending on whether you want to stream the video live or not.
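The two triggers (flush when the buffer fills, or every n milliseconds) can be sketched as a small policy object. This is only an illustration: `flush_cb` stands in for the real encode-and-write step, and all names here are invented.

```python
import time

class FrameBuffer:
    """Sketch of the 'encode when the buffer fills or every n ms' policy.
    flush_cb stands in for the real encode step (e.g. a libavcodec
    packet write); the clock is injectable for testing."""
    def __init__(self, capacity, interval_ms, flush_cb, clock=time.monotonic):
        self.capacity = capacity
        self.interval = interval_ms / 1000.0
        self.flush_cb = flush_cb
        self.clock = clock
        self.frames = []
        self.last_flush = clock()

    def push(self, frame):
        self.frames.append(frame)
        now = self.clock()
        # Flush when full (batch-friendly) or when the interval has
        # elapsed (latency-friendly, better suited to live streaming).
        if len(self.frames) >= self.capacity or now - self.last_flush >= self.interval:
            self.flush_cb(self.frames)
            self.frames = []
            self.last_flush = now
```

In a live-streaming setup you would shrink `interval_ms` and accept smaller batches; for offline recording a larger `capacity` keeps the encoder busy with fewer wakeups.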
One nice thing about this is that you do not actually need to write to a file. Since you are periodically encoding packets of data from your buffer of still frames, you can stream your encoded video over a network if you want; this is why the codec and the container (interchange) format are separate.
Another nice thing is that you do not have to synchronize the CPU and GPU; you can set up a pixel buffer object and have OpenGL copy data into CPU memory a couple of frames behind the GPU. This makes real-time encoding of video much less demanding: you only have to encode and flush the video to disk or over the network periodically, provided the latency requirements are not unreasonable. This works very well in real-time rendering, since you have a large enough pool of data to keep a CPU thread busy encoding at all times.
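The "a couple of frames behind" readback can be modeled as a ring of pixel buffer objects. The toy class below (all names invented) only models the indexing; the real version would bind a GL_PIXEL_PACK_BUFFER, issue an asynchronous glReadPixels into it, and map the oldest buffer with glMapBuffer once its transfer has had time to complete.

```python
class PboRing:
    """Toy model of asynchronous readback with a ring of pixel buffer
    objects: the pixels of frame N only become available on the CPU
    `depth` frames later, so the GPU is never stalled waiting on a copy."""
    def __init__(self, depth=3):
        self.depth = depth
        self.slots = [None] * depth

    def end_frame(self, frame_no, pixels):
        slot = frame_no % self.depth
        ready = self.slots[slot]   # transfer started depth frames ago; done by now
        self.slots[slot] = pixels  # start the next async readback into this slot
        return ready               # None for the first depth frames
```

With `depth=3`, calling `end_frame` each frame yields frame 0's pixels while frame 3 is being rendered, which is exactly the latency you trade for never blocking on glReadPixels.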
Encoding frames can even be done in real-time on the GPU, provided there is enough storage for a large buffer of frames (since ultimately the encoded data has to be copied from the GPU to the CPU, and you want to do this as infrequently as possible). Obviously this is not done using ffmpeg; there are specialized libraries using CUDA / OpenCL / compute shaders for this purpose. I have never used them, but they do exist.
For portability's sake, you should stick with libavcodec and Pixel Buffer Objects for asynchronous GPU->CPU copies. CPUs these days have enough cores that you can probably get away without GPU-assisted encoding if you buffer enough frames and encode in multiple simultaneous threads (this adds synchronization overhead and increases latency when outputting encoded video), or simply drop frames / lower the resolution (the poor man's solution).
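A rough sketch of that multi-threaded encode path, with an invented `stub_encode` standing in for a real per-thread encoder context. The bounded queue provides back-pressure on the render thread, and keeping the output in frame order is one source of the synchronization overhead mentioned above.

```python
import threading
import queue

def encode_worker(frames_q, results, stub_encode):
    """Worker thread: pull (index, frame) pairs and 'encode' them.
    stub_encode stands in for a real per-thread encoder."""
    while True:
        item = frames_q.get()
        if item is None:  # sentinel: shut down
            frames_q.task_done()
            break
        idx, frame = item
        results[idx] = stub_encode(frame)
        frames_q.task_done()

def encode_parallel(frames, n_threads, stub_encode):
    """Fan frames out to n_threads workers; indices preserve output order
    (a real muxer must likewise reorder packets before writing them)."""
    q_ = queue.Queue(maxsize=2 * n_threads)  # bounded: back-pressure on producer
    results = [None] * len(frames)
    threads = [threading.Thread(target=encode_worker,
                                args=(q_, results, stub_encode))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for i, f in enumerate(frames):
        q_.put((i, f))       # the render thread would block here if encoders lag
    for _ in threads:
        q_.put(None)         # one sentinel per worker
    for t in threads:
        t.join()
    return results
```

Dropping frames or lowering the resolution, by contrast, needs no extra threads at all, which is why it is the poor man's solution.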
There are a lot of concepts covered here that go well beyond the scope of SDL, but you did ask how to do this with better performance than your current solution. In short, use OpenGL Pixel Buffer Objects to transfer data, and libavcodec for encoding. An example application that encodes video can be found on the ffmpeg libavcodec examples page.