Create video from array of pixel values in C++


Problem description


Does anyone know of a method to save a sequence of pixel values, stored in an array, to a video? Currently I'm using CImg to visualise a simple n-body simulation; whilst I can save each iteration to an image file, this is very slow. Any suggestions on a similar library for handling video would be appreciated. Essentially, I just want to record what's displayed in the CImg window I create to a video file. The program is written in C++ on Linux, compiling with g++.


The fact that I can run the simulation and record it running with screen capturing software would seem to imply it's possible, but I'd like a tidier solution.

Cheers, Angus

Recommended answer


I was playing around with this today, and thought I would share my results. You can output raw RGB video from CImg and then use ffmpeg to encode it into a video like this:

#include <iostream>
#include "CImg.h"

using namespace std;
using namespace cimg_library;

int main()
{
   const unsigned int width=1024;
   const unsigned int height=768;

   // Basic frame we will draw in
   CImg<unsigned char> image(width,height,1,3);

   unsigned char magenta[] = {255,0,255};

   // We are going to output 300 frames of 1024x768 RGB raw video
   // ... which is 12s at ffmpeg's default 25fps, or 10s if you add
   // ... "-framerate 30" to the rawvideo input options
   int radius=100;
   int cx=100;
   int cy=100;
   for(int frame=0;frame<300;frame++){
      // Start with black - it shows fewer stains ;-)
      image.fill(0);
      image.draw_circle(cx,cy,radius,magenta);

      // Move and re-colour circle
      cx+=2; cy++; if(magenta[1]!=255){magenta[1]++;}

      // Output to ffmpeg to make video, in planar GBR format
      // i.e. run program like this
      // ./main | ffmpeg -y -f rawvideo -pixel_format gbrp -video_size 1024x768 -i - -c:v h264 -pix_fmt yuv420p video.mov
      char* s=reinterpret_cast<char*>(image.data()+(width*height));   // Get start of G plane
      std::cout.write(s,width*height);                                // Output it
      s=reinterpret_cast<char*>(image.data()+2*(width*height));       // Get start of B plane
      std::cout.write(s,width*height);                                // Output it
      s=reinterpret_cast<char*>(image.data());                        // Get start of R plane
      std::cout.write(s,width*height);                                // Output it
   }
}


I guess I won't make it to Hollywood as the video is not very exciting!


Run the above code like this to make a video:

./main | ffmpeg -y -f rawvideo -pixel_format gbrp -video_size 1024x768 -i - -c:v h264 -pix_fmt yuv420p video.mov
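
If you would rather not rely on shell piping, the same idea can be driven from inside the program by opening the ffmpeg process with popen(). A minimal sketch, assuming a POSIX system with ffmpeg on the PATH (the encoding command is the same one used above, only the transport changes):

// Minimal sketch: stream the raw G,B,R planes straight into ffmpeg through
// a pipe opened with popen(), assuming ffmpeg is on the PATH (POSIX only)
#include <cstdio>
#include "CImg.h"

using namespace cimg_library;

int main()
{
   const unsigned int width=1024;
   const unsigned int height=768;
   CImg<unsigned char> image(width,height,1,3);
   unsigned char magenta[] = {255,0,255};

   FILE* ff = popen("ffmpeg -y -f rawvideo -pixel_format gbrp -video_size 1024x768 "
                    "-i - -c:v h264 -pix_fmt yuv420p video.mov","w");
   if(!ff) return 1;

   for(int frame=0;frame<300;frame++){
      image.fill(0);
      image.draw_circle(100+2*frame,100+frame,100,magenta);       // same moving circle
      // Same planar G,B,R order as above, written to the pipe instead of stdout
      fwrite(image.data()+(width*height),  1,width*height,ff);    // G plane
      fwrite(image.data()+2*(width*height),1,width*height,ff);    // B plane
      fwrite(image.data(),                 1,width*height,ff);    // R plane
   }
   pclose(ff);
}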


Note 1


The thing to realise is that CImg stores data in a planar configuration, which means all the red pixels first, then all the green ones directly afterwards and then all the blue ones straight after that - all without any padding or spaces.


Imagine a 4x4 image (with 16 pixels) in CImg:

RRRRRRRRRRRRRRRR GGGGGGGGGGGGGGGG BBBBBBBBBBBBBBBB


unlike regular RGB data, which would store the same image as:

RGB RGB RGB RGB RGB RGB RGB RGB RGB RGB RGB RGB RGB RGB RGB RGB 


So, you can either shuffle all the data around to reformat it and pass it to ffmpeg as -pixel_format rgb24, or do as I did and output in CImg's planar format and choose the matching -pixel_format gbrp (where the p means "planar"). You just have to output the planes in the correct G,B,R order. See also Note 4.
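
To make the addressing concrete, this is the offset arithmetic the two layouts imply (a small illustrative sketch; the helper names are made up for illustration):

#include <cstddef>   // size_t

// Planar (CImg): whole R plane, then whole G plane, then whole B plane
// offset = c*width*height + y*width + x
inline size_t planar_offset(unsigned x,unsigned y,unsigned c,
                            unsigned width,unsigned height)
{
   return (size_t)c*width*height + (size_t)y*width + x;
}

// Interleaved (rgb24): three bytes per pixel, channel varies fastest
// offset = (y*width + x)*3 + c
inline size_t interleaved_offset(unsigned x,unsigned y,unsigned c,
                                 unsigned width)
{
   return ((size_t)y*width + x)*3 + c;
}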

Note 2


I chose to do 3 write()s, one for each colour plane, for the sake of clarity of demonstration; it would be more efficient to use a "gathered write" with writev(), so this:

char* s=reinterpret_cast<char*>(image.data()+(width*height));   // Get start of G plane
std::cout.write(s,width*height);                                // Output it
s=reinterpret_cast<char*>(image.data()+2*(width*height));       // Get start of B plane
std::cout.write(s,width*height);                                // Output it
s=reinterpret_cast<char*>(image.data());                        // Get start of R plane
std::cout.write(s,width*height);  


would become something like (untested):

#include <sys/uio.h>   // writev(), struct iovec
#include <unistd.h>    // STDOUT_FILENO

struct iovec iov[3];
ssize_t nwritten;

iov[0].iov_base = reinterpret_cast<char*>(image.data()+(width*height));    // G plane
iov[0].iov_len  = width*height;
iov[1].iov_base = reinterpret_cast<char*>(image.data()+2*(width*height));  // B plane
iov[1].iov_len  = width*height;
iov[2].iov_base = reinterpret_cast<char*>(image.data());                   // R plane
iov[2].iov_len  = width*height;

nwritten = writev(STDOUT_FILENO,iov,3);
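
One caveat worth keeping in mind with the gathered write: writev() may return after writing fewer bytes than requested, so a robust version would check nwritten, for example:

if(nwritten != (ssize_t)(3*width*height)){
   // short write or error: inspect errno, then retry the remainder or abort
}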


Note 3


I used -c:v h264 -pix_fmt yuv420p to make the video compatible with Apple's QuickTime on my Mac, but you can easily change the output anyway - the harder part was getting the interface between CImg and ffmpeg right.
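
If the default 25fps or the file size bothers you, the rawvideo demuxer and the x264 encoder take a few extra options; something along these lines should work (flags assumed present in any reasonably recent ffmpeg build):

./main | ffmpeg -y -f rawvideo -pixel_format gbrp -video_size 1024x768 -framerate 30 -i - -c:v libx264 -pix_fmt yuv420p -crf 18 video.mov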

Note 4


If you want to shuffle the data around and write it to ffmpeg non-planar (-pixel_format rgb24), I did that originally and the code was like this:

// Outside main loop
unsigned char* BIP = new unsigned char[width*height*3];
unsigned char *d,*r,*g,*b;

...
...

// Now output it...
// ... remember CImg is band-interleaved by plane  RRRRRR GGGGGG BBBBBB
// ... not band-interleaved by pixel RGB RGB RGB RGB
r=image.data();       // Start of R plane in CImg image
g=r+(width*height);   // Start of G plane in CImg image
b=g+(width*height);   // Start of B plane in CImg image
d=BIP;                // Destination buffer in RGB order
for(unsigned int i=0;i<width*height;i++){
   *d++=*r++;
   *d++=*g++;
   *d++=*b++;
}
// Output to ffmpeg to make video, i.e. run program like this
// ./main | ffmpeg -y -f rawvideo -pixel_format rgb24 -video_size 1024x768 -i - -c:v h264 -pix_fmt yuv420p video.mov
std::cout.write(reinterpret_cast<char*>(BIP),width*height*3);
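
A minor housekeeping point on that buffer: BIP is allocated with new[] and never freed in the excerpt, so a std::vector is the tidier choice, e.g.:

#include <vector>

// Outside the main loop, instead of new[] - freed automatically
std::vector<unsigned char> BIP(width*height*3);
unsigned char* d = BIP.data();

// ... and at output time:
std::cout.write(reinterpret_cast<char*>(BIP.data()),width*height*3);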


In theory, you can do this with CImg's permute_axes() method, but I had no success.
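
For reference, a sketch of what that attempt might look like, equally untested: the assumption is that get_permute_axes("cxyz") makes the colour channel the fastest-varying axis, which is exactly the interleaved RGB RGB RGB layout that -pixel_format rgb24 expects:

// Untested sketch: "cxyz" is assumed to move the colour axis to vary fastest
CImg<unsigned char> interleaved = image.get_permute_axes("cxyz");
std::cout.write(reinterpret_cast<const char*>(interleaved.data()),
                interleaved.size());   // size() == width*height*3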
