Create video using ffmpeg


Problem description



I have 100 images (PNG) and I want to create a video from these images. I am using the ffmpeg library for this. From the command line I can create the video easily, but how do I do it in code?

Any help will be appreciated.
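
For reference, a typical command line for turning a numbered PNG sequence into a video looks something like the following (the frame rate, the img%03d.png naming pattern and the x264 options here are placeholders, not taken from the question):

    ffmpeg -framerate 25 -i img%03d.png -c:v libx264 -pix_fmt yuv420p out.mp4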

#pragma GCC diagnostic ignored "-Wdeprecated-declarations"


#include <stdlib.h>
#include <stdio.h>
#include <string.h>

#ifdef HAVE_AV_CONFIG_H
#undef HAVE_AV_CONFIG_H
#endif

extern "C"
{
#include "libavutil/imgutils.h"
#include "libavutil/opt.h"
#include "libavcodec/avcodec.h"
#include "libavutil/mathematics.h"
#include "libavutil/samplefmt.h"
}

#define INBUF_SIZE 4096
#define AUDIO_INBUF_SIZE 20480
#define AUDIO_REFILL_THRESH 4096




static void video_encode_example(const char *filename, int codec_id)
{
   AVCodec *codec;
   AVCodecContext *c= NULL;
   int i, out_size, size, x, y, outbuf_size;
   FILE *f;
   AVFrame *picture;
   uint8_t *outbuf;
   int nrOfFramesPerSecond  =25;
   int nrOfSeconds =1;


   printf("Video encoding\n");

//    find the mpeg1 video encoder
   codec = avcodec_find_encoder((CodecID) codec_id);
   if (!codec) {
       fprintf(stderr, "codec not found\n");
       exit(1);
   }

   c = avcodec_alloc_context3(codec);
   picture= avcodec_alloc_frame();

//    put sample parameters
   c->bit_rate = 400000;
//    resolution must be a multiple of two
   c->width = 352;
   c->height = 288;
//    frames per second
   c->time_base= (AVRational){1,25};
   c->gop_size = 10;  //emit one intra frame every ten frames
   c->max_b_frames=1;
   c->pix_fmt = PIX_FMT_YUV420P;

   if(codec_id == CODEC_ID_H264)
       av_opt_set(c->priv_data, "preset", "slow", 0);

//    open it
   if (avcodec_open2(c, codec, NULL) < 0) {
       fprintf(stderr, "could not open codec\n");
       exit(1);
   }

   f = fopen(filename, "wb");
   if (!f) {
       fprintf(stderr, "could not open %s\n", filename);
       exit(1);
   }

//    alloc image and output buffer
   outbuf_size = 100000;
   outbuf = (uint8_t*) malloc(outbuf_size);

//    the image can be allocated by any means and av_image_alloc() is
//    * just the most convenient way if av_malloc() is to be used
   av_image_alloc(picture->data, picture->linesize,
                  c->width, c->height, c->pix_fmt, 1);

//    encode 1 second of video
   int nrOfFramesTotal = nrOfFramesPerSecond * nrOfSeconds;

//    generate and encode each frame
   for(i=0;i < nrOfFramesTotal; i++) {
       fflush(stdout);
//        prepare a dummy image

       for(y=0;y<c->height;y++) {
           for(x=0;x<c->width;x++) {
               picture->data[0][y * picture->linesize[0] + x] = x + y + i * 3;
           }
       }

//        Cb and Cr
       for(y=0;y<c->height/2;y++) {
           for(x=0;x<c->width/2;x++) {
               picture->data[1][y * picture->linesize[1] + x] = 128 + y + i * 2;
               picture->data[2][y * picture->linesize[2] + x] = 64 + x + i * 5;
           }
       }

//        encode the image
       out_size = avcodec_encode_video(c, outbuf, outbuf_size, picture);
       printf("encoding frame %3d (size=%5d)\n", i, out_size);
       fwrite(outbuf, 1, out_size, f);
   }

//    get the delayed frames
   for(; out_size; i++) {
       fflush(stdout);

       out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
       printf("write frame %3d (size=%5d)\n", i, out_size);
       fwrite(outbuf, 1, out_size, f);
   }

//    add sequence end code to have a real mpeg file
   outbuf[0] = 0x00;
   outbuf[1] = 0x00;
   outbuf[2] = 0x01;
   outbuf[3] = 0xb7;
   fwrite(outbuf, 1, 4, f);
   fclose(f);
   free(outbuf);

   avcodec_close(c);
//   av_free(c);
//   av_free(picture->data[0]);
//   av_free(picture);
   printf("\n");
}

int main(int argc, char **argv)
{
   const char *filename;


   avcodec_register_all();

   if (argc <= 1) {

       video_encode_example("/home/radix/Desktop/OpenCV/FFMPEG_Output/op89.png", AV_CODEC_ID_H264);
   } else {
       filename = argv[1];
   }


   return 0;
}

  • Every time I search, I get code similar to this, but I don't understand how to use it to create a video from images.

Solution

The reason this comes up again and again is that you're using encoding_example.c as your reference. Please don't do that. The most fundamental mistake in that example is that it doesn't teach you the difference between codecs and containers; in fact, it ignores containers altogether.

What is a codec? A codec is a method of compressing a media type. H264, for example, compresses raw video. Imagine a 1080p video frame, which is typically in YUV format with 4:2:0 chroma subsampling. Raw, that is 1080*1920*3/2 bytes per frame, i.e. ~3MB/f. At 60fps, this is 180MB/sec, or 1.44 gigabit/sec (gbps). That's a lot of data, so we compress it. At that resolution, you can get pretty good quality at a few megabit/sec (mbps) with modern codecs like H264, HEVC or VP9. For audio, codecs like AAC or Opus are popular.
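
As a quick check of that arithmetic, here is a small standalone C program (nothing FFmpeg-specific; note that the ~180MB/sec figure above comes from rounding the per-frame size down to 3MB first):

    #include <stdio.h>

    /* Raw YUV 4:2:0 bandwidth at 1080p60, as computed in the text above. */
    int main(void)
    {
        long bytes_per_frame = 1920L * 1080 * 3 / 2; /* full-res Y plane + two quarter-res chroma planes */
        long bytes_per_sec   = bytes_per_frame * 60; /* at 60 frames per second */
        printf("%ld bytes/frame, %.1f MB/sec, %.2f gbps\n",
               bytes_per_frame, bytes_per_sec / 1e6, bytes_per_sec * 8.0 / 1e9);
        /* prints: 3110400 bytes/frame, 186.6 MB/sec, 1.49 gbps */
        return 0;
    }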

What is a container? A container takes video or audio (or subtitle) packets (compressed or uncompressed) and interleaves them for combined storage in a single output file. So rather than getting one file for video and one for audio, you get one file that interleaves packets for both. This allows efficient seeking and indexing, and it typically also allows metadata storage ("author", "title") and so on. Examples of popular containers are MOV, MP4 (which is really just MOV), AVI, Ogg, Matroska or WebM (which is really just Matroska).

(You can store video-only data in a file if you want. For H264, this is called "annexb" raw H264, and it is actually what you were doing above. So why didn't it work? Well, you're ignoring the "header" packets, the SPS and PPS. These live in avctx->extradata and need to be written before the first video packet. Using a container would take care of that for you, but you didn't use one, so it didn't work.)
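
To make that concrete, here is a sketch of the manual fix in the question's own (old) API. write_stream_headers is a hypothetical helper, not an FFmpeg function, and it assumes the encoder was opened with the global-header flag so that the SPS/PPS actually end up in extradata:

    /* Hypothetical helper: prepend the "header" packets (SPS/PPS) to a raw
     * annexb file.  Call it on the question's "c" and "f" right after
     * avcodec_open2() succeeds, before the first encoded packet is written.
     * Requires c->flags |= CODEC_FLAG_GLOBAL_HEADER before opening the codec;
     * otherwise extradata may be empty. */
    static void write_stream_headers(AVCodecContext *c, FILE *f)
    {
        if (c->extradata && c->extradata_size > 0)
            fwrite(c->extradata, 1, c->extradata_size, f);
    }

A container does this bookkeeping (and much more) for you, which is the point of the next paragraph.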

How do you use a container in FFmpeg? See e.g. this post, particularly the sections calling functions like avformat_write_*() (basically anything that sounds like output). I'm happy to answer more specific questions, but I think the post above should clear up most of the confusion.
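
As a starting point, here is a hedged sketch of the muxing half only: error handling is trimmed, and it assumes a newer FFmpeg (the codecpar API) than the avcodec_encode_video-era code in the question. enc stands for an already opened encoder context like c above:

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* Open a container (chosen from the file extension) with one video stream. */
    static AVFormatContext *open_container(const char *filename,
                                           AVCodecContext *enc, AVStream **st)
    {
        AVFormatContext *oc = NULL;
        avformat_alloc_output_context2(&oc, NULL, NULL, filename);

        *st = avformat_new_stream(oc, NULL);
        avcodec_parameters_from_context((*st)->codecpar, enc); /* also copies extradata (SPS/PPS) */
        (*st)->time_base = enc->time_base;

        avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
        avformat_write_header(oc, NULL); /* the container writes all header data here */
        return oc;
    }

    /* For every packet the encoder produces, instead of fwrite(): */
    static void mux_packet(AVFormatContext *oc, AVStream *st,
                           AVCodecContext *enc, AVPacket *pkt)
    {
        av_packet_rescale_ts(pkt, enc->time_base, st->time_base);
        pkt->stream_index = st->index;
        av_interleaved_write_frame(oc, pkt);
    }

    /* Once, after flushing the encoder: */
    static void close_container(AVFormatContext *oc)
    {
        av_write_trailer(oc); /* replaces the manual sequence-end-code hack */
        avio_closep(&oc->pb);
        avformat_free_context(oc);
    }

With a container in place, the raw-file details from the question's code (the manual end code, the missing SPS/PPS) simply disappear.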
