Muxing AVPackets into mp4 file - revisited

Question

I'm referring to this thread here: Muxing AVPackets into mp4 file

The question over there is essentially the same as mine, and the first answer looks very promising. The (somewhat pseudo) code that the user pogorskiy provides seems to do exactly what I need:

AVOutputFormat * outFmt = av_guess_format("mp4", NULL, NULL);
AVFormatContext *outFmtCtx = NULL;
avformat_alloc_output_context2(&outFmtCtx, outFmt, NULL, NULL);
AVStream * outStrm = av_new_stream(outFmtCtx, 0);

AVCodec * codec = NULL;
avcodec_get_context_defaults3(outStrm->codec, codec);
outStrm->codec->codec_type = AVMEDIA_TYPE_VIDEO;

///....
/// set some required value, such as
/// outStrm->codec->flags
/// outStrm->codec->sample_aspect_ratio
/// outStrm->disposition
/// outStrm->codec->codec_tag
/// outStrm->codec->bits_per_raw_sample
/// outStrm->codec->chroma_sample_location
/// outStrm->codec->codec_id
/// outStrm->codec->codec_tag
/// outStrm->codec->time_base
/// outStrm->codec->extradata 
/// outStrm->codec->extradata_size
/// outStrm->codec->pix_fmt
/// outStrm->codec->width
/// outStrm->codec->height
/// outStrm->codec->sample_aspect_ratio
/// see ffmpeg.c for details  

avio_open(&outFmtCtx->pb, outputFileName, AVIO_FLAG_WRITE);

avformat_write_header(outFmtCtx, NULL);

for (...)
{
    av_write_frame(outFmtCtx, &pkt);
}

av_write_trailer(outFmtCtx);
avio_close(outFmtCtx->pb);
avformat_free_context(outFmtCtx);

The pkt data I receive from a third-party API for my connected camera. There is no file to open to read the input data from, and there is no RTSP stream to receive from the camera. It is just an API call that gives me the pointer to an H264 encoded frame, which is exactly the raw data for an AVPacket.
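
For reference, handing such a buffer to libav can look roughly like this. This is a minimal sketch: pPtr and datasize stand for the pointer and length returned by the camera API as described above, and the copy simply keeps the packet valid once the API reuses its buffer.

extern "C" {
#include <libavcodec/avcodec.h>
}
#include <cstring>

// Sketch: copy one camera frame into a packet the muxer can own.
static bool makePacket(AVPacket &pkt, const void *pPtr, int datasize)
{
    if (av_new_packet(&pkt, datasize) != 0)   // allocates pkt.data incl. padding
        return false;
    memcpy(pkt.data, pPtr, datasize);         // raw H264 frame from the camera API
    pkt.stream_index = 0;                     // single video stream
    // pts/dts still have to be set in the stream's time_base before muxing
    return true;
}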

Anyway, I try to use this code as the base for my application, but the first problem that occurs is that I get a runtime error:

Could not find tag for codec none in stream #0, codec not currently supported in container

So I started adding some more information to the codec, as pogorskiy suggested:

outStrm->codec->codec_id = AV_CODEC_ID_H264;
outStrm->codec->width = 1920;
outStrm->codec->height = 1080;

Now that I provided a codec_id, I was hoping that the runtime message would at least change to something different, but it is still the same:

Could not find tag for codec none in stream #0, codec not currently supported in container
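
As far as I understand, that message comes from the codec tag lookup the mp4 muxer performs for each stream before writing the header. A rough sketch of that lookup (my reading of the mechanism, not something stated in the original thread):

// Sketch: the muxer asks its codec_tag tables for a tag matching the stream's
// codec id. H264 maps to "avc1", but AV_CODEC_ID_NONE has no tag, which is
// exactly what produces "Could not find tag for codec none in stream #0".
unsigned h264Tag = av_codec_get_tag(outFmt->codec_tag, AV_CODEC_ID_H264); // non-zero
unsigned noneTag = av_codec_get_tag(outFmt->codec_tag, AV_CODEC_ID_NONE); // 0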

Any idea how I can set up the structures so that I can open an mp4 file for writing my packets to?

Answer

Okay, I got it working. At least I can open an mp4 file and write my H264 encoded packets to it. The file even opens in VLC and shows the very first frame... Nothing more, but it is a start.

So I am placing the code here in order to show this minimal solution. I would still be very happy if somebody gave their opinion on it, because it still does not work perfectly...

char outputFileName[] = "camera.mp4";

av_log_set_level(AV_LOG_DEBUG);

AVOutputFormat * outFmt = av_guess_format("mp4", NULL, NULL);
AVFormatContext *outFmtCtx = NULL;
avformat_alloc_output_context2(&outFmtCtx, outFmt, NULL, NULL);
AVStream * outStrm = avformat_new_stream(outFmtCtx, NULL);
outStrm->id = 0;
outStrm->time_base = {1, 30};
outStrm->avg_frame_rate = {30, 1}; // 30 fps

AVCodec * codec = NULL;
avcodec_get_context_defaults3(outStrm->codec, codec);

outFmtCtx->video_codec_id = AV_CODEC_ID_H264;

///....
/// set some required value, such as
/// outStrm->codec->flags
/// outStrm->codec->sample_aspect_ratio
/// outStrm->disposition
/// outStrm->codec->codec_tag
/// outStrm->codec->bits_per_raw_sample
/// outStrm->codec->chroma_sample_location
outStrm->codecpar->codec_id = AV_CODEC_ID_H264;
outStrm->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
/// outStrm->codec->time_base
/// outStrm->codec->extradata 
/// outStrm->codec->extradata_size
/// outStrm->codec->pix_fmt
outStrm->codecpar->width = 1920;
outStrm->codecpar->height = 1080;
/// outStrm->codec->sample_aspect_ratio
/// see ffmpeg.c for details  

avio_open(&outFmtCtx->pb, outputFileName, AVIO_FLAG_WRITE);

avformat_write_header(outFmtCtx, NULL); 

int n = 0;          // frame counter
AVPacket avPacket;  // reused for every camera frame

// *** Camera access loop via GenICam API starts here ***
n++;
av_init_packet(&avPacket);
avPacket.data = static_cast<uint8_t*>(pPtr); // raw data from the Camera with H264 encoded frame
avPacket.size = datasize; // datasize received from the GenICam API along with pPtr (the raw data)
avPacket.pts = (1/30) * n; // stupid try to set pts and dts somehow... Working on this...
avPacket.dts = (1/30) * (n-1);
avPacket.pos = n;
avPacket.stream_index = outStrm->index;

av_write_frame(outFmtCtx, &avPacket);

// **** Camera access loop ends here ****

av_write_trailer(outFmtCtx);
avio_close(outFmtCtx->pb);
avformat_free_context(outFmtCtx);

As I said, the resulting mp4 file shows the very first frame for a split second and after that it stops playing. I think the first frame is displayed, because I make sure that this is an I-frame, containing the complete image.
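
One probable cause for the playback stopping is the pts/dts handling above: (1/30)*n is integer arithmetic and always evaluates to 0. A minimal sketch of integer timestamps, rescaled into whatever time_base the muxer actually picked for the stream (a suggestion, not yet verified with this camera):

// Sketch: treat every frame as one tick of a 1/30 s clock and rescale it into
// the stream's time_base (avformat_write_header may have changed it, so do not
// assume it is still {1, 30}).
AVRational frameTimeBase = {1, 30};                                   // 30 fps input
avPacket.pts      = av_rescale_q(n, frameTimeBase, outStrm->time_base);
avPacket.dts      = avPacket.pts;                                     // camera sends no B-frames (assumption)
avPacket.duration = av_rescale_q(1, frameTimeBase, outStrm->time_base);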

I don't know if I have to provide some additional data to the muxer in order to get a working mp4 file. I'm still working on this.
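
One piece of additional data the mp4 muxer usually needs is the H264 extradata, i.e. the SPS/PPS parameter sets, which end up in the file header. A rough sketch, assuming the SPS/PPS can be obtained as one separate buffer (spsPpsData and spsPpsSize are hypothetical names, not part of the GenICam call used above):

// Sketch: attach the SPS/PPS to the stream before avformat_write_header(),
// because the mp4 muxer writes the decoder configuration into the header.
outStrm->codecpar->extradata = static_cast<uint8_t *>(
    av_mallocz(spsPpsSize + AV_INPUT_BUFFER_PADDING_SIZE));
memcpy(outStrm->codecpar->extradata, spsPpsData, spsPpsSize);
outStrm->codecpar->extradata_size = spsPpsSize;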

Any comments and ideas are highly welcome!

Thanks, Mike
