Get PTS from raw H264 mdat generated by iOS AVAssetWriter


Problem Description

I'm trying to simultaneously read and write an H.264 mov file written by AVAssetWriter. I managed to extract individual NAL units, pack them into ffmpeg's AVPackets, and write them into another video format using ffmpeg. This works, and the resulting file plays well except that the playback speed is wrong. How do I calculate the correct PTS/DTS values from the raw H.264 data? Or is there some other way to get them?
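
For context, here is a minimal sketch of the packing step described above, assuming an already-opened output AVFormatContext and AVStream and a NAL buffer with an Annex B start code prepended; the helper name write_nal and the variable names are hypothetical, and the pts/dts arguments are exactly the values in question:

    #include <libavformat/avformat.h>

    /* Wrap one extracted NAL unit in an AVPacket and hand it to the muxer.
     * pts/dts must already be expressed in out_stream->time_base. */
    static int write_nal(AVFormatContext *out_ctx, AVStream *out_stream,
                         uint8_t *nal_buf, int nal_size,
                         int64_t pts, int64_t dts)
    {
        AVPacket pkt;
        av_init_packet(&pkt);          /* older FFmpeg API; deprecated in FFmpeg 5+ */
        pkt.data = nal_buf;
        pkt.size = nal_size;
        pkt.stream_index = out_stream->index;
        pkt.pts = pts;
        pkt.dts = dts;
        return av_interleaved_write_frame(out_ctx, &pkt);
    }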

Here's what I've tried:


  1. Limit the capture min/max frame rate to 30 and assume the output file will be 30 fps. In fact its fps is always less than the value I set, and I don't think the fps is constant from packet to packet.

  2. Remember each written sample's presentation timestamp, assume that the samples map one-to-one to NALUs, and apply the saved timestamps to the output packets (see the sketch after this list). This doesn't work.

  3. Set the PTS to 0 or AV_NOPTS_VALUE. Doesn't work.
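
To illustrate the second attempt, a minimal sketch of recording each sample buffer's presentation timestamp as it is appended to the writer; Core Media is a C API, and the saved_pts storage here is hypothetical:

    #include <CoreMedia/CoreMedia.h>

    #define MAX_SAMPLES 4096

    static CMTime saved_pts[MAX_SAMPLES];   /* hypothetical per-sample timestamp store */
    static int    saved_count = 0;

    /* Call this for every sample buffer appended to the AVAssetWriterInput. */
    static void remember_sample_pts(CMSampleBufferRef sample)
    {
        CMTime pts = CMSampleBufferGetPresentationTimeStamp(sample);
        if (CMTIME_IS_VALID(pts) && saved_count < MAX_SAMPLES)
            saved_pts[saved_count++] = pts;
    }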

From googling I understand that raw H.264 data usually doesn't contain any timing information. It can sometimes carry timing in SEI messages, but the files I'm using don't have them. On the other hand, there are applications that do exactly what I'm trying to do, so I suppose it is possible somehow.

Answer

You will either have to generate them yourself, or access the atoms containing timing information in the MP4/MOV container to generate the PTS/DTS information. FFmpeg's mov.c in libavformat might help.
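
A minimal sketch of the container route, assuming the MOV has been finalized so its timing atoms exist: libavformat's MOV demuxer (mov.c) parses them and returns pts/dts on every packet, already expressed in the stream's time_base:

    #include <stdio.h>
    #include <libavformat/avformat.h>

    int dump_container_timestamps(const char *path)
    {
        AVFormatContext *ic = NULL;
        AVPacket pkt;

        av_register_all();                           /* needed on older FFmpeg; removed in 5.0 */
        if (avformat_open_input(&ic, path, NULL, NULL) < 0)
            return -1;
        if (avformat_find_stream_info(ic, NULL) < 0) {
            avformat_close_input(&ic);
            return -1;
        }
        while (av_read_frame(ic, &pkt) >= 0) {
            AVRational tb = ic->streams[pkt.stream_index]->time_base;
            printf("stream %d pts=%lld dts=%lld (time_base %d/%d)\n",
                   pkt.stream_index, (long long)pkt.pts, (long long)pkt.dts,
                   tb.num, tb.den);
            av_packet_unref(&pkt);                   /* av_free_packet() on old versions */
        }
        avformat_close_input(&ic);
        return 0;
    }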

Each sample/frame you write with AVAssetWriter will map one-to-one to the VCL NALs. If all you are doing is converting, then have FFmpeg do the heavy lifting; it will properly maintain the timing information when going from one container format to another.
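
A sketch of that conversion path, assuming ic and oc are already-opened input and output AVFormatContexts with matching streams (codec parameters copied, header written); av_packet_rescale_ts() moves pts/dts/duration from the input stream's time base to the output stream's:

    #include <libavformat/avformat.h>

    /* Stream-copy remux loop: FFmpeg supplies the timestamps from the input
     * container and they are rescaled into the output stream's time_base. */
    static int remux_packets(AVFormatContext *ic, AVFormatContext *oc)
    {
        AVPacket pkt;
        while (av_read_frame(ic, &pkt) >= 0) {
            AVStream *in_st  = ic->streams[pkt.stream_index];
            AVStream *out_st = oc->streams[pkt.stream_index];
            av_packet_rescale_ts(&pkt, in_st->time_base, out_st->time_base);
            pkt.pos = -1;
            if (av_interleaved_write_frame(oc, &pkt) < 0)
                break;                    /* write_frame takes ownership of pkt */
        }
        return av_write_trailer(oc);
    }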

The bitstream generated by AVAssetWriter does not contain SEI data; it only contains SPS/PPS and I/P frames. The SPS also does not contain VUI or HRD parameters.

- Edit -

Also, keep in mind that if you are saving PTS information from the CMSampleBufferRefs, the time base may be different from that of the target container. For instance, the AVFoundation time base is nanoseconds, while an FLV file uses milliseconds.
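
A minimal sketch of that conversion: a CMTime saved from AVFoundation carries its own timescale, and av_rescale_q() rescales it into the target stream's time_base (1/1000 for an FLV muxer); out_stream stands for the destination AVStream:

    #include <CoreMedia/CoreMedia.h>
    #include <libavutil/mathematics.h>
    #include <libavformat/avformat.h>

    /* Convert an AVFoundation presentation timestamp into the muxer's time base. */
    static int64_t cmtime_to_stream_pts(CMTime t, const AVStream *out_stream)
    {
        AVRational src = { 1, (int)t.timescale };   /* CMTime ticks per second */
        return av_rescale_q(t.value, src, out_stream->time_base);
    }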
