Cutting MPEG-TS file via ffmpegwrapper?


Question

I have MPEG-TS files on the device. I would like to cut a fairly-exact time off the start of the files on-device.

Using FFmpegWrapper as a base, I'm hoping to achieve this.

I'm a little lost on the C API of ffmpeg, however. Where do I start?

I tried just dropping all packets prior to a start PTS I was looking for, but this broke the video stream.

    // Rescale the packet timestamps into the output stream's time_base.
    packet->pts = av_rescale_q(packet->pts, inputStream.stream->time_base, outputStream.stream->time_base);
    packet->dts = av_rescale_q(packet->dts, inputStream.stream->time_base, outputStream.stream->time_base);

    // Remember the first pts seen as the start of the stream.
    if (startPts == 0) {
        startPts = packet->pts;
    }

    // Drop every packet before the desired cut point.
    if (packet->pts < cutTimeStartPts + startPts) {
        av_free_packet(packet);
        continue;
    }

How do I cut off part of the start of the input file without destroying the video stream? When played back to back, I want 2 cut segments to run seamlessly together.

ffmpeg -i time.ts -c:v libx264 -c:a copy -ss $CUT_POINT -map 0 -y after.ts
ffmpeg -i time.ts -c:v libx264 -c:a copy -to $CUT_POINT -map 0 -y before.ts

Seems to be what I need. I think the re-encode is needed so the video can start at any arbitrary point and not an existing keyframe. If there's a more efficient solution, that's great. If not, this is good enough.
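
From what I can tell, the usual demuxer-level way to do this is to seek to the keyframe at or before the cut point, then decode and discard frames until the exact target PTS. A rough sketch of what I mean (fmtCtx, videoStreamIndex and cutPts are placeholder names, with cutPts already in stream time_base units):

    // Sketch: jump to the nearest keyframe at or before the cut point...
    if (av_seek_frame(fmtCtx, videoStreamIndex, cutPts, AVSEEK_FLAG_BACKWARD) < 0) {
        // handle seek failure
    }
    // ...then, in the read/decode loop, discard decoded frames while
    // frame->pkt_pts < cutPts, and start encoding/writing from there.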

Here's my attempt. I'm cobbling together various pieces, copied from here, that I don't fully understand. I'm leaving off the "cutting" piece for now, to try to get audio + video encoding written without layering on complexity. I get EXC_BAD_ACCESS on avcodec_encode_video2(...)

- (void)convertInputPath:(NSString *)inputPath outputPath:(NSString *)outputPath
                 options:(NSDictionary *)options progressBlock:(FFmpegWrapperProgressBlock)progressBlock
         completionBlock:(FFmpegWrapperCompletionBlock)completionBlock {
    dispatch_async(conversionQueue, ^{
        FFInputFile *inputFile = nil;
        FFOutputFile *outputFile = nil;
        NSError *error = nil;

        inputFile = [[FFInputFile alloc] initWithPath:inputPath options:options];
        outputFile = [[FFOutputFile alloc] initWithPath:outputPath options:options];

        [self setupDirectStreamCopyFromInputFile:inputFile outputFile:outputFile];
        if (![outputFile openFileForWritingWithError:&error]) {
            [self finishWithSuccess:NO error:error completionBlock:completionBlock];
            return;
        }
        if (![outputFile writeHeaderWithError:&error]) {
            [self finishWithSuccess:NO error:error completionBlock:completionBlock];
            return;
        }

        AVRational default_timebase;
        default_timebase.num = 1;
        default_timebase.den = AV_TIME_BASE;
        FFStream *outputVideoStream = outputFile.streams[0];
        FFStream *inputVideoStream = inputFile.streams[0];

        AVFrame *frame;
        AVPacket inPacket, outPacket;

        frame = avcodec_alloc_frame();
        av_init_packet(&inPacket);

        while (av_read_frame(inputFile.formatContext, &inPacket) >= 0) {
            if (inPacket.stream_index == 0) {
                int frameFinished;
                avcodec_decode_video2(inputVideoStream.stream->codec, frame, &frameFinished, &inPacket);
//                if (frameFinished && frame->pkt_pts >= starttime_int64 && frame->pkt_pts <= endtime_int64) {
                if (frameFinished){
                    av_init_packet(&outPacket);
                    int output;
                    avcodec_encode_video2(outputVideoStream.stream->codec, &outPacket, frame, &output);
                    if (output) {
                        if (av_write_frame(outputFile.formatContext, &outPacket) != 0) {
                            fprintf(stderr, "convert(): error while writing video frame
");
                            [self finishWithSuccess:NO error:nil completionBlock:completionBlock];
                        }
                    }
                    av_free_packet(&outPacket);
                }
                if (frame->pkt_pts > endtime_int64) {
                    break;
                }
            }
        }
        av_free_packet(&inPacket);

        if (![outputFile writeTrailerWithError:&error]) {
            [self finishWithSuccess:NO error:error completionBlock:completionBlock];
            return;
        }

        [self finishWithSuccess:YES error:nil completionBlock:completionBlock];
    });
}

Answer

The FFmpeg (libavformat/libavcodec, in this case) API maps the ffmpeg.exe command-line arguments pretty closely. To open a file, use avformat_open_input() (the last two arguments can be NULL); this fills in the AVFormatContext for you. Now you start reading frames using av_read_frame() in a loop. pkt.stream_index will tell you which stream each packet belongs to, and avformatcontext->streams[pkt.stream_index] is the accompanying stream information, which tells you what codec it uses, whether it's video or audio, and so on. Use avformat_close_input() to shut down.
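
A minimal sketch of that open/read/close cycle (same era of the API as the code below; error handling trimmed, and the demoReadLoop name is just for illustration):

static int demoReadLoop(const char *path) {
    AVFormatContext *fmt = NULL;
    AVPacket pkt;

    if (avformat_open_input(&fmt, path, NULL, NULL) != 0)
        return -1;
    if (avformat_find_stream_info(fmt, NULL) < 0) {
        avformat_close_input(&fmt);
        return -1;
    }
    av_init_packet(&pkt);
    while (av_read_frame(fmt, &pkt) >= 0) {
        // fmt->streams[pkt.stream_index] describes the stream this packet
        // belongs to: codec, video/audio, time_base, ...
        av_free_packet(&pkt);
    }
    avformat_close_input(&fmt);
    return 0;
}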

For muxing, you use the inverse; see the muxing example for details. Basically it is: allocate a context, avio_open2(), add streams for each existing stream in the input file (basically context->streams[]), avformat_write_header(), av_interleaved_write_frame() in a loop, and av_write_trailer() to shut down (and free the allocated context at the end).
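
A hedged sketch of that output side, mirroring each input stream (demoMuxSetup and the variable names are just for illustration; error paths trimmed):

static int demoMuxSetup(AVFormatContext *in, const char *path, AVFormatContext **out) {
    AVFormatContext *oc = avformat_alloc_context();
    unsigned n;

    oc->oformat = av_guess_format(NULL, path, NULL);
    if (avio_open2(&oc->pb, path, AVIO_FLAG_WRITE, NULL, NULL) < 0)
        return -1;
    for (n = 0; n < in->nb_streams; n++) {
        // add an output stream for each existing input stream
        AVStream *st = avformat_new_stream(oc, in->streams[n]->codec->codec);
        avcodec_copy_context(st->codec, in->streams[n]->codec);
    }
    if (avformat_write_header(oc, NULL) < 0)
        return -1;
    *out = oc;
    // ...then av_interleaved_write_frame() per packet, and finally
    // av_write_trailer(oc) + avformat_free_context(oc) to shut down
    return 0;
}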

Encoding/decoding of the video stream(s) is done using libavcodec. For each AVPacket you get from the demuxer, use avcodec_decode_video2(). Use avcodec_encode_video2() for encoding of the output AVFrame. Note that both introduce delay, so the first few calls to each function will not return any data; you need to flush the cached data by calling each function with NULL input data to get the tail packets/frames out of it. av_interleaved_write_frame() will interleave packets correctly, so the video/audio streams will not desync (as in: video packets with a given timestamp occurring megabytes after the audio packets with the same timestamp in the .ts file).
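
A sketch of that flush on the encoder side (enc is assumed to be an opened AVCodecContext; the decoder side is analogous, with a zero-sized input packet):

int gotPacket = 1;
while (gotPacket) {
    AVPacket pkt;
    av_init_packet(&pkt);
    // NULL frame == "no more input, drain whatever you have buffered"
    if (avcodec_encode_video2(enc, &pkt, NULL, &gotPacket) < 0)
        break;
    if (gotPacket) {
        // hand pkt to av_interleaved_write_frame() here, then:
        av_free_packet(&pkt);
    }
}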

If you need more detailed examples for avcodec_decode_video2(), avcodec_encode_video2(), av_read_frame() or av_interleaved_write_frame(), just Google "$function example" and you'll find full-fledged examples showing how to use them correctly. For x264 encoding, set some default parameters in the AVCodecContext when calling avcodec_open2() to control encoding quality. In the C API, you do that using an AVDictionary, e.g.:

AVDictionary *opts = NULL;
av_dict_set(&opts, "preset", "veryslow", 0);
// use either crf or b, not both! See the link above on H264 encoding options
av_dict_set_int(&opts, "b", 1000, 0);
av_dict_set_int(&opts, "crf", 10, 0);

[edit] Oh, I forgot one part: the timestamping. Each AVPacket and AVFrame has a pts variable in its struct, and you can use that to decide whether to include the packet/frame in the output stream. So for audio you'd use AVPacket.pts from the demuxing step as a delimiter, and for video you'd use AVFrame.pts from the decoding step as a delimiter. Their respective documentation tells you what unit they are in.
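
For example, to turn a cut point expressed in seconds into the units those pts fields use, something like this should work (cutSeconds is a hypothetical parameter, st the stream in question):

// Convert seconds to stream time_base units; AV_TIME_BASE_Q is 1/AV_TIME_BASE.
int64_t cutPts = av_rescale_q((int64_t)(cutSeconds * AV_TIME_BASE),
                              AV_TIME_BASE_Q, st->time_base);
if (frame->pkt_pts != AV_NOPTS_VALUE && frame->pkt_pts >= cutPts) {
    // at/after the cut point: encode and write this frame
}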

[edit2] I see you're still having some issues without actual code, so here's a real (working) transcoder that re-codes video and re-muxes audio. It probably has tons of bugs and leaks and lacks proper error reporting; it also doesn't deal with timestamps (I'm leaving that to you as an exercise), but it does the basic things you asked for:

#include <stdio.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

static AVFormatContext *inctx, *outctx;
#define MAX_STREAMS 16
static AVCodecContext *inavctx[MAX_STREAMS];
static AVCodecContext *outavctx[MAX_STREAMS];

static int openInputFile(const char *file) {
    int res;

    inctx = NULL;
    res = avformat_open_input(& inctx, file, NULL, NULL);
    if (res != 0)
        return res;
    res = avformat_find_stream_info(inctx, NULL);
    if (res < 0)
        return res;

    return 0;
}

static void closeInputFile(void) {
    int n;

    for (n = 0; n < inctx->nb_streams; n++)
        if (inavctx[n]) {
            avcodec_close(inavctx[n]);
            avcodec_free_context(&inavctx[n]);
        }

    avformat_close_input(&inctx);
}

static int openOutputFile(const char *file) {
    int res, n;

    outctx = avformat_alloc_context();
    outctx->oformat = av_guess_format(NULL, file, NULL);
    if ((res = avio_open2(&outctx->pb, file, AVIO_FLAG_WRITE, NULL, NULL)) < 0)
        return res;

    for (n = 0; n < inctx->nb_streams; n++) {
        AVStream *inst = inctx->streams[n];
        AVCodecContext *inc = inst->codec;

        if (inc->codec_type == AVMEDIA_TYPE_VIDEO) {
            // video decoder
            inavctx[n] = avcodec_alloc_context3(inc->codec);
            avcodec_copy_context(inavctx[n], inc);
            if ((res = avcodec_open2(inavctx[n], avcodec_find_decoder(inc->codec_id), NULL)) < 0)
                return res;

            // video encoder
            AVCodec *encoder = avcodec_find_encoder_by_name("libx264");
            AVStream *outst = avformat_new_stream(outctx, encoder);
            outst->codec->width = inavctx[n]->width;
            outst->codec->height = inavctx[n]->height;
            outst->codec->pix_fmt = inavctx[n]->pix_fmt;
            AVDictionary *dict = NULL;
            av_dict_set(&dict, "preset", "veryslow", 0);
            av_dict_set_int(&dict, "crf", 10, 0);
            outavctx[n] = avcodec_alloc_context3(encoder);
            avcodec_copy_context(outavctx[n], outst->codec);
            if ((res = avcodec_open2(outavctx[n], encoder, &dict)) < 0)
                return res;
        } else if (inc->codec_type == AVMEDIA_TYPE_AUDIO) {
            avformat_new_stream(outctx, inc->codec);
            inavctx[n] = outavctx[n] = NULL;
        } else {
            fprintf(stderr, "Don’t know what to do with stream %d
", n);
            return -1;
        }
    }

    if ((res = avformat_write_header(outctx, NULL)) < 0)
        return res;

    return 0;
}

static void closeOutputFile(void) {
    int n;

    av_write_trailer(outctx);
    for (n = 0; n < outctx->nb_streams; n++)
        if (outctx->streams[n]->codec)
            avcodec_close(outctx->streams[n]->codec);
    avformat_free_context(outctx);
}

static int encodeFrame(int stream_index, AVFrame *frame, int *gotOutput) {
    AVPacket outPacket;
    int res;

    av_init_packet(&outPacket);
    if ((res = avcodec_encode_video2(outavctx[stream_index], &outPacket, frame, gotOutput)) < 0) {
        fprintf(stderr, "Failed to encode frame
");
        return res;
    }
    if (*gotOutput) {
        outPacket.stream_index = stream_index;
        if ((res = av_interleaved_write_frame(outctx, &outPacket)) < 0) {
            fprintf(stderr, "Failed to write packet
");
            return res;
        }
    }
    av_free_packet(&outPacket);

    return 0;
}

static int decodePacket(int stream_index, AVPacket *pkt, AVFrame *frame, int *frameFinished) {
    int res;

    if ((res = avcodec_decode_video2(inavctx[stream_index], frame,
                                     frameFinished, pkt)) < 0) {
        fprintf(stderr, "Failed to decode frame
");
        return res;
    }
    if (*frameFinished){
        int hasOutput;

        frame->pts = frame->pkt_pts;
        return encodeFrame(stream_index, frame, &hasOutput);
    } else {
        return 0;
    }
}

int main(int argc, char *argv[]) {
    char *input = argv[1];
    char *output = argv[2];
    int res, n;

    printf("Converting %s to %s
", input, output);
    av_register_all();
    if ((res = openInputFile(input)) < 0) {
        fprintf(stderr, "Failed to open input file %s
", input);
        return res;
    }
    if ((res = openOutputFile(output)) < 0) {
        fprintf(stderr, "Failed to open output file %s
", input);
        return res;
    }

    AVFrame *frame = av_frame_alloc();
    AVPacket inPacket;

    av_init_packet(&inPacket);
    while (av_read_frame(inctx, &inPacket) >= 0) {
        if (inavctx[inPacket.stream_index] != NULL) {
            int frameFinished;
            if ((res = decodePacket(inPacket.stream_index, &inPacket, frame, &frameFinished)) < 0) {
                return res;
            }
        } else {
            if ((res = av_interleaved_write_frame(outctx, &inPacket)) < 0) {
                fprintf(stderr, "Failed to write packet
");
                return res;
            }
        }
    }

    for (n = 0; n < inctx->nb_streams; n++) {
        if (inavctx[n]) {
            // flush decoder
            int frameFinished;
            do {
                inPacket.data = NULL;
                inPacket.size = 0;
                if ((res = decodePacket(n, &inPacket, frame, &frameFinished)) < 0)
                    return res;
            } while (frameFinished);

            // flush encoder
            int gotOutput;
            do {
                if ((res = encodeFrame(n, NULL, &gotOutput)) < 0)
                    return res;
            } while (gotOutput);
        }
    }
    av_free_packet(&inPacket);

    closeInputFile();
    closeOutputFile();

    return 0;
}

