Cutting MPEG-TS file via ffmpegwrapper?


Question

I have MPEG-TS files on the device. I would like to cut a fairly-exact time off the start of the files on-device.

Using FFmpegWrapper as a base, I'm hoping to achieve this.

I'm a little lost on the C API of ffmpeg, however. Where do I start?

I tried just dropping all packets prior to a start PTS I was looking for, but this broke the video stream.

    packet->pts = av_rescale_q(packet->pts, inputStream.stream->time_base, outputStream.stream->time_base);
    packet->dts = av_rescale_q(packet->dts, inputStream.stream->time_base, outputStream.stream->time_base);

    if(startPts == 0){
        startPts = packet->pts;
    }

    if(packet->pts < cutTimeStartPts + startPts){
        av_free_packet(packet);
        continue;
    }

How do I cut off part of the start of the input file without destroying the video stream? When played back to back, I want the two cut segments to run seamlessly together.

ffmpeg -i time.ts -c:v libx264 -c:a copy -ss $CUT_POINT -map 0 -y after.ts
ffmpeg -i time.ts -c:v libx264 -c:a copy -to $CUT_POINT -map 0 -y before.ts

Seems to be what I need. I think the re-encode is needed so the video can start at any arbitrary point and not an existing keyframe. If there's a more efficient solution, that's great. If not, this is good enough.

EDIT: Here's my attempt. I'm cobbling together various pieces I don't fully understand, copied from here. I'm leaving off the "cutting" piece for now to try to get audio + video encoded and written without layering on complexity. I get EXC_BAD_ACCESS in avcodec_encode_video2(...)

- (void)convertInputPath:(NSString *)inputPath outputPath:(NSString *)outputPath
                 options:(NSDictionary *)options progressBlock:(FFmpegWrapperProgressBlock)progressBlock
         completionBlock:(FFmpegWrapperCompletionBlock)completionBlock {
    dispatch_async(conversionQueue, ^{
        FFInputFile *inputFile = nil;
        FFOutputFile *outputFile = nil;
        NSError *error = nil;

        inputFile = [[FFInputFile alloc] initWithPath:inputPath options:options];
        outputFile = [[FFOutputFile alloc] initWithPath:outputPath options:options];

        [self setupDirectStreamCopyFromInputFile:inputFile outputFile:outputFile];
        if (![outputFile openFileForWritingWithError:&error]) {
            [self finishWithSuccess:NO error:error completionBlock:completionBlock];
            return;
        }
        if (![outputFile writeHeaderWithError:&error]) {
            [self finishWithSuccess:NO error:error completionBlock:completionBlock];
            return;
        }

        AVRational default_timebase;
        default_timebase.num = 1;
        default_timebase.den = AV_TIME_BASE;
        FFStream *outputVideoStream = outputFile.streams[0];
        FFStream *inputVideoStream = inputFile.streams[0];

        AVFrame *frame;
        AVPacket inPacket, outPacket;

        frame = avcodec_alloc_frame();
        av_init_packet(&inPacket);

        while (av_read_frame(inputFile.formatContext, &inPacket) >= 0) {
            if (inPacket.stream_index == 0) {
                int frameFinished;
                avcodec_decode_video2(inputVideoStream.stream->codec, frame, &frameFinished, &inPacket);
//                if (frameFinished && frame->pkt_pts >= starttime_int64 && frame->pkt_pts <= endtime_int64) {
                if (frameFinished){
                    av_init_packet(&outPacket);
                    int output;
                    avcodec_encode_video2(outputVideoStream.stream->codec, &outPacket, frame, &output);
                    if (output) {
                        if (av_write_frame(outputFile.formatContext, &outPacket) != 0) {
                            fprintf(stderr, "convert(): error while writing video frame\n");
                            [self finishWithSuccess:NO error:nil completionBlock:completionBlock];
                        }
                    }
                    av_free_packet(&outPacket);
                }
                if (frame->pkt_pts > endtime_int64) {
                    break;
                }
            }
        }
        av_free_packet(&inPacket);

        if (![outputFile writeTrailerWithError:&error]) {
            [self finishWithSuccess:NO error:error completionBlock:completionBlock];
            return;
        }

        [self finishWithSuccess:YES error:nil completionBlock:completionBlock];
    });
}

Solution

The FFmpeg (libavformat/libavcodec, in this case) API maps the ffmpeg.exe commandline arguments pretty closely. To open a file, use avformat_open_input(). The last two arguments can be NULL. This fills in the AVFormatContext for you. Now you start reading frames using av_read_frame() in a loop. pkt.stream_index will tell you which stream each packet belongs to, and avformatcontext->streams[pkt.stream_index] is the accompanying stream information, which tells you what codec it uses, whether it's video/audio, etc. Use avformat_close_input() to shut down.
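
A bare-bones sketch of that demuxing loop, using the same vintage of the API as the full program further down (error handling mostly omitted; this is not FFmpegWrapper code):

#include <libavformat/avformat.h>

// Sketch: open a file and walk its packets.
int demux(const char *path) {
    AVFormatContext *fmt = NULL;
    AVPacket pkt;

    av_register_all();
    if (avformat_open_input(&fmt, path, NULL, NULL) != 0)
        return -1;
    avformat_find_stream_info(fmt, NULL);  // fills in per-stream codec info
    while (av_read_frame(fmt, &pkt) >= 0) {
        AVStream *st = fmt->streams[pkt.stream_index]; // stream this packet belongs to
        if (st->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
            // pkt.pts / pkt.dts are in st->time_base units
        }
        av_free_packet(&pkt);
    }
    avformat_close_input(&fmt);
    return 0;
}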

For muxing, you use the inverse; see the muxing example (doc/examples/muxing.c in the FFmpeg source) for details. Basically you allocate a context, avio_open2() its pb, add streams for each existing stream in the input file (basically mirroring context->streams[]), call avformat_write_header(), call av_interleaved_write_frame() in a loop, and av_write_trailer() to shut down (and free the allocated context at the end).
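
In sketch form, under the simplest stream-copy assumption (error checks omitted; the function name is mine, not an FFmpeg API):

#include <libavformat/avformat.h>

// Sketch: mirror each input stream into an output file, write header/trailer.
int mux_setup(const char *path, AVFormatContext *in) {
    AVFormatContext *out = avformat_alloc_context();
    out->oformat = av_guess_format(NULL, path, NULL);  // pick muxer from file name
    avio_open2(&out->pb, path, AVIO_FLAG_WRITE, NULL, NULL);
    for (unsigned n = 0; n < in->nb_streams; n++) {
        AVStream *st = avformat_new_stream(out, NULL);           // one per input stream
        avcodec_copy_context(st->codec, in->streams[n]->codec);  // straight copy
    }
    avformat_write_header(out, NULL);
    // ... av_interleaved_write_frame(out, &pkt) for each packet ...
    av_write_trailer(out);
    avio_close(out->pb);
    avformat_free_context(out);
    return 0;
}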

Encoding/decoding of the video stream(s) is done using libavcodec. For each AVPacket you get from the demuxer, use avcodec_decode_video2(). Use avcodec_encode_video2() for encoding of the output AVFrame. Note that both introduce delay, so the first few calls to each function will not return any data, and you need to flush cached data by calling each function with NULL input data to get the tail packets/frames out of it. av_interleaved_write_frame() will interleave packets correctly so the video/audio streams don't desync (as in: video packets with a given timestamp can land megabytes after the audio packets with the same timestamp in the .ts file).
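
The flush pattern in miniature; a hypothetical helper that mirrors the flush loops in the full program under [edit2] below:

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

// Sketch: drain a delayed decoder with empty packets, then the encoder with NULL frames.
static void flush_stream(AVCodecContext *dec, AVCodecContext *enc,
                         AVFormatContext *out, int stream_index) {
    AVFrame *frame = av_frame_alloc();
    AVPacket pkt;
    int got;

    av_init_packet(&pkt);
    pkt.data = NULL;  // NULL data + zero size signals "no more input, flush"
    pkt.size = 0;
    do {
        avcodec_decode_video2(dec, frame, &got, &pkt);
        if (got) { /* a buffered frame came out late; encode it as usual */ }
    } while (got);

    do {
        av_init_packet(&pkt);
        avcodec_encode_video2(enc, &pkt, NULL, &got);  // NULL frame flushes the encoder
        if (got) {
            pkt.stream_index = stream_index;
            av_interleaved_write_frame(out, &pkt);
        }
        av_free_packet(&pkt);
    } while (got);

    av_frame_free(&frame);
}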

If you need more detailed examples for avcodec_decode_video2, avcodec_encode_video2, av_read_frame or av_interleaved_write_frame, just Google "$function example" and you'll see full-fledged examples showing how to use them correctly. For x264 encoding, set the encoding-quality parameters when calling avcodec_open2(). In the C API, you do that using an AVDictionary, e.g.:

AVDictionary *opts = NULL;
av_dict_set(&opts, "preset", "veryslow", 0);
// use either crf or b (bitrate), not both! See the link above on H264 encoding options
av_dict_set_int(&opts, "crf", 10, 0);
// av_dict_set_int(&opts, "b", 1000, 0);

[edit] Oh, I forgot one part: the timestamping. Each AVPacket and AVFrame has a pts variable in its struct, and you can use that to decide whether to include the packet/frame in the output stream. So for audio, you'd use AVPacket.pts from the demuxing step as a delimiter, and for video, you'd use AVFrame.pts from the decoding step as a delimiter. Their respective documentation tells you what unit they are in.
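
Applied to the original question (cutting at a fairly exact start time), that might look roughly like this; cut_start_pts and the helper name are hypothetical, not FFmpeg or FFmpegWrapper API:

#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>

// Sketch: convert a cut point in seconds into a stream's time base.
static int64_t cut_pts(double seconds, AVStream *st) {
    return av_rescale_q((int64_t)(seconds * AV_TIME_BASE),
                        AV_TIME_BASE_Q, st->time_base);
}

// Then, in the decode loop, once frameFinished is nonzero:
//   if (frame->pkt_pts >= cut_start_pts)
//       encodeFrame(stream_index, frame, &gotOutput);  // keep: re-encode from here
//   // earlier frames are still decoded (the decoder needs them) but dropped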

[edit2] I see you're still having some issues without actual code, so here's a real (working) transcoder which re-codes video and re-muxes audio. It probably has tons of bugs and leaks and lacks proper error reporting; it also doesn't deal with timestamps (I'm leaving that to you as an exercise), but it does the basic things that you asked for:

#include <stdio.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

static AVFormatContext *inctx, *outctx;
#define MAX_STREAMS 16
static AVCodecContext *inavctx[MAX_STREAMS];
static AVCodecContext *outavctx[MAX_STREAMS];

static int openInputFile(const char *file) {
    int res;

    inctx = NULL;
    res = avformat_open_input(&inctx, file, NULL, NULL);
    if (res != 0)
        return res;
    res = avformat_find_stream_info(inctx, NULL);
    if (res < 0)
        return res;

    return 0;
}

static void closeInputFile(void) {
    int n;

    for (n = 0; n < inctx->nb_streams; n++)
        if (inavctx[n]) {
            avcodec_close(inavctx[n]);
            avcodec_free_context(&inavctx[n]);
        }

    avformat_close_input(&inctx);
}

static int openOutputFile(const char *file) {
    int res, n;

    outctx = avformat_alloc_context();
    outctx->oformat = av_guess_format(NULL, file, NULL);
    if ((res = avio_open2(&outctx->pb, file, AVIO_FLAG_WRITE, NULL, NULL)) < 0)
        return res;

    for (n = 0; n < inctx->nb_streams; n++) {
        AVStream *inst = inctx->streams[n];
        AVCodecContext *inc = inst->codec;

        if (inc->codec_type == AVMEDIA_TYPE_VIDEO) {
            // video decoder
            inavctx[n] = avcodec_alloc_context3(inc->codec);
            avcodec_copy_context(inavctx[n], inc);
            if ((res = avcodec_open2(inavctx[n], avcodec_find_decoder(inc->codec_id), NULL)) < 0)
                return res;

            // video encoder
            AVCodec *encoder = avcodec_find_encoder_by_name("libx264");
            AVStream *outst = avformat_new_stream(outctx, encoder);
            outst->codec->width = inavctx[n]->width;
            outst->codec->height = inavctx[n]->height;
            outst->codec->pix_fmt = inavctx[n]->pix_fmt;
            AVDictionary *dict = NULL;
            av_dict_set(&dict, "preset", "veryslow", 0);
            av_dict_set_int(&dict, "crf", 10, 0);
            outavctx[n] = avcodec_alloc_context3(encoder);
            avcodec_copy_context(outavctx[n], outst->codec);
            if ((res = avcodec_open2(outavctx[n], encoder, &dict)) < 0)
                return res;
        } else if (inc->codec_type == AVMEDIA_TYPE_AUDIO) {
            avformat_new_stream(outctx, inc->codec);
            inavctx[n] = outavctx[n] = NULL;
        } else {
            fprintf(stderr, "Don’t know what to do with stream %d\n", n);
            return -1;
        }
    }

    if ((res = avformat_write_header(outctx, NULL)) < 0)
        return res;

    return 0;
}

static void closeOutputFile(void) {
    int n;

    av_write_trailer(outctx);
    for (n = 0; n < outctx->nb_streams; n++)
        if (outctx->streams[n]->codec)
            avcodec_close(outctx->streams[n]->codec);
    avformat_free_context(outctx);
}

static int encodeFrame(int stream_index, AVFrame *frame, int *gotOutput) {
    AVPacket outPacket;
    int res;

    av_init_packet(&outPacket);
    if ((res = avcodec_encode_video2(outavctx[stream_index], &outPacket, frame, gotOutput)) < 0) {
        fprintf(stderr, "Failed to encode frame\n");
        return res;
    }
    if (*gotOutput) {
        outPacket.stream_index = stream_index;
        if ((res = av_interleaved_write_frame(outctx, &outPacket)) < 0) {
            fprintf(stderr, "Failed to write packet\n");
            return res;
        }
    }
    av_free_packet(&outPacket);

    return 0;
}

static int decodePacket(int stream_index, AVPacket *pkt, AVFrame *frame, int *frameFinished) {
    int res;

    if ((res = avcodec_decode_video2(inavctx[stream_index], frame,
                                     frameFinished, pkt)) < 0) {
        fprintf(stderr, "Failed to decode frame\n");
        return res;
    }
    if (*frameFinished){
        int hasOutput;

        frame->pts = frame->pkt_pts;
        return encodeFrame(stream_index, frame, &hasOutput);
    } else {
        return 0;
    }
}

int main(int argc, char *argv[]) {
    char *input = argv[1];
    char *output = argv[2];
    int res, n;

    printf("Converting %s to %s\n", input, output);
    av_register_all();
    if ((res = openInputFile(input)) < 0) {
        fprintf(stderr, "Failed to open input file %s\n", input);
        return res;
    }
    if ((res = openOutputFile(output)) < 0) {
        fprintf(stderr, "Failed to open output file %s\n", input);
        return res;
    }

    AVFrame *frame = av_frame_alloc();
    AVPacket inPacket;

    av_init_packet(&inPacket);
    while (av_read_frame(inctx, &inPacket) >= 0) {
        if (inavctx[inPacket.stream_index] != NULL) {
            int frameFinished;
            if ((res = decodePacket(inPacket.stream_index, &inPacket, frame, &frameFinished)) < 0) {
                return res;
            }
        } else {
            if ((res = av_interleaved_write_frame(outctx, &inPacket)) < 0) {
                fprintf(stderr, "Failed to write packet\n");
                return res;
            }
        }
    }

    for (n = 0; n < inctx->nb_streams; n++) {
        if (inavctx[n]) {
            // flush decoder
            int frameFinished;
            do {
                inPacket.data = NULL;
                inPacket.size = 0;
                if ((res = decodePacket(n, &inPacket, frame, &frameFinished)) < 0)
                    return res;
            } while (frameFinished);

            // flush encoder
            int gotOutput;
            do {
                if ((res = encodeFrame(n, NULL, &gotOutput)) < 0)
                    return res;
            } while (gotOutput);
        }
    }
    av_free_packet(&inPacket);

    closeInputFile();
    closeOutputFile();

    return 0;
}
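
For reference, something like the usual pkg-config invocation should build it (package names assumed to be the standard FFmpeg ones):

gcc transcode.c -o transcode $(pkg-config --cflags --libs libavformat libavcodec libavutil)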
