iOS Take Multiple Screen Shots


Problem Description


I have an NSURL that contains a video, and I want to capture a frame of that video ten times per second. I have code that will capture an image from my player, but I am having trouble setting it up to capture 10 frames per second. When I try something like the following, it returns the same initial frame of the video, although the correct number of times. Here is what I have:

AVAsset *asset = [AVAsset assetWithURL:videoUrl];
CMTime vidLength = asset.duration;
float seconds = CMTimeGetSeconds(vidLength);
int frameCount = 0;
for (float i = 0; i < seconds;) {
    AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    CMTime time = CMTimeMake(i, 10);
    CGImageRef imageRef = [imageGenerator copyCGImageAtTime:time actualTime:NULL error:NULL];
    UIImage *thumbnail = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    NSString *filename = [NSString stringWithFormat:@"Documents/frame_%d.png", frameCount];
    NSString *pngPath = [NSHomeDirectory() stringByAppendingPathComponent:filename];

    [UIImagePNGRepresentation(thumbnail) writeToFile:pngPath atomically:YES];
    frameCount++;
    i = i + 0.1;
}


But instead of getting the frame at the current time i of the video, I just get the initial frame.


How can I capture a frame of the video 10 times a second?

Thanks for your help :)

Recommended Answer


You are getting the initial frame because you are trying to create the CMTime with the help of a float value:

CMTime time = CMTimeMake(i, 10);


Since the CMTimeMake function takes an int64_t value as its first parameter, your float value will be truncated to an integer, and you will get an incorrect result.
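As a minimal illustration (not part of the original answer): every fractional value below 1.0 collapses to the same CMTime, which is why the loop keeps requesting the very first frame. If you really do want to build a CMTime from fractional seconds, Core Media provides CMTimeMakeWithSeconds for that purpose.

// A float passed as CMTimeMake's int64_t `value` parameter is truncated,
// so every i in 0.0 ... 0.9 requests the same time 0/10 = 0 s:
float i = 0.9f;
CMTime truncated = CMTimeMake(i, 10);           // value == 0, i.e. 0.0 s
// CMTimeMakeWithSeconds is the Core Media function that does take seconds as a float:
CMTime precise = CMTimeMakeWithSeconds(i, 600); // 0.9 s at a timescale of 600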

Let's change your code a bit:


1) First, you need to find the total number of frames you need to get from the video. You wrote that you need 10 frames per second, so the code will be:

int requiredFramesCount = seconds * 10;


2) Next, you need to find the value by which your CMTime value will increase at each step:

int64_t step = vidLength.value / requiredFramesCount;
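For example, assuming (purely for illustration) a 5-second asset with the common timescale of 600, vidLength.value would be 3000 ticks, requiredFramesCount would be 50, and step would be 3000 / 50 = 60 ticks, i.e. one frame every 0.1 seconds.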


3) And lastly, you need to set requestedTimeToleranceBefore and requestedTimeToleranceAfter to kCMTimeZero to get frames at precise times:

imageGenerator.requestedTimeToleranceAfter = kCMTimeZero;
imageGenerator.requestedTimeToleranceBefore = kCMTimeZero;

Here is how the code will look:

CMTime vidLength = asset.duration;
float seconds = CMTimeGetSeconds(vidLength);

int requiredFramesCount = seconds * 10;
int64_t step = vidLength.value / requiredFramesCount;

// Create the generator once and ask it for frames at exact times.
AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
imageGenerator.requestedTimeToleranceAfter = kCMTimeZero;
imageGenerator.requestedTimeToleranceBefore = kCMTimeZero;

// Use int64_t so the running tick count matches CMTime's value type.
int64_t value = 0;

for (int i = 0; i < requiredFramesCount; i++) {

    // Request the frame at `value` ticks, expressed in the asset's own timescale.
    CMTime time = CMTimeMake(value, vidLength.timescale);

    CGImageRef imageRef = [imageGenerator copyCGImageAtTime:time actualTime:NULL error:NULL];
    UIImage *thumbnail = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);

    // Save each frame as Documents/frame_<index>.png.
    NSString *filename = [NSString stringWithFormat:@"Documents/frame_%d.png", i];
    NSString *pngPath = [NSHomeDirectory() stringByAppendingPathComponent:filename];
    [UIImagePNGRepresentation(thumbnail) writeToFile:pngPath atomically:YES];

    value += step;
}
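
As a side note, and purely as a sketch rather than part of the accepted answer, AVAssetImageGenerator can also produce all of the frames in one asynchronous batch via generateCGImagesAsynchronouslyForTimes:, which avoids blocking the calling thread on every copyCGImageAtTime: call. The snippet below reuses the asset, requiredFramesCount, step, and vidLength variables from the code above:

AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
generator.requestedTimeToleranceAfter = kCMTimeZero;
generator.requestedTimeToleranceBefore = kCMTimeZero;

// Build the list of requested times, one every `step` ticks.
NSMutableArray<NSValue *> *times = [NSMutableArray arrayWithCapacity:requiredFramesCount];
for (int i = 0; i < requiredFramesCount; i++) {
    [times addObject:[NSValue valueWithCMTime:CMTimeMake(i * step, vidLength.timescale)]];
}

[generator generateCGImagesAsynchronouslyForTimes:times
                                completionHandler:^(CMTime requestedTime,
                                                    CGImageRef image,
                                                    CMTime actualTime,
                                                    AVAssetImageGeneratorResult result,
                                                    NSError *error) {
    if (result == AVAssetImageGeneratorSucceeded && image != NULL) {
        // The handler is called once per requested time, on a background queue;
        // convert and save the frame here just like in the loop above.
        UIImage *thumbnail = [UIImage imageWithCGImage:image];
        // ... write thumbnail to disk ...
    }
}];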
