AVFoundation: Fit Video to CALayer correctly when exporting


Problem Description

The problem:

I'm having issues getting videos I'm creating with AVFoundation to display in the videoLayer, a CALayer, with the correct dimensions.

Example:

Here is what the video should look like (as it's displayed to the user in the app):

However, here's the resulting video when it's exported:

Details

As you can see, it's meant to be a square video with a green background, with the video fitted to a specified frame. However, the resulting video doesn't fit the CALayer used to contain it (see the black space the video should be stretched into?).

Sometimes the video does fill the layer but is stretched beyond its bounds (too much width or too much height), and it often doesn't maintain the video's natural aspect ratio.

Code

CGRect displayedFrame = [self adjustedVideoBoundsFromVideo:gifVideo];//the cropped frame
CGRect renderFrame = [self renderSizeForGifVideo:gifVideo]; //the full rendersize
AVAsset * originalAsset = self.videoAsset;

AVAssetTrack * videoTrack = [[originalAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVMutableComposition * mainComposition = [AVMutableComposition composition];

AVMutableCompositionTrack * compositionTrack = [mainComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];

[compositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, originalAsset.duration) ofTrack:videoTrack atTime:kCMTimeZero error:nil];

CALayer * parentLayer = [CALayer layer];
CALayer * backgroundLayer = [CALayer layer];
CALayer * videoLayer = [CALayer layer];
parentLayer.frame = renderFrame;
backgroundLayer.frame = parentLayer.bounds;
backgroundLayer.backgroundColor = self.backgroundColor.CGColor;
videoLayer.frame = displayedFrame;
[parentLayer addSublayer:backgroundLayer];
[parentLayer addSublayer:videoLayer];


// Video composition that renders the layer tree at the square render size.
AVMutableVideoComposition * videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.frameDuration = CMTimeMake(1, 30);
videoComposition.renderSize = CGSizeMake(renderFrame.size.width, renderFrame.size.height);

videoComposition.animationTool = [AVVideoCompositionCoreAnimationTool
                         videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];



AVMutableVideoCompositionInstruction * instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, mainComposition.duration);

AVMutableVideoCompositionLayerInstruction * layerInstruction = [AVMutableVideoCompositionLayerInstruction
                                                                videoCompositionLayerInstructionWithAssetTrack:videoTrack];

instruction.layerInstructions = @[layerInstruction];
videoComposition.instructions = @[instruction];

NSString* videoName = @"myNewGifVideo.mp4";

NSString *exportPath = [NSTemporaryDirectory() stringByAppendingPathComponent:videoName];
NSURL    *exportUrl = [NSURL fileURLWithPath:exportPath];
if ([[NSFileManager defaultManager] fileExistsAtPath:exportPath])
{
    [[NSFileManager defaultManager] removeItemAtPath:exportPath error:nil];
}

AVAssetExportSession * exporter = [[AVAssetExportSession alloc] initWithAsset:mainComposition presetName:AVAssetExportPresetHighestQuality];
exporter.videoComposition = videoComposition;
exporter.outputFileType = AVFileTypeMPEG4;
exporter.outputURL = exportUrl;

[exporter exportAsynchronouslyWithCompletionHandler:^
 {
     dispatch_async(dispatch_get_main_queue(), ^{
         self.finalVideo = exportUrl;
         [self.delegate shareManager:self didCreateVideo:self.finalVideo];
         if (completionBlock){
             completionBlock();
         }
     });
 }];

What I've tried:

I tried adjusting the videoLayer's frame, bounds, and contentsGravity, none of which helped.
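For reference, a minimal Swift sketch of the kind of layer-level adjustments that were tried (names refer to the code above; purely illustrative, not the exact code used):

// None of these layer properties changed how the exported video was fitted.
videoLayer.frame = displayedFrame
videoLayer.bounds = CGRect(origin: .zero, size: displayedFrame.size)
videoLayer.contentsGravity = .resizeAspect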

I tried adding a transform to the AVMutableVideoCompositionLayerInstruction to scale the video to the size of the displayedFrame (many different videos can be chosen, and their widths and heights vary; each video shows up differently in the resulting video, none of them correctly). Transforming would sometimes get one dimension right (usually the width) but mess up the other one, and it would never get a dimension consistently right if I cropped/scaled the video in a slightly different way. A sketch of this attempt follows.
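A Swift sketch of the kind of scaling transform that was attempted (displayedFrame, videoTrack, and layerInstruction refer to the code above; the exact math varied between attempts):

// Scale the track's naturalSize to the cropped displayedFrame.
// This only ever got one dimension right, as described above.
let attemptedScale = CGAffineTransform(scaleX: displayedFrame.size.width / videoTrack.naturalSize.width,
                                       y: displayedFrame.size.height / videoTrack.naturalSize.height)
layerInstruction.setTransform(attemptedScale, at: .zero)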

I've tried changing the renderSize of the videoComposition, but that ruins the square crop.

I can't seem to get it right. How can I get the video to perfectly fill the videoLayer at its displayedFrame? (Final note: the naturalSize of the video differs from displayedFrame, which is why I tried transforming it.)

Answer

When the video is rendered in your videoLayer, it has an implicit transform t applied (we don't have access to it; it's some initial transform that the render tool applies to the video internally). To make the video exactly fill that layer on export, we have to understand where that initial transform comes from. t shows a strange behavior: it depends on the renderSize of your video composition (in your example, a square). You can see that if you set the renderSize to anything else, the scale and aspect ratio of the video rendered in the videoLayer change too, even if you didn't change the videoLayer's frame at all. I don't see how this behavior makes sense (the frame of the composition and the frame of the video layer that's part of the composition should be completely independent), so I think it's a bug in AVVideoCompositionCoreAnimationTool.

To correct the ominous behavior of t, apply the following transform to your videoLayerInstruction:

let bugFixTransform = CGAffineTransform(scaleX: renderSize.width/videoTrack.naturalSize.width,
                                        y: renderSize.height/videoTrack.naturalSize.height)
videoLayerInstruction.setTransform(bugFixTransform, at: .zero)

The video will then exactly fill the videoLayer.
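As a purely illustrative example (the numbers are assumptions, not from the question): for a track with a naturalSize of 1920×1080 and a square renderSize of 600×600, bugFixTransform scales x by 600/1920 = 0.3125 and y by 600/1080 ≈ 0.556, compensating for the renderSize-dependent scaling the animation tool applies internally.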

If the video doesn't have the standard orientation, two more transforms have to be applied to fix the orientation and the scale:

let orientationAspectTransform: CGAffineTransform
let sourceVideoIsRotated: Bool = videoTrack.preferredTransform.a == 0
if sourceVideoIsRotated {
  orientationAspectTransform = CGAffineTransform(scaleX: videoTrack.naturalSize.width/videoTrack.naturalSize.height,
                                                 y: videoTrack.naturalSize.height/videoTrack.naturalSize.width)
} else {
  orientationAspectTransform = .identity
}

// compositionSize is the video composition's renderSize (the square above).
let bugFixTransform = CGAffineTransform(scaleX: compositionSize.width/videoTrack.naturalSize.width,
                                        y: compositionSize.height/videoTrack.naturalSize.height)
let transform =
  videoTrack.preferredTransform
    .concatenating(bugFixTransform)
    .concatenating(orientationAspectTransform)
videoLayerInstruction.setTransform(transform, at: .zero)
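For completeness, a minimal sketch of how the resulting transform plugs back into the composition setup from the question (Swift; videoTrack, mainComposition, and videoComposition are assumed to be built as in the Objective-C code above):

import AVFoundation
import CoreMedia

// Apply the corrected transform through the layer instruction, mirroring the
// instruction setup in the question's code.
let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
layerInstruction.setTransform(transform, at: .zero)

let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRange(start: .zero, duration: mainComposition.duration)
instruction.layerInstructions = [layerInstruction]
videoComposition.instructions = [instruction]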

