Fixing orientation when stitching (merging) videos using AVMutableComposition


Problem description



TL;DR - see the EDIT below.

I am creating a test app in Swift where I want to stitch multiple videos together from my app's documents directory using AVMutableComposition.

I have had some success doing this: all my videos are stitched together, and everything shows at the correct size in both portrait and landscape.

My issue, however, is that all the videos are displayed in the orientation of the last video in the compilation.

I know that to fix this I need to add layer instructions for each track I add. However, I can't seem to get this right: with the answers I have found, the entire compilation comes out in portrait orientation, with landscape videos simply scaled to fit the portrait view. So when I turn my phone on its side to view the landscape videos, they are still small, because they have been scaled to a portrait size.

This is not the outcome I am looking for. I want the expected functionality: if a video is landscape, it shows scaled down while in portrait mode, but if the phone is rotated, that landscape video should fill the screen (as it does when simply viewing a landscape video in Photos). The same goes for portrait: when viewing in portrait it is full screen, and when the phone is turned sideways the video is scaled to landscape size (as when viewing a portrait video in Photos).

In summary, the desired outcome is that when viewing a compilation containing both landscape and portrait videos, I can hold my phone on its side and see the landscape videos full screen with the portrait ones scaled down; or, when viewing the same compilation in portrait, the portrait videos are full screen and the landscape videos are scaled to size.

With all the answers I found, this was not the case: they all seemed to show very unexpected behaviour when importing a video from Photos to add to the compilation, and the same random behaviour when adding videos shot with the front-facing camera. (To be clear, with my current implementation, videos imported from the library and "selfie" videos appear at the correct size without these problems.)

I'm looking for a way to rotate/scale these videos so that they always show in the correct orientation and scale, depending on which way round the user is holding their phone.

EDIT: I now know that I can't have both landscape and portrait orientations in one single video, so the expected outcome I'm looking for is to have the final video in landscape orientation. I have figured out how to switch all the orientations and scales so that everything ends up the same way up, but my output is a portrait video. If anyone could help me change this so that my output is landscape, it would be appreciated.
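For reference, one possible direction is to render into a fixed landscape canvas instead of the portrait screen bounds. The sketch below is not a tested drop-in fix: the 1920x1080 canvas and the function name are my own choices, and it assumes the question's `orientationFromTransform` helper is in scope. It uses the same Swift 2 era names as the rest of the post.

```swift
import AVFoundation
import CoreGraphics

// Assumption: a fixed 16:9 landscape canvas (not taken from the question).
let landscapeRenderSize = CGSize(width: 1920, height: 1080)

// Sketch: build a per-clip transform that aspect-fits portrait clips into
// the landscape canvas and fills it with landscape clips.
func landscapeTransformForTrack(asset: AVAsset) -> CGAffineTransform {
    let assetTrack = asset.tracksWithMediaType(AVMediaTypeVideo)[0]
    var transform = assetTrack.preferredTransform

    if orientationFromTransform(transform).isPortrait {
        // Once preferredTransform's rotation is applied, naturalSize.width
        // is the clip's on-screen height, so fit that to the canvas height...
        let scale = landscapeRenderSize.height / assetTrack.naturalSize.width
        transform = CGAffineTransformConcat(transform,
            CGAffineTransformMakeScale(scale, scale))
        // ...then centre the (now narrow) clip horizontally.
        let scaledWidth = assetTrack.naturalSize.height * scale
        transform = CGAffineTransformConcat(transform,
            CGAffineTransformMakeTranslation((landscapeRenderSize.width - scaledWidth) / 2, 0))
    } else {
        // Landscape clips simply scale to fill the canvas width.
        let scale = landscapeRenderSize.width / assetTrack.naturalSize.width
        transform = CGAffineTransformConcat(transform,
            CGAffineTransformMakeScale(scale, scale))
    }
    return transform
}
```

The video composition's `renderSize` would then be set to `landscapeRenderSize` rather than the screen bounds.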

Below is my function to get the instruction for each video:

func videoTransformForTrack(asset: AVAsset) -> CGAffineTransform
{
    var return_value:CGAffineTransform?

    let assetTrack = asset.tracksWithMediaType(AVMediaTypeVideo)[0]

    let transform = assetTrack.preferredTransform
    let assetInfo = orientationFromTransform(transform)

    var scaleToFitRatio = UIScreen.mainScreen().bounds.width / assetTrack.naturalSize.width
    if assetInfo.isPortrait
    {
        scaleToFitRatio = UIScreen.mainScreen().bounds.width / assetTrack.naturalSize.height
        let scaleFactor = CGAffineTransformMakeScale(scaleToFitRatio, scaleToFitRatio)
        return_value = CGAffineTransformConcat(assetTrack.preferredTransform, scaleFactor)
    }
    else
    {
        let scaleFactor = CGAffineTransformMakeScale(scaleToFitRatio, scaleToFitRatio)
        var concat = CGAffineTransformConcat(CGAffineTransformConcat(assetTrack.preferredTransform, scaleFactor), CGAffineTransformMakeTranslation(0, UIScreen.mainScreen().bounds.width / 2))
        if assetInfo.orientation == .Down
        {
            let fixUpsideDown = CGAffineTransformMakeRotation(CGFloat(M_PI))
            let windowBounds = UIScreen.mainScreen().bounds
            let yFix = assetTrack.naturalSize.height + windowBounds.height
            let centerFix = CGAffineTransformMakeTranslation(assetTrack.naturalSize.width, yFix)
            concat = CGAffineTransformConcat(CGAffineTransformConcat(fixUpsideDown, centerFix), scaleFactor)
        }
        return_value = concat
    }
    return return_value!
}
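The aspect-fit arithmetic behind `scaleToFitRatio` can be sanity-checked without any AVFoundation. A minimal sketch (the 1920x1080 clip and 640x480 canvas are example values, not taken from the question):

```swift
// Pure-math check of the aspect-fit logic above: fit a 1920x1080 clip into
// a 640x480 canvas by width, then compute the offset that centres it vertically.
let naturalWidth = 1920.0, naturalHeight = 1080.0
let renderWidth = 640.0, renderHeight = 480.0

let scaleToFitRatio = renderWidth / naturalWidth          // 1/3
let scaledHeight = naturalHeight * scaleToFitRatio        // 360
let verticalOffset = (renderHeight - scaledHeight) / 2    // 60

print(scaleToFitRatio, scaledHeight, verticalOffset)
```

The same ratio, applied uniformly to width and height, is what `CGAffineTransformMakeScale(scaleToFitRatio, scaleToFitRatio)` encodes; the offset is the kind of value fed to `CGAffineTransformMakeTranslation`.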

And the exporter:

    // Create an AVMutableComposition to contain all the composition tracks
    let mix_composition = AVMutableComposition()
    var total_time = kCMTimeZero

    // Loop over videos and create tracks, keep incrementing total duration
    let video_track = mix_composition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID())

    var instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: video_track)
    for video in videos
    {
        let shortened_duration = CMTimeSubtract(video.duration, CMTimeMake(1,10));
        let videoAssetTrack = video.tracksWithMediaType(AVMediaTypeVideo)[0]

        do
        {
            try video_track.insertTimeRange(CMTimeRangeMake(kCMTimeZero, shortened_duration),
                ofTrack: videoAssetTrack ,
                atTime: total_time)

            video_track.preferredTransform = videoAssetTrack.preferredTransform

        }
        catch _
        {
        }

        instruction.setTransform(videoTransformForTrack(video), atTime: total_time)

        // Add video duration to total time
        total_time = CMTimeAdd(total_time, shortened_duration)
    }

    // Create the main instruction for the video composition
    let main_instruction = AVMutableVideoCompositionInstruction()
    main_instruction.timeRange = CMTimeRangeMake(kCMTimeZero, total_time)
    main_instruction.layerInstructions = [instruction]
    // Note: main_composition is never declared in this snippet; it needs to be
    // an AVMutableVideoComposition, e.g.
    // let main_composition = AVMutableVideoComposition()
    main_composition.instructions = [main_instruction]
    main_composition.frameDuration = CMTimeMake(1, 30)
    main_composition.renderSize = CGSize(width: UIScreen.mainScreen().bounds.width, height: UIScreen.mainScreen().bounds.height)

    let exporter = AVAssetExportSession(asset: mix_composition, presetName: AVAssetExportPreset640x480)
    exporter!.outputURL = final_url
    exporter!.outputFileType = AVFileTypeMPEG4
    exporter!.shouldOptimizeForNetworkUse = true
    exporter!.videoComposition = main_composition

    // 6 - Perform the Export
    exporter!.exportAsynchronouslyWithCompletionHandler()
    {
        // Assign return values based on success of export
        dispatch_async(dispatch_get_main_queue(), { () -> Void in
                self.exportDidFinish(exporter!)
        })
    }

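As an aside, the completion handler above calls `exportDidFinish` without checking whether the export actually succeeded, which makes failures (including transform mistakes) fail silently. A hedged sketch of a status check, using the same Swift 2 era API names as the code above (`exporter` and `exportDidFinish` are the question's own identifiers):

```swift
// Sketch: inspect the export status before treating the export as done.
exporter!.exportAsynchronouslyWithCompletionHandler() {
    dispatch_async(dispatch_get_main_queue(), { () -> Void in
        switch exporter!.status {
        case .Completed:
            self.exportDidFinish(exporter!)
        case .Failed, .Cancelled:
            // Surface the underlying AVFoundation error instead of ignoring it
            print("Export failed: \(exporter!.error)")
        default:
            break
        }
    })
}
```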
Sorry for the long explanation; I just wanted to make sure I was very clear about what I am asking, because other answers have not worked for me.

Solution

I'm not sure your orientationFromTransform() gives you the correct orientation.

I suggest you either modify it or try something like the following:

extension AVAsset {

    func videoOrientation() -> (orientation: UIInterfaceOrientation, device: AVCaptureDevicePosition) {
        var orientation: UIInterfaceOrientation = .Unknown
        var device: AVCaptureDevicePosition = .Unspecified

        let tracks :[AVAssetTrack] = self.tracksWithMediaType(AVMediaTypeVideo)
        if let videoTrack = tracks.first {

            let t = videoTrack.preferredTransform

            if (t.a == 0 && t.b == 1.0 && t.d == 0) {
                orientation = .Portrait

                if t.c == 1.0 {
                    device = .Front
                } else if t.c == -1.0 {
                    device = .Back
                }
            }
            else if (t.a == 0 && t.b == -1.0 && t.d == 0) {
                orientation = .PortraitUpsideDown

                if t.c == -1.0 {
                    device = .Front
                } else if t.c == 1.0 {
                    device = .Back
                }
            }
            else if (t.a == 1.0 && t.b == 0 && t.c == 0) {
                orientation = .LandscapeRight

                if t.d == -1.0 {
                    device = .Front
                } else if t.d == 1.0 {
                    device = .Back
                }
            }
            else if (t.a == -1.0 && t.b == 0 && t.c == 0) {
                orientation = .LandscapeLeft

                if t.d == 1.0 {
                    device = .Front
                } else if t.d == -1.0 {
                    device = .Back
                }
            }
        }

        return (orientation, device)
    }
}
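The matrix checks in the extension above are plain component comparisons on the 2x2 part of the `preferredTransform`, so they can be exercised without AVFoundation. A pure-math sketch mirroring only the orientation branch (the `classify` helper and the sample matrices are illustrative, not part of the answer):

```swift
// Sketch: classify a preferredTransform by its a/b/c/d components,
// mirroring the orientation checks in videoOrientation() above.
func classify(_ a: Double, _ b: Double, _ c: Double, _ d: Double) -> String {
    if a == 0 && b == 1.0 && d == 0 { return "Portrait" }
    if a == 0 && b == -1.0 && d == 0 { return "PortraitUpsideDown" }
    if a == 1.0 && b == 0 && c == 0 { return "LandscapeRight" }
    if a == -1.0 && b == 0 && c == 0 { return "LandscapeLeft" }
    return "Unknown"
}

// A back-camera portrait recording typically has (a, b, c, d) = (0, 1, -1, 0);
// the identity transform (1, 0, 0, 1) is LandscapeRight.
print(classify(0, 1, -1, 0))   // Portrait
print(classify(1, 0, 0, 1))    // LandscapeRight
```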
