Swift Merge audio and video files into one video
Question
I wrote a program in Swift. I want to merge a video with an audio file, but I got this error:
"失败错误域=AVFoundationErrorDomain Code=-11838 "操作停止" UserInfo=0x17da4230 {NSLocalizedDescription=操作停止,NSLocalizedFailureReason=此媒体不支持该操作.}"
"failed Error Domain=AVFoundationErrorDomain Code=-11838 "Operation Stopped" UserInfo=0x17da4230 {NSLocalizedDescription=Operation Stopped, NSLocalizedFailureReason=The operation is not supported for this media.}"
Code
func mergeAudio(audioURL: NSURL, moviePathUrl: NSURL, savePathUrl: NSURL) {
    var composition = AVMutableComposition()
    let trackVideo:AVMutableCompositionTrack = composition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID())
    let trackAudio:AVMutableCompositionTrack = composition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID())
    let option = NSDictionary(object: true, forKey: "AVURLAssetPreferPreciseDurationAndTimingKey")
    let sourceAsset = AVURLAsset(URL: moviePathUrl, options: option as [NSObject : AnyObject])
    let audioAsset = AVURLAsset(URL: audioURL, options: option as [NSObject : AnyObject])

    let tracks = sourceAsset.tracksWithMediaType(AVMediaTypeVideo)
    let audios = audioAsset.tracksWithMediaType(AVMediaTypeAudio)

    if tracks.count > 0 {
        let assetTrack:AVAssetTrack = tracks[0] as! AVAssetTrack
        let assetTrackAudio:AVAssetTrack = audios[0] as! AVAssetTrack
        let audioDuration:CMTime = assetTrackAudio.timeRange.duration
        let audioSeconds:Float64 = CMTimeGetSeconds(assetTrackAudio.timeRange.duration)

        trackVideo.insertTimeRange(CMTimeRangeMake(kCMTimeZero, audioDuration), ofTrack: assetTrack, atTime: kCMTimeZero, error: nil)
        trackAudio.insertTimeRange(CMTimeRangeMake(kCMTimeZero, audioDuration), ofTrack: assetTrackAudio, atTime: kCMTimeZero, error: nil)
    }

    var assetExport: AVAssetExportSession = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetPassthrough)
    assetExport.outputFileType = AVFileTypeMPEG4
    assetExport.outputURL = savePathUrl
    self.tmpMovieURL = savePathUrl
    assetExport.shouldOptimizeForNetworkUse = true
    assetExport.exportAsynchronouslyWithCompletionHandler { () -> Void in
        switch assetExport.status {
        case AVAssetExportSessionStatus.Completed:
            let assetsLib = ALAssetsLibrary()
            assetsLib.writeVideoAtPathToSavedPhotosAlbum(savePathUrl, completionBlock: nil)
            println("success")
        case AVAssetExportSessionStatus.Failed:
            println("failed \(assetExport.error)")
        case AVAssetExportSessionStatus.Cancelled:
            println("cancelled \(assetExport.error)")
        default:
            println("complete")
        }
    }
}
My guess is that the media type (MPEG-4) is wrong. Where is the problem? What am I missing?
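
One way to narrow this down (a diagnostic sketch added here for context, not part of the original question; it reuses composition and AVFileTypeMPEG4 from the code above, in the same Swift 1.x style): error -11838 from AVAssetExportSession commonly means the chosen preset cannot write this composition into the requested container, so you can ask AVFoundation up front whether the passthrough preset supports MP4 output at all.

// Hedged diagnostic sketch: check which containers the passthrough preset
// can actually write for this composition before exporting.
let checkSession = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetPassthrough)
println(checkSession.supportedFileTypes) // if AVFileTypeMPEG4 is absent, try AVFileTypeQuickTimeMovie or a non-passthrough preset
AVAssetExportSession.determineCompatibilityOfExportPreset(AVAssetExportPresetPassthrough,
    withAsset: composition,
    outputFileType: AVFileTypeMPEG4) { isCompatible in
        println("passthrough -> MP4 compatible: \(isCompatible)")
}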
Answer
Improved code (building on Govind's answer) with some additional features:

- Merge the audio of the video + external audio (the initial answer was dropping the sound of the video)
- Flip the video horizontally if needed (I personally use it when the user captures with the front camera; by the way, Instagram flips it too)
- Apply preferredTransform correctly, which solves the issue of a video being saved rotated (when the video is external: captured by another device / generated by another app)
- Removed some unused VideoComposition code
- Added a completion handler to the method so that it can be called from a different class
- Updated to Swift 4
Step 1.
import UIKit
import AVFoundation
import AVKit
import AssetsLibrary
Step 2.
/// Merges video and sound while keeping the sound of the video too
///
/// - Parameters:
///   - videoUrl: URL to video file
///   - audioUrl: URL to audio file
///   - shouldFlipHorizontally: pass true if the video was recorded with the front camera, otherwise pass false
///   - completion: completion of saving: error or url with final video
func mergeVideoAndAudio(videoUrl: URL,
                        audioUrl: URL,
                        shouldFlipHorizontally: Bool = false,
                        completion: @escaping (_ error: Error?, _ url: URL?) -> Void) {

    let mixComposition = AVMutableComposition()
    var mutableCompositionVideoTrack = [AVMutableCompositionTrack]()
    var mutableCompositionAudioTrack = [AVMutableCompositionTrack]()
    var mutableCompositionAudioOfVideoTrack = [AVMutableCompositionTrack]()

    // start merge
    let aVideoAsset = AVAsset(url: videoUrl)
    let aAudioAsset = AVAsset(url: audioUrl)

    let compositionAddVideo = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo,
                                                             preferredTrackID: kCMPersistentTrackID_Invalid)
    let compositionAddAudio = mixComposition.addMutableTrack(withMediaType: AVMediaTypeAudio,
                                                             preferredTrackID: kCMPersistentTrackID_Invalid)
    let compositionAddAudioOfVideo = mixComposition.addMutableTrack(withMediaType: AVMediaTypeAudio,
                                                                    preferredTrackID: kCMPersistentTrackID_Invalid)

    let aVideoAssetTrack: AVAssetTrack = aVideoAsset.tracks(withMediaType: AVMediaTypeVideo)[0]
    let aAudioOfVideoAssetTrack: AVAssetTrack? = aVideoAsset.tracks(withMediaType: AVMediaTypeAudio).first
    let aAudioAssetTrack: AVAssetTrack = aAudioAsset.tracks(withMediaType: AVMediaTypeAudio)[0]

    // Default must-have transformation
    compositionAddVideo.preferredTransform = aVideoAssetTrack.preferredTransform

    if shouldFlipHorizontally {
        // Flip video horizontally
        var frontalTransform: CGAffineTransform = CGAffineTransform(scaleX: -1.0, y: 1.0)
        frontalTransform = frontalTransform.translatedBy(x: -aVideoAssetTrack.naturalSize.width, y: 0.0)
        frontalTransform = frontalTransform.translatedBy(x: 0.0, y: -aVideoAssetTrack.naturalSize.width)
        compositionAddVideo.preferredTransform = frontalTransform
    }

    mutableCompositionVideoTrack.append(compositionAddVideo)
    mutableCompositionAudioTrack.append(compositionAddAudio)
    mutableCompositionAudioOfVideoTrack.append(compositionAddAudioOfVideo)

    do {
        try mutableCompositionVideoTrack[0].insertTimeRange(CMTimeRangeMake(kCMTimeZero,
                                                                            aVideoAssetTrack.timeRange.duration),
                                                            of: aVideoAssetTrack,
                                                            at: kCMTimeZero)

        // In my case my audio file is longer than the video file, so I took the videoAsset duration
        // instead of the audioAsset duration
        try mutableCompositionAudioTrack[0].insertTimeRange(CMTimeRangeMake(kCMTimeZero,
                                                                            aVideoAssetTrack.timeRange.duration),
                                                            of: aAudioAssetTrack,
                                                            at: kCMTimeZero)

        // adding the audio (of the video, if it exists) asset to the final composition
        if let aAudioOfVideoAssetTrack = aAudioOfVideoAssetTrack {
            try mutableCompositionAudioOfVideoTrack[0].insertTimeRange(CMTimeRangeMake(kCMTimeZero,
                                                                                       aVideoAssetTrack.timeRange.duration),
                                                                       of: aAudioOfVideoAssetTrack,
                                                                       at: kCMTimeZero)
        }
    } catch {
        print(error.localizedDescription)
    }

    // Exporting
    let savePathUrl: URL = URL(fileURLWithPath: NSHomeDirectory() + "/Documents/newVideo.mp4")
    do { // delete old video
        try FileManager.default.removeItem(at: savePathUrl)
    } catch { print(error.localizedDescription) }

    let assetExport: AVAssetExportSession = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)!
    assetExport.outputFileType = AVFileTypeMPEG4
    assetExport.outputURL = savePathUrl
    assetExport.shouldOptimizeForNetworkUse = true

    assetExport.exportAsynchronously { () -> Void in
        switch assetExport.status {
        case AVAssetExportSessionStatus.completed:
            print("success")
            completion(nil, savePathUrl)
        case AVAssetExportSessionStatus.failed:
            print("failed \(assetExport.error?.localizedDescription ?? "error nil")")
            completion(assetExport.error, nil)
        case AVAssetExportSessionStatus.cancelled:
            print("cancelled \(assetExport.error?.localizedDescription ?? "error nil")")
            completion(assetExport.error, nil)
        default:
            print("complete")
            completion(assetExport.error, nil)
        }
    }
}
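
For completeness, a minimal call-site sketch (my addition, not from the original answer; the two file URLs are placeholder paths, substitute your own):

// Hypothetical usage of the method above.
let myVideoUrl = URL(fileURLWithPath: NSHomeDirectory() + "/Documents/capture.mp4")  // placeholder
let myAudioUrl = URL(fileURLWithPath: NSHomeDirectory() + "/Documents/music.m4a")    // placeholder

mergeVideoAndAudio(videoUrl: myVideoUrl,
                   audioUrl: myAudioUrl,
                   shouldFlipHorizontally: true) { error, url in
    if let error = error {
        print("merge failed: \(error.localizedDescription)")
    } else if let url = url {
        print("merged video saved at \(url)")
        // e.g. play it back with AVPlayerViewController from AVKit
    }
}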
Thanks again to @Govind for the original answer! It helped me a lot!
Hope this update helps someone too :)