build video from UIImage using Swift
Question

I am building an IP camera app that can get the view in real time, and now I want to record the video, which is in MJPEG format, using Swift.
let imageData = receivedData, imageData.length > 0,
let receivedImage = UIImage(data: imageData as Data)
Here I have every image that is received, saved as a UIImage. The problem is how I can record the image stream. I found a useful resource on GitHub, but I failed to get it working; the link is below: https://gist.github.com/acj/6ae90aa1ebb8cad6b47b
Can anyone give me a hint, or do you have a sample project? I would really appreciate that, thanks!
Update: I used the code from Amrit Tiwari's answer, but get this error:

Created asset writer for 640.0x640.0 video
Error converting images to video: pixelBufferPool nil after starting session
if let imageData = receivedData, imageData.length > 0,
    let receivedImage = UIImage(data: imageData as Data) {
    let size = CGSize(width: 640, height: 640)
    writeImagesAsMovie([receivedImage], videoPath: "test.mp4", videoSize: size, videoFPS: 2)
}
I am not sure whether the path argument is correct (I want to save the video in the Documents directory). Please help me, thanks!
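Regarding the path question: a bare relative string like "test.mp4" is not a writable location in an iOS sandbox, which is one common cause of the pixelBufferPool error above. A minimal sketch of building a path inside the Documents directory (the file name "output.mp4" is illustrative, not from the original question):

import Foundation

// Build an absolute path inside the app's Documents directory.
let documentsURL = FileManager.default.urls(for: .documentDirectory,
                                            in: .userDomainMask)[0]
let videoURL = documentsURL.appendingPathComponent("output.mp4")

// AVAssetWriter fails to start if the target file already exists,
// so remove any previous recording first.
try? FileManager.default.removeItem(at: videoURL)

let videoPath = videoURL.path  // pass this string as the videoPath argument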
Answer
Updated for Swift 4:
import AVFoundation
import Photos
import UIKit

// MARK: - Write Images as Movie -
func writeImagesAsMovie(allImages: [UIImage], videoPath: String, videoSize: CGSize, videoFPS: Int32) {
    // Create AVAssetWriter to write video
    guard let assetWriter = createAssetWriter(path: videoPath, size: videoSize) else {
        print("Error converting images to video: AVAssetWriter not created")
        return
    }

    // If here, AVAssetWriter exists so create AVAssetWriterInputPixelBufferAdaptor
    let writerInput = assetWriter.inputs.filter { $0.mediaType == AVMediaType.video }.first!
    let sourceBufferAttributes: [String: AnyObject] = [
        kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32ARGB) as AnyObject,
        kCVPixelBufferWidthKey as String: videoSize.width as AnyObject,
        kCVPixelBufferHeightKey as String: videoSize.height as AnyObject
    ]
    let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput, sourcePixelBufferAttributes: sourceBufferAttributes)

    // Start writing session
    assetWriter.startWriting()
    assetWriter.startSession(atSourceTime: kCMTimeZero)
    if pixelBufferAdaptor.pixelBufferPool == nil {
        print("Error converting images to video: pixelBufferPool nil after starting session")
        return
    }

    // Create queue for <requestMediaDataWhenReadyOnQueue>
    let mediaQueue = DispatchQueue(label: "mediaInputQueue")

    // Set video parameters
    let frameDuration = CMTimeMake(1, videoFPS)
    var frameCount = 0

    // Add images to video
    let numImages = allImages.count
    writerInput.requestMediaDataWhenReady(on: mediaQueue) {
        // Append unadded images to video but only while input ready
        while writerInput.isReadyForMoreMediaData && frameCount < numImages {
            let lastFrameTime = CMTimeMake(Int64(frameCount), videoFPS)
            let presentationTime = frameCount == 0 ? lastFrameTime : CMTimeAdd(lastFrameTime, frameDuration)
            if !self.appendPixelBufferForImageAtURL(image: allImages[frameCount], pixelBufferAdaptor: pixelBufferAdaptor, presentationTime: presentationTime) {
                print("Error converting images to video: AVAssetWriterInputPixelBufferAdaptor failed to append pixel buffer")
                return
            }
            frameCount += 1
        }

        // No more images to add? End video.
        if frameCount >= numImages {
            writerInput.markAsFinished()
            assetWriter.finishWriting {
                if assetWriter.error != nil {
                    print("Error converting images to video: \(assetWriter.error?.localizedDescription ?? "")")
                } else {
                    // videoPath is a file-system path, so build a file URL, not URL(string:)
                    self.saveVideoToLibrary(videoURL: URL(fileURLWithPath: videoPath))
                    print("Converted images to movie @ \(videoPath)")
                }
            }
        }
    }
}
// MARK: - Create Asset Writer -
func createAssetWriter(path: String, size: CGSize) -> AVAssetWriter? {
    // Convert <path> to a file URL
    let pathURL = URL(fileURLWithPath: path)

    // Return new asset writer or nil
    do {
        // Create asset writer
        let newWriter = try AVAssetWriter(outputURL: pathURL, fileType: AVFileType.mp4)

        // Define settings for video input
        let videoSettings: [String: AnyObject] = [
            AVVideoCodecKey: AVVideoCodecType.h264 as AnyObject,
            AVVideoWidthKey: size.width as AnyObject,
            AVVideoHeightKey: size.height as AnyObject
        ]

        // Add video input to writer
        let assetWriterVideoInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: videoSettings)
        newWriter.add(assetWriterVideoInput)

        // Return writer
        print("Created asset writer for \(size.width)x\(size.height) video")
        return newWriter
    } catch {
        print("Error creating asset writer: \(error)")
        return nil
    }
}
// MARK: - Append Pixel Buffer -
func appendPixelBufferForImageAtURL(image: UIImage, pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor, presentationTime: CMTime) -> Bool {
    var appendSucceeded = false
    autoreleasepool {
        if let pixelBufferPool = pixelBufferAdaptor.pixelBufferPool {
            let pixelBufferPointer = UnsafeMutablePointer<CVPixelBuffer?>.allocate(capacity: 1)
            let status: CVReturn = CVPixelBufferPoolCreatePixelBuffer(
                kCFAllocatorDefault,
                pixelBufferPool,
                pixelBufferPointer
            )
            if let pixelBuffer = pixelBufferPointer.pointee, status == 0 {
                fillPixelBufferFromImage(image: image, pixelBuffer: pixelBuffer)
                appendSucceeded = pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime)
                pixelBufferPointer.deinitialize(count: 1)
            } else {
                NSLog("Error: Failed to allocate pixel buffer from pool")
            }
            pixelBufferPointer.deallocate()
        }
    }
    return appendSucceeded
}
// MARK: - Fill Pixel Buffer -
func fillPixelBufferFromImage(image: UIImage, pixelBuffer: CVPixelBuffer) {
    CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
    let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer)
    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()

    // Create CGBitmapContext
    let context = CGContext(
        data: pixelData,
        width: Int(image.size.width),
        height: Int(image.size.height),
        bitsPerComponent: 8,
        bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
        space: rgbColorSpace,
        bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
    )

    // Draw image into context
    context?.draw(image.cgImage!, in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
    CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
}
// MARK: - Save Video -
func saveVideoToLibrary(videoURL: URL) {
    PHPhotoLibrary.requestAuthorization { status in
        // Return if unauthorized
        guard status == .authorized else {
            print("Error saving video: unauthorized access")
            return
        }

        // If here, save video to library
        PHPhotoLibrary.shared().performChanges({
            PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: videoURL)
        }, completionHandler: { success, error in
            if !success {
                print("Error saving video: \(error?.localizedDescription ?? "")")
            }
        })
    }
}
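A hedged usage sketch tying this back to the question: rather than calling writeImagesAsMovie once per received MJPEG frame (which writes a one-frame file each time), buffer the frames and convert them in one batch. The Documents-directory path, the `recordedFrames` buffer, and the file name "recording.mp4" are illustrative assumptions, not part of the original answer:

// Illustrative sketch: collect incoming frames, then convert once.
var recordedFrames: [UIImage] = []  // append each decoded MJPEG frame here

func finishRecording() {
    // A relative path like "test.mp4" is not writable on iOS; use Documents instead.
    let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let outputURL = documentsURL.appendingPathComponent("recording.mp4")
    try? FileManager.default.removeItem(at: outputURL)  // AVAssetWriter refuses an existing file

    let size = CGSize(width: 640, height: 640)
    writeImagesAsMovie(allImages: recordedFrames, videoPath: outputURL.path, videoSize: size, videoFPS: 2)
}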