Swift: video records at one size but renders at wrong size

Problem description

The goal is to capture full-screen video on a device with Swift. In the code below, video capture appears to happen at full screen (while recording, the camera preview fills the screen), but the video is rendered at a different resolution. On a 5S specifically, capture appears to happen at 320x568 while rendering occurs at 320x480.

How can you capture and render full screen video?

Video capture code:

private func initPBJVision() {
    // Store PBJVision in var for convenience
    let vision = PBJVision.sharedInstance()

    // Configure PBJVision
    vision.delegate = self
    vision.cameraMode = PBJCameraMode.Video
    vision.cameraOrientation = PBJCameraOrientation.Portrait
    vision.focusMode = PBJFocusMode.ContinuousAutoFocus
    vision.outputFormat = PBJOutputFormat.Preset
    vision.cameraDevice = PBJCameraDevice.Back

    // Let taps start/pause recording
    let tapHandler = UITapGestureRecognizer(target: self, action: "doTap:")
    view.addGestureRecognizer(tapHandler)

    // Log status
    print("Configured PBJVision")
}


private func startCameraPreview() {
    // Store PBJVision in var for convenience
    let vision = PBJVision.sharedInstance()

    // Connect PBJVision camera preview to <videoView>
    // -- Get preview dimensions
    let deviceWidth = CGRectGetWidth(view.frame)
    let deviceHeight = CGRectGetHeight(view.frame)

    // -- Configure PBJVision's preview layer
    let previewLayer = vision.previewLayer
    previewLayer.frame = CGRectMake(0, 0, deviceWidth, deviceHeight)
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
    ...
}

Video rendering code:

func exportVideo(fileUrl: NSURL) {
    // Create main composition object
    let videoAsset = AVURLAsset(URL: fileUrl, options: nil)
    let mainComposition = AVMutableComposition()
    let compositionVideoTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))
    let compositionAudioTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))

    // -- Extract and apply video & audio tracks to composition
    let sourceVideoTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0]
    let sourceAudioTrack = videoAsset.tracksWithMediaType(AVMediaTypeAudio)[0]
    do {
        try compositionVideoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), ofTrack: sourceVideoTrack, atTime: kCMTimeZero)
    } catch {
        print("Error with insertTimeRange. Video error: (error).")
    }
    do {
        try compositionAudioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), ofTrack: sourceAudioTrack, atTime: kCMTimeZero)
    } catch {
        print("Error with insertTimeRange. Audio error: (error).")
    }

    // Add text to video
    // -- Create video composition object
    let renderSize = compositionVideoTrack.naturalSize
    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = renderSize
    videoComposition.frameDuration = CMTimeMake(Int64(1), Int32(videoFrameRate))

    // -- Add instruction to video composition object
    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
    let videoLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: compositionVideoTrack)
    instruction.layerInstructions = [videoLayerInstruction]
    videoComposition.instructions = [instruction]

    // -- Define video frame
    let videoFrame = CGRectMake(0, 0, renderSize.width, renderSize.height)
    print("Video Frame: (videoFrame)")  // <-- Prints frame of 320x480 so render size already wrong here 
    ...

Answer

If I understand you correctly, it seems you have misunderstood one fact: the device's screen size is not equal to the camera preview (and capture) size.

The videoGravity property of your previewLayer only indicates how to stretch/fit the preview inside the layer. It doesn't affect the capture output.
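
As an illustration, here is a minimal sketch of the three standard AVFoundation gravity constants (previewLayer is assumed to come from PBJVision, as in the question); each mode changes only how the preview is drawn inside the layer:

let previewLayer = PBJVision.sharedInstance().previewLayer

// Fit the whole frame inside the layer (may letterbox with empty bars)
previewLayer.videoGravity = AVLayerVideoGravityResizeAspect

// Fill the layer, cropping whatever overflows (what the question uses)
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill

// Stretch to the layer's bounds, distorting the aspect ratio
previewLayer.videoGravity = AVLayerVideoGravityResize

// None of these change the frames the capture session actually records.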

The actual frame size of the output depends on the sessionPreset property of your current AVCaptureSession. As far as I can tell from the GitHub repository of the PBJVision lib, its singleton has a setter for this, called captureSessionPreset. You can change it inside your initPBJVision method.

The possible session preset values (e.g. AVCaptureSessionPresetHigh, AVCaptureSessionPreset1280x720, AVCaptureSessionPreset1920x1080) are listed in Apple's AVCaptureSession documentation.
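
For example, a minimal sketch of what that could look like inside initPBJVision (the specific preset here is just an assumption; pick whichever one fits your needs):

private func initPBJVision() {
    let vision = PBJVision.sharedInstance()

    // ... existing configuration from the question ...

    // Ask the underlying AVCaptureSession to capture at a specific size.
    // The session preset, not the preview layer, determines the size of
    // the recorded frames.
    vision.captureSessionPreset = AVCaptureSessionPreset1280x720
    // Other options include AVCaptureSessionPresetHigh and
    // AVCaptureSessionPreset1920x1080.
}

If the chosen preset's aspect ratio still differs from the screen's, you would then crop at render time, for example by giving the video composition a renderSize shaped like the screen rather than using compositionVideoTrack.naturalSize directly.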
