Swift: video records at one size but renders at wrong size

Problem description

The goal is to capture full screen video on a device with Swift. In the code below, video capture appears to happen at full screen (while recording, the camera preview uses the full screen), but the video renders at a different resolution. For a 5S specifically, it appears that capture happens at 320x568 but rendering occurs at 320x480.

How can you capture and render full screen video?

Video capture code:

private func initPBJVision() {
    // Store PBJVision in var for convenience
    let vision = PBJVision.sharedInstance()

    // Configure PBJVision
    vision.delegate = self
    vision.cameraMode = PBJCameraMode.Video
    vision.cameraOrientation = PBJCameraOrientation.Portrait
    vision.focusMode = PBJFocusMode.ContinuousAutoFocus
    vision.outputFormat = PBJOutputFormat.Preset
    vision.cameraDevice = PBJCameraDevice.Back

    // Let taps start/pause recording
    let tapHandler = UITapGestureRecognizer(target: self, action: "doTap:")
    view.addGestureRecognizer(tapHandler)

    // Log status
    print("Configured PBJVision")
}


private func startCameraPreview() {
    // Store PBJVision in var for convenience
    let vision = PBJVision.sharedInstance()

    // Connect PBJVision camera preview to <videoView>
    // -- Get preview dimensions
    let deviceWidth = CGRectGetWidth(view.frame)
    let deviceHeight = CGRectGetHeight(view.frame)

    // -- Configure PBJVision's preview layer
    let previewLayer = vision.previewLayer
    previewLayer.frame = CGRectMake(0, 0, deviceWidth, deviceHeight)
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
    ...
}

Video rendering code:

func exportVideo(fileUrl: NSURL) {
    // Create main composition object
    let videoAsset = AVURLAsset(URL: fileUrl, options: nil)
    let mainComposition = AVMutableComposition()
    let compositionVideoTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))
    let compositionAudioTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))

    // -- Extract and apply video & audio tracks to composition
    let sourceVideoTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0]
    let sourceAudioTrack = videoAsset.tracksWithMediaType(AVMediaTypeAudio)[0]
    do {
        try compositionVideoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), ofTrack: sourceVideoTrack, atTime: kCMTimeZero)
    } catch {
        print("Error with insertTimeRange. Video error: \(error).")
    }
    do {
        try compositionAudioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), ofTrack: sourceAudioTrack, atTime: kCMTimeZero)
    } catch {
        print("Error with insertTimeRange. Audio error: \(error).")
    }

    // Add text to video
    // -- Create video composition object
    let renderSize = compositionVideoTrack.naturalSize
    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = renderSize
    videoComposition.frameDuration = CMTimeMake(Int64(1), Int32(videoFrameRate))

    // -- Add instruction to video composition object
    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
    let videoLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: compositionVideoTrack)
    instruction.layerInstructions = [videoLayerInstruction]
    videoComposition.instructions = [instruction]

    // -- Define video frame
    let videoFrame = CGRectMake(0, 0, renderSize.width, renderSize.height)
    print("Video Frame: \(videoFrame)")  // <-- Prints frame of 320x480 so render size already wrong here 
    ...


Answer

If I understand you correctly, it seems you have misunderstood the fact that the device screen width isn't equal to the camera preview (and capture) size.
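
To make that concrete: a 5S screen is 320x568 points, roughly a 9:16 aspect ratio, while capture presets produce fixed sensor resolutions such as 640x480 (4:3) or 1280x720 (16:9). With AVLayerVideoGravityResizeAspectFill the preview layer scales the captured frame up until it covers the screen and crops the overflow, so the preview looks full screen even though the recorded frames keep the preset's dimensions.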

The videoGravity property of your previewLayer indicates how to stretch/fit your preview inside your layer. It doesn't affect capture output.
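
For illustration, the three gravity values only change how the preview is drawn inside the layer; the captured output is identical in all three cases:

// Keep aspect, fit inside the layer (letterbox/pillarbox)
previewLayer.videoGravity = AVLayerVideoGravityResizeAspect
// Keep aspect, fill the layer, crop the overflow (used in the question's code)
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
// Ignore aspect, stretch to the layer's bounds
previewLayer.videoGravity = AVLayerVideoGravityResize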

The actual frame size of the output depends on the sessionPreset property of your current AVCaptureSession. As far as I can tell from the PBJVision GitHub repository, its singleton has a setter for this (called captureSessionPreset). You can change it inside your initPBJVision method.
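
A minimal sketch of that change (AVCaptureSessionPreset1280x720 is just an example value; pick whichever preset matches the aspect ratio you need):

private func initPBJVision() {
    let vision = PBJVision.sharedInstance()
    // ... existing configuration ...
    // Ask the capture session for 16:9 frames instead of the default preset
    vision.captureSessionPreset = AVCaptureSessionPreset1280x720
}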

The possible session preset values are listed in Apple's AVCaptureSession documentation.
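
For reference, the commonly available preset constants at the time of writing (not every device supports every preset, so check with canSetSessionPreset: if in doubt):

AVCaptureSessionPresetHigh        // highest quality the device supports
AVCaptureSessionPresetMedium      // mid quality, suitable for WiFi sharing
AVCaptureSessionPresetLow         // low quality, suitable for 3G sharing
AVCaptureSessionPreset352x288     // CIF
AVCaptureSessionPreset640x480     // VGA, 4:3
AVCaptureSessionPreset1280x720    // 720p HD, 16:9
AVCaptureSessionPreset1920x1080   // 1080p HD, 16:9
AVCaptureSessionPresetPhoto       // full photo resolution, stills only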
