What is the best way to record a video with augmented reality?

Problem description

What is the best way to record a video with augmented reality? (adding text and image logos to frames from the iPhone/iPad camera)

Previously I was trying to figure out how to draw into a CIImage (How to draw text into CIImage?) and how to convert the CIImage back to a CMSampleBuffer (CIImage back to CMSampleBuffer).
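
For illustration only (this helper and its name are mine, not the poster's): one common way to get a text overlay as a CIImage, like the ciimageSec that is composited in the snippet further down, is to render the text into a UIImage with UIGraphicsImageRenderer and then wrap it:

import UIKit
import CoreImage

// Sketch only: render a text overlay into a UIImage with UIGraphicsImageRenderer,
// then wrap it in a CIImage so it can be composited over the camera frame.
func makeTextOverlay(_ text: String, canvasSize: CGSize) -> CIImage? {
    let renderer = UIGraphicsImageRenderer(size: canvasSize)
    let image = renderer.image { _ in
        let attributes: [NSAttributedString.Key: Any] = [
            .font: UIFont.boldSystemFont(ofSize: 48),
            .foregroundColor: UIColor.white
        ]
        (text as NSString).draw(at: CGPoint(x: 40, y: 40), withAttributes: attributes)
    }
    return CIImage(image: image)
}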

I almost have everything done; I only have a problem with recording video using the new CMSampleBuffer in AVAssetWriterInput.
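
Not from the original question, but relevant to exactly this step: AVFoundation's AVAssetWriterInputPixelBufferAdaptor can append a CVPixelBuffer directly to an AVAssetWriterInput, which avoids rebuilding a CMSampleBuffer for every frame. A minimal sketch, assuming the AVAssetWriter is already writing and its session has been started:

import AVFoundation
import CoreMedia
import CoreVideo

// Sketch only: wrap an AVAssetWriterInput in a pixel buffer adaptor so that
// already-composited CVPixelBuffers can be appended with an explicit timestamp.
func makeAdaptor(for input: AVAssetWriterInput) -> AVAssetWriterInputPixelBufferAdaptor {
    let attributes: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ]
    return AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                sourcePixelBufferAttributes: attributes)
}

func append(_ pixelBuffer: CVPixelBuffer,
            at time: CMTime,
            using adaptor: AVAssetWriterInputPixelBufferAdaptor) {
    // The writer must be writing and its session started before appending.
    guard adaptor.assetWriterInput.isReadyForMoreMediaData else { return }
    if !adaptor.append(pixelBuffer, withPresentationTime: time) {
        print("failed to append pixel buffer at \(time)")
    }
}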

But this solution isn't good anyway: it eats a lot of CPU while converting the CIImage to a CVPixelBuffer (ciContext.render(ciImage!, to: aBuffer)).
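
As a side note that is not part of the original question: some of that cost often comes from how the render is set up rather than from the render call itself. A minimal sketch of a single, reused, Metal-backed CIContext, assuming Metal is available on the device:

import CoreImage
import Metal

// Sketch only: build one CIContext up front and reuse it for every frame.
// A Metal-backed context usually renders CIImages into CVPixelBuffers far
// more cheaply than a fresh CIContext created per frame.
let sharedCIContext: CIContext = {
    if let device = MTLCreateSystemDefaultDevice() {
        return CIContext(mtlDevice: device)
    }
    return CIContext(options: nil)   // CPU fallback if Metal is unavailable
}()

In the snippet below, CIContext(options: nil) is created in the same code path as the render; if that path runs once per captured frame, hoisting the context out like this is usually the first thing to try before changing the overall approach.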

So I want to stop here and look for some other way to record a video with augmented reality (for example, dynamically adding (drawing) text inside the frames while encoding the video into an mp4 file).

Here is what I've tried and don't want to use anymore...

// convert original CMSampleBuffer to CIImage, 
// combine multiple `CIImage`s into one (adding augmented reality -  
// text or some additional images)
let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
let ciimage : CIImage = CIImage(cvPixelBuffer: pixelBuffer)
var outputImage: CIImage?
let images : Array<CIImage> = [ciimage, ciimageSec!] // add all your CIImages that you'd like to combine
for image in images {
    outputImage = outputImage == nil ? image : image.composited(over: outputImage!)
}

// allocate this class variable once         
if pixelBufferNew == nil {
    CVPixelBufferCreate(kCFAllocatorSystemDefault, CVPixelBufferGetWidth(pixelBuffer),  CVPixelBufferGetHeight(pixelBuffer), kCVPixelFormatType_32BGRA, nil, &pixelBufferNew)
}

// convert CIImage to CVPixelBuffer
let ciContext = CIContext(options: nil)
if let aBuffer = pixelBufferNew {
    ciContext.render(outputImage!, to: aBuffer) // >>> IT EATS A LOT OF <<< CPU
}

// convert new CVPixelBuffer to new CMSampleBuffer
var sampleTime = CMSampleTimingInfo()
sampleTime.duration = CMSampleBufferGetDuration(sampleBuffer)
sampleTime.presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
sampleTime.decodeTimeStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer)
var videoInfo: CMVideoFormatDescription? = nil
CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixelBufferNew!, &videoInfo)
var oBuf: CMSampleBuffer?
CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixelBufferNew!, true, nil, nil, videoInfo!, &sampleTime, &oBuf)

/*
Try to append the new CMSampleBuffer to a file (.mp4) using
AVAssetWriter & AVAssetWriterInput... (I got errors with it; the original buffer -
the one from func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) - works ok)
*/

Is there any better solution?

Recommended answer

Now answering my own question.

The best approach would be to use an Objective-C++ class (.mm), where we can use OpenCV and easily/quickly convert from CMSampleBuffer to cv::Mat and back to CMSampleBuffer after processing.

We can easily call Objective-C++ functions from Swift.
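
The answer above comes with no code, so here is a rough illustration of what the Swift call site could look like, assuming a hypothetical Objective-C++ class named OpenCVWrapper exposed through the project's bridging header. Its .mm implementation would lock the pixel buffer, wrap the BGRA base address and stride in a cv::Mat (CV_8UC4), draw the overlay (for example with cv::putText), and unlock the buffer again, mutating the frame in place:

import CoreMedia
import CoreVideo

// `OpenCVWrapper` is a hypothetical bridged Objective-C++ class (a .h/.mm pair
// built with OpenCV and exposed via the bridging header), assumed to declare
// roughly:
//
//   @interface OpenCVWrapper : NSObject
//   + (void)drawOverlay:(NSString *)text on:(CVPixelBufferRef)pixelBuffer;
//   @end

func addOverlay(to sampleBuffer: CMSampleBuffer, text: String) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    // The frame is modified in place, so the original CMSampleBuffer (with its
    // original timing info) can still be appended to the AVAssetWriterInput
    // unchanged afterwards.
    OpenCVWrapper.drawOverlay(text, on: pixelBuffer)
}

The appeal of this route is that the pixels are edited in place: there is no intermediate CIImage and no extra render pass back into a CVPixelBuffer before the frame is handed to the AVAssetWriterInput.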
