What is the best way to record a video with augmented reality?

Problem description

What is the best way to record a video with augmented reality? (adding text and image logos to frames from the iPhone/iPad camera)

Previously I was trying to figure out how to draw into a CIImage (How to draw text into CIImage?) and convert the CIImage back to a CMSampleBuffer (CIImage back to CMSampleBuffer).

I almost got everything working; I only have a problem with recording video using the new CMSampleBuffer in AVAssetWriterInput.

But this solution isn't good anyway: it eats a lot of CPU while converting the CIImage to a CVPixelBuffer (ciContext.render(ciImage!, to: aBuffer)).

So I want to stop here and find some other way to record a video with augmented reality (for example, dynamically adding (drawing) text inside frames while encoding the video into an mp4 file).

Here is what I've tried and don't want to use anymore...

// convert original CMSampleBuffer to CIImage, 
// combine multiple `CIImage`s into one (adding augmented reality -  
// text or some additional images)
let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
let ciimage : CIImage = CIImage(cvPixelBuffer: pixelBuffer)
var outputImage: CIImage?
let images : Array<CIImage> = [ciimage, ciimageSec!] // add all your CIImages that you'd like to combine
for image in images {
    outputImage = outputImage == nil ? image : image.composited(over: outputImage!)
}

// allocate this class variable once         
if pixelBufferNew == nil {
    CVPixelBufferCreate(kCFAllocatorSystemDefault, CVPixelBufferGetWidth(pixelBuffer),  CVPixelBufferGetHeight(pixelBuffer), kCVPixelFormatType_32BGRA, nil, &pixelBufferNew)
}

// convert CIImage to CVPixelBuffer
// (note: creating a CIContext is expensive, so it should be created
// once and reused instead of being rebuilt for every frame)
let ciContext = CIContext(options: nil)
if let aBuffer = pixelBufferNew {
    ciContext.render(outputImage!, to: aBuffer) // >>> IT EATS A LOT OF CPU <<<
}

// convert new CVPixelBuffer to new CMSampleBuffer
var sampleTime = CMSampleTimingInfo()
sampleTime.duration = CMSampleBufferGetDuration(sampleBuffer)
sampleTime.presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
sampleTime.decodeTimeStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer)
var videoInfo: CMVideoFormatDescription? = nil
CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixelBufferNew!, &videoInfo)
var oBuf: CMSampleBuffer?
CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixelBufferNew!, true, nil, nil, videoInfo!, &sampleTime, &oBuf)

/*
try to append new CMSampleBuffer into a file (.mp4) using 
AVAssetWriter & AVAssetWriterInput... (I met errors with it, original buffer works ok 
- "from func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection)")
*/
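
For reference, the append step itself looked roughly like this (a minimal sketch; writer and writerInput are illustrative names for the AVAssetWriter and AVAssetWriterInput set up elsewhere):

// minimal sketch of the append path (writer/writerInput are assumed names)
if writer.status == .unknown {
    writer.startWriting()
    writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(oBuf!))
}
if writer.status == .writing, writerInput.isReadyForMoreMediaData {
    writerInput.append(oBuf!)
}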

Is there any better solution?

Solution

Now I answer my own question.

The best would be to use an Objective-C++ class (.mm) where we can use OpenCV and easily/quickly convert from CMSampleBuffer to cv::Mat (and back to CMSampleBuffer after processing).
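
A minimal sketch of what such a class can look like (OpenCVWrapper, the method name, and the hard-coded text position are illustrative; it assumes 32BGRA camera frames):

// OpenCVWrapper.h
#import <Foundation/Foundation.h>
#import <CoreMedia/CoreMedia.h>

@interface OpenCVWrapper : NSObject
+ (void)drawText:(NSString *)text onSampleBuffer:(CMSampleBufferRef)sampleBuffer;
@end

// OpenCVWrapper.mm
#import <opencv2/opencv.hpp>   // import before Apple headers to avoid macro clashes
#import "OpenCVWrapper.h"

@implementation OpenCVWrapper

+ (void)drawText:(NSString *)text onSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    // wrap the pixel buffer's memory in a cv::Mat -- no copy is made,
    // so drawing modifies the camera frame in place
    cv::Mat frame((int)CVPixelBufferGetHeight(pixelBuffer),
                  (int)CVPixelBufferGetWidth(pixelBuffer),
                  CV_8UC4,
                  CVPixelBufferGetBaseAddress(pixelBuffer),
                  CVPixelBufferGetBytesPerRow(pixelBuffer));

    cv::putText(frame, std::string([text UTF8String]), cv::Point(40, 80),
                cv::FONT_HERSHEY_SIMPLEX, 1.5, cv::Scalar(255, 255, 255, 255), 2);

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}

@end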

We can easily call Objective-C++ functions from Swift.
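
For example, with OpenCVWrapper.h added to the project's bridging header, the capture callback stays in Swift and can keep appending the original buffer, which already worked with AVAssetWriterInput (writerInput is assumed to be set up elsewhere):

func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    // OpenCV draws into the frame in place, so we append the
    // original sample buffer instead of building a new one
    OpenCVWrapper.drawText("Hello AR", onSampleBuffer: sampleBuffer)
    if writerInput.isReadyForMoreMediaData {
        writerInput.append(sampleBuffer)
    }
}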
