iOS - Creating multiple time-delayed live camera preview views


Question

I have done a ton of research and haven't yet been able to find a viable solution, for reasons I will outline below.

In my iOS app, I want three views that indefinitely show a delayed-live preview of the device's camera.

For example, view 1 will show the camera feed delayed by 5s, view 2 will show the same feed delayed by 20s, and view 3 will show the same feed delayed by 30s.

This would be used to record yourself performing some kind of activity, such as a workout exercise, and then watch yourself a few seconds later in order to perfect your form for a given exercise.

I have tried and researched a couple of different approaches, but each has problems.

  • Use AVCaptureSession and AVCaptureMovieFileOutput to record short clips to device storage. Short clips are required because you cannot play video from a URL and write to that same URL simultaneously.
  • Have 3 AVPlayer and AVPlayerLayer instances, all playing the short recorded clips at their desired time delays (see the sketch after this list).
  • Problems:
  1. When switching clips using AVPlayer.replaceCurrentItem(_:), there is a very noticeable delay between clips. This needs to be a smooth transition.
  2. Although old, a comment here suggests not creating multiple AVPlayer instances due to a device limit. I haven't been able to find information confirming or denying this claim. Edit: per Jake G's comment, 10 AVPlayer instances are fine on an iPhone 5 and newer.
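For concreteness, here is a minimal sketch of the playback side of this approach, assuming the short clips are recorded elsewhere and handed in by URL. `DelayedPlayerView` and its delay scheduling are illustrative names, not taken from the question; the visible gap described in problem 1 occurs at each replaceCurrentItem(_:) call.

```swift
import AVFoundation
import UIKit

// Minimal sketch of solution 1's playback side. `DelayedPlayerView` is a
// hypothetical name; clips are assumed to be recorded elsewhere.
final class DelayedPlayerView: UIView {
    private let player = AVPlayer()
    private let playerLayer = AVPlayerLayer()

    override init(frame: CGRect) {
        super.init(frame: frame)
        playerLayer.player = player
        playerLayer.videoGravity = .resizeAspectFill
        layer.addSublayer(playerLayer)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func layoutSubviews() {
        super.layoutSubviews()
        playerLayer.frame = bounds
    }

    /// Swap in the next recorded clip and play it after `delay` seconds.
    /// The noticeable pause from problem 1 happens at replaceCurrentItem(_:).
    func play(url: URL, after delay: TimeInterval) {
        DispatchQueue.main.asyncAfter(deadline: .now() + delay) { [weak self] in
            self?.player.replaceCurrentItem(with: AVPlayerItem(url: url))
            self?.player.play()
        }
    }
}
```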

  • Use AVCaptureSession and AVCaptureVideoDataOutput to stream and process each frame of the camera's feed using the didOutputSampleBuffer delegate method (a delegate sketch follows this list).
  • Draw each frame on an OpenGL view (such as GLKViewWithBounds). This solves the problem of multiple AVPlayer instances from solution 1.
  • Problem: Storing each frame so it can be displayed later requires copious amounts of memory (which just isn't viable on an iOS device) or disk space. If I want to store a 2-minute video at 30 frames per second, that's 3,600 frames, totalling over 12GB if copied directly from didOutputSampleBuffer (at 720p, a BGRA frame is 1280 x 720 x 4 bytes, about 3.5 MB, and 3,600 x 3.5 MB comes to roughly 12.7 GB). Maybe there is a way to compress each frame x1000 without losing quality so that I could keep this data in memory. If such a method exists, I haven't been able to find it.
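As a reference point, a minimal sketch of the capture side of this approach; `FrameCapturer` and its queue label are illustrative, and session configuration and error handling are trimmed. The delegate callback is where every frame arrives.

```swift
import AVFoundation

// Minimal sketch of solution 2's capture side. `FrameCapturer` is a
// hypothetical name, not from the original question.
final class FrameCapturer: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let output = AVCaptureVideoDataOutput()
    private let queue = DispatchQueue(label: "camera.frames")

    func start() throws {
        guard let device = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: device)
        guard session.canAddInput(input), session.canAddOutput(output) else { return }
        session.addInput(input)
        output.setSampleBufferDelegate(self, queue: queue)
        session.addOutput(output)
        session.startRunning()   // in production, call this off the main thread
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Every uncompressed frame lands here as a CMSampleBuffer. Retaining
        // all of them is what exhausts memory; a real implementation would
        // hand each frame straight to a renderer or an encoder instead.
    }
}
```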

If there is a way to read from and write to a file simultaneously, I believe the following solution would be ideal.

  • Record video as a circular stream. For example, for a video buffer of 2 minutes, I would create a file output stream that writes frames for two minutes. Once the 2-minute mark is hit, the stream would restart from the beginning, overwriting the original frames (the ring-buffer sketch after this list illustrates the bookkeeping).
  • With this file output stream constantly running, I would have 3 input streams on the same recorded video file. Each stream would point to a different frame in the stream (effectively X seconds behind the writing stream), and each frame would be displayed in that input stream's respective UIView.
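The index arithmetic behind this idea is the same whether the frames live on disk or in memory. A hypothetical in-memory version, where the writer overwrites the oldest slot once the buffer is full and each delayed reader trails the write index by a fixed number of frames:

```swift
// Hypothetical fixed-capacity ring buffer illustrating the circular-stream
// bookkeeping; names and the in-memory storage are assumptions.
struct FrameRingBuffer<Frame> {
    private var storage: [Frame?]
    private var writeIndex = 0
    private(set) var count = 0

    init(capacity: Int) {
        storage = Array(repeating: nil, count: capacity)
    }

    mutating func write(_ frame: Frame) {
        storage[writeIndex] = frame            // overwrite the oldest slot
        writeIndex = (writeIndex + 1) % storage.count
        count = min(count + 1, storage.count)
    }

    /// The frame written `delay` slots ago (delay 0 = most recent),
    /// or nil if not enough history has accumulated yet.
    func read(delayedBy delay: Int) -> Frame? {
        guard delay < count else { return nil }
        let index = (writeIndex - 1 - delay + storage.count * 2) % storage.count
        return storage[index]
    }
}
```

At 30fps, the three views would read with delays of 150, 600, and 900 frames for 5s, 20s, and 30s respectively.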

Of course, this still has a storage space issue. Even if frames were stored as compressed JPEG images, we're talking about multiple GB of storage for a lower-quality, 2-minute video.

  1. Does anyone know of an efficient way to achieve what I'm looking for?
  2. How can I fix some of the problems in the solutions I've already tried?

Answer

  1. On iOS, AVCaptureMovieFileOutput drops frames when switching files; on macOS this doesn't happen. There's a discussion around this in the header file; see captureOutputShouldProvideSampleAccurateRecordingStart.

A combination of your 2. and 3. should work. You need to write the video file in chunks using AVCaptureVideoDataOutput and AVAssetWriter instead of AVCaptureMovieFileOutput so you don't drop frames. Add 3 ring buffers with enough storage to keep up with playback, and use GLES or Metal to display your buffers (use YUV instead of RGBA: 4:2:0 YUV needs 1.5 bytes per pixel instead of 4, roughly 2.7x less memory). A sketch of the chunked writer follows.
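A minimal sketch of the chunked-writing piece, assuming frames arrive from a captureOutput(_:didOutput:from:) delegate; `ChunkWriter`, the chunk rotation policy, and the 720p H.264 settings are all assumptions, not part of the answer.

```swift
import AVFoundation

// Hypothetical `ChunkWriter`: appends camera frames to an AVAssetWriter and
// is restarted with a fresh URL every few seconds to produce file chunks.
final class ChunkWriter {
    private var writer: AVAssetWriter?
    private var input: AVAssetWriterInput?
    private var sessionStarted = false

    func startChunk(url: URL) throws {
        let writer = try AVAssetWriter(outputURL: url, fileType: .mov)
        let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,   // assumed encoder settings
            AVVideoWidthKey: 1280,
            AVVideoHeightKey: 720,
        ])
        input.expectsMediaDataInRealTime = true
        writer.add(input)
        writer.startWriting()
        self.writer = writer
        self.input = input
        sessionStarted = false
    }

    /// Call from captureOutput(_:didOutput:from:) with each camera frame.
    func append(_ sampleBuffer: CMSampleBuffer) {
        guard let writer = writer, let input = input else { return }
        if !sessionStarted {
            // The chunk's timeline must start at the first frame's timestamp.
            writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
            sessionStarted = true
        }
        if input.isReadyForMoreMediaData {
            input.append(sampleBuffer)
        }
    }

    /// Finish the current chunk so a delayed reader can open the completed file.
    func finishChunk(completion: @escaping () -> Void) {
        input?.markAsFinished()
        writer?.finishWriting(completionHandler: completion)
    }
}
```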

I tried a more modest version of this back in the days of the mighty iPhone 4s and iPad 2. It showed (I think) the present and 10s in the past. I guesstimated that because you could encode 30fps at 3x realtime, I should be able to encode the chunks and read back the previous ones using only 2/3 of the hardware capacity. Sadly, either my idea was wrong, there was a non-linearity in the hardware, or the code was wrong, and the encoder kept falling behind.
