Simplified screen capture: record video of only what appears within the layers of a UIView?

Question

This SO answer addresses how to do a screen capture of a UIView. We need something similar, but instead of a single image, the goal is to produce a video of everything appearing within a UIView over 60 seconds -- conceptually like recording only the layers of that UIView, ignoring other layers.
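
For reference, the per-frame operation involved is the same one the linked answer uses for a single image. A minimal sketch (assuming iOS 10+ and UIGraphicsImageRenderer; names are illustrative):

```swift
import UIKit

// Render only this view's layer tree into an image, ignoring any
// other layers on screen -- the single-frame version of the goal.
func snapshot(of view: UIView) -> UIImage {
    let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
    return renderer.image { context in
        // render(in:) draws the view's layer and its sublayers only.
        view.layer.render(in: context.cgContext)
    }
}
```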

Our video app superimposes layers on whatever the user is recording, and the ultimate goal is to produce a master video merging those layers with the original video. However, using AVVideoCompositionCoreAnimationTool to merge layers with the original video is very, very, very slow: exporting a 60-second video takes 10-20 seconds.
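
For context, the slow path looks roughly like this; a sketch, not the app's actual code, with `overlayLayer` standing in for whatever is superimposed:

```swift
import AVFoundation
import UIKit

// Build a video composition that composites overlayLayer on top of the
// video. The animation tool re-renders every frame through Core
// Animation during export, which is where the 10-20 seconds go.
func makeLayerComposition(for asset: AVAsset,
                          overlayLayer: CALayer,
                          renderSize: CGSize) -> AVMutableVideoComposition {
    let parentLayer = CALayer()
    let videoLayer = CALayer()
    parentLayer.frame = CGRect(origin: .zero, size: renderSize)
    videoLayer.frame = parentLayer.frame
    parentLayer.addSublayer(videoLayer)
    parentLayer.addSublayer(overlayLayer)

    let composition = AVMutableVideoComposition(propertiesOf: asset)
    composition.renderSize = renderSize
    composition.animationTool = AVVideoCompositionCoreAnimationTool(
        postProcessingAsVideoLayer: videoLayer, in: parentLayer)
    return composition
}
```

Assigning the result to an AVAssetExportSession's videoComposition is what triggers the slow, frame-by-frame render.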

What we found is that combining two videos (i.e., using only AVMutableComposition, without AVVideoCompositionCoreAnimationTool) is very fast: ~1 second. The hope is to create an independent video of the layers and then combine that with the original video using only AVMutableComposition.
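
A sketch of the fast, composition-only merge, assuming simple end-to-end concatenation of two assets with one video track each:

```swift
import AVFoundation

// Splice two assets onto a single composition track. No frames are
// decoded or encoded here; the composition just references the sources.
func concatenate(_ first: AVAsset, _ second: AVAsset) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    guard let track = composition.addMutableTrack(
        withMediaType: .video,
        preferredTrackID: kCMPersistentTrackID_Invalid) else {
        throw NSError(domain: "merge", code: -1)
    }
    var cursor = CMTime.zero
    for asset in [first, second] {
        guard let source = asset.tracks(withMediaType: .video).first else { continue }
        try track.insertTimeRange(
            CMTimeRange(start: .zero, duration: asset.duration),
            of: source, at: cursor)
        cursor = cursor + asset.duration
    }
    return composition
}
```

Exporting such a composition with AVAssetExportPresetPassthrough copies samples rather than re-encoding them, which is consistent with the ~1 second result.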

An answer in Swift is ideal but not required.

Answer

It sounds like your "fast" merge doesn't involve (re-)encoding frames, i.e. it's trivial and basically a glorified file concatenation, which is why it gets 60x realtime. I asked about that because your "very slow" export runs at 3-6x realtime, which actually isn't that terrible (at least it wasn't on older hardware).

Encoding frames with an AVAssetWriter should give you an idea of the fastest possible non-trivial export, and this may reveal that on modern hardware you could halve or quarter your export times.
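
A hedged sketch of that measurement, assuming the layer video is produced by rendering a CALayer into pixel buffers at a fixed frame rate (function and parameter names are illustrative):

```swift
import AVFoundation
import UIKit

// Write a standalone "layers only" video by rendering `layer` into a
// pixel buffer once per frame and feeding it to an AVAssetWriter.
func writeLayerVideo(layer: CALayer, size: CGSize, duration: Double,
                     fps: Int32, to url: URL) throws {
    let writer = try AVAssetWriter(outputURL: url, fileType: .mp4)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: Int(size.width),
        AVVideoHeightKey: Int(size.height),
    ])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input,
        sourcePixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
        ])
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    for frame in 0..<Int(duration * Double(fps)) {
        guard let pool = adaptor.pixelBufferPool else { break }
        var buffer: CVPixelBuffer?
        CVPixelBufferPoolCreatePixelBuffer(nil, pool, &buffer)
        guard let pixelBuffer = buffer else { break }

        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        if let context = CGContext(
            data: CVPixelBufferGetBaseAddress(pixelBuffer),
            width: Int(size.width), height: Int(size.height),
            bitsPerComponent: 8,
            bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                | CGBitmapInfo.byteOrder32Little.rawValue) {
            layer.render(in: context) // snapshot of the layer at this frame
        }
        CVPixelBufferUnlockBaseAddress(pixelBuffer, [])

        while !input.isReadyForMoreMediaData {
            Thread.sleep(forTimeInterval: 0.005) // use a serial queue in real code
        }
        adaptor.append(pixelBuffer, withPresentationTime:
            CMTime(value: CMTimeValue(frame), timescale: fps))
    }
    input.markAsFinished()
    writer.finishWriting {} // completion is asynchronous
}
```

Timing this against the AVVideoCompositionCoreAnimationTool export shows how much of the 10-20 seconds is encoding cost versus Core Animation overhead.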

This is a long way of saying that there might not be that much more performance to be had. If you think about the typical iOS video encoding use case, which would probably be recording 1920p @ 120 fps or 240 fps, your encoding at ~6x realtime @ 30fps is in the ballpark of what your typical iOS device "needs" to be able to do.

There are optimisations available to you (like lower/variable framerates), but these may lose you the convenience of being able to capture CALayers.
