Can you "stream" images to ffmpeg to construct a video, instead of saving them to disk?


Question

My work recently involves programmatically making videos. In Python, the typical workflow looks something like this:

import subprocess
from PIL import Image  # Pillow; the original used the old standalone PIL "Image" module
                       # (ImageDraw was imported but unused)

# frames_per_second, video_duration_seconds and createFrame() are the asker's own
for i in range(frames_per_second * video_duration_seconds):
    img = createFrame(i)
    img.save("%07d.png" % i)  # every frame is written to disk

subprocess.call(["ffmpeg", "-y", "-r", str(frames_per_second), "-i", "%07d.png",
                 "-vcodec", "mpeg4", "-qscale", "5",
                 "-r", str(frames_per_second), "video.avi"])

This workflow creates an image for each frame of the video and saves it to disk. After all the images have been saved, ffmpeg is called to construct a video from them.

Saving the images to disk (not the creation of the images in memory) consumes the majority of the cycles here, and does not appear to be necessary. Is there some way to perform the same function without saving the images to disk? That is, ffmpeg would be started, and each image would be fed to it as soon as it is constructed.

Answer

OK, I got it working, thanks to LordNeckbeard's suggestion to use image2pipe. I had to use JPEG encoding instead of PNG, because image2pipe with PNG doesn't work on my version of ffmpeg. The first script is essentially the same as your question's code, except I implemented a simple image creation that just creates frames going from black to red. I also added some code to time the execution.
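The answer mentions timing code without showing it; a minimal sketch (an assumption on my part, using simple wall-clock timing from the standard `time` module) could look like this:

```python
import time

start = time.time()

# ... run the frame-generation / ffmpeg pipeline here ...
total = sum(range(1000))  # placeholder workload for illustration only

elapsed = time.time() - start
print(elapsed)  # wall-clock seconds, comparable to the figures reported below
```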

Serial execution

import subprocess
from PIL import Image  # Pillow; the answer used the old standalone PIL "Image" module

fps, duration = 24, 100
for i in range(fps * duration):
    # a flat colour shifting from black towards red, frame by frame
    im = Image.new("RGB", (300, 300), (i, 1, 1))
    im.save("%07d.jpg" % i)

subprocess.call(["ffmpeg", "-y", "-r", str(fps), "-i", "%07d.jpg",
                 "-vcodec", "mpeg4", "-qscale", "5",
                 "-r", str(fps), "video.avi"])

Parallel execution (no images saved to disk)

from PIL import Image  # Pillow; the answer used the old standalone PIL "Image" module
from subprocess import Popen, PIPE

fps, duration = 24, 100
# ffmpeg reads MJPEG frames from stdin ("-i -") via the image2pipe demuxer
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'mjpeg', '-r', str(fps),
           '-i', '-', '-vcodec', 'mpeg4', '-qscale', '5', '-r', str(fps),
           'video.avi'], stdin=PIPE)
for i in range(fps * duration):
    im = Image.new("RGB", (300, 300), (i, 1, 1))
    im.save(p.stdin, 'JPEG')  # encode each frame straight into ffmpeg's stdin
p.stdin.close()
p.wait()
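The trick that makes this work is that PIL's `save()` accepts any file-like object, not just a filename, so each frame is JPEG-encoded directly into the pipe. A standalone sketch (assuming Pillow is installed) shows the same call writing into an in-memory buffer instead of a pipe:

```python
import io
from PIL import Image

buf = io.BytesIO()                 # stand-in for p.stdin
im = Image.new("RGB", (300, 300), (200, 1, 1))
im.save(buf, 'JPEG')               # same call the loop makes on the pipe

data = buf.getvalue()
print(data[:2])                    # JPEG streams begin with the SOI marker 0xFFD8
```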

The results are interesting; I ran each script 3 times to compare performance.

Serial:

12.9062321186
12.8965060711
12.9360799789

Parallel:

8.67797684669
8.57139396667
8.38926696777

So the parallel version seems to be about 1.5 times faster.
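If even the per-frame JPEG encoding is unwanted, a further variant (a sketch, not from the original answer; it assumes ffmpeg's rawvideo demuxer) pipes raw packed RGB bytes instead, telling ffmpeg the frame geometry and pixel format up front since a raw stream carries no headers:

```python
import shutil
from subprocess import Popen, PIPE

fps, duration, w, h = 24, 100, 300, 300

def raw_frame(i):
    # one solid-colour frame as packed 24-bit RGB, as the rawvideo demuxer expects
    return bytes((i % 256, 1, 1)) * (w * h)

cmd = ['ffmpeg', '-y', '-f', 'rawvideo', '-pix_fmt', 'rgb24',
       '-s', '%dx%d' % (w, h), '-r', str(fps), '-i', '-',
       '-vcodec', 'mpeg4', '-qscale', '5', '-r', str(fps), 'video.avi']

if shutil.which('ffmpeg'):          # only run when ffmpeg is actually available
    p = Popen(cmd, stdin=PIPE)
    for i in range(fps * duration):
        p.stdin.write(raw_frame(i))  # no image encoding at all, just raw pixels
    p.stdin.close()
    p.wait()
```

Whether this beats the MJPEG pipe depends on where the time goes: it removes the JPEG encode on the Python side but pushes roughly 3.5x more bytes through the pipe per frame.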

