Can I use the GStreamer API to merge 2 videos?

Question

I'd like to write a simple Linux CLI application that can take 2 video sources (1 of a presenter talking and 1 with their slides and no audio) and merge them.

I'd like the entire output video to be the two original videos, side by side. Failing that, my second best option would be a "picture in picture" style video, with the presenter in a small frame in the corner.

From a few hours' research, GStreamer looks like it might be able to do this. Can anyone confirm it before I spend more time trying it out?

If it can't, are there other APIs out there that I might be able to use?

Answer

It turns out GStreamer can merge two videos, placing them side by side in the output video using the videomixer element.

A basic pipeline that takes two input files, scales them to the same size, then merges them and encodes the result as a Theora video might look like this:

filesrc -> decodebin -> ffmpegcolorspace -> videoscale -> videobox -> videorate
                                                                              \
filesrc -> decodebin -> ffmpegcolorspace -> videoscale -> videorate -> videomixer -> ffmpegcolorspace -> theoraenc -> oggmux -> filesink
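As a concrete sketch, the pipeline above can be written out for the `gst-launch` command-line tool. This uses GStreamer 0.10-era element names to match the diagram (in GStreamer 1.0, `ffmpegcolorspace` becomes `videoconvert`, `videomixer` is typically replaced by `compositor`, and the caps become `video/x-raw`). The file names, the 640x480 frame size, and the `videobox` offset are assumptions for illustration:

```shell
# Sketch only: side-by-side merge with videomixer (GStreamer 0.10).
# slides.ogv, presenter.ogv, output.ogv and the 640x480 size are
# illustrative assumptions, not values from the original question.
gst-launch-0.10 \
  videomixer name=mix ! ffmpegcolorspace ! theoraenc ! oggmux \
    ! filesink location=output.ogv \
  filesrc location=slides.ogv ! decodebin ! ffmpegcolorspace ! videoscale \
    ! video/x-raw-yuv,width=640,height=480 ! videorate ! mix. \
  filesrc location=presenter.ogv ! decodebin ! ffmpegcolorspace ! videoscale \
    ! video/x-raw-yuv,width=640,height=480 \
    ! videobox border-alpha=0 left=-640 ! videorate ! mix.
```

The `videobox left=-640` pads the presenter stream with 640 transparent pixels on the left, so videomixer composites it to the right of the slides and the mixed frame comes out 1280x480.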

How you implement this pipeline depends on the language. I prototyped with the Ruby bindings, and it works really well.
