cut parts of a video using gstreamer/Python (gnonlin?)


Question


I have a video file and I'd like to cut out some scenes (identified either by a time position or a frame number). As far as I understand, that should be possible with gnonlin, but so far I wasn't able to find a sample of how to do that (ideally using Python). I don't want to modify the video/audio parts if possible (but conversion to mp4/webm would be acceptable).


Am I correct that gnonlin is the right component in the gstreamer universe to do that? Also I'd be glad for some pointers/recipes on how to approach the problem (gstreamer newbie).

Answer


Actually it turns out that "gnonlin" is too low-level and still requires a lot of gstreamer knowledge. Luckily there is "gstreamer-editing-services" (gst-editing-services) which is a library offering a higher level API on top of gstreamer and gnonlin.


With a tiny bit of RTFM reading and a helpful blog post with a Python example I was able to solve my basic problem:

  1. Load the asset (the video)
  2. Create a timeline with a single layer
  3. Add the asset to the layer multiple times, adjusting start, in-point, and duration so that only the relevant parts of the video show up in the output video


Most of my code is directly taken from the referenced blog post above so I don't want to dump all of that here. The relevant stuff is this:

    import gi
    gi.require_version('Gst', '1.0')
    gi.require_version('GES', '1.0')
    from gi.repository import Gst, GES

    Gst.init(None)
    GES.init()

    # source_uri must be a URI, e.g. "file:///path/to/input.mp4"
    asset = GES.UriClipAsset.request_sync(source_uri)
    timeline = GES.Timeline.new_audio_video()
    layer = timeline.append_layer()

    start_on_timeline = 0
    start_position_asset = 10 * 60 * Gst.SECOND
    duration = 5 * Gst.SECOND
    # GES.TrackType.UNKNOWN => add every kind of stream to the timeline
    clip = layer.add_asset(asset, start_on_timeline, start_position_asset,
        duration, GES.TrackType.UNKNOWN)

    start_on_timeline = duration
    start_position_asset = start_position_asset + 60 * Gst.SECOND
    duration = 20 * Gst.SECOND
    clip2 = layer.add_asset(asset, start_on_timeline, start_position_asset,
        duration, GES.TrackType.UNKNOWN)
    timeline.commit()


The resulting video includes the segments 10:00–10:05 and 11:00–11:20 of the source (the second clip's in-point is 600 s + 60 s = 660 s), so essentially there are two cuts: one at the beginning and one in the middle.


From what I have seen this worked perfectly fine: audio and video in sync, no worries about key frames and whatnot. The only part left is to find out if I can translate a frame number into a timing reference for GStreamer Editing Services.
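On that last point: GStreamer timestamps are plain nanoseconds (`Gst.SECOND` is 10^9), so a frame number maps to a timeline position whenever the framerate of the source is constant and known. A minimal sketch of that conversion — the `frame_to_time` helper and the 25/1 default framerate are my own illustration, not part of the original answer:

```python
GST_SECOND = 10 ** 9  # same value as Gst.SECOND: nanoseconds per second

def frame_to_time(frame_number, fps_num=25, fps_den=1):
    """Convert a frame number to a GStreamer-style timestamp in
    nanoseconds, assuming a constant framerate of fps_num/fps_den."""
    return frame_number * GST_SECOND * fps_den // fps_num
```

For a 25 fps source, `frame_to_time(250)` yields the timestamp for 10 seconds, which could be passed directly as `start_position_asset` above. Note that with variable-framerate material this mapping does not hold; the real framerate would have to be read from the asset's stream info.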

