Dynamically re-sizing images in a GStreamer pipeline in Python


Question

I am trying to create a program that runs various animations on different images simultaneously. One of the effects I want to achieve is zooming into a picture, done by keeping a base frame of fixed size while the image size increases and decreases. But when I try to dynamically change the size of an image, it causes an error, and searching the web did not turn up the right solution. Below is my code. Could anyone point me to good examples to learn from (preferably Python examples)? I would be grateful.

#!/usr/bin/python
import gobject
import time

gobject.threads_init()
import pygst

pygst.require("0.10")
import gst

p = gst.parse_launch("""
    uridecodebin uri=file:///home/jango/Pictures/3.jpg name=src1 ! queue ! videoscale ! ffmpegcolorspace !
        imagefreeze ! capsfilter name=vfps caps="video/x-raw-yuv, framerate=60/1, width=200, height=150" ! mix.
    uridecodebin uri=file:///home/jango/Pictures/2.jpg name=src2 ! queue ! videoscale ! ffmpegcolorspace !
        imagefreeze ! video/x-raw-yuv, framerate=60/1, width=200, height=150 ! mix.
    uridecodebin uri=file:///home/jango/Pictures/1.jpg name=src ! queue ! videoscale ! ffmpegcolorspace !
        imagefreeze ! video/x-raw-yuv, framerate=60/1, width=200, height=150 ! mix.
    uridecodebin uri=file:///home/jango/Pictures/mia_martine.jpg ! queue ! videoscale ! ffmpegcolorspace !
        imagefreeze ! video/x-raw-yuv, framerate=60/1, width=200, height=150 ! mix.
    uridecodebin uri=file:///home/jango/Pictures/4.jpg ! queue ! videoscale ! ffmpegcolorspace !
        imagefreeze ! video/x-raw-yuv, framerate=60/1, width=200, height=150 ! mix.
    uridecodebin uri=file:///home/jango/Pictures/mia_marina1.jpg ! queue ! videoscale ! ffmpegcolorspace !
        imagefreeze ! video/x-raw-yuv, framerate=60/1, width=200, height=150 ! mix.
    videotestsrc pattern=2 ! video/x-raw-yuv, framerate=10/1, width=1024, height=768 !
        videomixer name=mix sink_6::zorder=0 ! ffmpegcolorspace ! theoraenc ! oggmux name=mux !
        filesink location=1.ogg
    filesrc location=/home/jango/Music/mp3/flute_latest.mp3 ! decodebin ! audioconvert ! vorbisenc ! queue ! mux.
""")

m = p.get_by_name("mix")
s0 = m.get_pad("sink_0")
s0.set_property("zorder", 1)
q = s0.get_caps()
q.make_writable()

control = gst.Controller(s0, "ypos", "alpha", "xpos")
control.set_interpolation_mode("ypos", gst.INTERPOLATE_LINEAR)
control.set_interpolation_mode("alpha", gst.INTERPOLATE_LINEAR)
control.set_interpolation_mode("xpos", gst.INTERPOLATE_LINEAR)
control.set("ypos", 0, 0)
control.set("ypos", 5 * gst.SECOND, 600)
control.set("xpos", 0, 0)
control.set("xpos", 5 * gst.SECOND, 500)
control.set("alpha", 0, 0)
control.set("alpha", 5 * gst.SECOND, 1.0)

s1 = m.get_pad("sink_1")
s1.set_property("zorder", 2)


control1 = gst.Controller(s1, "xpos", "alpha")
control1.set_interpolation_mode("xpos", gst.INTERPOLATE_LINEAR)
control1.set_interpolation_mode("alpha", gst.INTERPOLATE_LINEAR)
control1.set("xpos", 0, 0)
control1.set("xpos", 5 * gst.SECOND, 500)
control1.set("alpha", 0, 0)
control1.set("alpha", 5 * gst.SECOND, 1.0)
#

s2 = m.get_pad("sink_2")
s2.set_property("zorder", 3)

control2 = gst.Controller(s2, "ypos", "alpha", "xpos")
control2.set_interpolation_mode("ypos", gst.INTERPOLATE_LINEAR)
control2.set_interpolation_mode("xpos", gst.INTERPOLATE_LINEAR)
control2.set_interpolation_mode("alpha", gst.INTERPOLATE_LINEAR)
control2.set("xpos", 0, 0)
control2.set("xpos", 5 * gst.SECOND, 500)
control2.set("ypos", 0, 0)
control2.set("ypos", 5 * gst.SECOND, 300)
control2.set("alpha", 0, 0)
control2.set("alpha", 5 * gst.SECOND, 1.0)

s3 = m.get_pad("sink_3")
s3.set_property("zorder", 4)

control3 = gst.Controller(s3, "ypos", "alpha", "xpos")
control3.set_interpolation_mode("ypos", gst.INTERPOLATE_LINEAR)
control3.set_interpolation_mode("alpha", gst.INTERPOLATE_LINEAR)
control3.set_interpolation_mode("xpos", gst.INTERPOLATE_LINEAR)
control3.set("ypos", 0, 0)
control3.set("ypos", 5 * gst.SECOND, 600)
control3.set("xpos", 0, 0)
control3.set("xpos", 5 * gst.SECOND, 200)
control3.set("alpha", 0, 0)
control3.set("alpha", 5 * gst.SECOND, 1.0)

s4 = m.get_pad("sink_4")
s4.set_property("zorder", 5)

control4 = gst.Controller(s4, "ypos", "alpha", "xpos")
control4.set_interpolation_mode("ypos", gst.INTERPOLATE_LINEAR)
control4.set_interpolation_mode("alpha", gst.INTERPOLATE_LINEAR)
control4.set_interpolation_mode("xpos", gst.INTERPOLATE_LINEAR)
control4.set("ypos", 0, 0)
control4.set("ypos", 5 * gst.SECOND, 300)
control4.set("xpos", 0, 0)
control4.set("xpos", 5 * gst.SECOND, 200)
control4.set("alpha", 0, 0)
control4.set("alpha", 5 * gst.SECOND, 1.0)

s5 = m.get_pad("sink_5")
s5.set_property("zorder", 6)

control5 = gst.Controller(s5, "ypos", "alpha", "xpos")
control5.set_interpolation_mode("ypos", gst.INTERPOLATE_LINEAR)
control5.set_interpolation_mode("alpha", gst.INTERPOLATE_LINEAR)
control5.set_interpolation_mode("xpos", gst.INTERPOLATE_LINEAR)
control5.set("ypos", 0, 0)
control5.set("ypos", 5 * gst.SECOND, 0)
control5.set("xpos", 0, 0)
control5.set("xpos", 5 * gst.SECOND, 200)
control5.set("alpha", 0, 0)
control5.set("alpha", 5 * gst.SECOND, 1.0)

p.set_state(gst.STATE_PLAYING)
time.sleep(3)
p.set_state(gst.STATE_READY)
m = p.get_by_name("mix")
s0 = m.get_pad("sink_0")
q = s0.get_caps()
print q
if q.is_fixed():
    print "not doable"
else:
    caps = gst.caps_from_string("video/x-raw-yuv, framerate=60/1, width=1000, height=1000")
    s0.set_caps(caps)
p.set_state(gst.STATE_PLAYING)
gobject.MainLoop().run()

It would also be great if anyone could point me to a good place to find GStreamer tutorials for Python developers.

Answer

You can change the image size dynamically, but a couple of conditions must be met.

First, your pipeline should be built something like: source ! videorate ! ffvideoscale ! colorspace ! capsfilter caps="caps" ! ...
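Applied to one of the image branches from the question, that shape might look like this (a sketch only; it keeps the question's `ffmpegcolorspace`, and the capsfilter name `zoom` is an assumption, not from the original answer):

```
uridecodebin uri=file:///home/jango/Pictures/3.jpg ! queue ! videorate ! ffvideoscale ! ffmpegcolorspace !
    imagefreeze ! capsfilter name=zoom caps="video/x-raw-yuv, framerate=60/1, width=200, height=150" ! mix.
```

Giving the capsfilter a name is the important part: it lets you fetch the element later with pipeline.get_by_name("zoom") and rewrite its caps property.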

Second, in Python, you get the caps property from the capsfilter element and change the resolution in those caps.

This should work. One warning: if I remember correctly, you must leave more than 100 ms between resolution changes, for example by scheduling the changes with gobject.timeout_add.
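A minimal sketch of that approach, assuming the pygst 0.10 bindings used in the question. It reuses the `vfps` capsfilter already named in the question's pipeline; the helper `zoom_caps`, the step sizes, and the 200 ms interval are illustrative assumptions, not part of the original answer:

```python
def zoom_caps(step, base=(200, 150), delta=(20, 15), fps=60):
    """Pure helper: build the caps string for zoom step `step`.

    Each step grows the target resolution by `delta`, so rewriting the
    capsfilter with successive steps produces a zoom-in effect.
    """
    w = base[0] + delta[0] * step
    h = base[1] + delta[1] * step
    return "video/x-raw-yuv, framerate=%d/1, width=%d, height=%d" % (fps, w, h)

def start_zoom(pipeline, interval_ms=200, steps=10):
    """Install a timeout that rewrites the capsfilter's caps periodically.

    Keep interval_ms comfortably above 100 ms, per the warning above.
    Untested sketch against GStreamer 0.10.
    """
    import gobject, gst  # pygst 0.10 bindings, as in the question

    flt = pipeline.get_by_name("vfps")  # capsfilter named in the question's pipeline
    state = {"step": 0}

    def tick():
        state["step"] += 1
        flt.set_property("caps", gst.caps_from_string(zoom_caps(state["step"])))
        return state["step"] < steps  # returning False stops the timeout

    gobject.timeout_add(interval_ms, tick)
```

Call `start_zoom(p)` after setting the pipeline to PLAYING; the timeout callback stops itself once the requested number of steps has been applied.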
