Convert YUVj420p pixel format to RGB888 using gstreamer

Question

I'm using gstreamer 1.2 to feed frames from my IP camera to an OpenCV program.

The stream is 640×368 YUVj420p, and I want to convert it to RGB888 so that I can use it in my OpenCV program.

So, is there a way to use gstreamer to do that conversion, or do I have to do it myself?

If I do, please give me the equation that does this conversion.

Answer

After some trials with gstreamer I decided to do the conversion myself, and it worked.

First we have to understand the YUVj420p pixel format.

As shown in the above image, the Y', U and V components in Y'UV420 are encoded separately in sequential blocks. A Y' value is stored for every pixel, followed by a U value for each 2×2 square block of pixels, and finally a V value for each 2×2 block. Corresponding Y', U and V values are shown using the same color in the diagram above. Read line-by-line as a byte stream from a device, the Y' block would be found at position 0, the U block at position x×y (6×4 = 24 in this example) and the V block at position x×y + (x×y)/4 (here, 6×4 + (6×4)/4 = 30). (copied)
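To make that layout concrete for the 640×368 stream from the question, here is a minimal sketch (the variable names are my own) of where each plane starts in the byte stream:

w, h = 640, 368              # frame size from the question
y_size = w * h               # one Y byte per pixel              -> 235520
c_size = y_size // 4         # one U (and one V) byte per 2x2 block -> 58880

y_offset = 0                 # Y plane starts at byte 0
u_offset = y_size            # U plane starts right after Y      -> 235520
v_offset = y_size + c_size   # V plane starts right after U      -> 294400

total = y_size + 2 * c_size  # whole frame                       -> 353280 bytes

These are the same offsets that the slicing in the code below relies on.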

Here is the code to do it (Python).

This code shows how to inject frames into OpenCV using gstreamer and do the conversion:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst
import numpy as np
import cv2

GObject.threads_init()
Gst.init(None)

def YUV_stream2RGB_frame(data):

    w=640
    h=368
    size=w*h

    stream=np.fromstring(data,np.uint8) # convert the data string to a numpy array

    #Y bytes will start from 0 and end at size-1
    y=stream[0:size].reshape(h,w) # create the y channel, same size as the image

    #U bytes will start from size and end at size+size/4, as its size = framesize/4
    u=stream[size:(size+(size/4))].reshape((h/2),(w/2)) # create the u channel, its size = framesize/4

    #up-sample the u channel to be the same size as the y channel and frame using pyrUp func in opencv2
    u_upsize=cv2.pyrUp(u)

    #do the same for v channel 
    v=stream[(size+(size/4)):].reshape((h/2),(w/2))
    v_upsize=cv2.pyrUp(v)

    #create the 3-channel frame using cv2.merge func watch for the order
    yuv=cv2.merge((y,u_upsize,v_upsize))

    #Convert TO RGB format
    rgb=cv2.cvtColor(yuv,cv2.COLOR_YCrCb2RGB)

    #show frame
    cv2.imshow("show",rgb)
    cv2.waitKey(5)

def on_new_buffer(appsink):

    sample = appsink.emit('pull-sample')
    #get the buffer
    buf=sample.get_buffer()
    #extract the data stream as a byte string
    data=buf.extract_dup(0,buf.get_size())
    YUV_stream2RGB_frame(data)
    #tell appsink that the sample was handled
    return Gst.FlowReturn.OK

def Init():

    CLI="rtspsrc name=src location=rtsp://192.168.1.20:554/live/ch01_0 latency=10 ! decodebin ! appsink name=sink"

    #simplest way to create a pipeline
    pipeline=Gst.parse_launch(CLI)

    #getting the sink by the name set in CLI
    appsink=pipeline.get_by_name("sink")

    #setting some important properties of appsink
    appsink.set_property("max-buffers",20) # prevent the app from consuming a huge amount of memory
    appsink.set_property('emit-signals',True) # tell the sink to emit signals
    appsink.set_property('sync',False) # no sync, to make decoding as fast as possible

    appsink.connect('new-sample', on_new_buffer) # connect the signal to the callback func

    return pipeline

def run(pipeline):
    pipeline.set_state(Gst.State.PLAYING)
    GObject.MainLoop().run()


pipeline=Init()
run(pipeline)
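
For reference, since the question also asked for the conversion equation: as far as I know, the YCrCb conversion used above applies the full-range BT.601 formula given in the OpenCV documentation. A minimal numpy sketch of that formula (the function name and the clipping are my own, not part of the original answer), where cb is the up-sampled U plane and cr the up-sampled V plane:

import numpy as np

def ycbcr_to_rgb(y, cb, cr):
    # Full-range BT.601 equations, with delta = 128 for 8-bit data
    y  = y.astype(np.float32)
    cb = cb.astype(np.float32) - 128.0
    cr = cr.astype(np.float32) - 128.0
    r = y + 1.403 * cr
    g = y - 0.714 * cr - 0.344 * cb
    b = y + 1.773 * cb
    # stack into an h x w x 3 image and clamp back to the 8-bit range
    return np.clip(np.dstack((r, g, b)), 0, 255).astype(np.uint8)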
