Live video stream on server (PC) from images sent by robot through UDP


Problem description

Hmm. I found this, which seems promising:

http://sourceforge.net/projects/mjpg-streamer/


OK, I will try to explain what I am trying to do clearly and in detail.

I have a small humanoid robot with a camera and a wifi stick (this is the robot). The wifi stick's average transfer rate is 1769 KB/s. The robot has a 500 MHz CPU and 256 MB RAM, so it is not enough for any serious computation (moreover, there are already a couple of modules running on the robot for motion, vision, sonar, speech, etc.).

I have a PC from which I control the robot. I am trying to have the robot walk around the room while I watch a live video stream on the PC of what the robot sees.

What I already have working: the robot is walking as I want him to and taking images with the camera. The images are being sent over UDP to the PC, where I am receiving them (I have verified this by saving the incoming images to disk).

The camera returns images which are 640 x 480 px in the YUV422 colorspace. I am sending the images with lossy compression (JPEG) because I am trying to get the best possible FPS on the PC. I am doing the JPEG compression on the robot with the PIL library.

My questions:

  1. Could somebody please give me some ideas about how to convert the incoming JPEG images into a live video stream? I understand that I will need a video encoder for that. Which video encoder do you recommend? FFMPEG or something else? I am very new to video streaming, so I want to know what is best for this task. I'd prefer to use Python to write this, so I would prefer a video encoder or library that has a Python API. But I guess if the library has a good command-line API, it doesn't have to be in Python.

  2. What is the best FPS I could get out of this, given the 1769 KB/s average wifi transfer rate and the dimensions of the images? Should I use a different compression than JPEG? (A rough estimate follows this list.)

  3. I will be happy to see any code examples. Links to articles explaining how to do this would be fine, too.
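A rough back-of-the-envelope check for question 2 (an addition: only the 1769 KB/s rate comes from the question; the per-frame size is an assumed figure for a 640 x 480 JPEG at moderate quality):

  # Hypothetical numbers: JPEG frame size varies with scene content and quality
  bandwidth_kb_per_s = 1769                      # measured average wifi rate
  frame_size_kb = 25                             # assumed size of one frame
  max_fps = bandwidth_kb_per_s / frame_size_kb   # ~70 fps from bandwidth alone

If that assumption is anywhere near right, the wifi link allows far more than a typical camera frame rate, so the 500 MHz CPU doing the JPEG compression on the robot is the more likely bottleneck.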

Some code samples. Here is how I am sending JPEG images from the robot to the PC (shortened, simplified snippet). This runs on the robot:

  # lots of code here

  from socket import socket, AF_INET, SOCK_DGRAM
  from PIL import Image
  import StringIO

  UDPSock = socket(AF_INET, SOCK_DGRAM)

  while 1:
      # grab the current camera frame from the robot's camera proxy
      image = camProxy.getImageLocal(nameId)
      size = (image[0], image[1])   # width, height
      data = image[6]               # raw pixel buffer

      # wrap the raw frame in a PIL image and JPEG-compress it in memory
      im = Image.fromstring("YCbCr", size, data)
      s = StringIO.StringIO()
      im.save(s, "JPEG")

      UDPSock.sendto(s.getvalue(), addr)

      camProxy.releaseImage(nameId)

  UDPSock.close()

  # lots of code here
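One knob worth noting for question 2 (an addition, reusing im, s, and addr from the snippet above): PIL's JPEG writer takes an optional quality argument (roughly 1 to 95), which is the main trade-off between image quality and frame size, and a single UDP datagram can carry at most 65507 bytes of payload:

  im.save(s, "JPEG", quality=40)   # illustrative value; lower -> smaller frames
  payload = s.getvalue()
  if len(payload) <= 65507:        # max UDP payload: 65535 minus 28 header bytes
      UDPSock.sendto(payload, addr)
  # else: the frame does not fit in one datagram; lower the quality or split it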

Here is how I am receiving the images on the PC. This runs on the PC:

  # lots of code here

  from socket import socket, AF_INET, SOCK_DGRAM

  UDPSock = socket(AF_INET, SOCK_DGRAM)
  UDPSock.bind(addr)

  while 1:
      data, addr = UDPSock.recvfrom(buf)
      # here I need to create a stream from the data,
      # which contains a JPEG image

  UDPSock.close()

  # lots of code here
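One possible way to fill in that gap (a sketch of my own, not the author's code): a concatenation of JPEG frames is a valid MJPEG stream, so the received datagrams can be piped straight into ffplay, FFMPEG's player, assuming ffplay is installed and each datagram carries one complete JPEG frame:

  import subprocess
  from socket import socket, AF_INET, SOCK_DGRAM

  addr = ("0.0.0.0", 5000)   # hypothetical listen address and port
  buf = 65535                # large enough for any UDP datagram

  # ffplay reads an MJPEG stream from stdin ("-") and displays it live
  player = subprocess.Popen(["ffplay", "-f", "mjpeg", "-"],
                            stdin=subprocess.PIPE)

  UDPSock = socket(AF_INET, SOCK_DGRAM)
  UDPSock.bind(addr)

  while 1:
      data, _ = UDPSock.recvfrom(buf)
      player.stdin.write(data)   # concatenated JPEGs form an MJPEG stream

The same idea works for recording instead of viewing: swap ffplay for an ffmpeg command that reads -f mjpeg from stdin and encodes to a file or a network stream.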

Solution

Checking out your first question: although the solution here uses a non-streaming set of pictures, it might still help. The example uses pyMedia.

Something along the lines of what you want.

If you need to edit the binary stream:
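For example (my sketch, not from whatever was linked here): if several frames can end up glued together in one buffer, the JPEG start-of-image and end-of-image markers can be used to split the binary stream back into individual frames:

  def split_jpegs(buf):
      # Yield complete JPEG frames from a byte string by scanning for the
      # SOI (0xFFD8) and EOI (0xFFD9) markers. Simplification: assumes no
      # embedded thumbnails carrying their own markers.
      start = 0
      while True:
          soi = buf.find('\xff\xd8', start)
          eoi = buf.find('\xff\xd9', soi + 2)
          if soi < 0 or eoi < 0:
              return
          yield buf[soi:eoi + 2]
          start = eoi + 2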
