Using Python to parse and render Kinesis Video Streams and get an image representation of the input frame


Problem description


I have set up a pipeline in which I live-stream video to Kinesis Video Streams (KVS), which sends the frames to Amazon Rekognition for face recognition, which in turn sends the analysis results to a Kinesis Data Stream (KDS). Finally, KDS sends the results to a Lambda.

For a frame on which face recognition has been conducted, I get JSON in the following format: https://docs.aws.amazon.com/rekognition/latest/dg/streaming-video-kinesis-output-reference.html

My aim: using this JSON, I somehow want to get an image representation of the frame which was recorded by KVS.

What I have tried:

1. This JSON provides me with the fragment number.
2. I use this fragment number to call get_media_for_fragment_list.
3. The call returns a key called Payload in the response.
4. I have been trying to render this payload into an image, but I fail every time because I do not know how to make sense of the payload and decode it.

Following is the code snippet.

    def getFrameFromFragment(fragment):
        # data_endpoint_for_kvs is obtained beforehand via get_data_endpoint
        client = boto3.client('kinesis-video-archived-media',
                              endpoint_url=data_endpoint_for_kvs)
        response = client.get_media_for_fragment_list(
            StreamName='kvs1',
            Fragments=[fragment],
        )
        payload = response['Payload']
        print(payload.read())

How do I use this payload to get an image?

I know of parsers that exist in Java: https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/examples-renderer.html

However, I wanted to know of a solution in Python.

In case my question statement is wrong or doesn't make sense, feel free to ask me more about this issue.

Thanks for the help. :)

Solution

After receiving the payload using the following code,

kvs_stream = kvs_video_client.get_media(
    StreamARN="ARN",
    StartSelector={
        'StartSelectorType': 'FRAGMENT_NUMBER',
        'AfterFragmentNumber': decoded_json_from_stream['InputInformation']['KinesisVideo']['FragmentNumber'],
    },
)

you can use

 frame = kvs_stream['Payload'].read()

to get the raw fragment bytes from the payload. Now you can write those bytes to an AVI file and then use OpenCV to extract particular frames from that file.

with open('/tmp/stream.avi', 'wb') as f:
    f.write(frame)

cap = cv2.VideoCapture('/tmp/stream.avi')
# use cap.read() to pull individual frames for further processing

