MJPEG streaming and decoding


Problem Description


I want to receive JPEG images from an IP camera (over RTSP). For this, I tried cvCreateFileCapture_FFMPEG in OpenCV. But ffmpeg seems to have a problem with the MJPEG format of the stream (since it automatically tries to detect the stream info), and I end up with the following error

mjpeg: unsupported coding type

I then decided to use live555 for streaming. So far, I can successfully establish streaming and capture (non-decoded) images through openRTSP.

The question is how I can do this in my application, e.g., in OpenCV. How can I use openRTSP in OpenCV to get images and save them in JPEG format?

I have heard that the data from openRTSP can be sent to a buffer (or a named pipe) and then read into OpenCV's IplImage. But I don't know how to do this.

I would really appreciate any help/suggestions regarding this problem. I need an answer to either of the following questions:

  1. How can I disable ffmpeg's automatic stream information detection and specify my own format (mjpeg), or
  2. How can I use openRTSP in OpenCV?

Regards,

Solution

Is this an Axis IP camera? Either way, most IP cameras that provide an MPEG4 RTSP stream can be decoded in OpenCV using cvCreateFileCapture_FFMPEG. However, the ffmpeg decoder's MJPEG codec has a widely known unresolved issue. I am sure you received an error similar to

[ingenient @ 0x97d20c0]Could not find codec parameters (Video: mjpeg)
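
On the asker's first question specifically: forcing a demuxer does not fix the decoder issue above, but for completeness, here is a minimal sketch of disabling format auto-detection, assuming a recent libavformat and a camera that also exposes plain MJPEG over HTTP (for RTSP, the rtsp demuxer still has to run, so this does not apply there):

/* Sketch only, not part of the original answer: naming the demuxer
   explicitly makes libavformat skip its format-probing step.
   Modern API names (avformat_open_input) are assumed. */
#include <libavformat/avformat.h>

AVFormatContext *open_forced_mjpeg(const char *url)
{
    /* look up the mjpeg demuxer by name ... */
    const AVInputFormat *mjpeg = av_find_input_format("mjpeg");
    if (!mjpeg)
        return NULL;

    /* ... and pass it in, which bypasses auto-detection entirely */
    AVFormatContext *ctx = NULL;
    if (avformat_open_input(&ctx, url, mjpeg, NULL) < 0)
        return NULL;
    return ctx;
}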

Option 1: Using opencv, libcurl and libjpeg

To view an MJPEG stream in opencv, take a look at the following implementations

http://www.eecs.ucf.edu/~rpatrick/code/onelinksys.c or http://cse.unl.edu/~rpatrick/code/onelinksys.c
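
For reference, a heavily condensed sketch of what those implementations do, with OpenCV's cvDecodeImage standing in for raw libjpeg and the same placeholder URL style as the pipelines below (the linked onelinksys.c files are the complete versions): libcurl delivers the multipart HTTP body chunk by chunk, and each JPEG is carved out between its SOI (FF D8) and EOI (FF D9) markers and decoded in memory.

/* Sketch only: view a multipart MJPEG HTTP stream with libcurl + OpenCV.
   Compile with: gcc `pkg-config --cflags --libs opencv libcurl` mjpeg_view.c -o mjpeg_view */
#include <string.h>
#include <curl/curl.h>
#include "highgui.h"
#include "cv.h"

static unsigned char buf[1 << 20];   /* rolling receive buffer (1 MB, arbitrary) */
static size_t fill = 0;

static size_t on_data(void *ptr, size_t size, size_t nmemb, void *userdata)
{
    size_t n = size * nmemb;         /* libcurl chunks are at most 16 KB */
    (void)userdata;
    if (fill + n > sizeof(buf))
        fill = 0;                    /* overflow: drop data, resync on next frame */
    memcpy(buf + fill, ptr, n);
    fill += n;

    /* find one complete JPEG: FF D8 (SOI) ... FF D9 (EOI) */
    size_t soi = 0, eoi = 0, i;
    int have_soi = 0;
    for (i = 0; i + 1 < fill; i++) {
        if (!have_soi && buf[i] == 0xFF && buf[i + 1] == 0xD8) {
            soi = i;
            have_soi = 1;
        } else if (have_soi && buf[i] == 0xFF && buf[i + 1] == 0xD9) {
            eoi = i + 2;
            break;
        }
    }

    if (have_soi && eoi > soi) {
        /* wrap the JPEG bytes in a CvMat header and decode without touching disk */
        CvMat jpeg = cvMat(1, (int)(eoi - soi), CV_8UC1, buf + soi);
        IplImage *img = cvDecodeImage(&jpeg, CV_LOAD_IMAGE_COLOR);
        if (img) {
            cvShowImage("mjpeg", img);
            cvWaitKey(1);
            cvReleaseImage(&img);
        }
        memmove(buf, buf + eoi, fill - eoi);   /* keep the partial next frame */
        fill -= eoi;
    }
    return n;                        /* report everything as consumed */
}

int main(void)
{
    cvNamedWindow("mjpeg", CV_WINDOW_AUTOSIZE);
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;
    /* placeholder endpoint, same pattern as the gstreamer pipelines below */
    curl_easy_setopt(curl, CURLOPT_URL, "http://[ip]:[port]/[dir]/xxx.cgi");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_data);
    curl_easy_perform(curl);         /* blocks; on_data runs per received chunk */
    curl_easy_cleanup(curl);
    cvDestroyWindow("mjpeg");
    return 0;
}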

Option 2: Using gstreamer (no opencv)

I would recommend looking at gstreamer if your goal is just to view or save JPEG images

To view the MJPEG stream, one may execute a media pipeline string as follows

gst-launch -v souphttpsrc location="http://[ip]:[port]/[dir]/xxx.cgi" do-timestamp=true is_live=true ! multipartdemux ! jpegdec ! ffmpegcolorspace ! autovideosink

For RTSP

gst-launch -v rtspsrc location="rtsp://[user]:[pass]@[ip]:[port]/[dir]/xxx.amp" debug=1 ! rtpmp4vdepay ! mpeg4videoparse ! ffdec_mpeg4 ! ffmpegcolorspace ! autovideosink

To work with the C API, see

http://wiki.maemo.org/Documentation/Maemo_5_Developer_Guide/Using_Multimedia_Components/Camera_API_Usage

For a simple example, take a look at my other post on RTSP, which constructs a gstreamer C API media pipeline (the same as the gst-launch string, but implemented with the C API)

Playing RTSP with python-gstreamer
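
In the same spirit, a minimal C sketch (assuming the GStreamer 0.10 series these element names come from): gst_parse_launch accepts the exact pipeline string from the gst-launch examples above, so the MJPEG viewer can be embedded in a program like this.

/* Sketch only: run the MJPEG viewing pipeline from above via the C API.
   Compile with: gcc `pkg-config --cflags --libs gstreamer-0.10` gst_mjpeg.c -o gst_mjpeg */
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    /* same string as the gst-launch example; URL parts are placeholders */
    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "souphttpsrc location=http://[ip]:[port]/[dir]/xxx.cgi "
        "do-timestamp=true is-live=true ! multipartdemux ! jpegdec ! "
        "ffmpegcolorspace ! autovideosink",
        &err);
    if (!pipeline) {
        g_printerr("parse error: %s\n", err ? err->message : "unknown");
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* block until the stream errors out or ends */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
    if (msg)
        gst_message_unref(msg);
    gst_object_unref(bus);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}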

To save the MJPEG stream as multiple images, use the following pipeline (let us put in a vertical-flip BIN and connect its PADS to the previous and next BINS to make it fancier)

gst-launch souphttpsrc location="http://[ip]:[port]/[dir]/xxx.cgi" do-timestamp=true is_live=true ! multipartdemux ! jpegdec ! videoflip method=vertical-flip ! jpegenc ! multifilesink location=image-out-%05d.jpg

It may also be worthwhile to have a look at gst-opencv

UPDATE:

Option 3: Using gstreamer, a named pipe and opencv

On Linux, one may take the MJPEG stream, convert it to MPEG4, and feed it to a named pipe, then read the data from the named pipe in opencv

Step 1. Create Named Pipe

mkfifo stream_fifo

Step 2. Create opencvvideo_test.c

// compile with: gcc -ggdb `pkg-config --cflags --libs opencv` opencvvideo_test.c -o opencvvideo_test
#include <stdio.h>
#include <assert.h>
#include "highgui.h"
#include "cv.h"


int main( int argc, char** argv){

    IplImage  *frame;
    int       key = 0;   /* initialised so the first loop test is defined */

    /* supply the AVI file to play */
    assert( argc == 2 );

    /* load the AVI file */
    CvCapture *capture = cvCreateFileCapture(argv[1]) ;//cvCaptureFromAVI( argv[1] );

    /* always check */
    if( !capture ) return 1;    

    /* get fps, needed to set the delay */
    int fps = ( int )cvGetCaptureProperty( capture, CV_CAP_PROP_FPS );
    if( fps <= 0 ) fps = 30;   /* a FIFO reports no fps; avoid division by zero below */

    int frameH    = (int) cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT);
    int frameW    = (int) cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH);

    /* display video */
    cvNamedWindow( "video", CV_WINDOW_AUTOSIZE );

    while( key != 'q' ) {

    double t1=(double)cvGetTickCount();
    /* get a frame */
    frame = cvQueryFrame( capture );
    double t2=(double)cvGetTickCount();
    printf("time: %gms  fps: %.2g\n",(t2-t1)/(cvGetTickFrequency()*1000.), 1000./((t2-t1)/(cvGetTickFrequency()*1000.)));

    /* always check */
    if( !frame ) break;

    /* display frame */
    cvShowImage( "video", frame );

    /* quit if user press 'q' */
    key = cvWaitKey( 1000 / fps );
    }

    /* free memory */
    cvReleaseCapture( &capture );
    cvDestroyWindow( "video" );

    return 0;
}

Step 3. Prepare To Convert From MJPEG to MPEG4 using gstreamer (the rate of incoming frames is critical)

gst-launch -v souphttpsrc location="http://<ip>/cgi_bin/<mjpeg>.cgi" do-timestamp=true is_live=true ! multipartdemux ! jpegdec ! queue ! videoscale ! 'video/x-raw-yuv, width=640, height=480' ! queue ! videorate ! 'video/x-raw-yuv,framerate=30/1' ! queue ! ffmpegcolorspace ! 'video/x-raw-yuv,format=(fourcc)I420' ! ffenc_mpeg4 ! queue ! filesink location=stream_fifo

Step 4. Display Stream in OpenCV

  ./opencvvideo_test stream_fifo
