How to publish a self-made stream with FFmpeg and C++ to an RTMP server?


Question



Have a nice day, people!

I am writing an application for Windows that captures the screen and sends the stream to a Wowza server over RTMP (for broadcasting). My application uses FFmpeg and Qt. I capture the screen with WinAPI, convert the buffer to YUV444 (because it is the simplest) and encode the frame as described in decoding_encoding.c (from the FFmpeg examples):

#include <stdio.h>
#include <stdlib.h>
#include <libavcodec/avcodec.h>
#include <libavutil/imgutils.h>
#include <libavutil/opt.h>

AVCodec *codec;
AVCodecContext *c;
AVFrame *frame;
AVPacket pkt;
int got_output, ret;

///////////////////////////
// Encoder initialization
///////////////////////////
avcodec_register_all();
codec = avcodec_find_encoder(AV_CODEC_ID_H264);
c = avcodec_alloc_context3(codec);
c->width = scr_width;
c->height = scr_height;
c->bit_rate = 400000;
int base_num = 1;
int base_den = 1; // for one frame per second
c->time_base = (AVRational){base_num, base_den};
c->gop_size = 10;
c->max_b_frames = 1;
c->pix_fmt = AV_PIX_FMT_YUV444P;
av_opt_set(c->priv_data, "preset", "slow", 0);
avcodec_open2(c, codec, NULL); // the encoder must be opened before use

frame = avcodec_alloc_frame();
frame->format = c->pix_fmt;
frame->width  = c->width;
frame->height = c->height;
// allocate the picture buffers that RGBtoYUV() fills below
av_image_alloc(frame->data, frame->linesize, c->width, c->height, c->pix_fmt, 32);

for (int counter = 0; counter < 10; counter++)
{
///////////////////////////
// Capture the screen
///////////////////////////
    GetCapScr(shotbuf, scr_width, scr_height); // result: shotbuf is filled with screen data from the HBITMAP
///////////////////////////
// Convert the buffer to YUV444 (standard formula)
// It's a hand-written function because of problems preparing the HBITMAP buffer for swscale
///////////////////////////
    RGBtoYUV(shotbuf, frame->linesize, frame->data, scr_width, scr_height); // result lands in frame->data
///////////////////////////
// Encode the screenshot
///////////////////////////
    av_init_packet(&pkt);
    pkt.data = NULL;    // packet data will be allocated by the encoder
    pkt.size = 0;
    frame->pts = counter;
    avcodec_encode_video2(c, &pkt, frame, &got_output);
    if (got_output)
    {
        // I think that sending the packet over RTMP must happen here!
        av_free_packet(&pkt);
    }
}

// Get the delayed frames
for (int got_output = 1, i = 0; got_output; i++)
{
    ret = avcodec_encode_video2(c, &pkt, NULL, &got_output);
    if (ret < 0)
    {
        fprintf(stderr, "Error encoding frame\n");
        exit(1);
    }
    if (got_output)
    {
        // I think that sending the packet over RTMP must happen here!
        av_free_packet(&pkt);
    }
}

///////////////////////////
// Deinitialize the encoder
///////////////////////////
avcodec_close(c);
av_free(c);
av_freep(&frame->data[0]);
avcodec_free_frame(&frame);
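
The RGBtoYUV helper itself isn't shown above. As a point of reference, here is a hypothetical sketch of what such a hand-written conversion could look like, assuming the capture buffer holds top-down 32-bit BGRA pixels (the usual layout for a Windows DIB section) and using full-range BT.601 coefficients; a bottom-up GetDIBits capture would need its row order flipped:

#include <stdint.h>

// Hypothetical sketch of the question's RGBtoYUV helper.
// Assumptions: 'bgra' is a top-down 32-bit BGRA buffer of width*height
// pixels; 'data'/'linesize' belong to an allocated YUV444P frame.
static uint8_t clamp_u8(int v) { return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v; }

void RGBtoYUV(const uint8_t *bgra, int linesize[], uint8_t *data[],
              int width, int height)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            const uint8_t *p = bgra + 4 * (y * width + x);
            int b = p[0], g = p[1], r = p[2];
            // Full-range BT.601, fixed point (coefficients scaled by 256)
            data[0][y * linesize[0] + x] = clamp_u8((  77 * r + 150 * g +  29 * b) >> 8);
            data[1][y * linesize[1] + x] = clamp_u8(((-43 * r -  85 * g + 128 * b) >> 8) + 128);
            data[2][y * linesize[2] + x] = clamp_u8((( 128 * r - 107 * g -  21 * b) >> 8) + 128);
        }
    }
}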

I need to send the video stream generated by this code to an RTMP server. In other words, I need a C/C++ analogue of this command:

ffmpeg -re -i "sample.h264" -f flv rtmp://sample.url.com/screen/test_stream

That command works, but I don't want to save the stream to a file; I want to use the FFmpeg libraries inside my own application to encode the screen capture in real time and send the encoded frames to the RTMP server. Please give me a small example of how to initialize AVFormatContext properly and send my encoded video AVPackets to the server.

Thanks.

Solution

My problem can be solved by using an example from the FFmpeg sources: the file muxing.c, located in the folder ffmpeg\doc\examples. It contains all the source code needed to write a sample stream to an RTMP server or to a file. I only had to understand those sources and substitute my own stream data for the sample stream. There may be unexpected problems, but in general there is a solution.
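
For reference, here is a minimal sketch of the relevant steps from muxing.c, adapted to the same old FFmpeg API used in the question; the URL is the placeholder from the question, error checking is omitted, and the exact calls differ between FFmpeg versions:

#include <libavformat/avformat.h>

// Set up the RTMP output (once, before encoding).
AVFormatContext *ofmt_ctx = NULL;
av_register_all();
avformat_network_init();

// RTMP carries an FLV container; the URL takes the place of a file name.
avformat_alloc_output_context2(&ofmt_ctx, NULL, "flv",
                               "rtmp://sample.url.com/screen/test_stream");

// FLV needs global headers; set this on 'c' before avcodec_open2().
if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
    c->flags |= CODEC_FLAG_GLOBAL_HEADER;

// Add a video stream and copy the encoder settings into it.
AVStream *st = avformat_new_stream(ofmt_ctx, codec);
avcodec_copy_context(st->codec, c);

// Open the network connection and write the FLV header.
avio_open(&ofmt_ctx->pb, ofmt_ctx->filename, AVIO_FLAG_WRITE);
avformat_write_header(ofmt_ctx, NULL);

// In the encode loop, where the question says "sending packet by rtmp
// must be here": rescale the timestamps and hand the packet to the muxer
// (av_interleaved_write_frame() takes ownership, so no av_free_packet()).
pkt.stream_index = st->index;
pkt.pts = av_rescale_q(pkt.pts, c->time_base, st->time_base);
pkt.dts = av_rescale_q(pkt.dts, c->time_base, st->time_base);
av_interleaved_write_frame(ofmt_ctx, &pkt);

// After flushing the delayed frames:
av_write_trailer(ofmt_ctx);
avio_close(ofmt_ctx->pb);
avformat_free_context(ofmt_ctx);

Note that the -re flag in the ffmpeg command above only paces reading of a pre-encoded file at its native frame rate; a live capture loop paces itself, so nothing in the code corresponds to it.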
