How to write a Live555 FramedSource to allow me to stream H.264 live


Problem description


I've been trying to write a class that derives from FramedSource in Live555 that will allow me to stream live data from my D3D9 application to an MP4 or similar.


What I do each frame is grab the backbuffer into system memory as a texture, then convert it from RGB -> YUV420P, then encode it using x264, then ideally pass the NAL packets on to Live555. I made a class called H264FramedSource that derived from FramedSource basically by copying the DeviceSource file. Instead of the input being an input file, I've made it a NAL packet which I update each frame.


I'm quite new to codecs and streaming, so I could be doing everything completely wrong. In each doGetNextFrame() should I be grabbing the NAL packet and doing something like

memcpy(fTo, nal->p_payload, nal->i_payload)


I assume that the payload is my frame data in bytes? If anybody has an example of a class they derived from FramedSource that might at least be close to what I'm trying to do, I would love to see it. This is all new to me and a little tricky to figure out; Live555's documentation is pretty much the code itself, which doesn't exactly make it easy.
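In FramedSource terms, the copy in doGetNextFrame() must honour fMaxSize, not just the NAL unit's i_payload. A minimal sketch of that delivery logic (deliverNal is a hypothetical free function written for illustration; the fTo/fMaxSize/fFrameSize/fNumTruncatedBytes names follow the FramedSource member convention):

```cpp
#include <cstdint>
#include <cstring>

// Sketch of the delivery step inside doGetNextFrame(). The names fTo,
// fMaxSize, fFrameSize and fNumTruncatedBytes mirror the FramedSource
// members; here they are plain parameters so the logic can be shown in
// isolation.
void deliverNal(const uint8_t* payload, unsigned payloadSize,
                uint8_t* fTo, unsigned fMaxSize,
                unsigned& fFrameSize, unsigned& fNumTruncatedBytes)
{
    if (payloadSize > fMaxSize) {
        fFrameSize = fMaxSize;                       // truncate to the sink's buffer
        fNumTruncatedBytes = payloadSize - fMaxSize; // report what was lost
    } else {
        fFrameSize = payloadSize;
        fNumTruncatedBytes = 0;
    }
    // Copy only fFrameSize bytes -- copying i_payload bytes would overrun
    // fTo whenever the NAL unit is larger than fMaxSize.
    std::memcpy(fTo, payload, fFrameSize);
}
```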

Recommended answer


Ok, I finally got some time to spend on this and got it working! I'm sure there are others who will be begging to know how to do it so here it is.


You will need your own FramedSource to take each frame, encode it, and prepare it for streaming; I will provide some of the source code for this below.


Essentially, throw your FramedSource into the H264VideoStreamDiscreteFramer, then throw that into the H264RTPSink. Something like this:

scheduler = BasicTaskScheduler::createNew();
env = BasicUsageEnvironment::createNew(*scheduler);   

framedSource = H264FramedSource::createNew(*env, 0,0);

h264VideoStreamDiscreteFramer =
    H264VideoStreamDiscreteFramer::createNew(*env, framedSource);

// initialise the RTP Sink stuff here, look at 
// testH264VideoStreamer.cpp to find out how

videoSink->startPlaying(*h264VideoStreamDiscreteFramer, NULL, videoSink);

env->taskScheduler().doEventLoop();
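The RTP sink setup referenced in the comment above can be sketched along the lines of testH264VideoStreamer.cpp from the Live555 distribution. The port numbers and bandwidth estimate below are placeholders, and the fragment assumes the liveMedia/groupsock headers and a valid `env`, so it is a sketch rather than a drop-in implementation:

```cpp
// Fragment modelled on testH264VideoStreamer.cpp; requires the liveMedia,
// groupsock and UsageEnvironment headers/libraries to compile.
struct in_addr destinationAddress;
destinationAddress.s_addr = chooseRandomIPv4SSMAddress(*env);

const Port rtpPort(18888);   // placeholder port numbers
const Port rtcpPort(18889);
const unsigned char ttl = 255;

Groupsock rtpGroupsock(*env, destinationAddress, rtpPort, ttl);
rtpGroupsock.multicastSendOnly();
Groupsock rtcpGroupsock(*env, destinationAddress, rtcpPort, ttl);
rtcpGroupsock.multicastSendOnly();

// 96 = a dynamic RTP payload type
RTPSink* videoSink = H264VideoRTPSink::createNew(*env, &rtpGroupsock, 96);

// An RTCP instance is needed for correct timing and keep-alives.
const unsigned estimatedSessionBandwidth = 500; // in kbps; placeholder
const unsigned maxCNAMElen = 100;
unsigned char CNAME[maxCNAMElen + 1];
gethostname((char*)CNAME, maxCNAMElen);
CNAME[maxCNAMElen] = '\0';
RTCPInstance::createNew(*env, &rtcpGroupsock, estimatedSessionBandwidth,
                        CNAME, videoSink, NULL, True /*is SSM source*/);
```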


Now in your main render loop, hand the backbuffer you've saved to system memory over to your FramedSource so it can be encoded, etc. For more info on how to set up the encoding, check out this answer: How does one encode a series of images into H264 using the x264 C API?


My implementation is very much in a hacky state and is yet to be optimised at all; my D3D application runs at around 15 fps due to the encoding, ouch, so I will have to look into that. But for all intents and purposes this StackOverflow question is answered, because I was mostly after how to stream it. I hope this helps other people.


As for my FramedSource, it looks a little something like this:

concurrent_queue<x264_nal_t> m_queue;
SwsContext* convertCtx;
x264_param_t param;
x264_t* encoder;
x264_picture_t pic_in, pic_out;


EventTriggerId H264FramedSource::eventTriggerId = 0;
unsigned H264FramedSource::FrameSize = 0;
unsigned H264FramedSource::referenceCount = 0;

int W = 720;
int H = 960;

H264FramedSource* H264FramedSource::createNew(UsageEnvironment& env,
                                              unsigned preferredFrameSize, 
                                              unsigned playTimePerFrame) 
{
        return new H264FramedSource(env, preferredFrameSize, playTimePerFrame);
}

H264FramedSource::H264FramedSource(UsageEnvironment& env,
                                   unsigned preferredFrameSize, 
                                   unsigned playTimePerFrame)
    : FramedSource(env),
    fPreferredFrameSize(preferredFrameSize),
    fPlayTimePerFrame(playTimePerFrame),
    fLastPlayTime(0),
    fCurIndex(0)
{
        if (referenceCount == 0) 
        {
            // any one-time initialisation shared by all instances goes here
        }
        ++referenceCount;

        x264_param_default_preset(&param, "veryfast", "zerolatency");
        param.i_threads = 1;
        param.i_width = W;
        param.i_height = H;
        param.i_fps_num = 60;
        param.i_fps_den = 1;
        // Intra refresh:
        param.i_keyint_max = 60;
        param.b_intra_refresh = 1;
        //Rate control:
        param.rc.i_rc_method = X264_RC_CRF;
        param.rc.f_rf_constant = 25;
        param.rc.f_rf_constant_max = 35;
        param.i_sps_id = 7;
        //For streaming:
        param.b_repeat_headers = 1;
        param.b_annexb = 1;
        x264_param_apply_profile(&param, "baseline");


        encoder = x264_encoder_open(&param);
        pic_in.i_type            = X264_TYPE_AUTO;   
        pic_in.i_qpplus1         = 0;
        pic_in.img.i_csp         = X264_CSP_I420;   
        pic_in.img.i_plane       = 3;


        x264_picture_alloc(&pic_in, X264_CSP_I420, W, H);

        // The destination size must match the encoder's frame size.
        // (Newer FFmpeg versions name these AV_PIX_FMT_RGB24 / AV_PIX_FMT_YUV420P.)
        convertCtx = sws_getContext(W, H, PIX_FMT_RGB24,
                                    W, H, PIX_FMT_YUV420P,
                                    SWS_FAST_BILINEAR, NULL, NULL, NULL);


        if (eventTriggerId == 0) 
        {
            eventTriggerId = envir().taskScheduler().createEventTrigger(deliverFrame0);
        }
}

H264FramedSource::~H264FramedSource() 
{
    --referenceCount;
    if (referenceCount == 0) 
    {
        // Reclaim our 'event trigger'
        envir().taskScheduler().deleteEventTrigger(eventTriggerId);
        eventTriggerId = 0;
    }
}

void H264FramedSource::AddToBuffer(uint8_t* buf, int surfaceSizeInBytes)
{
    uint8_t* surfaceData = (new uint8_t[surfaceSizeInBytes]);

    memcpy(surfaceData, buf, surfaceSizeInBytes);

    int srcstride = W*3;
    sws_scale(convertCtx, &surfaceData, &srcstride,0, H, pic_in.img.plane, pic_in.img.i_stride);
    x264_nal_t* nals = NULL;
    int i_nals = 0;
    int frame_size = -1;


    frame_size = x264_encoder_encode(encoder, &nals, &i_nals, &pic_in, &pic_out);


    if (frame_size >= 0)
    {
        static bool alreadydone = false;
        if(!alreadydone)
        {

            x264_encoder_headers(encoder, &nals, &i_nals);
            alreadydone = true;
        }
        for(int i = 0; i < i_nals; ++i)
        {
            m_queue.push(nals[i]);
        }   
    }
    delete [] surfaceData;
    surfaceData = NULL;

    envir().taskScheduler().triggerEvent(eventTriggerId, this);
}

void H264FramedSource::doGetNextFrame() 
{
    deliverFrame();
}

void H264FramedSource::deliverFrame0(void* clientData) 
{
    ((H264FramedSource*)clientData)->deliverFrame();
}

void H264FramedSource::deliverFrame() 
{
    x264_nal_t nalToDeliver;

    if (fPlayTimePerFrame > 0 && fPreferredFrameSize > 0) {
        if (fPresentationTime.tv_sec == 0 && fPresentationTime.tv_usec == 0) {
            // This is the first frame, so use the current time:
            gettimeofday(&fPresentationTime, NULL);
        } else {
            // Increment by the play time of the previous data:
            unsigned uSeconds   = fPresentationTime.tv_usec + fLastPlayTime;
            fPresentationTime.tv_sec += uSeconds/1000000;
            fPresentationTime.tv_usec = uSeconds%1000000;
        }

        // Remember the play time of this data:
        fLastPlayTime = (fPlayTimePerFrame*fFrameSize)/fPreferredFrameSize;
        fDurationInMicroseconds = fLastPlayTime;
    } else {
        // We don't know a specific play time duration for this data,
        // so just record the current time as being the 'presentation time':
        gettimeofday(&fPresentationTime, NULL);
    }

    if(!m_queue.empty())
    {
        m_queue.wait_and_pop(nalToDeliver);

        uint8_t* newFrameDataStart = (uint8_t*)nalToDeliver.p_payload;
        unsigned newFrameSize = nalToDeliver.i_payload;

        // Deliver the data here:
        if (newFrameSize > fMaxSize) {
            fFrameSize = fMaxSize;
            fNumTruncatedBytes = newFrameSize - fMaxSize;
        }
        else {
            fFrameSize = newFrameSize;
            fNumTruncatedBytes = 0;
        }

        memcpy(fTo, newFrameDataStart, fFrameSize); // copy only what fits in fTo

        FramedSource::afterGetting(this);
    }
}


Oh, and for those who want to know what my concurrent queue is, here it is, and it works brilliantly: http://www.justsoftwaresolutions.co.uk/threading/implementing-a-thread-safe-queue-using-condition-variables.html
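For reference, a minimal thread-safe queue in the spirit of the one linked above (condition-variable based). This is a sketch rather than the exact class from that article, but it provides the three operations the FramedSource uses: push(), empty(), and wait_and_pop():

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// Minimal condition-variable-based thread-safe FIFO queue.
template <typename T>
class concurrent_queue {
public:
    void push(const T& value) {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_data.push(value);
        }
        m_cond.notify_one();  // wake one waiting consumer
    }

    bool empty() const {
        std::lock_guard<std::mutex> lock(m_mutex);
        return m_data.empty();
    }

    // Blocks until an element is available, then pops it into `out`.
    void wait_and_pop(T& out) {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cond.wait(lock, [this] { return !m_data.empty(); });
        out = m_data.front();
        m_data.pop();
    }

private:
    mutable std::mutex m_mutex;
    std::condition_variable m_cond;
    std::queue<T> m_data;
};
```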

Enjoy and good luck!
