Decoding h264 frames from RTP stream


Question

I am using the live555 and ffmpeg libraries to receive and decode an RTP H264 stream from a server. The video stream was encoded by ffmpeg, using the Baseline profile and

x264_param_default_preset(m_params, "veryfast", "zerolatency")

I read this topic and added SPS and PPS data to every frame that I receive from the network:

void ClientSink::NewFrameHandler(unsigned frameSize, unsigned numTruncatedBytes,
    timeval presentationTime, unsigned durationInMicroseconds)
{
    ...
    EncodedFrame tmp;
    tmp.m_frame = std::vector<unsigned char>(m_tempBuffer.data(), m_tempBuffer.data() + frameSize);
    tmp.m_duration = durationInMicroseconds;
    tmp.m_pts = presentationTime;

    // Add SPS and PPS data to the frame; TODO: some devices may already send SPS and PPS data in the frame;
    tmp.m_frame.insert(tmp.m_frame.begin(), m_spsPpsData.cbegin(), m_spsPpsData.cend());

    emit newEncodedFrame( SharedEncodedFrame(tmp) );
    m_frameCounter++;

    this->continuePlaying();
}
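
For reference, m_spsPpsData can be built from the SDP "sprop-parameter-sets" attribute, which live555 parses with parseSPropParameterSets() from H264VideoRTPSource.hh. A minimal sketch, assuming Annex-B framing (4-byte start codes); the initSpsPps() member itself is a hypothetical helper:

#include <H264VideoRTPSource.hh> // for parseSPropParameterSets()

void ClientSink::initSpsPps(MediaSubsession& subsession)
{
    // live555 decodes the base64 "sprop-parameter-sets" SDP attribute
    // into the raw SPS/PPS NAL units.
    unsigned numRecords = 0;
    SPropRecord* records = parseSPropParameterSets(
        subsession.fmtp_spropparametersets(), numRecords);

    static const unsigned char startCode[4] = { 0x00, 0x00, 0x00, 0x01 };
    m_spsPpsData.clear();
    for (unsigned i = 0; i < numRecords; ++i)
    {
        // Prefix each parameter set with an Annex-B start code so the
        // decoder can locate the NAL unit boundaries.
        m_spsPpsData.insert(m_spsPpsData.end(), startCode, startCode + 4);
        m_spsPpsData.insert(m_spsPpsData.end(),
                            records[i].sPropBytes,
                            records[i].sPropBytes + records[i].sPropLength);
    }
    delete[] records; // caller owns the returned array
}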

These frames are then received by the decoder:

bool H264Decoder::decodeFrame(SharedEncodedFrame orig_frame)
{
    ...
    while (m_packet.size > 0)
    {
        int got_picture;
        int len = avcodec_decode_video2(m_decoderContext, m_picture, &got_picture, &m_packet);
        if (len < 0)
        {
            emit criticalError(QString("Decoding error"));
            return false;
        }
        if (got_picture)
        {
            std::vector<unsigned char> result;
            this->storePicture(result);

            if (m_picture->format == AVPixelFormat::AV_PIX_FMT_YUV420P)
            {
                //QImage img = QImage(result.data(), m_picture->width, m_picture->height, QImage::Format_RGB888);
                Frame_t result_rgb;
                if (!convert_yuv420p_to_rgb32(result, m_picture->width, m_picture->height, result_rgb))
                {
                    emit criticalError(QString("Failed to convert YUV420p image into rgb32; can't create QImage!"));
                    return false;
                }
                // The copy is needed because QImage shares the buffer it is handed;
                // using the QImage after result_rgb has been destroyed would crash.
                unsigned char* copy_img = new unsigned char[result_rgb.size()];
                std::copy(result_rgb.cbegin(), result_rgb.cend(), copy_img);
                QImage img = QImage(copy_img, m_picture->width, m_picture->height, QImage::Format_RGB32,
                    [](void* array)
                    {
                        delete[] array;
                    }, copy_img);
                img.save(QString("123.bmp"));
                emit newDecodedFrame(img);
            }
        }
        ...
    }
    return true;
}
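
For reference, m_decoderContext, m_picture and m_packet are the usual avcodec objects whose setup is elided above. A minimal sketch of a typical H.264 decoder initialization from the avcodec_decode_video2 era; the init() method and its error handling are assumptions, only the member names are taken from the snippet:

bool H264Decoder::init()
{
    avcodec_register_all(); // required in FFmpeg versions of this era

    AVCodec* codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    if (!codec)
        return false;

    m_decoderContext = avcodec_alloc_context3(codec);
    if (!m_decoderContext || avcodec_open2(m_decoderContext, codec, NULL) < 0)
        return false;

    m_picture = av_frame_alloc(); // holds the decoded picture
    av_init_packet(&m_packet);    // holds one encoded frame at a time
    return m_picture != NULL;
}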

avcodec_decode_video2 decodes the frames without any error message, but the decoded frames are invalid after converting them from yuv420p to rgb32. An example of the image is available at this link.

Do you have any ideas about what I am doing wrong?

Solution

I suspect the error is in the convert_yuv420p_to_rgb32() code. Try this:

static SwsContext* m_swsCtx = NULL;
QImage frame = QImage(m_picture->width, m_picture->height,
                      QImage::Format_RGB32);
m_swsCtx = sws_getCachedContext(m_swsCtx, m_picture->width,
                                m_picture->height, AV_PIX_FMT_YUV420P,
                                m_picture->width, m_picture->height,
                                AV_PIX_FMT_RGB32, SWS_BICUBIC,
                                NULL, NULL, NULL);
// data and linesize are already arrays on the AVFrame, so they are
// passed as-is; the QImage buffer receives the RGB32 output.
uint8_t* dstSlice[] = { frame.bits() };
int dstStride = frame.width() * 4;
sws_scale(m_swsCtx, m_picture->data, m_picture->linesize,
          0, m_picture->height, dstSlice, &dstStride);

You will need to include and link swscale if you have not done so already.
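
With a qmake-based Qt project that might look like the following; the exact library list is an assumption based on a typical FFmpeg installation:

// In the source files (FFmpeg headers are C, so wrap them for C++):
extern "C" {
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
}

// In the qmake .pro file:
//     LIBS += -lswscale -lavcodec -lavutil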

Note: you don't need SPS/PPS on every frame (on keyframes is good enough), but it doesn't hurt either.
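
Relatedly, the TODO in the question (skip the prepend when SPS/PPS already arrive in-band) can be handled with a cheap check on the first NAL unit type; type 7 is an SPS. A sketch, assuming Annex-B framing with a 4-byte start code (startsWithSps is a hypothetical helper):

// Returns true if the Annex-B buffer already starts with an SPS NAL unit.
// Assumes a 4-byte start code (00 00 00 01); adjust for 3-byte start codes.
static bool startsWithSps(const std::vector<unsigned char>& frame)
{
    if (frame.size() < 5)
        return false;
    // nal_unit_type is the low 5 bits of the byte after the start code;
    // type 7 is an SPS, type 8 is a PPS.
    return (frame[4] & 0x1F) == 7;
}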
