Integrating OpenCV with larger programs


Problem description

Can anyone recommend a how-to guide or provide a brief overview of what's involved with integrating OpenCV with larger GUI-based programs? What are the popular ways to do it?

Particularly, processing video with OpenCV while doing video capture/preview without using HighGUI seems especially arcane. I hope someone can demystify this.

My particular configuration is with either Juce or Qt depending on what can be done. The cross platform thing is not critical -- if there is an awesome way of doing this in Windows, I might be convinced. The availability of community support is important.

I have heard that HighGUI is entirely for testing and unsuitable for real applications. Someone recommended the VideoInput library, but it is experimental.

  • Use Qt (because Qt is great and has a big community).
  • Open a new thread to run cv::VideoCapture in a loop and emit a signal after each frame is captured. Use Qt's msleep mechanism, not OpenCV's. So we are still using OpenCV's highgui for capture.
  • Convert the cv::Mat to a QImage (a copy-safe helper is sketched after the snippet below):

QImage qtFrame(cvFrame.data, cvFrame.size().width, cvFrame.size().height, cvFrame.step, QImage::Format_RGB888);

qtFrame = qtFrame.rgbSwapped();
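
A note on the conversion above: the QImage constructor wraps cvFrame's buffer without copying it, and it is the rgbSwapped() call that happens to produce a detached copy. If you hand frames to another thread, an explicit copy is safer. Below is a minimal sketch of such a helper; matToQImage is a hypothetical name and not part of the question or answer, and it does the channel swap on the OpenCV side instead of with rgbSwapped():

/// Hypothetical helper -- not from the original post. Converts a BGR cv::Mat
/// (as produced by cv::VideoCapture) into a QImage that owns its own pixels,
/// so it can safely outlive the Mat.
#include <opencv2/imgproc/imgproc.hpp>
#include <QImage>

static QImage matToQImage(const cv::Mat& bgrFrame)
{
    cv::Mat rgb;
    cv::cvtColor(bgrFrame, rgb, CV_BGR2RGB);                // OpenCV frames are BGR
    QImage img(rgb.data, rgb.cols, rgb.rows,
               static_cast<int>(rgb.step), QImage::Format_RGB888);
    return img.copy();                                      // deep copy: detach from rgb's buffer
}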

Optional: render with a QGLWidget. Convert the QImage to GL format with Qt's built-in method:

m_GLFrame = QGLWidget::convertToGLFormat(frame);

this->updateGL();

Recommended answer

Here is how I am doing it with Qt. You are welcome to use whatever may be useful to you :)

/// OpenCV_GLWidget.h
#ifndef OPENCV_GLWIDGET_H_
#define OPENCV_GLWIDGET_H_

#include <qgl.h>
#include <QImage>

class OpenCV_GLWidget: public QGLWidget {
public:
    OpenCV_GLWidget(QWidget * parent = 0, const QGLWidget * shareWidget = 0, Qt::WindowFlags f = 0);
    virtual ~OpenCV_GLWidget();

    void renderImage(const QImage& frame);
protected:
    virtual void paintGL();
    virtual void resizeGL(int width, int height);

private:
    QImage m_GLFrame;
};

#endif /* OPENCV_GLWIDGET_H_ */

/// OpenCV_GLWidget.cpp
#include "OpenCV_GLWidget.h"

OpenCV_GLWidget::OpenCV_GLWidget(QWidget* parent, const QGLWidget* shareWidget, Qt::WindowFlags f) :
QGLWidget(parent, shareWidget, f)
{
    // TODO Auto-generated constructor stub

}

OpenCV_GLWidget::~OpenCV_GLWidget() {
    // TODO Auto-generated destructor stub
}

void OpenCV_GLWidget::renderImage(const QImage& frame)
{
    m_GLFrame = QGLWidget::convertToGLFormat(frame);
    this->updateGL();
}

void OpenCV_GLWidget::resizeGL(int width, int height)
{
    // Setup our viewport to be the entire size of the window
    glViewport(0, 0, width, height);

    // Change to the projection matrix and set orthogonal projection
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, width, height, 0, 0, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

void OpenCV_GLWidget::paintGL() {
    glClearColor(0.0, 0.0, 0.0, 1.0);   // set the clear colour before clearing
    glClear(GL_COLOR_BUFFER_BIT);
    if (!m_GLFrame.isNull()) {
        m_GLFrame = m_GLFrame.scaled(this->size(), Qt::IgnoreAspectRatio, Qt::SmoothTransformation);

        glEnable(GL_TEXTURE_2D);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, m_GLFrame.width(), m_GLFrame.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, m_GLFrame.bits() );
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(0, m_GLFrame.height());
        glTexCoord2f(0, 1); glVertex2f(0, 0);
        glTexCoord2f(1, 1); glVertex2f(m_GLFrame.width(), 0);
        glTexCoord2f(1, 0); glVertex2f(m_GLFrame.width(), m_GLFrame.height());
        glEnd();
        glDisable(GL_TEXTURE_2D);

        glFlush();
    }
}

This class handles the rendering of the image onto a promoted QWidget. Next, I created a thread to feed the widget. (I cheated using the Qt signal-slot architecture here because it was easy...may not be the best performer in the book, but it should get you started).
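
The answer does not show the declaration behind run(); the sketch below is one way it might look. The member names (m_AbortCapture, m_Threshold, m_ThresholdLock) and the sendImage signal are inferred from the run() body that follows, not copied from the original post.

/// VideoThread.h -- a sketch, not the original author's header.
#ifndef VIDEOTHREAD_H_
#define VIDEOTHREAD_H_

#include <QThread>
#include <QMutex>
#include <QImage>

class VideoThread : public QThread {
    Q_OBJECT
public:
    VideoThread(QObject* parent = 0)
        : QThread(parent), m_AbortCapture(false), m_Threshold(0.0) {}

    void stop() { m_AbortCapture = true; }      // ask the capture loop to exit

    void setThreshold(double value)             // called from the GUI thread
    {
        m_ThresholdLock.lock();
        m_Threshold = value;
        m_ThresholdLock.unlock();
    }

signals:
    void sendImage(const QImage& frame);        // consumed by the GUI thread

protected:
    virtual void run();                         // the capture loop shown below

private:
    volatile bool m_AbortCapture;               // simple stop flag; an atomic would be stricter
    double        m_Threshold;
    QMutex        m_ThresholdLock;
};

#endif /* VIDEOTHREAD_H_ */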

void VideoThread::run()
{
    cv::VideoCapture video(0);

    while(!m_AbortCapture)
    {
        cv::Mat cvFrame;
        video >> cvFrame;

        cv::Mat gray(cvFrame.size(), CV_8UC1);
        cv::GaussianBlur(cvFrame, cvFrame, cv::Size(5, 5), 9.0, 3.0, cv::BORDER_REPLICATE);
        cv::cvtColor(cvFrame, gray, CV_RGB2GRAY);

        m_ThresholdLock.lock();
        double localThreshold = m_Threshold;
        m_ThresholdLock.unlock();

        if(localThreshold > 0.0)
        {
            qDebug() << "Threshold = " << localThreshold;
            cv::threshold(gray, gray, localThreshold, 255.0,  cv::THRESH_BINARY);
        }

        cv::cvtColor(gray, cvFrame, CV_GRAY2BGR);

        // convert the Mat to a QImage
        QImage qtFrame(cvFrame.data, cvFrame.size().width, cvFrame.size().height, cvFrame.step, QImage::Format_RGB888);
        qtFrame = qtFrame.rgbSwapped();

        // queue the image to the gui
        emit sendImage(qtFrame);
        msleep(20);
    }
}
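
The answer also leaves out how the two pieces get connected. Here is a hedged sketch of the wiring, e.g. in a main-window constructor; everything in it is an assumption rather than part of the original post. With Qt 5's pointer-to-member connect, renderImage() does not need to be declared as a slot; with the string-based SIGNAL/SLOT macros you would instead move it under public slots and add Q_OBJECT to OpenCV_GLWidget.

// Hypothetical wiring -- not shown in the original answer.
// The queued connection ensures renderImage() runs on the GUI thread even
// though sendImage() is emitted from the capture thread.
OpenCV_GLWidget* glWidget    = new OpenCV_GLWidget(this);
VideoThread*     videoThread = new VideoThread(this);

QObject::connect(videoThread, &VideoThread::sendImage,
                 glWidget,    &OpenCV_GLWidget::renderImage,
                 Qt::QueuedConnection);

videoThread->start();                            // starts executing VideoThread::run()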

It took me a while to figure that out, so hopefully it will help you and others save some time :D
