What is the most reliable way to record a Kinect stream for later playback?


Question

I have been working with Processing and Cinder to modify Kinect input on the fly. However, I would also like to record the full stream (depth+color+accelerometer values, and whatever else is in there). I'm recording so I can try out different effects/treatments on the same material.

Because I am still just learning Cinder and Processing is quite slow/laggy, I have had trouble finding advice on a strategy for capturing the stream - anything (preferably in Cinder, oF, or Processing) would be really helpful.

Answer

I've tried both Processing and OpenFrameworks. Processing is slower when displaying both images (depth and colour). OpenFrameworks slows a bit while writing the data to disk, but here's the basic approach:

  1. Set up openFrameworks (open and compile any sample to make sure you're up and running).
  2. Download the ofxKinect addon and copy the example project as described on github.
  3. Once you've got OF and the ofxKinect example running, it's just a matter of adding a few variables to save your data:

In this basic setup, I've created a couple of ofImage instances and a boolean to toggle saving. In the example the depth and RGB buffers are saved into ofxCvGrayscaleImage instances, but I haven't used OF and OpenCV enough to know how to do something as simple as saving an image to disk, which is why I've used two ofImage instances.
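The per-frame saving pattern (a counter baked into each filename) can be illustrated without openFrameworks at all. This is a standalone sketch, not the author's code: it writes a grayscale buffer as a binary PGM file named `depth<N>.pgm` and increments the counter, the same role `saveCount` and `ofImage::saveImage` play in the example below.

```cpp
// Standalone sketch (no openFrameworks): mimics the per-frame saving
// pattern by writing an 8-bit grayscale buffer as a binary PGM file
// named depth<N>.pgm, incrementing a frame counter on each call.
#include <cstdio>
#include <string>
#include <vector>

static int saveCount = 0; // same role as the saveCount member below

bool saveGrayFrame(const std::vector<unsigned char>& pixels, int w, int h) {
    std::string name = "depth" + std::to_string(saveCount) + ".pgm";
    FILE* f = std::fopen(name.c_str(), "wb");
    if (!f) return false;
    std::fprintf(f, "P5\n%d %d\n255\n", w, h);       // PGM header
    std::fwrite(pixels.data(), 1, pixels.size(), f); // raw 8-bit pixels
    std::fclose(f);
    ++saveCount;
    return true;
}
```

PGM is used here only because it is trivial to write by hand; in OF itself, `ofImage::saveImage("depth" + ofToString(saveCount) + ".png")` does the equivalent with PNG compression.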

I don't know how comfortable you are with Processing, OF, or Cinder, so, for argument's sake, I'll assume you know your way around Processing but are still tackling C++.

OF is pretty similar to Processing, but there are a few differences:

  1. In Processing, variables are declared and used in the same file. In OF you have a .h file where you declare your variables and a .cpp file where you initialize and use them.
  2. In Processing you have setup() (initialize variables) and draw() (update variables and draw to the screen), while in OF you have setup() (same as in Processing), update() (update variables only, nothing visual) and draw() (draw to the screen using the updated values).
  3. When working with images, since you're coding in C++, you need to allocate memory first, as opposed to Processing/Java, which manage memory for you.
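The first two differences can be sketched in one toy file. This is illustrative only (not real openFrameworks code, and `TestApp` is a made-up name, not OF's `ofBaseApp`): declarations sit where a .h file would put them, definitions where a .cpp file would, and a stand-in main loop calls update() then draw() each frame.

```cpp
// Toy illustration of the .h/.cpp split and the setup/update/draw
// lifecycle. Not real openFrameworks code; names are hypothetical.

// --- what would go in testApp.h: declarations only ---
class TestApp {
public:
    void setup();   // runs once, like Processing's setup()
    void update();  // per frame: change state, nothing visual
    void draw();    // per frame: render using the updated state
    int frame;      // declared here, initialized in setup()
};

// --- what would go in testApp.cpp: the definitions ---
void TestApp::setup()  { frame = 0; }
void TestApp::update() { ++frame; }           // state changes only
void TestApp::draw()   { /* render here */ }  // no state changes

// --- roughly what OF's main loop does for you ---
int runFrames(TestApp& app, int n) {
    app.setup();
    for (int i = 0; i < n; ++i) { app.update(); app.draw(); }
    return app.frame;
}
```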

There are more differences that I won't detail here. Do check out OF for Processing Users on the wiki.

Back to the exampleKinect example, here's my basic setup:

The .h file:

#pragma once

#include "ofMain.h"
#include "ofxOpenCv.h"
#include "ofxKinect.h"

class testApp : public ofBaseApp {
    public:

        void setup();
        void update();
        void draw();
        void exit();

        void drawPointCloud();

        void keyPressed  (int key);
        void mouseMoved(int x, int y );
        void mouseDragged(int x, int y, int button);
        void mousePressed(int x, int y, int button);
        void mouseReleased(int x, int y, int button);
        void windowResized(int w, int h);

        ofxKinect kinect;

        ofxCvColorImage     colorImg;

        ofxCvGrayscaleImage     grayImage;
        ofxCvGrayscaleImage     grayThresh;
        ofxCvGrayscaleImage     grayThreshFar;

        ofxCvContourFinder  contourFinder;

        ofImage             colorData;//to save RGB data to disk
        ofImage             grayData;//to save depth data to disk 

        bool                bThreshWithOpenCV;
        bool                drawPC;
        bool                saveData;//save to disk toggle

        int                 nearThreshold;
        int                 farThreshold;

        int                 angle;

        int                 pointCloudRotationY;
        int                 saveCount;//counter used for naming 'frames'
};

And the .cpp file:

#include "testApp.h"


//--------------------------------------------------------------
void testApp::setup() {
    //kinect.init(true);  //shows infrared image
    kinect.init();
    kinect.setVerbose(true);
    kinect.open();

    colorImg.allocate(kinect.width, kinect.height);
    grayImage.allocate(kinect.width, kinect.height);
    grayThresh.allocate(kinect.width, kinect.height);
    grayThreshFar.allocate(kinect.width, kinect.height);
    //allocate memory for these ofImages which will be saved to disk
    colorData.allocate(kinect.width, kinect.height, OF_IMAGE_COLOR);
    grayData.allocate(kinect.width, kinect.height, OF_IMAGE_GRAYSCALE);

    nearThreshold = 230;
    farThreshold  = 70;
    bThreshWithOpenCV = true;

    ofSetFrameRate(60);

    // zero the tilt on startup
    angle = 0;
    kinect.setCameraTiltAngle(angle);

    // start from the front
    pointCloudRotationY = 180;

    drawPC = false;

    saveCount = 0;//init frame counter
}

//--------------------------------------------------------------
void testApp::update() {
    ofBackground(100, 100, 100);

    kinect.update();
    if(kinect.isFrameNew()) // there is a new frame and we are connected
    {

        grayImage.setFromPixels(kinect.getDepthPixels(), kinect.width, kinect.height);

        if(saveData){
            //if toggled, set depth and rgb pixels to respective ofImage, save to disk and update the 'frame' counter 
            grayData.setFromPixels(kinect.getDepthPixels(), kinect.width, kinect.height,true);
            colorData.setFromPixels(kinect.getCalibratedRGBPixels(), kinect.width, kinect.height,true);
            grayData.saveImage("depth"+ofToString(saveCount)+".png");
            colorData.saveImage("color"+ofToString(saveCount)+".png");
            saveCount++;
        }

        //we do two thresholds - one for the far plane and one for the near plane
        //we then do a cvAnd to get the pixels which are a union of the two thresholds. 
        if( bThreshWithOpenCV ){
            grayThreshFar = grayImage;
            grayThresh = grayImage;
            grayThresh.threshold(nearThreshold, true);
            grayThreshFar.threshold(farThreshold);
            cvAnd(grayThresh.getCvImage(), grayThreshFar.getCvImage(), grayImage.getCvImage(), NULL);
        }else{

            //or we do it ourselves - show people how they can work with the pixels

            unsigned char * pix = grayImage.getPixels();
            int numPixels = grayImage.getWidth() * grayImage.getHeight();

            for(int i = 0; i < numPixels; i++){
                if( pix[i] < nearThreshold && pix[i] > farThreshold ){
                    pix[i] = 255;
                }else{
                    pix[i] = 0;
                }
            }
        }

        //update the cv image
        grayImage.flagImageChanged();

        // find contours which are between 10 pixels and half of w*h in size.
        // also, find up to 20 blobs; 'find holes' is false, so interior contours are skipped
        contourFinder.findContours(grayImage, 10, (kinect.width*kinect.height)/2, 20, false);
    }
}

//--------------------------------------------------------------
void testApp::draw() {
    ofSetColor(255, 255, 255);
    if(drawPC){
        ofPushMatrix();
        ofTranslate(420, 320);
        // we need a proper camera class
        drawPointCloud();
        ofPopMatrix();
    }else{
        kinect.drawDepth(10, 10, 400, 300);
        kinect.draw(420, 10, 400, 300);

        grayImage.draw(10, 320, 400, 300);
        contourFinder.draw(10, 320, 400, 300);
    }


    ofSetColor(255, 255, 255);
    stringstream reportStream;
    reportStream << "accel is: " << ofToString(kinect.getMksAccel().x, 2) << " / "
                                 << ofToString(kinect.getMksAccel().y, 2) << " / " 
                                 << ofToString(kinect.getMksAccel().z, 2) << endl
                 << "press p to switch between images and point cloud, rotate the point cloud with the mouse" << endl
                 << "using opencv threshold = " << bThreshWithOpenCV <<" (press spacebar)" << endl
                 << "set near threshold " << nearThreshold << " (press: + -)" << endl
                 << "set far threshold " << farThreshold << " (press: < >) num blobs found " << contourFinder.nBlobs
                    << ", fps: " << ofGetFrameRate() << endl
                 << "press c to close the connection and o to open it again, connection is: " << kinect.isConnected() << endl
                 << "press s to toggle saving depth and color data. currently saving:  " << saveData << endl
                 << "press UP and DOWN to change the tilt angle: " << angle << " degrees";
    ofDrawBitmapString(reportStream.str(),20,656);
}

void testApp::drawPointCloud() {
    ofScale(400, 400, 400);
    int w = 640;
    int h = 480;
    ofRotateY(pointCloudRotationY);
    float* distancePixels = kinect.getDistancePixels();
    glBegin(GL_POINTS);
    int step = 2;
    for(int y = 0; y < h; y += step) {
        for(int x = 0; x < w; x += step) {
            ofPoint cur = kinect.getWorldCoordinateFor(x, y);
            ofColor color = kinect.getCalibratedColorAt(x,y);
            glColor3ub((unsigned char)color.r,(unsigned char)color.g,(unsigned char)color.b);
            glVertex3f(cur.x, cur.y, cur.z);
        }
    }
    glEnd();
}

//--------------------------------------------------------------
void testApp::exit() {
    kinect.setCameraTiltAngle(0); // zero the tilt on exit
    kinect.close();
}

//--------------------------------------------------------------
void testApp::keyPressed (int key) {
    switch (key) {
        case ' ':
            bThreshWithOpenCV = !bThreshWithOpenCV;
        break;
        case 'p':
            drawPC = !drawPC;
            break;

        case '>':
        case '.':
            farThreshold ++;
            if (farThreshold > 255) farThreshold = 255;
            break;
        case '<':       
        case ',':       
            farThreshold --;
            if (farThreshold < 0) farThreshold = 0;
            break;

        case '+':
        case '=':
            nearThreshold ++;
            if (nearThreshold > 255) nearThreshold = 255;
            break;
        case '-':       
            nearThreshold --;
            if (nearThreshold < 0) nearThreshold = 0;
            break;
        case 'w':
            kinect.enableDepthNearValueWhite(!kinect.isDepthNearValueWhite());
            break;
        case 'o':
            kinect.setCameraTiltAngle(angle);   // go back to prev tilt
            kinect.open();
            break;
        case 'c':
            kinect.setCameraTiltAngle(0);       // zero the tilt
            kinect.close();
            break;
        case 's'://s to toggle saving data
            saveData = !saveData;
            break;

        case OF_KEY_UP:
            angle++;
            if(angle>30) angle=30;
            kinect.setCameraTiltAngle(angle);
            break;

        case OF_KEY_DOWN:
            angle--;
            if(angle<-30) angle=-30;
            kinect.setCameraTiltAngle(angle);
            break;
    }
}

//--------------------------------------------------------------
void testApp::mouseMoved(int x, int y) {
    pointCloudRotationY = x;
}

//--------------------------------------------------------------
void testApp::mouseDragged(int x, int y, int button)
{}

//--------------------------------------------------------------
void testApp::mousePressed(int x, int y, int button)
{}

//--------------------------------------------------------------
void testApp::mouseReleased(int x, int y, int button)
{}

//--------------------------------------------------------------
void testApp::windowResized(int w, int h)
{}

This is a very basic setup. Feel free to modify it (add the tilt angle to the saved data, etc.). I'm pretty sure there are ways to improve this speed-wise (e.g. don't update the ofxCvGrayscaleImage instances or draw images to screen while saving, or stack a few frames and write them to disk at intervals rather than on every frame, etc.).
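The "stack a few frames and write them at intervals" idea can be sketched like this. This is a hypothetical helper, not part of the example above: each Kinect frame costs only a buffer copy, and file I/O happens once per batch.

```cpp
// Hypothetical sketch of batched frame writing: frames accumulate in
// RAM and are flushed to disk in groups, so the per-frame cost is a
// cheap copy instead of a file write. Class and file names are made up.
#include <cstdio>
#include <string>
#include <vector>

class FrameStacker {
public:
    explicit FrameStacker(size_t batchSize) : batch(batchSize) {}

    // Called once per Kinect frame: just buffer the pixels.
    void push(const std::vector<unsigned char>& pixels) {
        frames.push_back(pixels);
        if (frames.size() >= batch) flush();
    }

    // Write every buffered frame to disk, then clear the buffer.
    // Call once more on exit so the final partial batch isn't lost.
    void flush() {
        for (const auto& px : frames) {
            std::string name = "depth" + std::to_string(written++) + ".raw";
            if (FILE* f = std::fopen(name.c_str(), "wb")) {
                std::fwrite(px.data(), 1, px.size(), f);
                std::fclose(f);
            }
        }
        frames.clear();
    }

    int written = 0;   // total frames written to disk
private:
    size_t batch;
    std::vector<std::vector<unsigned char>> frames;
};
```

The trade-off is RAM: a 640x480 8-bit depth frame is ~300 KB, so even a batch of 60 frames stays under 20 MB while cutting disk writes to once a second at 60 fps.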

Good luck!
