kinect / processing / simple openni - point cloud data not being output properly


Problem description


I've created a processing sketch which saves each frame of point cloud data from the kinect to a text file, where each line of the file is a point (or vertex) that the kinect has registered. I plan to pull the data into a 3d program to visualize the animation in 3d space and apply various effects. The problem is, when I do this, the first frame seems proper, and the rest of the frames seem to be spitting out what looks like the first image, plus a bunch of random noise. This is my code, in its entirety. It requires simple openni to work properly. You can see the comments

import SimpleOpenNI.*;
//import processing.opengl.*;

SimpleOpenNI context;
float        zoomF =0.5f;
float        rotX = radians(180);  // by default rotate the whole scene 180deg around the x-axis, 
float        rotY = radians(0); // the data from openni comes upside down

int maxZ = 2000;
Vector <Object> recording = new Vector<Object>(); 
boolean isRecording = false;
boolean canDraw = true;
boolean mouseMode = false;
int currentFile = 0;
int depthWidth = 640; //MH - assuming this is static?
int depthHeight = 480;
int steps = 5;
int arrayLength = (depthWidth/steps) * (depthHeight/steps); //total lines in each output file


void setup()
{
  size(1024,768,P3D);  // strange, get drawing error in the cameraFrustum if i use P3D, in opengl there is no problem
  //size(1024,768,OPENGL); 

  context = new SimpleOpenNI(this);
  context.setMirror(true);
  depthWidth = context.depthWidth();
  depthHeight = context.depthHeight();

  // enable depthMap generation 
  if(context.enableDepth() == false)
  {
     println("Can't open the depthMap, maybe the camera is not connected!"); 
     exit();
     return;
  }

  stroke(255,255,255);
  smooth();

  perspective(radians(45),
  float(width)/float(height),
  10.0f,150000.0f);
 }

void draw()
{

  //println(isRecording);

  // update the cam
  context.update();

  background(0,0,0);

  // set the scene pos
  translate(width/2, height/2, 0);
  rotateX(rotX);
  rotateY(rotY);
  scale(zoomF);

  // draw the 3d point depth map
  int[]   depthMap = context.depthMap();
  int     index = 0;
  PVector realWorldPoint;
  PVector[] frame = new PVector[arrayLength];

  translate(0,0,-1000);  // set the rotation center of the scene 1000 in front of the camera
  stroke(200); 
  for(int y=0;y < context.depthHeight();y+=steps)
  {
    for(int x=0;x < context.depthWidth();x+=steps)
    {
      int offset = x + y * context.depthWidth();
      realWorldPoint = context.depthMapRealWorld()[offset];
      if (isRecording == true){
        if (realWorldPoint.z < maxZ){
          frame[index] = realWorldPoint;
        } else {
          frame[index] = new PVector(-0.0,-0.0,0.0); 
        }
        index++;
      } else {
        if (realWorldPoint.z < maxZ){
          if (canDraw == true){
            point(realWorldPoint.x,realWorldPoint.y,realWorldPoint.z);
          }
        }
      }
    } 
  }

  if (isRecording == true){
   recording.add(frame); 
  }

 if (mouseMode == true){
   float rotVal = map (mouseX,0,1024,-1,1); //comment these out to disable mouse orientation
   float rotValX = map (mouseY,0,768,2,4);
   rotY = rotVal;
   rotX = rotValX;
 } 

}

// -----------------------------------------------------------------
// Keyboard event
void keyPressed()
{
  switch(key)
  {
    case ' ':
      context.setMirror(!context.mirror());
      break;
    case 'm':
      mouseMode = !mouseMode;
      break;
    case 'r':
      isRecording = !isRecording;
      break;
    case 's':
      if (isRecording == true){
        isRecording = false;
        canDraw = false;
        println("Stopped Recording");
        Enumeration e = recording.elements();
        int i = 0;
        while (e.hasMoreElements()) {

          // Create one directory
          boolean success = (new File("out"+currentFile)).mkdir(); 
          PrintWriter output = createWriter("out"+currentFile+"/frame" + i++ +".txt");
          PVector [] frame = (PVector []) e.nextElement();

          for (int j = 0; j < frame.length; j++) {
           output.println(j + ", " + frame[j].x + ", " + frame[j].y + ", " + frame[j].z );
          }
          output.flush(); // Write the remaining data
          output.close();
          //exit();
        }
        canDraw = true;
        println("done recording");
      }
      currentFile++;
      break;
  }

  switch(keyCode)
  {
    case LEFT:
      if(keyEvent.isShiftDown())
        maxZ -= 100;
      else
        rotY += 0.1f;
      break;
    case RIGHT:
      if(keyEvent.isShiftDown())
        maxZ += 100;
      else
        rotY -= 0.1f;
      break;
    case UP:
      if(keyEvent.isShiftDown())
        zoomF += 0.01f;
      else
        rotX += 0.1f;
      break;
    case DOWN:
      if(keyEvent.isShiftDown())
      {
        zoomF -= 0.01f;
        if(zoomF < 0.01)
          zoomF = 0.01;
      }
      else
        rotX -= 0.1f;
      break;
  }
}


I imagine the loop is where the problems begin: for(int y=0;y < context.depthHeight();y+=steps) {, etc., although it could just be a problem with the python script I wrote for the 3d program. Anyway, this is a cool sketch, and I think it would be super useful for anyone wanting to apply 3d effects to point cloud data (or build models, etc.), but I'm stuck at the moment. Thanks for your help!
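
For reference, here is a minimal playback sketch (an untested sanity check, assuming the "index, x, y, z" line format written above and a hypothetical path such as out0/frame0.txt). It loads one saved frame back into Processing and draws it: if the loaded frame already looks wrong, the problem is in the recording; if it looks fine, the problem is more likely in the downstream import script.

String framePath = "out0/frame0.txt"; // hypothetical path: first frame of the first recording
String[] lines;

void setup() {
  size(1024, 768, P3D);
  stroke(255);
  lines = loadStrings(framePath);
  if (lines == null) {
    println("couldn't load " + framePath);
    exit();
  }
}

void draw() {
  background(0);
  translate(width/2, height/2, -1000);
  rotateX(radians(180)); // same flip as the recording sketch
  beginShape(POINTS);
  for (String line : lines) {
    String[] parts = split(line, ", "); // "index, x, y, z"
    if (parts.length == 4) {
      vertex(float(parts[1]), float(parts[2]), float(parts[3]));
    }
  }
  endShape();
}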

Answer


Unfortunately I can't explain a lot right now, but I did something similar a few months back, saving to PLY and CSV:

import processing.opengl.*;
import SimpleOpenNI.*;


SimpleOpenNI context;
float        zoomF =0.5f;
float        rotX = radians(180);  
float        rotY = radians(0);

boolean recording = false;
ArrayList<PVector> pts = new ArrayList<PVector>();//points for one frame

float minZ = 100,maxZ = 150;

void setup()
{
  size(1024,768,OPENGL);  

  context = new SimpleOpenNI(this);
  context.setMirror(false);
  context.enableDepth();
  context.enableScene();

  stroke(255);
  smooth();  
  perspective(95,float(width)/float(height), 10,150000);
 }

void draw()
{
  context.update();
  background(0);

  translate(width/2, height/2, 0);
  rotateX(rotX);
  rotateY(rotY);
  scale(zoomF);

  int[]   depthMap = context.depthMap();
  int[]   sceneMap = context.sceneMap();
  int     steps   = 10;  
  int     index;
  PVector realWorldPoint;
  pts.clear();//reset points
  translate(0,0,-1000);  
  //*
  //stroke(100); 
  for(int y=0;y < context.depthHeight();y+=steps)
  {
    for(int x=0;x < context.depthWidth();x+=steps)
    {
      index = x + y * context.depthWidth();
      if(depthMap[index] > 0)
      { 
        realWorldPoint = context.depthMapRealWorld()[index];
        if(realWorldPoint.z > minZ && realWorldPoint.z < maxZ){//if within range
          stroke(0,255,0);
          point(realWorldPoint.x,realWorldPoint.y,realWorldPoint.z);
          pts.add(realWorldPoint.get());//store each point
        }
      }
    } 
  } 
  if(recording){
      savePLY(pts);//save to disk as PLY
      saveCSV(pts);//save to disk as CSV
  }
  //*/
}

// -----------------------------------------------------------------
// Keyboard events

void keyPressed()
{
  if(key == 'q') minZ += 10;
  if(key == 'w') minZ -= 10;
  if(key == 'a') maxZ += 10;
  if(key == 's') maxZ -= 10;

  switch(key)
  {
    case ' ':
      context.setMirror(!context.mirror());
    break;
    case 'r':
      recording = !recording;
    break;
  }

  switch(keyCode)
  {
    case LEFT:
      rotY += 0.1f;
      break;
    case RIGHT:
      // zoom out
      rotY -= 0.1f;
      break;
    case UP:
      if(keyEvent.isShiftDown())
        zoomF += 0.01f;
      else
        rotX += 0.1f;
      break;
    case DOWN:
      if(keyEvent.isShiftDown())
      {
        zoomF -= 0.01f;
        if(zoomF < 0.01)
          zoomF = 0.01;
      }
      else
        rotX -= 0.1f;
      break;
  }
}
void savePLY(ArrayList<PVector> pts){
  String ply = "ply\n";
  ply += "format ascii 1.0\n";
  ply += "element vertex " + pts.size() + "\n";
  ply += "property float x\n";
  ply += "property float y\n";
  ply += "property float z\n";
  ply += "end_header\n";
  for(PVector p : pts)ply += p.x + " " + p.y + " " + p.z + "\n";
  saveStrings("frame_"+frameCount+".ply",ply.split("\n"));
}
void saveCSV(ArrayList<PVector> pts){
  String csv = "x,y,z\n";
  for(PVector p : pts) csv += p.x + "," + p.y + "," + p.z + "\n";
  saveStrings("frame_"+frameCount+".csv",csv.split("\n"));
}


I'm using an if statement to save only the points within a certain Z threshold, but feel free to alter/use as you see fit. The post-processing idea reminds me of the Moullinex video for Catalina. Check it out; it's well documented and includes source code as well.


Update: The posted code saves 1 file per frame. Even though the playback speed would be low, the sketch should still save a file for each frame. The code can be simplified a bit:

import processing.opengl.*;
import SimpleOpenNI.*;


SimpleOpenNI context;
float        zoomF =0.5f;
float        rotX = radians(180);  
float        rotY = radians(0);

boolean recording = false;
String csv;

void setup()
{
  size(1024,768,OPENGL);  

  context = new SimpleOpenNI(this);
  context.setMirror(false);
  context.enableDepth();

  stroke(255);
  smooth();  
  perspective(95,float(width)/float(height), 10,150000);
 }

void draw()
{
  csv = "x,y,z\n";//reset csv for this frame
  context.update();
  background(0);

  translate(width/2, height/2, 0);
  rotateX(rotX);
  rotateY(rotY);
  scale(zoomF);

  int[]   depthMap = context.depthMap();
  int[]   sceneMap = context.sceneMap();
  int     steps   = 10;  
  int     index;
  PVector realWorldPoint;
  translate(0,0,-1000);  
  //*
  beginShape(POINTS);
  for(int y=0;y < context.depthHeight();y+=steps)
  {
    for(int x=0;x < context.depthWidth();x+=steps)
    {
      index = x + y * context.depthWidth();
      if(depthMap[index] > 0)
      { 
        realWorldPoint = context.depthMapRealWorld()[index];
        vertex(realWorldPoint.x,realWorldPoint.y,realWorldPoint.z);
        if(recording) csv += realWorldPoint.x + "," + realWorldPoint.y + "," + realWorldPoint.z + "\n";
      }
    } 
  }
  endShape(); 
  if(recording) saveStrings("frame_"+frameCount+".csv",csv.split("\n"));
  frame.setTitle((int)frameRate + " fps");
  //*/
}

// -----------------------------------------------------------------
// Keyboard events

void keyPressed()
{

  switch(key)
  {
    case ' ':
      context.setMirror(!context.mirror());
    break;
    case 'r':
      recording = !recording;
    break;
  }

  switch(keyCode)
  {
    case LEFT:
      rotY += 0.1f;
      break;
    case RIGHT:
      // zoom out
      rotY -= 0.1f;
      break;
    case UP:
      if(keyEvent.isShiftDown())
        zoomF += 0.01f;
      else
        rotX += 0.1f;
      break;
    case DOWN:
      if(keyEvent.isShiftDown())
      {
        zoomF -= 0.01f;
        if(zoomF < 0.01)
          zoomF = 0.01;
      }
      else
        rotX -= 0.1f;
      break;
  }
}


The preview can be separated from the recording with different loops, so you could have a low-res preview but save more data; still, it would be slow.
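
As a rough, untested sketch of that separation (assuming the same context and recording globals from the sketch above; the drawAndRecordFrame() name is just for illustration), the preview could step through the depth map coarsely while the recording pass samples it more densely:

// Coarse step for what gets drawn, finer step for what gets written (denser, hence slower).
int previewStep = 10;
int recordStep  = 2;

void drawAndRecordFrame() {
  PVector[] realWorld = context.depthMapRealWorld();
  int w = context.depthWidth();
  int h = context.depthHeight();

  // low-res on-screen preview
  beginShape(POINTS);
  for (int y = 0; y < h; y += previewStep) {
    for (int x = 0; x < w; x += previewStep) {
      PVector p = realWorld[x + y * w];
      vertex(p.x, p.y, p.z);
    }
  }
  endShape();

  // higher-res capture, only while recording
  if (recording) {
    StringBuilder csv = new StringBuilder("x,y,z\n");
    for (int y = 0; y < h; y += recordStep) {
      for (int x = 0; x < w; x += recordStep) {
        PVector p = realWorld[x + y * w];
        csv.append(p.x).append(',').append(p.y).append(',').append(p.z).append('\n');
      }
    }
    saveStrings("frame_" + frameCount + ".csv", csv.toString().split("\n"));
  }
}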


I've got another suggestion: Record to the .oni format instead. If you've installed OpenNI, you could make use of a couple of samples like NiViewer and NiBackRecorder. SimpleOpenNI also exposes this functionality, have a look at the RecorderPlay sample.


I suggest trying something like this:

  1. Record the scene to an .oni file. It should be fast/responsive.
  2. When you're happy with the .oni recording, process each frame (convert depth to x,y,z points / filter as needed / save to whatever format you need, etc.)


Here's another sketch to illustrate the idea:

import SimpleOpenNI.*;

SimpleOpenNI  context;
boolean       recordFlag = true;

int frames = 0;

void setup(){
  context = new SimpleOpenNI(this);

  if(! recordFlag){
    if(! context.openFileRecording("test.oni") ){
      println("can't find recording !!!!");
      exit();
    }
    context.enableDepth();
  }else{  
    // recording
    context.enableDepth();
    // setup the recording 
    context.enableRecorder(SimpleOpenNI.RECORD_MEDIUM_FILE,"test.oni");
    // select the recording channels
    context.addNodeToRecording(SimpleOpenNI.NODE_DEPTH,SimpleOpenNI.CODEC_16Z_EMB_TABLES);
  }
  // set window size 
  if((context.nodes() & SimpleOpenNI.NODE_DEPTH) != 0)
    size(context.depthWidth() , context.depthHeight());
  else 
    exit();
}
void draw()
{
  background(0);
  context.update();
  if((context.nodes() & SimpleOpenNI.NODE_DEPTH) != 0) image(context.depthImage(),0,0);
  if(recordFlag) frames++;
}
void keyPressed(){
  if(key == ' '){
    if(recordFlag){
      saveStrings(dataPath("frames.txt"),split(frames+" ",' '));
      exit();
    }else saveONIToPLY();
  }
}
void saveONIToPLY(){
  frames = int(loadStrings(dataPath("frames.txt"))[0]);
  println("recording " + frames + " frames");
  int w = context.depthWidth();
  int h = context.depthHeight();
  noLoop();
  for(int i = 0 ; i < frames; i++){
    PrintWriter output = createWriter(dataPath("frame_"+i+".ply"));
    output.println("ply");
    output.println("format ascii 1.0");
    output.println("element vertex " + (w*h));
    output.println("property float x");
    output.println("property float y");
    output.println("property float z");
    output.println("end_header\n");
    context.update();
    int[]   depthMap = context.depthMap();
    int     index;
    PVector realWorldPoint;
    for(int y=0;y < h;y++){
      for(int x=0;x < w;x++){
        index = x + y * w;
        realWorldPoint = context.depthMapRealWorld()[index];
        output.println(realWorldPoint.x + " " + realWorldPoint.y + " " + realWorldPoint.z);
      }
    }
    output.flush();
    output.close();
    println("saved " + (i+1) + " of " + frames);
  }
  loop();
  println("recorded " + frames + " frames");
}


When recordFlag is set to true, data will be saved to an .oni file. I haven't found anything in the docs for reading how many frames there are in an .oni file, so as a quick workaround I've added the frames counter. If you hit space, the recording will stop, but it will also save the number of frames in a txt file and then exit the app. This will be useful later.


When recordFlag is set to false, if there is already a recording, it will play back. If you hit space in this 'mode', drawing will stop, the frame count will be loaded from the .txt file and, for each frame:

  1. The context will be updated (moving to the next frame)
  2. Each pixel in the depth map will be converted to a point
  3. ALL the points will be written to a .ply file (you can process with meshlab)


After all frames are saved, the sketch will resume drawing. Since there's no 3D drawing and the sketch is fairly simple, performance should be better, but bear in mind that large .oni files will require a lot of RAM. Feel free to modify the sketch to your needs (e.g. filter out the information you don't want saved, etc.).
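
For example, one possible filter (a hypothetical helper, not part of the original answer, assuming the same context global) would be to keep only points within a Z range during the PLY export, buffering them first so the header states the correct vertex count:

// Hypothetical helper: collect the points that pass the Z filter first,
// so the PLY header can state the correct vertex count.
void saveFilteredFramePLY(int i, float minZ, float maxZ) {
  PVector[] realWorld = context.depthMapRealWorld();
  ArrayList<PVector> kept = new ArrayList<PVector>();
  for (PVector p : realWorld) {
    if (p.z > minZ && p.z < maxZ) kept.add(p);
  }
  PrintWriter output = createWriter(dataPath("frame_" + i + ".ply"));
  output.println("ply");
  output.println("format ascii 1.0");
  output.println("element vertex " + kept.size());
  output.println("property float x");
  output.println("property float y");
  output.println("property float z");
  output.println("end_header");
  for (PVector p : kept) {
    output.println(p.x + " " + p.y + " " + p.z);
  }
  output.flush();
  output.close();
}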


Also note that the .oni-to-PLY sketch above, although it should save each frame to a separate PLY, keeps saving the same frame. It seems the context doesn't update() when noLoop() has been called. Here's a modified, hacky version that uses a 3 s delay (hopefully each .ply file will be written to disk by then).

import SimpleOpenNI.*;

SimpleOpenNI  context;
boolean       recordFlag = false;
boolean       saving = false;
int frames = 0;
int savedFrames = 0;

void setup(){
  context = new SimpleOpenNI(this);

  if(! recordFlag){
    if(! context.openFileRecording("test.oni") ){
      println("can't find recording !!!!");
      exit();
    }
    context.enableDepth();
  }else{  
    // recording
    context.enableDepth();
    // setup the recording 
    context.enableRecorder(SimpleOpenNI.RECORD_MEDIUM_FILE,"test.oni");
    // select the recording channels
    context.addNodeToRecording(SimpleOpenNI.NODE_DEPTH,SimpleOpenNI.CODEC_16Z_EMB_TABLES);
  }
  // set window size 
  if((context.nodes() & SimpleOpenNI.NODE_DEPTH) != 0)
    size(context.depthWidth() , context.depthHeight());
  else 
    exit();
}
void draw()
{
  background(0);
  context.update();
  if((context.nodes() & SimpleOpenNI.NODE_DEPTH) != 0) image(context.depthImage(),0,0);
  if(recordFlag) frames++;
  if(saving && savedFrames < frames){
      delay(3000);//hack
      int i = savedFrames;
      int w = context.depthWidth();
      int h = context.depthHeight();
      PrintWriter output = createWriter(dataPath("frame_"+i+".ply"));
      output.println("ply");
      output.println("format ascii 1.0");
      output.println("element vertex " + (w*h));
      output.println("property float x");
      output.println("property float y");
      output.println("property float z");
      output.println("end_header\n");
      rect(random(width),random(height),100,100);
      int[]   depthMap = context.depthMap();
      int     index;
      PVector realWorldPoint;
      for(int y=0;y < h;y++){
        for(int x=0;x < w;x++){
          index = x + y * w;
          realWorldPoint = context.depthMapRealWorld()[index];
          output.println(realWorldPoint.x + " " + realWorldPoint.y + " " + realWorldPoint.z);
        }
      }
      output.flush();
      output.close();
      println("saved " + (i+1) + " of " + frames);
      savedFrames++;
  }
}
void keyPressed(){
  if(key == ' '){
    if(recordFlag){
      saveStrings(dataPath("frames.txt"),split(frames+" ",' '));
      exit();
    }else saveONIToPLY();
  }
}
void saveONIToPLY(){
  frames = int(loadStrings(dataPath("frames.txt"))[0]);
  saving = true;
  println("recording " + frames + " frames");
}


I'm not sure the frames and files stay in sync, and the depth data is saved at medium quality, but I hope my answer provides some ideas.
