Basic space carving algorithm


Problem description


I have the following problem, as shown in the figure. I have a point cloud and a mesh generated by a tetrahedralization algorithm. How would I carve the mesh using that algorithm? Are the landmarks the point cloud?

Pseudo code of the algorithm:

for every 3D feature point
    convert it to 2D projected coordinates

for every 2D feature point
    cast a ray toward the polygons of the mesh
    get the intersection point
    if z_intersection < z of the 3D feature point
        for ( every triangle vertex )
            cull that triangle

Here is a follow-up implementation of the algorithm mentioned by Spektre :)

Updated code for the algorithm:

 int i;
        for (i = 0; i < out.numberofpoints; i++)
        {
            Ogre::Vector3 ray_pos = pos; // camera position
            Ogre::Vector3 ray_dir = (Ogre::Vector3 (out.pointlist[(i*3)], out.pointlist[(3*i)+1], out.pointlist[(3*i)+2]) - pos).normalisedCopy();  // vertex - camera position

            Ogre::Ray ray;
            ray.setOrigin(Ogre::Vector3( ray_pos.x, ray_pos.y, ray_pos.z));
            ray.setDirection(Ogre::Vector3(ray_dir.x, ray_dir.y, ray_dir.z));
            Ogre::Vector3 result;
            unsigned int u1;
            unsigned int u2;
            unsigned int u3;
            bool rayCastResult = RaycastFromPoint(ray.getOrigin(), ray.getDirection(), result, u1, u2, u3);

            if ( rayCastResult )
            {
                Ogre::Vector3 targetVertex(out.pointlist[(i*3)], out.pointlist[(3*i)+1], out.pointlist[(3*i)+2]);
                float distanceTargetFocus = targetVertex.squaredDistance(pos);
                float distanceIntersectionFocus = result.squaredDistance(pos);
                if(abs(distanceTargetFocus) >= abs(distanceIntersectionFocus))
                {
                    if ( u1 != -1 && u2 != -1 && u3 != -1)
                    {
                        std::cout << "Remove index "<< "u1 ==> " <<u1 << "u2 ==>"<<u2<<"u3 ==> "<<u3<< std::endl;
                        updatedIndices.erase(updatedIndices.begin()+ u1);
                        updatedIndices.erase(updatedIndices.begin()+ u2);
                        updatedIndices.erase(updatedIndices.begin()+ u3);

                    }
                }
            }

        }


            if ( updatedIndices.size() <= out.numberoftrifaces)
            {
                std::cout << "current face list===> "<< out.numberoftrifaces << std::endl;

                std::cout << "deleted face list===> "<< updatedIndices.size() << std::endl;
                manual->begin("Pointcloud", Ogre::RenderOperation::OT_TRIANGLE_LIST);

                for (int n = 0; n < out.numberofpoints; n++)
                {
                    Ogre::Vector3 vertexTransformed = Ogre::Vector3( out.pointlist[3*n+0], out.pointlist[3*n+1], out.pointlist[3*n+2]) - mReferencePoint;
                    vertexTransformed *=1000.0 ;
                    vertexTransformed = mDeltaYaw * vertexTransformed;

                    manual->position(vertexTransformed);

                }

                for (int n = 0 ; n + 2 < (int)updatedIndices.size(); n += 3)
                {
                     int n0 = updatedIndices[n+0];
                     int n1 = updatedIndices[n+1];
                     int n2 = updatedIndices[n+2];

                    if ( n0 < 0 || n1 <0 || n2 <0 )
                    {
                        std::cout<<"negative indices"<<std::endl;
                        break;
                    }
                    manual->triangle(n0, n1, n2);

                }

                manual->end();
            }
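One note on the removal above (my own observation, not part of the original question): erasing three positions from updatedIndices one after another shifts the remaining elements, so the second and third erase can land on the wrong entries. A sketch of a safer scheme, assuming the index of the hit face is known, is to mark faces and rebuild the index list once at the end:

#include <cstddef>
#include <set>
#include <vector>

// Hypothetical helper: keep only the faces that were not marked for removal.
// 'indices' holds three vertex indices per face, 'facesToRemove' holds face numbers.
std::vector<int> removeMarkedFaces(const std::vector<int>& indices,
                                   const std::set<std::size_t>& facesToRemove)
{
    std::vector<int> kept;
    for (std::size_t f = 0; f * 3 + 2 < indices.size(); ++f)
    {
        if (facesToRemove.count(f))
            continue;                       // this face was hit by a ray, drop it
        kept.push_back(indices[f * 3 + 0]);
        kept.push_back(indices[f * 3 + 1]);
        kept.push_back(indices[f * 3 + 2]);
    }
    return kept;
}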

Follow-up on the algorithm:

I now have two versions: one is the triangulated one and the other is the carved version.

It is not a surface mesh. Here are the two files: http://www.mediafire.com/file/cczw49ja257mnzr/ahmed_non_triangulated.obj and http://www.mediafire.com/file/cczw49ja257mnzr/ahmed_triangulated.obj

Solution

I see it like this:

So you got an image from a camera with known matrix, FOV and focal length.

From that you know exactly where the focal point is and where the image is projected onto the camera chip (Z_near plane). So any vertex, its corresponding pixel and the focal point lie on the same line.

So for each view cast a ray from the focal point to each visible vertex of the point cloud, and test whether any face of the mesh is hit before the face containing the target vertex. If yes, remove it, as it would block the visibility.
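A minimal sketch of that per-view test (types and names are mine, not from the answer; rayTriangleHitBefore stands for any ray/triangle routine, e.g. the one given further below):

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Face { int a, b, c; bool removed = false; };

// Declared here; one possible implementation is sketched after the intersection code below.
bool rayTriangleHitBefore(const Vec3& pos, const Vec3& dir,
                          const Vec3& v0, const Vec3& v1, const Vec3& v2,
                          float tt, float& t);

// For one view: drop every mesh face that is hit closer to the focal point
// than the point-cloud vertex the ray is aimed at.
void carveView(const Vec3& focal,
               const std::vector<Vec3>& visibleVertices,   // landmarks seen in this view
               const std::vector<Vec3>& meshVertices,
               std::vector<Face>& meshFaces)
{
    for (const Vec3& target : visibleVertices)
    {
        Vec3 dir = { target.x - focal.x, target.y - focal.y, target.z - focal.z };
        const float len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
        dir.x /= len; dir.y /= len; dir.z /= len;

        for (Face& f : meshFaces)
        {
            float t;
            if (!f.removed &&
                rayTriangleHitBefore(focal, dir,
                                     meshVertices[f.a], meshVertices[f.b], meshVertices[f.c],
                                     len - 1e-6f, t))
                f.removed = true;           // this face blocks the view of the target vertex
        }
    }
}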

A landmark in this context is just a feature point corresponding to a vertex of the point cloud. It can be anything detectable (a change of intensity, color, pattern, whatever); usually SIFT/SURF is used for this. You should have them located already, as they are the input for the point cloud generation. If not, you can peek at the pixel corresponding to each vertex and test for background color.
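If that fallback is needed, a minimal sketch of the pixel test (a grayscale image and a single background intensity are my own simplifications):

#include <cstdint>
#include <vector>

// Minimal grayscale image, just for the sketch.
struct GrayImage
{
    int width = 0, height = 0;
    std::vector<std::uint8_t> pixels;                       // row-major, width*height
    std::uint8_t at(int x, int y) const { return pixels[y * width + x]; }
};

// A vertex counts as a usable landmark in this view if its projected pixel
// lies inside the image and differs from the background intensity.
bool isLandmarkVisible(const GrayImage& img, int rawx, int rawy, std::uint8_t background)
{
    if (rawx < 0 || rawy < 0 || rawx >= img.width || rawy >= img.height)
        return false;
    return img.at(rawx, rawy) != background;
}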

Not sure how you want to do this without the input images. For that you need to decide which vertex is visible from which side/view. Maybe it is doable from nearby vertices somehow (like using vertex density or correspondence to a planar face...), or the algorithm is changed somehow to find unused vertices inside the mesh.

To cast a ray do this:

ray_pos=tm_eye*vec4(imgx/aspect,imgy,0.0,1.0);
ray_dir=ray_pos-tm_eye*vec4(0.0,0.0,-focal_length,1.0);

where tm_eye is the camera direct transform matrix, and imgx,imgy is the 2D pixel position in the image normalized to <-1,+1>, where (0,0) is the middle of the image. The focal_length determines the FOV of the camera, and the aspect ratio is the ratio of the image resolution image_ys/image_xs.
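A self-contained C++ version of those two lines (the row-major 4x4 layout and the small helpers are my own assumptions):

#include <array>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<float, 16>;   // row-major 4x4

// Multiply a row-major 4x4 matrix by a column vector.
static Vec4 mul(const Mat4& m, const Vec4& v)
{
    Vec4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i * 4 + j] * v[j];
    return r;
}

// Build the view ray for normalized image coordinates (imgx, imgy) in <-1,+1>,
// exactly as in the two lines above.
void buildRay(const Mat4& tm_eye, float imgx, float imgy,
              float focal_length, float aspect,
              Vec4& ray_pos, Vec4& ray_dir)
{
    ray_pos = mul(tm_eye, Vec4{ imgx / aspect, imgy, 0.0f, 1.0f });
    const Vec4 focus = mul(tm_eye, Vec4{ 0.0f, 0.0f, -focal_length, 1.0f });
    for (int i = 0; i < 4; ++i)
        ray_dir[i] = ray_pos[i] - focus[i];   // direction is not normalized here
}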

The ray/triangle intersection equation can be found here:

If I extract it:

vec3 v0,v1,v2; // input triangle vertexes
vec3 e1,e2,n,p,q,r;
float t,u,v,det,idet;
//compute ray triangle intersection
e1=v1-v0;
e2=v2-v0;
// Moller-Trumbore determinant setup
p=cross(ray[i0].dir,e2);
det=dot(e1,p);
// Ray is parallel to plane
if (abs(det)<1e-8) no intersection;
idet=1.0/det;
r=ray[i0].pos-v0;
u=dot(r,p)*idet;
if ((u<0.0)||(u>1.0)) no intersection;
q=cross(r,e1);
v=dot(ray[i0].dir,q)*idet;
if ((v<0.0)||(u+v>1.0)) no intersection;
t=dot(e2,q)*idet;
if ((t>_zero)&&(t<=tt))  // tt is distance to target vertex
    {
    // intersection
    }              
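For completeness, here is the same test as a self-contained C++ function (matching the rayTriangleHitBefore declaration used in the earlier sketch; the small vector helpers are mine):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub  (const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  cross(const Vec3& a, const Vec3& b) { return { a.y * b.z - a.z * b.y,
                                                            a.z * b.x - a.x * b.z,
                                                            a.x * b.y - a.y * b.x }; }
static float dot  (const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns true if the ray (pos, dir) hits the triangle (v0, v1, v2) at distance t in (0, tt].
// tt is the distance to the target vertex, as in the snippet above.
bool rayTriangleHitBefore(const Vec3& pos, const Vec3& dir,
                          const Vec3& v0, const Vec3& v1, const Vec3& v2,
                          float tt, float& t)
{
    const Vec3 e1 = sub(v1, v0);
    const Vec3 e2 = sub(v2, v0);
    const Vec3 p  = cross(dir, e2);
    const float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return false;           // ray parallel to triangle plane
    const float idet = 1.0f / det;
    const Vec3 r = sub(pos, v0);
    const float u = dot(r, p) * idet;
    if (u < 0.0f || u > 1.0f) return false;
    const Vec3 q = cross(r, e1);
    const float v = dot(dir, q) * idet;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * idet;
    return (t > 1e-8f) && (t <= tt);                     // hit before the target vertex
}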

Follow-ups:

To move between normalized image (imgx,imgy) and raw image (rawx,rawy) coordinates for an image of size (imgxs,imgys), where (0,0) is the top-left corner and (imgxs-1,imgys-1) is the bottom-right corner, you need:

imgx = (2.0*rawx / (imgxs-1)) - 1.0
imgy = 1.0 - (2.0*rawy / (imgys-1))

rawx = (imgx + 1.0)*(imgxs-1)/2.0
rawy = (1.0 - imgy)*(imgys-1)/2.0
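The same conversions as tiny C++ helpers (a sketch; no rounding to whole pixels is applied):

struct Norm2 { float x, y; };   // normalized coordinates in <-1,+1>
struct Raw2  { float x, y; };   // raw pixel coordinates

Norm2 rawToNorm(float rawx, float rawy, int imgxs, int imgys)
{
    return { 2.0f * rawx / (imgxs - 1) - 1.0f,
             1.0f - 2.0f * rawy / (imgys - 1) };
}

Raw2 normToRaw(float imgx, float imgy, int imgxs, int imgys)
{
    return { (imgx + 1.0f) * (imgxs - 1) / 2.0f,
             (1.0f - imgy) * (imgys - 1) / 2.0f };
}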

[Progress update 1]

I finally got to the point where I can compile sample test input data for this to even get started (as you are unable to share valid data at all):

I created a small app with a hard-coded table mesh (gray), a point cloud (aqua) and simple camera control, where I can save any number of views (screenshot + camera direct matrix). When loaded back, it aligns with the mesh itself (the yellow ray goes through the aqua dot in the image and through the table mesh too). The blue lines are cast from the camera focal point to its corners. This will emulate the input you got. The second part of the app will use only these images and matrices together with the point cloud (no mesh surface anymore): tetrahedronize it (already finished), then cast a ray through each landmark in each view (aqua dot) and remove all tetrahedrons hit before the target vertex in the point cloud is reached (this part is not even started yet, maybe on the weekend)... And lastly store only the surface triangles (easy: just use all triangles which are used only once; also already finished except the save part, but writing a Wavefront obj from that is easy...).
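That last step (keep only the triangles used exactly once) could be sketched like this; the tetrahedron layout (four vertex indices each) is my own assumption, not taken from the app:

#include <algorithm>
#include <array>
#include <map>
#include <utility>
#include <vector>

using Tet = std::array<int, 4>;   // four vertex indices per tetrahedron
using Tri = std::array<int, 3>;   // three vertex indices per triangle

// A face shared by two tetrahedra is interior; a face used by exactly one
// remaining tetrahedron lies on the surface of the carved mesh.
std::vector<Tri> surfaceTriangles(const std::vector<Tet>& tets)
{
    std::map<Tri, std::pair<int, Tri>> faces;   // sorted face -> (count, original winding)
    const int corner[4][3] = { {0,1,2}, {0,1,3}, {0,2,3}, {1,2,3} };
    for (const Tet& t : tets)
        for (const auto& c : corner)
        {
            Tri tri = { t[c[0]], t[c[1]], t[c[2]] };
            Tri key = tri;
            std::sort(key.begin(), key.end());
            auto& entry = faces[key];
            if (entry.first++ == 0)
                entry.second = tri;             // remember the winding of the first occurrence
        }
    std::vector<Tri> surface;
    for (const auto& kv : faces)
        if (kv.second.first == 1)               // used just once -> boundary face
            surface.push_back(kv.second.second);
    return surface;
}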

[Progress update 2]

I added landmark detection and matching with the point cloud.

As you can see, only valid rays are cast (those that are visible in the image), so some points of the point cloud do not cast rays (singular aqua dots). So now just the ray/triangle intersection and the removal of tetrahedrons from the list is what is missing...

