Plane fitting in a 3d point cloud
Question
I am trying to find planes in a 3d point cloud, using the regression formula Z = aX + bY + C.
I implemented least squares and RANSAC solutions, but the 3-parameter equation limits the plane fitting to 2.5D: the formula cannot be applied to planes parallel to the Z-axis.
My question is: how can I generalize the plane fitting to full 3D? I want to add a fourth parameter in order to get the full equation aX + bY + cZ + d = 0. How can I avoid the trivial (0, 0, 0, 0) solution?
Thanks!
The code I'm using:
from sklearn import linear_model

def local_regression_plane_ransac(neighborhood):
    """
    Computes parameters for a local regression plane using RANSAC
    """
    XY = neighborhood[:, :2]
    Z = neighborhood[:, 2]
    ransac = linear_model.RANSACRegressor(
        linear_model.LinearRegression(),
        residual_threshold=0.1
    )
    ransac.fit(XY, Z)

    inlier_mask = ransac.inlier_mask_
    coeff = ransac.estimator_.coef_
    intercept = ransac.estimator_.intercept_
    return coeff, intercept, inlier_mask
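For reference, a self-contained run of this kind of RANSAC regression on synthetic data might look like the following (the plane coefficients, noise level, and threshold here are illustrative, not from the question):

```python
import numpy as np
from sklearn import linear_model

rng = np.random.default_rng(0)

# synthetic samples of the plane Z = 2X - 3Y + 5, with noise and outliers
XY = rng.uniform(-1, 1, size=(200, 2))
Z = 2 * XY[:, 0] - 3 * XY[:, 1] + 5 + rng.normal(0, 0.01, 200)
Z[:10] += 5  # corrupt a few points

ransac = linear_model.RANSACRegressor(
    linear_model.LinearRegression(),
    residual_threshold=0.1)
ransac.fit(XY, Z)

coeff = ransac.estimator_.coef_           # close to [2, -3]
intercept = ransac.estimator_.intercept_  # close to 5
inliers = ransac.inlier_mask_             # the corrupted points are flagged out
```

Note that this still fits Z as a function of X and Y, so the 2.5D limitation from the question applies.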
Update
This functionality is now integrated in https://github.com/daavoo/pyntcloud and makes the plane-fitting process much simpler:
Given a point cloud:
You just need to add a scalar field like this:
is_floor = cloud.add_scalar_field("plane_fit")
This will add a new column with value 1 for the points of the fitted plane.
You can visualize the scalar field:
Old answer
I think that you could easily use PCA to fit the plane to the 3D points instead of regression.
Here is a simple PCA implementation:
import numpy as np

def PCA(data, correlation=False, sort=True):
    """ Applies Principal Component Analysis to the data

    Parameters
    ----------
    data: array
        The array containing the data. The array must have NxM dimensions, where each
        of the N rows represents a different individual record and each of the M columns
        represents a different variable recorded for that individual record.
            array([
                [V11, ... , V1m],
                ...,
                [Vn1, ... , Vnm]])

    correlation(Optional) : bool
        Set the type of matrix to be computed (see Notes):
            If True compute the correlation matrix.
            If False(Default) compute the covariance matrix.

    sort(Optional) : bool
        Set the order that the eigenvalues/vectors will have:
            If True(Default) they will be sorted (from highest value to lowest).
            If False they won't.

    Returns
    -------
    eigenvalues: (1,M) array
        The eigenvalues of the corresponding matrix.

    eigenvectors: (M,M) array
        The eigenvectors of the corresponding matrix.

    Notes
    -----
    The correlation matrix is a better choice when the M variables have different
    magnitudes. Use the covariance matrix in other cases.
    """
    mean = np.mean(data, axis=0)
    data_adjust = data - mean

    #: the data is transposed due to np.cov/corrcoef syntax
    if correlation:
        matrix = np.corrcoef(data_adjust.T)
    else:
        matrix = np.cov(data_adjust.T)

    eigenvalues, eigenvectors = np.linalg.eig(matrix)

    if sort:
        #: sort eigenvalues and eigenvectors from highest to lowest
        order = eigenvalues.argsort()[::-1]
        eigenvalues = eigenvalues[order]
        eigenvectors = eigenvectors[:, order]

    return eigenvalues, eigenvectors
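As a quick standalone sanity check of what the routine computes, here is the same sequence of steps inlined, on synthetic near-collinear 2D data (the data itself is illustrative):

```python
import numpy as np

# 2D records lying almost on the line y = 2x
x = np.linspace(0, 1, 50)
data = np.column_stack([x, 2 * x + 0.01 * np.sin(40 * x)])

mean = np.mean(data, axis=0)
matrix = np.cov((data - mean).T)
eigenvalues, eigenvectors = np.linalg.eig(matrix)
order = eigenvalues.argsort()[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# nearly all variance lies along the first principal component,
# whose direction is close to (1, 2) normalized
print(eigenvalues[0] / eigenvalues.sum())
```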
And here is how you could fit the points to a plane:
def best_fitting_plane(points, equation=False):
    """ Computes the best fitting plane of the given points

    Parameters
    ----------
    points: array
        The x,y,z coordinates corresponding to the points from which we want
        to define the best fitting plane. Expected format:
            array([
                [x1,y1,z1],
                ...,
                [xn,yn,zn]])

    equation(Optional) : bool
        Set the output plane format:
            If True return the a,b,c,d coefficients of the plane.
            If False(Default) return 1 Point and 1 Normal vector.

    Returns
    -------
    a, b, c, d : float
        The coefficients solving the plane equation.

    or

    point, normal: array
        The plane defined by 1 Point and 1 Normal vector. With format:
        array([Px,Py,Pz]), array([Nx,Ny,Nz])
    """
    w, v = PCA(points)

    #: the normal of the plane is the last eigenvector
    normal = v[:, 2]

    #: get a point on the plane: the centroid of the points
    point = np.mean(points, axis=0)

    if equation:
        a, b, c = normal
        d = -(np.dot(normal, point))
        return a, b, c, d
    else:
        return point, normal
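As a sanity check that the PCA route handles the case the question's Z = aX + bY + C regression cannot, here is a self-contained sketch (inlining the covariance eigendecomposition rather than calling the helpers above) fitting a plane parallel to the Z-axis:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# noisy samples of the vertical plane x = 2 (parallel to the Z-axis)
points = np.column_stack([
    2.0 + rng.normal(0, 0.01, n),
    rng.uniform(-1, 1, n),
    rng.uniform(-1, 1, n),
])

point = points.mean(axis=0)
eigenvalues, eigenvectors = np.linalg.eig(np.cov((points - point).T))
normal = eigenvectors[:, eigenvalues.argmin()]  # smallest-variance direction

# normal is close to (±1, 0, 0); d = -(normal . point) recovers the offset,
# so a, b, c, d describe the vertical plane with no degeneracy
```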
However, as this method is sensitive to outliers, you could use RANSAC to make the fit robust.
There is a Python implementation of RANSAC here.
And you should only need to define a Plane Model class in order to use it for fitting planes to 3D points.
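The linked code is not reproduced here, but a minimal hand-rolled RANSAC loop for a plane model (a sketch, with illustrative threshold and iteration values) could look like this:

```python
import numpy as np

def ransac_plane(points, threshold=0.05, iterations=100, seed=None):
    """Robustly fit a plane; returns (point, normal, boolean inlier mask)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iterations):
        # candidate plane through 3 random points
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm == 0:  # degenerate (collinear) sample
            continue
        normal = normal / norm
        distances = np.abs((points - sample[0]) @ normal)
        inliers = distances < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refine on the consensus set: centroid + smallest-variance direction
    inlier_pts = points[best_inliers]
    centroid = inlier_pts.mean(axis=0)
    _, _, vt = np.linalg.svd(inlier_pts - centroid)
    return centroid, vt[-1], best_inliers
```

The final SVD refit plays the same role as the PCA step above: the last right-singular vector is the direction of least variance, i.e. the plane normal.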
In any case, if you can clean the 3D points of outliers (maybe using a KD-Tree S.O.R. filter for that), you should get pretty good results with PCA.
Here is an implementation of an S.O.R:
import numpy as np
from scipy import stats

def statistical_outlier_removal(kdtree, k=8, z_max=2):
    """ Compute a Statistical Outlier Removal filter on the given KDTree.

    Parameters
    ----------
    kdtree: scipy's KDTree instance
        The KDTree's structure which will be used to
        compute the filter.

    k(Optional): int
        The number of nearest neighbors which will be used to estimate the
        mean distance from each point to its nearest neighbors.
        Default : 8

    z_max(Optional): int
        The maximum Z score which determines if the point is an outlier or
        not.

    Returns
    -------
    sor_filter : boolean array
        The boolean mask indicating whether a point should be kept or not.
        The size of the boolean mask will be the same as the number of points
        in the KDTree.

    Notes
    -----
    The 2 optional parameters (k and z_max) should be used in order to adjust
    the filter to the desired result.

    A HIGHER 'k' value will result (normally) in a HIGHER number of points trimmed.

    A LOWER 'z_max' value will result (normally) in a HIGHER number of points trimmed.
    """
    distances, i = kdtree.query(kdtree.data, k=k, n_jobs=-1)

    z_distances = stats.zscore(np.mean(distances, axis=1))

    sor_filter = abs(z_distances) < z_max

    return sor_filter
You could feed the function with a KDTree of your 3D points, computed maybe using this implementation.
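Wiring that together might look like this (a sketch with synthetic data; note that recent SciPy releases renamed the query's n_jobs argument to workers):

```python
import numpy as np
from scipy import stats
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
# a dense cluster plus a handful of far-away outliers
points = np.vstack([rng.normal(0, 0.1, (200, 3)),
                    rng.uniform(5, 6, (5, 3))])

kdtree = cKDTree(points)
distances, _ = kdtree.query(points, k=8)  # same query the filter performs
z_distances = stats.zscore(np.mean(distances, axis=1))
sor_filter = np.abs(z_distances) < 2

clean_points = points[sor_filter]  # pass these to the PCA plane fit
```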