How to get a phone's azimuth with compass readings and gyroscope readings?


Question


I wish to get my phone's current orientation by the following method:

  1. Get the initial orientation (azimuth) first via the getRotationMatrix() and getOrientation().
  2. Add the integration of gyroscope reading over time to it to get the current orientation.

Phone Orientation:

The phone's x-y plane is fixed parallel with the ground plane. i.e., is in a "texting-while-walking" orientation.

"getOrientation()" Return Values:

Android API allows me to easily get the orientation, i.e., azimuth, pitch, roll, from getOrientation().

Please note that this method always returns its value within the range: [0, -PI] and [0, PI].

My Problem:

Since the integration of the gyroscope reading, denoted by dR, may be quite big, when I do CurrentOrientation += dR, the CurrentOrientation may exceed the [0, -PI] and [0, PI] ranges.

What manipulations are needed so that I can ALWAYS get the current orientation within the [0, -PI] and [0, PI] ranges?

I have tried the following in Python, but I highly doubt its correctness.

import numpy as np
import scipy.integrate  # note: trapz is named trapezoid in newer SciPy versions

# gyroSeries, timeSeries, ALPHA, azimuth, stepNo and i come from the
# surrounding data-collection code, which is not shown here.
rotation = scipy.integrate.trapz(gyroSeries, timeSeries) # integration
if (headingDirection - rotation) < -np.pi:
    headingDirection += 2 * np.pi
elif (headingDirection - rotation) > np.pi:
    headingDirection -= 2 * np.pi
# Complementary Filter
headingDirection = ALPHA * (headingDirection - rotation) + (1 - ALPHA) * np.mean(azimuth[np.array(stepNo.tolist()) == i])
if headingDirection < -np.pi:
    headingDirection += 2 * np.pi
elif headingDirection > np.pi:
    headingDirection -= 2 * np.pi
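For the wrap-around part of the question (independent of whether the integration itself is right), a standard trick avoids chained if/elif corrections entirely: pass the angle through sin/cos and back through atan2, which maps any value into [-PI, PI]. A sketch with an illustrative helper name:

```python
import numpy as np

def wrap_to_pi(angle):
    # Map any angle (radians) into [-pi, pi]. atan2 handles multiple
    # wraps at once, unlike a single +/- 2*pi correction.
    return np.arctan2(np.sin(angle), np.cos(angle))

heading, dR = 3.0, 1.0
heading = wrap_to_pi(heading + dR)  # 4.0 wraps to about -2.283
```

This stays correct even when dR is many multiples of 2*PI, which the single if/elif correction in the snippet above does not.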


Remarks

This is NOT that simple, because it involves the following trouble-makers:

  1. The orientation sensor reading goes from 0 to -PI, and then DIRECTLY JUMPS to +PI and gradually gets back to 0 via +PI/2.
  2. The integration of the gyroscope reading also leads to some trouble. Should I add dR to the orientation, or subtract dR?

Do please refer to the Android Documentation first, before giving a confirmed answer.

Estimated answers will not help.

Solution

The orientation sensor actually derives its readings from the real magnetometer and the accelerometer.

I guess maybe this is the source of the confusion. Where is this stated in the documentation? More importantly, does the documentation somewhere explicitly state that the gyro readings are ignored? As far as I know the method described in this video is implemented:

Sensor Fusion on Android Devices: A Revolution in Motion Processing

This method uses the gyros and integrates their readings. This pretty much renders the rest of the question moot; nevertheless I will try to answer it.


The orientation sensor is already integrating the gyro readings for you, that is how you get the orientation. I don't understand why you are doing it yourself.

You are not doing the integration of the gyro readings properly; it is more complicated than CurrentOrientation += dR (which is incorrect). If you need to integrate the gyro readings (I don't see why, the SensorManager is already doing it for you) please read Direction Cosine Matrix IMU: Theory on how to do it properly (Equation 17).
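For intuition, the Equation 17 style of update can be sketched in a few lines: instead of adding angle increments, multiply the current rotation matrix by the small rotation implied by the gyro rates over one timestep. This is an illustrative sketch only, not the manuscript's full algorithm (which uses a cheaper renormalization scheme plus drift correction):

```python
import numpy as np

def skew(w):
    # Skew-symmetric matrix so that skew(w) @ v equals np.cross(w, v).
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def integrate_gyro(R, omega, dt):
    # First-order DCM update: R_new = R @ (I + skew(omega * dt)).
    R = R @ (np.eye(3) + skew(np.asarray(omega) * dt))
    # Re-orthogonalize via SVD so numerical error cannot accumulate
    # (the manuscript uses a cheaper correction; SVD is just simple).
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt
```

The point is that the update is a matrix product, so the order of successive rotations is preserved, which simple angle addition throws away.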

Don't try integrating with Euler angles (aka azimuth, pitch, roll), nothing good will come out.

Please use either quaternions or rotation matrices in your computations instead of Euler angles. If you work with rotation matrices, you can always convert them to Euler angles, see

Computing Euler angles from a rotation matrix by Gregory G. Slabaugh

(The same is true for quaternions.) There are (in the non-degenerate case) two ways to represent a rotation, that is, you will get two Euler angles. Pick the one that is in the range you need. (In case of gimbal lock, there are infinitely many Euler angles, see the PDF above.) Just promise you won't start using Euler angles again in your computations after the rotation matrix to Euler angles conversion.
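As a concrete illustration of the two-solution point, here is a sketch of the extraction for the R = Rz(psi) @ Ry(theta) @ Rx(phi) convention used in Slabaugh's note, valid away from gimbal lock; adapt the index pattern if your convention differs:

```python
import numpy as np

def euler_from_matrix(R):
    # Returns both (phi, theta, psi) solutions for
    # R = Rz(psi) @ Ry(theta) @ Rx(phi), following Slabaugh's note.
    # Assumes |R[2, 0]| != 1; at gimbal lock there are infinitely
    # many solutions and this division blows up.
    theta1 = -np.arcsin(R[2, 0])
    theta2 = np.pi - theta1
    solutions = []
    for theta in (theta1, theta2):
        c = np.cos(theta)
        phi = np.arctan2(R[2, 1] / c, R[2, 2] / c)   # roll
        psi = np.arctan2(R[1, 0] / c, R[0, 0] / c)   # yaw
        solutions.append((phi, theta, psi))
    return solutions
```

Both returned triples reproduce the same rotation matrix; you simply pick the one whose angles lie in the range you need.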

It is unclear what you are doing with the complementary filter. You can implement a pretty damn good sensor fusion based on the Direction Cosine Matrix IMU: Theory manuscript, which is basically a tutorial. It's not trivial to do it but I don't think you will find a better, more understandable tutorial than this manuscript.

One thing that I had to discover myself when I implemented sensor fusion based on this manuscript was that the so-called integral windup can occur. I took care of it by bounding the TotalCorrection (page 27). You will understand what I am talking about if you implement this sensor fusion.
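The integral-windup fix described above amounts to clamping the accumulated correction term. A minimal sketch; the gain, timestep, bound, and names are illustrative placeholders, not values from the manuscript:

```python
import numpy as np

def bounded_total_correction(total_correction, error, ki, dt, bound=0.1):
    # Accumulate the integral term of the drift correction, then clamp
    # each component so the integrator cannot wind up without limit
    # during a long disturbance (e.g. sustained magnetic interference).
    total_correction = total_correction + ki * np.asarray(error) * dt
    return np.clip(total_correction, -bound, bound)
```

Without the clamp, a long disturbance would leave a huge stored correction that keeps steering the estimate long after the disturbance ends.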



UPDATE: Here I answer your questions that you posted in comments after accepting the answer.

I think the compass gives me my current orientation by using gravity and magnetic field, right? Is gyroscope used in the compass?

Yes, if the phone is more or less stationary for at least half a second, you can get a good orientation estimate by using gravity and the compass only. Here is how to do it: Can anyone tell me whether gravity sensor is used as a tilt sensor to improve heading accuracy?

No, the gyroscopes are not used in the compass.
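For reference, the gravity-plus-magnetometer estimate can be reproduced outside Android. The sketch below mirrors the cross-product construction commonly attributed to SensorManager.getRotationMatrix (H = E x A pointing East, M = A x H pointing North), followed by the azimuth formula getOrientation uses; treat the exact Android internals as an assumption and check the platform source:

```python
import numpy as np

def azimuth_from_gravity_and_magnetic(gravity, geomagnetic):
    # gravity (A) and geomagnetic (E) are 3-vectors in device coordinates.
    A = np.asarray(gravity, dtype=float)
    E = np.asarray(geomagnetic, dtype=float)
    H = np.cross(E, A)            # points roughly East in device coords
    H /= np.linalg.norm(H)
    A = A / np.linalg.norm(A)
    M = np.cross(A, H)            # completes the frame, points North
    R = np.vstack([H, M, A])      # rotation matrix, rows are E/N/Up axes
    # Azimuth in radians, 0 = North, pi/2 = East:
    return np.arctan2(R[0, 1], R[1, 1])
```

With the phone lying flat and its y axis pointing magnetic North, this returns 0; rotating the phone clockwise (as seen from above) increases the azimuth.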

Could you please kindly explain why the integration done by me is wrong? I understand that if my phone's pitch points up, euler angle fails. But any other things wrong with my integration?

There are two unrelated things: (i) the integration should be done differently, (ii) Euler angles are trouble because of the Gimbal lock. I repeat, these two are unrelated.

As for the integration: here is a simple example of how you can actually see what is wrong with your integration. Let x and y be the axes of the horizontal plane in the room. Get a phone in your hands. Rotate the phone around the x axis (of the room) by 45 degrees, then around the y axis (of the room) by 45 degrees. Then, repeat these steps from the beginning but now rotate around the y axis first, and then around the x axis. The phone ends up in a totally different orientation. If you do the integration according to CurrentOrientation += dR you will see no difference! Please read the above linked Direction Cosine Matrix IMU: Theory manuscript if you want to do the integration properly.
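The two-rotation experiment is easy to reproduce numerically: composing the 45-degree rotations in the two orders gives different matrices, while naive angle addition cannot tell them apart. A sketch:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

a = np.deg2rad(45)
# Proper composition about fixed (room) axes: order matters.
R_xy = rot_y(a) @ rot_x(a)   # about x first, then about y
R_yx = rot_x(a) @ rot_y(a)   # about y first, then about x
print(np.allclose(R_xy, R_yx))          # False: different orientations

# Naive "CurrentOrientation += dR" bookkeeping: order is invisible.
naive_xy = np.array([a, a, 0.0])        # summed (x, y, z) increments
naive_yx = np.array([a, a, 0.0])
print(np.allclose(naive_xy, naive_yx))  # True: no difference detected
```

The matrix composition reproduces what the physical phone does; the summed angles do not.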

As for the Euler angles: they screw up the stability of the application and it is enough for me not to use them for arbitrary rotations in 3D.

I still don't understand why you are trying to do it yourself, why you don't want to use the orientation estimate provided by the platform. Chances are, you cannot do better than that.
