Xinyu-Yi / TransPose

A real-time motion capture system that estimates poses and global translations using only 6 inertial measurement units

Home Page: https://xinyu-yi.github.io/TransPose/


How can I run an offline demo using data from IMUs I bought myself?

MrBen2019 opened this issue · comments

The paper says the inputs are the calibrated accelerations of the 6 IMUs (accelerations, 6 × 3) and the orientation matrices (rotations, 6 × 3 × 3). The nine-axis sensor I bought outputs three-axis acceleration + three-axis angular velocity + three-axis magnetic field. My understanding is that accelerations are obtained by calibrating the three-axis acceleration, but I am not clear which of the nine-axis IMU's outputs rotations are derived from. Could the author explain how rotations are obtained?

Hi @MrBen2019

Can your sensor output quaternions?
From quaternions, you can convert to rotation matrices.

You can read more about the axis conversions here: https://zhuanlan.zhihu.com/p/351596374
I converted the code in the lessons from Matlab to Python; you can refer to it here: https://github.com/SlimeVRX/SlimeVRX/tree/main/hipnuc/document

The lectures are very easy to understand for beginners. Good Luck!
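As a minimal illustration (not TransPose's own code), converting a unit quaternion to a 3 × 3 rotation matrix can be done by hand with numpy. The [w, x, y, z] ordering below is an assumption; some sensors output [x, y, z, w] instead, so check your IMU's datasheet:

```python
import numpy as np

def quat_to_rotmat(q):
    """Convert a unit quaternion [w, x, y, z] to a 3x3 rotation matrix.

    Note: the [w, x, y, z] ordering is an assumption; some sensors
    output [x, y, z, w] instead, so check your IMU's datasheet.
    """
    w, x, y, z = q / np.linalg.norm(q)  # normalize to guard against drift
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# The identity quaternion maps to the identity matrix
R = quat_to_rotmat(np.array([1.0, 0.0, 0.0, 0.0]))
```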


Hello, I have a small question. The sensor can output quaternions, but should we consider the alignment with a world coordinate system such as ENU or NED? I don't seem to have found this answered in the paper.

Hi @Rickyhzy

Sorry! I do not understand your question. Can you explain more?


Sorry, I am also a beginner. I have learned that inertial navigation needs to define a body frame and a navigation frame, so the quaternion output by the IMU differs between reference frames such as ENU and NED. I would like to ask how this problem is solved. Also, after power-on the IMU starts outputting its first frames of data; how is this registered with the initial T-pose? I'm curious about this. Could you please give me an answer? Thank you very much.

Sorry if my answer is not correct

the quaternion output of IMU is different under different reference systems ENU or NED


With quaternions, I only distinguish between the [w x y z] and [x y z w] orderings, not ENU or NED.

With Euler angles:

  • ENU: eul_312 (Z → X → Y): rotate in the order 3-1-2 (first about Z, then X, last Y)
  • NED: eul_321 (Z → Y → X)

With rotation matrices:

  • ENU: m_312 (Z → X → Y)
  • NED: m_321 (Z → Y → X)

Conversion formula (Refer: Link)

  • ch_eul2m_312
  • ch_eul2m_321
  • ch_m2eul_312
  • ch_m2eul_321
  • ch_m2q
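As a rough illustration, the 312 and 321 orders differ only in how the three elementary rotations are composed. A minimal numpy sketch follows; the exact angle-argument conventions of the ch_eul2m_* helpers are an assumption (intrinsic rotations are assumed here), so treat this as a sketch rather than a drop-in replacement:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def eul2m_312(z, x, y):
    # 312 order (ENU convention): rotate about Z first, then X, then Y
    return rot_z(z) @ rot_x(x) @ rot_y(y)

def eul2m_321(z, y, x):
    # 321 order (NED convention): Z, then Y, then X (aerospace yaw-pitch-roll)
    return rot_z(z) @ rot_y(y) @ rot_x(x)
```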

How to register with the initial T-pose?

You need to perform a coordinate-system conversion from the sensor coordinate system to the SMPL coordinate system.
Read the appendix: A.1 Calibration


During the calibration, we first place an IMU with its axes aligned with a user-defined coordinate system, which we call the "SMPL coordinate frame". This step determines the rotation offset between the SMPL coordinate frame and your IMU's global inertial frame (whether ENU, NED, or anything else). You can find the code in livedemo.py

You can also find the calibration part in the appendix of the paper. After this step, we can obtain the IMU measurements w.r.t. the SMPL coordinate frame. For the orientation measurements, since they are the IMU sensors' orientations rather than the bones' orientations, we additionally right-multiply a sensor-to-bone rotation offset to get the bone orientation. That's all the calibration does.
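The two offsets described above compose by left- and right-multiplication. A minimal numpy sketch follows; all names are illustrative assumptions, not the actual variables in livedemo.py:

```python
import numpy as np

def calibrate_orientation(R_raw, R_world2smpl, R_sensor2bone):
    """Map a raw IMU orientation into a bone orientation in the SMPL frame.

    R_raw:          3x3 sensor orientation w.r.t. the IMU's global inertial
                    frame (ENU, NED, ... it does not matter which).
    R_world2smpl:   3x3 rotation from that inertial frame to the SMPL frame,
                    determined by aligning one IMU's axes with the SMPL axes.
    R_sensor2bone:  3x3 sensor-to-bone offset, determined from a known pose
                    (e.g. the initial T-pose).

    All names here are hypothetical; see livedemo.py for the actual code.
    """
    # left-multiply: express the orientation in the SMPL frame;
    # right-multiply: go from the sensor's axes to the bone's axes
    return R_world2smpl @ R_raw @ R_sensor2bone
```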