Function without config file

This makes WMR headset tracking work out of the box.
Mateo de Mayo 2022-05-27 14:13:37 +00:00
parent 594603861b
commit e36bead967
6 changed files with 171 additions and 43 deletions

View File

@@ -27,7 +27,8 @@ in order to get it working in different ones.
## Installation
This was tested on both Ubuntu 20.04 and 18.04, be sure to open an issue if the
-steps don't work for you.
+steps don't work for you. The main branch of this fork is
+[`xrtslam`](https://gitlab.freedesktop.org/mateosss/basalt/-/tree/xrtslam).
### Build and Install Directories
@@ -106,7 +107,7 @@ This step is optional but you can try Basalt without Monado with one of the foll
### Monado Specifics
You'll need to compile Monado with the same Eigen used in Basalt, and with the
-same flags. For that, set these with CMake (or equivalent flags for meson):
+same flags. For that, set these with CMake:
`-DEIGEN3_INCLUDE_DIR=$bsltdeps/basalt/thirdparty/basalt-headers/thirdparty/eigen
-DCMAKE_C_FLAGS="-march=native" -DCMAKE_CXX_FLAGS="-march=native"` otherwise
Monado will automatically use your system's Eigen, and having mismatched Eigen

View File

@@ -115,7 +115,7 @@
0.009999999873689375
],
"cam_time_offset_ns": 0,
-"view_offset": [247, 0],
+"view_offset": [0, 0],
"vignette": []
}
}

View File

@@ -1,6 +1,22 @@
# Windows Mixed Reality Headsets
-We'll need to make a Basalt config file for your headset, let's say it's a
+Monado should work out of the box with WMR devices and Basalt without any input
+on your part. So if you successfully followed the guide in the main README and
+Monado detects your WMR headset, then tracking should already be working for you.
+If it does not work, double-check that you read everything in the guide, from
+top to bottom.
+If it still doesn't work, triple-check it.
+Now, if you are still experiencing issues or crashes, or would like to debug the
+pipeline for whatever reason, the rest of this document should help you with
+that.
+## Making a custom Basalt config file
+It's a good idea to make a Basalt config file for your headset so that you can
+easily tweak it if needed. Let's say we are trying to make a config file for a
+Reverb G2.
First, let's get your WMR device json config block. To get that json, set the
@@ -22,18 +38,62 @@ already present for the Odyssey+:
cp $bsltdeps/basalt/data/monado/odysseyplus_rt8.toml $bsltdeps/basalt/data/monado/reverbg2.toml
```
-And edit the `cam-calib` field in the `reverbg2.toml` file to point to your `reverbg2_calib.json` file.
+And edit the `cam-calib` field in the `reverbg2.toml` file to point to your
+`reverbg2_calib.json` file.
-And that's it, now you just need to reference this `reverbg2.toml` in the
-`SLAM_CONFIG` environment variable before launching Monado with `export
-SLAM_CONFIG=$bsltdeps/basalt/data/monado/reverbg2.toml` and Basalt will use the
-appropriate calibration for your headset.
+That's it! Now you have a Basalt config file that you can use for your headset.
+## Set Monado options
+Let's set a couple of environment variables in Monado that will help us debug the
+SLAM pipeline.
+- `SLAM_CONFIG=$bsltdeps/basalt/data/monado/reverbg2.toml`: Tell Monado where
+the Basalt `toml` config you just created is. Notice that the `show-gui`
+property is enabled in this `toml` file, so you will start seeing the Basalt
+visualizer when opening Monado. Furthermore, the `config-path` key points to a
+Basalt-specific config file for tweaking the VIO pipeline.
+- `OXR_DEBUG_GUI=on`: Enable Monado's own debug GUI.
+- `SLAM_SUBMIT_FROM_START=off`: Do not send frames to Basalt from the start;
+instead, wait until we check the checkbox called "SLAM Tracker" in the Monado
+GUI.
+- `WMR_AUTOEXPOSURE=off`: Disable autoexposure to have one less moving part; we
+will instead adjust it manually in the "WMR Camera" box, by moving the
+"Brightness" slider in the "Auto exposure and gain control" section.
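The toggles above are simple on/off environment strings. As a rough illustration of how such a flag could be read (this is not Monado's actual option-parsing code; the helper name and accepted spellings are assumptions):

```cpp
#include <cassert>
#include <cstdlib>
#include <string>

// Hypothetical helper, not from Monado: treats "on", "true" and "1" as
// enabled, any other value as disabled, and falls back when the variable
// is unset.
static bool env_flag(const char *name, bool fallback) {
  const char *v = std::getenv(name);
  if (v == nullptr) return fallback;
  const std::string s{v};
  return s == "on" || s == "true" || s == "1";
}
```

For example, `env_flag("SLAM_SUBMIT_FROM_START", true)` would return `false` after `export SLAM_SUBMIT_FROM_START=off`.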
+## Controlling auto exposure
+By default, the UI box `SLAM Tracker` has the option `Submit data to SLAM`
+disabled so that you first manually configure the exposure and gain values in
+the `WMR Camera` box. You can enable it yourself in the UI or enable it at start
+by setting the environment variable `SLAM_SUBMIT_FROM_START=true`.
-# Video Walkthrough
-Here is a 15 minute walkthrough with some tips for using a WMR headset with Monado and Basalt that should help complement the guide found in the [README.md](README.md) file: <https://www.youtube.com/watch?v=jyQKjyRVMS4>
+## Recalibrating my device (TODO)
+It's not a bad idea to recalibrate your headset manually with the tools Basalt provides.
+TODO: Specify the steps better, but roughly they are:
+1. Get the calibration target from Kalibr: https://github.com/ethz-asl/kalibr/wiki/downloads
+2. Open the PDF on a flat monitor, measure the dimensions with a ruler, and put them in `aprilgrid_6x6.json`
+3. Record an EuRoC dataset from Monado in which you move the headset around the target (link an example sequence)
+4. Run
+[basalt_calibrate](https://gitlab.com/VladyslavUsenko/basalt/-/blob/master/doc/Calibration.md#camera-calibration)
+on
+[euroc](https://gitlab.com/VladyslavUsenko/basalt/-/blob/master/doc/Calibration.md#euroc-dataset).
+5. Run
+[basalt_calibrate_vio](https://gitlab.com/VladyslavUsenko/basalt/-/blob/master/doc/Calibration.md#camera-imu-mocap-calibration)
+on
+[euroc](https://gitlab.com/VladyslavUsenko/basalt/-/blob/master/doc/Calibration.md#camera-imu-calibration).
+# Video Walkthrough (DEPRECATED)
+_This video is not up to date anymore but might be useful to see how things
+worked before. Now, `view_offset` is automatically computed, and exposure and
+gain are automatically set too, so in general no manual input is needed from
+the user._
+~~Here is a 15 minute walkthrough with some tips for using a WMR headset with Monado and Basalt that should help complement the guide found in the [README.md](README.md) file: <https://www.youtube.com/watch?v=jyQKjyRVMS4>~~

View File

@@ -68,7 +68,7 @@ struct slam_tracker::implementation {
private:
// Options parsed from unified config file
-bool show_gui = true;
+bool show_gui = false;
string cam_calib_path;
string config_path;
string marg_data_path;
@@ -83,6 +83,11 @@ struct slam_tracker::implementation {
static constexpr int NUM_CAMS = 2;
// VIO members
+struct {
+bool imu = false;
+bool cam0 = false;
+bool cam1 = false;
+} calib_data_ready;
Calibration<double> calib;
VioConfig vio_config;
OpticalFlowBase::Ptr opt_flow_ptr;
@@ -116,6 +121,11 @@ struct slam_tracker::implementation {
public:
implementation(const string &unified_config) {
+if (unified_config == "DEFAULT") {
+// For the pipeline to work now, the user will need to use add_cam/imu_calibration
+return;
+}
load_unified_config(unified_config);
vio_config.load(config_path);
@@ -156,6 +166,7 @@ struct slam_tracker::implementation {
cereal::JSONInputArchive archive(os);
archive(calib);
cout << "Loaded camera with " << calib.intrinsics.size() << " cameras\n";
+calib_data_ready.imu = calib_data_ready.cam0 = calib_data_ready.cam1 = true;
} else {
std::cerr << "could not load camera calibration " << calib_path << "\n";
std::abort();
@@ -216,6 +227,13 @@ struct slam_tracker::implementation {
cout << "Finished queues_printer\n";
}
+void print_calibration() {
+std::stringstream ss{};
+cereal::JSONOutputArchive write_to_stream(ss);
+write_to_stream(calib);
+cout << "Calibration: " << ss.str() << "\n";
+}
public:
void initialize() {
// Overwrite camera calibration data
@@ -223,11 +241,22 @@ struct slam_tracker::implementation {
apply_cam_calibration(c);
}
+bool calib_from_monado = added_cam_calibs.size() == 2;
+bool view_offset_unknown = calib.view_offset(0) == 0 && calib.view_offset(1) == 0;
+if (calib_from_monado || view_offset_unknown) {
+compute_view_offset();
+cout << "Computed view_offset = " << calib.view_offset.transpose() << "\n";
+}
// Overwrite IMU calibration data
for (const auto &c : added_imu_calibs) {
apply_imu_calibration(c);
}
+ASSERT(calib_data_ready.imu, "Missing IMU calibration");
+ASSERT(calib_data_ready.cam0, "Missing left camera (cam0) calibration");
+ASSERT(calib_data_ready.cam1, "Missing right camera (cam1) calibration");
// NOTE: This factory also starts the optical flow
opt_flow_ptr = OpticalFlowFactory::getOpticalFlow(vio_config, calib);
image_data_queue = &opt_flow_ptr->input_queue;
@@ -362,39 +391,68 @@ struct slam_tracker::implementation {
return true;
}
+void compute_view_offset() {
+constexpr double DISTANCE_TO_WALL = 2; // In meters
+double width = calib.resolution[0][0];
+double height = calib.resolution[0][1];
+Sophus::SE3d T_i_c0 = calib.T_i_c[0];
+Sophus::SE3d T_i_c1 = calib.T_i_c[1];
+Sophus::SE3d T_c1_i = T_i_c1.inverse();
+Sophus::SE3d T_c1_c0 = T_c1_i * T_i_c0; // Maps a point in c0 space to c1 space
+Eigen::Vector4d p3d{0, 0, DISTANCE_TO_WALL, 1};
+Eigen::Vector4d p3d_in_c1 = T_c1_c0 * p3d;
+Eigen::Vector2d p2d;
+calib.intrinsics[1].project(p3d_in_c1, p2d);
+calib.view_offset.x() = (width / 2) - p2d.x();
+calib.view_offset.y() = (height / 2) - p2d.y();
+}
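`compute_view_offset` projects the point `DISTANCE_TO_WALL` meters in front of cam0 into cam1's image and takes the distance from cam1's image center as the offset. A stripped-down sketch of the same idea with a plain pinhole model, no rotation between the cameras, and invented intrinsics and baseline (the real code uses Sophus/Eigen and the device calibration):

```cpp
#include <cassert>
#include <cmath>

// Toy pinhole camera; the fx/fy/cx/cy values used below are invented.
struct Pinhole { double fx, fy, cx, cy; };
struct Vec2 { double x, y; };

// Project a 3D point (camera coordinates, z forward) to pixel coordinates.
static Vec2 project(const Pinhole &c, double x, double y, double z) {
  return {c.fx * x / z + c.cx, c.fy * y / z + c.cy};
}

// view_offset idea: where does the point straight ahead of cam0 land in
// cam1, relative to cam1's image center? Here cam1 sits baseline_m to the
// right of cam0 with no rotation (a simplification of T_c1_c0).
static Vec2 view_offset(const Pinhole &cam1, double baseline_m,
                        double wall_m, double width, double height) {
  Vec2 p = project(cam1, -baseline_m, 0.0, wall_m);
  return {width / 2 - p.x, height / 2 - p.y};
}
```

With fx = 500 px, a 64 mm baseline, and a 2 m wall, the horizontal offset works out to fx * baseline / distance = 16 pixels, and the vertical offset is zero.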
void add_cam_calibration(const cam_calibration &cam_calib) { added_cam_calibs.push_back(cam_calib); }
void apply_cam_calibration(const cam_calibration &cam_calib) {
using Scalar = double;
-int i = cam_calib.cam_index;
+size_t i = cam_calib.cam_index;
-const auto &tci = cam_calib.T_cam_imu;
-Eigen::Matrix3d rci;
-rci << tci(0, 0), tci(0, 1), tci(0, 2), tci(1, 0), tci(1, 1), tci(1, 2), tci(2, 0), tci(2, 1), tci(2, 2);
-Eigen::Quaterniond q(rci);
-Eigen::Vector3d p{tci(0, 3), tci(1, 3), tci(2, 3)};
-calib.T_i_c[i] = Calibration<Scalar>::SE3(q, p);
+const auto &tic = cam_calib.t_imu_cam;
+Eigen::Matrix3d ric;
+ric << tic(0, 0), tic(0, 1), tic(0, 2), tic(1, 0), tic(1, 1), tic(1, 2), tic(2, 0), tic(2, 1), tic(2, 2);
+Eigen::Quaterniond q(ric);
+Eigen::Vector3d p{tic(0, 3), tic(1, 3), tic(2, 3)};
+ASSERT_(calib.T_i_c.size() == i);
+calib.T_i_c.push_back(Calibration<Scalar>::SE3(q, p));
GenericCamera<double> model;
-const vector<Scalar> &cmp = cam_calib.model_params;
-if (cam_calib.model == cam_calibration::cam_model::pinhole) {
+const vector<Scalar> &d = cam_calib.distortion;
+if (cam_calib.distortion_model == "none") {
+ASSERT_(d.size() == 0);
PinholeCamera<Scalar>::VecN mp;
mp << cam_calib.fx, cam_calib.fy, cam_calib.cx, cam_calib.cy;
PinholeCamera pinhole(mp);
model.variant = pinhole;
-} else if (cam_calib.model == cam_calibration::cam_model::fisheye) {
+} else if (cam_calib.distortion_model == "kb4") {
+ASSERT_(d.size() == 4);
KannalaBrandtCamera4<Scalar>::VecN mp;
-mp << cam_calib.fx, cam_calib.fy, cam_calib.cx, cam_calib.cy, cmp[0], cmp[1], cmp[2], cmp[3];
+mp << cam_calib.fx, cam_calib.fy, cam_calib.cx, cam_calib.cy, d[0], d[1], d[2], d[3];
KannalaBrandtCamera4 kannala_brandt(mp);
model.variant = kannala_brandt;
+} else if (cam_calib.distortion_model == "rt8") {
+ASSERT_(d.size() == 9); // 8 and rpmax
+PinholeRadtan8Camera<Scalar>::VecN mp;
+mp << cam_calib.fx, cam_calib.fy, cam_calib.cx, cam_calib.cy, d[0], d[1], d[2], d[3], d[4], d[5], d[6], d[7];
+Scalar rpmax = d[8];
+PinholeRadtan8Camera pinhole_radtan8(mp, rpmax);
+model.variant = pinhole_radtan8;
} else {
-ASSERT(false, "Unsupported camera model (%d)", static_cast<int>(cam_calib.model));
+ASSERT(false, "Unsupported camera model (%s)", cam_calib.distortion_model.c_str());
}
-calib.intrinsics[i] = model;
+ASSERT_(calib.intrinsics.size() == i);
+calib.intrinsics.push_back(model);
-calib.resolution[i] = {cam_calib.width, cam_calib.height};
+ASSERT_(calib.resolution.size() == i);
+calib.resolution.push_back({cam_calib.width, cam_calib.height});
-// NOTE: ignoring cam_calib.distortion_model and distortion_params as basalt can't use them
+calib_data_ready.cam0 |= i == 0;
+calib_data_ready.cam1 |= i == 1;
}
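The dispatch in `apply_cam_calibration` implies a fixed coefficient count per distortion model. A small standalone sketch of just that check (the function name is ours, not Basalt's):

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Expected size of the `distortion` vector per model, mirroring the
// ASSERT_ checks above: "none" has no coefficients, "kb4"
// (Kannala-Brandt) has 4, and "rt8" has 8 plus the rpmax cutoff.
static std::size_t expected_distortion_params(const std::string &model) {
  if (model == "none") return 0;
  if (model == "kb4") return 4;
  if (model == "rt8") return 9; // 8 coefficients + rpmax
  throw std::invalid_argument("Unsupported camera model: " + model);
}
```

Anything else (e.g. a legacy "fisheye" string) is rejected, matching the `ASSERT(false, ...)` fallthrough.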
void add_imu_calibration(const imu_calibration &imu_calib) { added_imu_calibs.push_back(imu_calib); }
@@ -448,6 +506,8 @@ struct slam_tracker::implementation {
calib.gyro_noise_std = {gyro.noise_std(0), gyro.noise_std(1), gyro.noise_std(2)};
calib.gyro_bias_std = {gyro.bias_std(0), gyro.bias_std(1), gyro.bias_std(2)};
+calib_data_ready.imu = true;
}
shared_ptr<vector<string>> enable_pose_ext_timing() {

View File

@@ -48,7 +48,7 @@ VioConfig::VioConfig() {
// optical_flow_type = "patch";
optical_flow_type = "frame_to_frame";
optical_flow_detection_grid_size = 50;
-optical_flow_max_recovered_dist2 = 0.09f;
+optical_flow_max_recovered_dist2 = 0.04f;
optical_flow_pattern = 51;
optical_flow_max_iterations = 5;
optical_flow_levels = 3;
@@ -74,12 +74,14 @@ VioConfig::VioConfig() {
vio_enforce_realtime = false;
-vio_use_lm = false;
+vio_use_lm = true;
vio_lm_lambda_initial = 1e-4;
vio_lm_lambda_min = 1e-6;
vio_lm_lambda_max = 1e2;
// vio_lm_landmark_damping_variant = 1;
// vio_lm_pose_damping_variant = 1;
-vio_scale_jacobian = true;
+vio_scale_jacobian = false;
vio_init_pose_weight = 1e8;
vio_init_ba_weight = 1e1;
@@ -104,9 +106,9 @@ VioConfig::VioConfig() {
mapper_no_factor_weights = false;
mapper_use_factors = true;
-mapper_use_lm = false;
+mapper_use_lm = true;
mapper_lm_lambda_min = 1e-32;
-mapper_lm_lambda_max = 1e2;
+mapper_lm_lambda_max = 1e3;
}
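These defaults switch the VIO optimizer to Levenberg-Marquardt with `vio_lm_lambda_initial = 1e-4` clamped to `[vio_lm_lambda_min, vio_lm_lambda_max] = [1e-6, 1e2]`. A toy damping schedule using those bounds (the halve/double update rule is our assumption for illustration, not Basalt's actual strategy):

```cpp
#include <algorithm>
#include <cassert>

// Toy LM damping: shrink lambda after a successful step, grow it after a
// rejected one, always clamped to the configured [lo, hi] range.
struct LmDamping {
  double lambda = 1e-4;         // vio_lm_lambda_initial
  double lo = 1e-6, hi = 1e2;   // vio_lm_lambda_min / vio_lm_lambda_max
  void step(bool cost_decreased) {
    lambda = cost_decreased ? lambda * 0.5 : lambda * 2.0;
    lambda = std::clamp(lambda, lo, hi);
  }
};
```

The clamp is what the min/max options control: no matter how many steps are accepted or rejected, lambda saturates at the configured bounds instead of under- or overflowing.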
void VioConfig::save(const std::string& filename) {

View File

@@ -27,7 +27,7 @@ namespace xrt::auxiliary::tracking::slam {
// For implementation: same as IMPLEMENTATION_VERSION_*
// For user: expected IMPLEMENTATION_VERSION_*. Should be checked in runtime.
-constexpr int HEADER_VERSION_MAJOR = 2; //!< API Breakages
+constexpr int HEADER_VERSION_MAJOR = 3; //!< API Breakages
constexpr int HEADER_VERSION_MINOR = 0; //!< Backwards compatible API changes
constexpr int HEADER_VERSION_PATCH = 0; //!< Backw. comp. .h-implemented changes
@@ -185,17 +185,22 @@ private:
constexpr int FID_##SHORT_NAME = ID; \
constexpr int F_##NAME = ID;
+/*!
+ * Container of parameters for a pinhole camera calibration (fx, fy, cx, cy)
+ * with an optional distortion.
+ *
+ * `distortion_model` and its corresponding `distortion` parameters are not
+ * standardized in this struct to facilitate implementation prototyping.
+ */
struct cam_calibration {
-enum class cam_model { pinhole, fisheye };
int cam_index; //!< For multi-camera setups. For stereo 0 ~ left, 1 ~ right.
-int width, height; //!< Resolution
-double frequency; //!< Frames per second
-double fx, fy; //!< Focal length
-double cx, cy; //!< Principal point
-cam_model model;
-std::vector<double> model_params;
-cv::Matx<double, 4, 4> T_cam_imu; //!< Transformation from camera to imu space
+int width, height; //!< Resolution
+double frequency; //!< Frames per second
+double fx, fy; //!< Focal length
+double cx, cy; //!< Principal point
+std::string distortion_model; //!< Models like: none, rt4, rt5, rt8, kb4
+std::vector<double> distortion; //!< Parameters for the distortion_model
+cv::Matx<double, 4, 4> t_imu_cam; //!< Transformation from IMU to camera
};
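As a concrete (entirely invented) example of what a filled-in left-camera calibration could look like, using a simplified stand-in struct without the OpenCV matrix member:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Simplified stand-in for cam_calibration; the real struct also carries
// t_imu_cam as a cv::Matx<double, 4, 4>. All values below are invented
// placeholders, not a real device's calibration.
struct CamCalibSketch {
  int cam_index = 0;          // 0 ~ left, 1 ~ right
  int width = 640, height = 480;
  double frequency = 30.0;    // frames per second
  double fx = 500, fy = 500;  // focal lengths in pixels
  double cx = 320, cy = 240;  // principal point
  std::string distortion_model = "kb4";
  std::vector<double> distortion = {0.1, -0.05, 0.01, -0.002};
};
```

Note how the `distortion` vector's length must match what the consumer expects for the chosen `distortion_model` (four coefficients for `kb4` here).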
struct inertial_calibration {
@@ -204,7 +209,7 @@ struct inertial_calibration {
//! This transform will be applied to raw measurements.
cv::Matx<double, 3, 3> transform;
-//! Offset to apply to raw measurements to; called bias in other contexts.
+//! Offset to add to raw measurements; called bias in other contexts.
cv::Matx<double, 3, 1> offset;
// Parameters for the random processes that model this IMU. See section "2.1