From 61d9e67d485530be6a11013cea9378d5148d5a3d Mon Sep 17 00:00:00 2001
From: Vladyslav Usenko
Date: Wed, 2 Oct 2019 16:33:09 +0200
Subject: [PATCH] fix t265 tutorial

---
 doc/Realsense.md          | 6 +++---
 thirdparty/basalt-headers | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/doc/Realsense.md b/doc/Realsense.md
index 71038d9..3aa7f6d 100644
--- a/doc/Realsense.md
+++ b/doc/Realsense.md
@@ -27,7 +27,7 @@ basalt_rs_t265_record --dataset-path ~/t265_calib_data/ --manual-exposure
 * `--dataset-path` specifies the location where the recorded dataset will be stored. In this case it will be stored in `~/t265_calib_data//`.
 * `--manual-exposure` disables the autoexposure. In this tutorial the autoexposure is disabled for all calibration sequences, but for the VIO sequence (sequence0) we enable it.
 
-![t265_record](doc/img/t265_record.png)
+![t265_record](/doc/img/t265_record.png)
 
 The GUI elements have the following meaning:
 * `webp_quality` compression quality. The highest value (101) means lossless compression. For photometric calibration it is important not to have any compression artifacts, so we record these calibration sequences with lossless compression.
@@ -59,7 +59,7 @@ Run the response function calibration:
 basalt_response_calib.py -d ~/t265_calib_data/response_calib
 ```
 You should see the response function and the irradiance image similar to the one shown below. For the details of the algorithm see Section 2.3.1 of [[arXiv:1607.02555]](https://arxiv.org/abs/1607.02555). The results suggest that the response function used in the camera is linear.
-![t265_inv_resp_irradiance](doc/img/t265_inv_resp_irradiance.png)
+![t265_inv_resp_irradiance](/doc/img/t265_inv_resp_irradiance.png)
 
 ## Multi-Camera Geometric and Vignette Calibration
 For the camera calibration we need to record a dataset with a static aprilgrid pattern.
@@ -96,7 +96,7 @@ To perform the calibration follow these steps:
 
 ## IMU and Motion Capture Calibration
 After calibrating cameras we can proceed to geometric and time calibration of the cameras, IMU and motion capture system. Setting up the motion capture system is specific for your setup.
-For the motion capture recording we use [ros_vrpn_client](https://github.com/ethz-asl/ros_vrpn_client) with [basalt_capture_mocap.py](scripts/basalt_capture_mocap.py). We record the data to the `mocap0` folder and then move it to the `mav0` directory of the camera dataset. This script is provided as an example. Motion capture setup is different in every particular case.
+For the motion capture recording we use [ros_vrpn_client](https://github.com/ethz-asl/ros_vrpn_client) with [basalt_capture_mocap.py](/scripts/basalt_capture_mocap.py). We record the data to the `mocap0` folder and then move it to the `mav0` directory of the camera dataset. This script is provided as an example. Motion capture setup is different in every particular case.
 
 **Important for recording the dataset:**
 * Set the `skip_frames` slider to 1 to use the full framerate.
diff --git a/thirdparty/basalt-headers b/thirdparty/basalt-headers
index ee71c01..ff561d2 160000
--- a/thirdparty/basalt-headers
+++ b/thirdparty/basalt-headers
@@ -1 +1 @@
-Subproject commit ee71c0172400419853f675d385ee7e06242facaf
+Subproject commit ff561d2369d8dd159ff1e7316e451bec41544ebe