Here we explain how to use the calibration tools with the [TUM-VI](https://vision.in.tum.de/data/datasets/visual-inertial-dataset), [EuRoC](https://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets) and [UZH-FPV](http://rpg.ifi.uzh.ch/uzh-fpv.html) datasets as examples.
The buttons in the GUI are arranged in the order in which you should press them to calibrate the camera. After pressing a button, the system prints its output to the command line:
* `detect_corners` starts corner detection in a background thread. Since it is the most time-consuming part of the calibration process, the detected corners are cached and loaded if you run the executable again with the same result folder path.
* `optimize` runs an iteration of the optimization and visualizes the result. You should press this button until the error printed in the console output stops decreasing and the optimization converges. Alternatively, you can use the `opt_until_converge` checkbox that will run the optimization until it converges automatically.
* `compute_vign` **(Experimental)** computes a radially-symmetric vignetting for the cameras (a sketch of this model follows the button list below). For the algorithm to work, **the calibration pattern should be static (camera moving around it) and have constant lighting throughout the calibration sequence**. If you run `compute_vign`, you should press `save_calib` afterwards. The PNG images with the vignetting will also be stored in the result folder.
* `show_init_reproj` shows the initial reprojections computed by the `init_cam_poses` step.
* `show_opt` shows reprojected corners with the current estimate of the intrinsics and poses.
* `show_vign` toggles the visibility of the points used for vignetting estimation. The points are distributed across white areas of the pattern.
* `show_ids` toggles the ID visualization for every point.
* `huber_thresh` controls the threshold (in pixels) for the Huber norm used in the optimization (see the sketch after the button list below).
* `opt_intr` controls whether the optimization can change the intrinsics. For some datasets it might be helpful to disable this option for the first several iterations of the optimization.
* `optimize` runs an iteration of the optimization. You should press it several times until convergence before proceeding to the next steps. Alternatively, you can use the `opt_until_converge` checkbox that will run the optimization until it converges automatically.
* `show_pos` shows the spline position for `show_spline`, and the positions generated from the camera pose initialization transformed into the IMU coordinate frame for `show_data`.
* `show_rot_error` shows the rotation error between the spline and the camera pose initializations transformed into the IMU coordinate frame.
* `show_mocap` shows the Mocap marker position transformed into the IMU frame.
* `show_mocap_rot_error` shows the rotation error between the spline and the Mocap measurements.
* `show_mocap_rot_vel` shows the rotational velocity computed from the Mocap measurements.
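For reference, below is a minimal sketch of a radially-symmetric vignetting model, assuming an even polynomial in the normalized image radius. The function names, polynomial degree and coefficients are illustrative assumptions, not the calibration tool's actual implementation.

```python
import numpy as np

# Hypothetical radially-symmetric vignetting model: the observed intensity
# is the true intensity attenuated by a factor that depends only on the
# distance from the image center. The polynomial form and coefficients
# below are assumptions for illustration.

def vignetting_factor(r, coeffs):
    """Even polynomial 1 + c1*r^2 + c2*r^4 + c3*r^6 in the normalized
    radius r (0 at the image center, about 1 at the corners)."""
    return 1.0 + coeffs[0] * r**2 + coeffs[1] * r**4 + coeffs[2] * r**6

def correct_vignetting(image, center, coeffs):
    """Divide each pixel by the attenuation factor for its radius."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(xs - center[0], ys - center[1])
    r /= np.hypot(center[0], center[1])  # normalize to roughly [0, 1]
    return image / vignetting_factor(r, coeffs)

# Example: undo a mild darkening towards the corners of a synthetic image.
img = np.full((480, 640), 128.0)
corrected = correct_vignetting(img, center=(320.0, 240.0),
                               coeffs=(-0.3, 0.05, -0.01))
```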
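The `huber_thresh` parameter refers to the Huber norm used to weight reprojection residuals: errors below the threshold are penalized quadratically, larger errors only linearly, which limits the influence of outlier corners. A minimal sketch of the standard Huber loss (the function name and numbers are illustrative, not taken from the tool):

```python
import numpy as np

# Standard Huber loss: quadratic below the threshold, linear above it.
def huber_loss(residual, threshold):
    r = np.abs(residual)
    quadratic = 0.5 * r**2
    linear = threshold * (r - 0.5 * threshold)
    return np.where(r <= threshold, quadratic, linear)

# With a threshold of 1 px, a 0.5 px error keeps its quadratic cost (0.125),
# while a 10 px outlier costs ~9.5 instead of the quadratic 50.
print(huber_loss(np.array([0.5, 10.0]), threshold=1.0))
```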
The following options control the optimization process:
* `opt_intr` enables optimization of the intrinsics. It should usually be disabled for the camera-IMU calibration.
* `opt_poses` enables optimization based on the camera pose initialization. It sometimes helps to better initialize the spline before running the optimization with `opt_corners`.
* `opt_corners` enables optimization based on reprojected corner positions **(should be used by default)**.
* `opt_cam_time_offset` computes the time offset between the camera and the IMU. This option should only be used for refinement once the optimization has already converged.
* `opt_imu_scale` enables IMU axis scaling, rotation and misalignment calibration (see the model sketch after this list). This option should only be used for refinement once the optimization has already converged.
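To give an idea of what `opt_imu_scale` estimates, below is a sketch of a common accelerometer model with per-axis scale, misalignment and bias. The lower-triangular parameterization and all numbers are assumptions for illustration, not necessarily the exact model used by the calibration tool.

```python
import numpy as np

# Common accelerometer calibration model (illustrative): the calibrated
# measurement is obtained by applying a scale/misalignment matrix to the
# raw measurement and subtracting a bias.
def calibrate_accel(a_raw, scale_misalignment, bias):
    return scale_misalignment @ a_raw - bias

M = np.array([[1.01,   0.0,   0.0],   # per-axis scale on the diagonal,
              [0.002,  0.99,  0.0],   # small off-diagonal terms model
              [-0.001, 0.003, 1.0]])  # axis misalignment
b = np.array([0.05, -0.02, 0.01])     # accelerometer bias in m/s^2

a_raw = np.array([0.0, 0.0, 9.81])    # gravity measured by a static IMU
print(calibrate_accel(a_raw, M, b))
```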
**NOTE:** In this case we use pre-calibrated sequences, so most of the refinements and the Mocap-to-IMU calibration will not have any visible effect. If you want to test this functionality, use the "raw" sequences, for example `http://vision.in.tum.de/tumvi/raw/dataset-calib-cam3.bag` and `http://vision.in.tum.de/tumvi/raw/dataset-calib-imu1.bag`.