# Depthai ROS Driver
Currently, the recommended way to launch cameras is to use the executables from the depthai_ros_driver package. This runs your camera as a ROS2 Component/ROS Nodelet and gives you the ability to customize the camera using ROS parameters. The main API difference between ROS2 and ROS1 is that parameter names use a different convention - parameters in ROS2 are namespaced using `.` and parameters in ROS1 are namespaced using `_`, for example in ROS2: `camera.i_pipeline_type`, in ROS1: `camera_i_pipeline_type`. For the sake of simplicity, this Readme uses the ROS2 convention. A list of all available parameters with their default values can be found at the bottom of this Readme.

Parameters that begin with `r_` can be freely modified during runtime, for example with rqt. Parameters that begin with `i_` are set when the camera is initializing; to change them you have to call the `stop` and `start` services. This can be used to hot swap NNs during runtime, change resolutions, etc.
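As a sketch, hot-swapping configuration from the command line might look as follows. The `/oak` node name matches the defaults used throughout this Readme, while the exact service names and the `std_srvs/srv/Trigger` type are assumptions here - check `ros2 service list` for the names exposed by your version of the driver:

```bash
# Stop the running pipeline (device is released, topics disappear)
ros2 service call /oak/stop_camera std_srvs/srv/Trigger
# ...change i_* parameters here, e.g. with `ros2 param set`...
# Start the camera again so the new configuration takes effect
ros2 service call /oak/start_camera std_srvs/srv/Trigger
```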
## Launch files

We provide a few examples for the depthai_ros_driver package:

- `camera.launch.py` launches the camera in RGBD mode, and a NN in spatial (MobileNet) mode.
- `rgbd_pcl.launch.py` launches the camera in a basic RGBD configuration and doesn't load any NNs. It also loads ROS depth processing nodes for the RGBD pointcloud.
- `example_multicam.launch.py` launches several cameras at once, each one in a different container. Edit the `multicam_example.yaml` config file in the `config` directory to change parameters.
- `example_segmentation.launch.py` launches the camera in RGBD + semantic segmentation mode (pipeline type=RGBD, nn_type=rgb).
- `pointcloud.launch.py` - similar to `rgbd_pcl.launch.py`, but doesn't use the RGB component for the pointcloud.
- `example_marker_publish.launch.py` launches `camera.launch.py` + a small Python node that publishes detected objects as markers/tfs.
- `rtabmap.launch.py` launches the camera and RTAB-Map RGBD SLAM (you need to install it first - `sudo apt install ros-$ROS_DISTRO-rtabmap-ros`). You might need to set manual focus via parameters here.
```bash
ros2 launch depthai_ros_driver camera.launch.py  # ROS2
roslaunch depthai_ros_driver camera.launch       # ROS1
```

## Publishing TFs from extrinsics
By default, camera transforms are published from default URDF descriptions based on CAD models. This can be overridden by using the TFPublisher class from `depthai_bridge`, which republishes the description with information updated from the Device's camera calibration data. To enable this behavior in `depthai_ros_driver`, you can use the following parameters:

- `camera.i_publish_tf_from_calibration` - setting this to true launches TFPublisher
- `camera.i_tf_camera_name` - if not set, defaults to the node name
- `camera.i_tf_camera_model` - if not set, it will be automatically detected. If the node is unable to detect an STL file for the camera, it is set to `OAK-D`. To explicitly set it in `camera.launch.py`, set `override_cam_model:=true`
- `camera.i_tf_base_frame`
- `camera.i_tf_parent_frame`
- `camera.i_tf_cam_pos_x`
- `camera.i_tf_cam_pos_y`
- `camera.i_tf_cam_pos_z`
- `camera.i_tf_cam_roll`
- `camera.i_tf_cam_pitch`
- `camera.i_tf_cam_yaw`
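For example, a params file that enables calibration-based TF publishing for a mounted camera might look like this (a minimal sketch; note that in the driver's parameter list at the bottom of this Readme the pose values are strings, and `base_link` is an illustrative parent frame):

```yaml
/oak:
  ros__parameters:
    camera:
      i_publish_tf_from_calibration: true  # launch TFPublisher
      i_tf_parent_frame: base_link
      i_tf_cam_pos_x: '0.1'   # translation of the camera on the robot
      i_tf_cam_pos_z: '0.25'
      i_tf_cam_pitch: '0.0'   # rotation of the camera on the robot
```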
When using `camera.launch.py`, you can set `pass_tf_args_as_params:=true` so that TF launch arguments are used to fill those parameters, for example: `ros2 launch depthai_ros_driver camera.launch.py pass_tf_args_as_params:=true parent_frame:=map cam_pos_x:=1.0 imu_from_descr:=true`
It is also possible to set a custom URDF path (for now, only absolute paths work) and custom xacro arguments using `camera.i_tf_custom_urdf_location` and `camera.i_tf_custom_xacro_args`. Please note that robot_state_publisher must be running.

NOTE ON IMU EXTRINSICS: If your camera has an uncalibrated IMU, a warning will be shown and the IMU will be published with zero rotation and translation. You can override this behavior by setting `camera.i_tf_imu_from_descr: true`. This will publish the default IMU extrinsics from the URDF based on the camera model.

## Available sensors and their resolutions
- IMX378 - {"12MP", "4K", "1080P"}, default 1080P
- OV9282 - {"800P", "720P", "400P"}, default 800P
- OV9782 - {"800P", "720P", "400P"}, default 800P
- OV9281 - {"800P", "720P", "400P"}, default 800P
- IMX214 - {"13MP", "12MP", "4K", "1080P"}, default 1080P
- IMX412 - {"13MP", "12MP", "4K", "1080P"}, default 1080P
- OV7750 - {"480P", "400P"}, default 480P
- OV7251 - {"480P", "400P"}, default 480P
- IMX477 - {"12MP", "4K", "1080P"}, default 1080P
- IMX577 - {"12MP", "4K", "1080P"}, default 1080P
- AR0234 - {"1200P"}, default 1200P
- IMX582 - {"48MP", "12MP", "4K"}, default 4K
- LCM48 - {"48MP", "12MP", "4K"}, default 4K
## Setting RGB parameters

By default, the RGB camera outputs the `ISP` frame. To set a custom width and height of the output image, you can set `i_isp_num` and `i_isp_den`, which scale the image dimensions (2 and 3 by default, so from 1920x1080 to 1280x720). Note that for RGBD alignment to work, the resulting width and height must be divisible by 16. Additionally, you can set `i_output_isp: false` to use the `video` output and set a custom size using the `i_width` and `i_height` parameters.
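For instance, keeping the default 2/3 ISP scaling (1920x1080 -> 1280x720) explicit in a params file looks like this (a minimal sketch mirroring the defaults from the parameter list below):

```yaml
/oak:
  ros__parameters:
    rgb:
      i_isp_num: 2  # scale numerator
      i_isp_den: 3  # scale denominator: 1920 * 2/3 = 1280, divisible by 16
```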
## Setting Stereo parameters

### Depth alignment

When setting `stereo.i_align_depth: true`, the stereo output is aligned to the board socket specified by the `stereo.i_board_socket_id` parameter (by default 0/CAM_A).

You can enable rectified Stereo streams by setting, for example in the case of the right stream, `i_publish_right_rect: true`. You can also set `i_publish_synced_rect_pair: true` to get both images with the same timestamps.
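Combined in a params file, those options might look as follows (a minimal sketch):

```yaml
/oak:
  ros__parameters:
    stereo:
      i_align_depth: true               # align depth to the socket below
      i_board_socket_id: 0              # 0 = CAM_A
      i_publish_left_rect: true         # publish rectified left stream
      i_publish_right_rect: true        # publish rectified right stream
      i_publish_synced_rect_pair: true  # matching timestamps on both rect streams
```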
### Stereo socket order

By default, the right camera is treated as first when used for stereo computation, which is reflected in the CameraInfo messages. If you want to reverse that logic, set `stereo.i_reverse_stereo_socket_order: true` (this can also be set for individual sensors).
### Custom Sensor sockets

Configuration of which sensors are used for computing the stereo pair can be done either programmatically, by specifying them in the constructor of a Stereo node (for example when building a custom pipeline), or via the `stereo.i_left_socket_id`/`stereo.i_right_socket_id` parameters (see the sketch below). Please note that currently, if you want to use the rgb/center socket instead of one of the given pairs, you will need to build a custom pipeline for that.
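Expressed as parameters, a custom socket assignment could look like this (a sketch; the socket IDs mirror the left/right defaults from the parameter list below):

```yaml
/oak:
  ros__parameters:
    stereo:
      i_left_socket_id: 1   # board socket used as the left stereo input
      i_right_socket_id: 2  # board socket used as the right stereo input
```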
## Feature Tracker

Each sensor node (and the rectified streams from the Stereo node) has the option to add a FeatureTracker node, which publishes `depthai_ros_msgs/msg/TrackedFeatures` messages. To enable features on, for example, the rgb node, set `rgb.i_enable_feature_tracker: true`. To enable publishing on rectified streams, set, for example, `stereo.i_left_rect_enable_feature_tracker: true`.
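As a params file, that would be (a minimal sketch):

```yaml
/oak:
  ros__parameters:
    rgb:
      i_enable_feature_tracker: true            # TrackedFeatures from the rgb sensor
    stereo:
      i_left_rect_enable_feature_tracker: true  # TrackedFeatures from the left rectified stream
```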
## Setting IMU parameters

Parameters:

- `i_acc_freq: 400` - accelerometer sensor frequency
- `i_acc_cov: 0.0` - accelerometer covariance
- `i_batch_report_threshold: 1` - batch report size
- `i_enable_rotation: false` - whether to enable rotation vector & magnetometer data (available when using the IMU_WITH_MAG/IMU_WITH_MAG_SPLIT message type)
- `i_gyro_cov: 0.0` - gyroscope covariance
- `i_gyro_freq: 400` - gyroscope frequency
- `i_mag_cov: 0.0` - magnetometer covariance
- `i_mag_freq: 100` - magnetometer frequency
- `i_max_batch_reports: 10` - max reports per batch
- `i_message_type: IMU` - ROS publisher type:
  - `IMU` - sensor_msgs/Imu
  - `IMU_WITH_MAG` - depthai_ros_msgs/ImuWithMagneticField
  - `IMU_WITH_MAG_SPLIT` - two publishers - sensor_msgs/Imu & sensor_msgs/MagneticField
- `i_rot_cov: -1.0` - rotation covariance
- `i_rot_freq: 400` - rotation frequency
- `i_sync_method: LINEAR_INTERPOLATE_ACCEL` - sync method. Available options:
  - `COPY`
  - `LINEAR_INTERPOLATE_GYRO`
  - `LINEAR_INTERPOLATE_ACCEL`
- `i_rotation_vector_type` - type of rotation vector; for more information, refer to this link. Available options:
  - `ROTATION_VECTOR`
  - `GAME_ROTATION_VECTOR`
  - `GEOMAGNETIC_ROTATION_VECTOR`
  - `ARVR_STABILIZED_ROTATION_VECTOR`
  - `ARVR_STABILIZED_GAME_ROTATION_VECTOR`
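For example, enabling the rotation vector with the combined IMU+magnetometer message could be configured like this (a sketch; values that differ from the defaults above are illustrative):

```yaml
/oak:
  ros__parameters:
    imu:
      i_message_type: IMU_WITH_MAG  # publish depthai_ros_msgs/ImuWithMagneticField
      i_enable_rotation: true       # also stream rotation vector & magnetometer data
      i_rotation_vector_type: GAME_ROTATION_VECTOR
      i_sync_method: LINEAR_INTERPOLATE_ACCEL
```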
## Stopping/starting camera for power saving/reconfiguration

Stopping the camera can also be used for power saving, as the pipeline is removed from the device. Topics are also removed when the camera is stopped. There is also an option to enable automatic restart if the camera encounters an error. To do that, set `camera.i_restart_on_diagnostics_error: true`.
## Pipeline parameters

As for the parameters themselves, there are a few crucial ones that decide how the camera behaves. `camera.i_pipeline_type` can be either:

- `RGB` - only publishes the RGB stream, NN available
- `RGBD` - publishes RGB + Depth streams (set `i_publish_topic` for the left and right cameras to enable them), NN & Spatial NN available
- `Stereo` - publishes streams from the left and right sensors, NN not available
- `RGBStereo` - publishes RGB + Left + Right streams, only RGB NN available
- `Depth` - publishes only the depth stream, no NN available
- `CamArray` - publishes streams for all detected sensors, no NN available

This tells the camera whether it should load stereo components. Default set to `RGBD` (see the sketch below).
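For example, switching to a stereo-only pipeline via a params file might look like this (a minimal sketch, assuming the file is passed through the launch file's `params_file` argument):

```yaml
# e.g. ros2 launch depthai_ros_driver camera.launch.py params_file:=/path/to/pipeline.yaml
/oak:
  ros__parameters:
    camera:
      i_pipeline_type: Stereo  # publish left + right streams, no NN
```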
It is also possible to create a custom pipeline, since all pipeline types are defined as plugins. To do that, create a separate package (for example `test_plugins`) and an executable in that package (`test_plugins.cpp`). Inside that file, define a custom plugin that inherits from `depthai_ros_driver::pipeline_gen::BasePipeline` and overrides the `createPipeline` method. After that, export the plugin, for example:
```cpp
#include <pluginlib/class_list_macros.hpp>

PLUGINLIB_EXPORT_CLASS(test_plugins::Test, depthai_ros_driver::pipeline_gen::BasePipeline)
```
```xml
<library path="test_plugins">
  <class type="test_plugins::Test" base_class_type="depthai_ros_driver::pipeline_gen::BasePipeline">
    <description>Test Pipeline.</description>
  </class>
</library>
```
Now you can use the created plugin as the pipeline, just set `camera.i_pipeline_type` to `test_plugins::Test`.

## Neural networks
Basic configuration options for cameras with 3 sensors can be controlled by three `camera` parameters:

- `camera.i_nn_type` can be either `none`, `rgb` or `spatial`. This is responsible for whether the NN that we load should also take depth information (and, for example, provide detections in 3D format). Default set to `spatial`.
- `camera.i_mx_id`/`camera.i_ip`/`camera.i_usb_port_id` are for connecting to a specific camera. If not set, it automatically connects to the next available device. You can get those parameters from the logs by running the default launch file.
- `nn.i_nn_config_path` represents the path to a JSON file that contains information on what type of NN to load and what parameters to use. Currently we provide options to load MobileNet, Yolo and Segmentation (not in spatial) models. To see their example configs, navigate to `depthai_ros_driver/config/nn`. Defaults to `mobilenet.json` from `depthai_ros_driver`. To use one of the provided example configs, you can set it to:
  - `depthai_ros_driver/segmentation`
  - `depthai_ros_driver/mobilenet`
  - `depthai_ros_driver/yolo`
To enable a NN on a specific sensor node, set `sensor_name.i_enable_nn` to `true`. After that, you can similarly set `sensor_name_nn.i_nn_config_path` to a path to a config file.

You can also set up a spatial NN for the Stereo node and use one of its left/right streams as the rgb input. This can be enabled on cameras with RGB side sensors, such as OAK-D-SR and OAK-D-LR. Setting it up is similar (see the sketch after this list):

- `stereo.i_enable_spatial_nn` - set to true to enable the spatial NN
- `stereo.i_spatial_nn_source` - set to either `left` or `right` to choose which sensor to use as the rgb input for the NN
- `stereo_nn.i_nn_config_path` - path to the config file for the spatial NN
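For example, running the provided Yolo config as a spatial NN fed from the right stereo stream might look like this (a minimal sketch using the `depthai_ros_driver/yolo` example config mentioned above):

```yaml
/oak:
  ros__parameters:
    stereo:
      i_enable_spatial_nn: true   # attach a spatial NN to the Stereo node
      i_spatial_nn_source: right  # use the right sensor as the rgb input
    stereo_nn:
      i_nn_config_path: depthai_ros_driver/yolo
```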
All available camera-specific parameters and their default values can be seen in `depthai_ros_driver/config/camera.yaml`.

## Specific camera configurations

### PoE Cameras
Since PoE cameras use a protocol that has lower throughput than USB, running the default camera launch can result in lag depending on the chosen resolution/fps. To combat this issue, you can use encoded frames, which let you keep the desired resolution/fps at the cost of image quality reduction due to compression. One additional difference is that `subpixel` depth filtering is disabled in this mode. To enable low_bandwidth mode, for example for the rgb camera, change the following parameters:

- `rgb.i_low_bandwidth` - `true` to enable
- `rgb.i_low_bandwidth_quality` - desired quality % (default: 50)

See the `low_bandwidth.yaml` file for example parameters for all streams.
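In a params file, enabling it for the rgb stream would look like this (a minimal sketch; see `low_bandwidth.yaml` for all streams):

```yaml
/oak:
  ros__parameters:
    rgb:
      i_low_bandwidth: true        # publish encoded frames to save bandwidth
      i_low_bandwidth_quality: 50  # compression quality in percent
```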
### Recalibration

If you want to use calibration values other than the ones provided by the device, you can do that in the following ways:

- Use the `set_camera_info` services available for each of the image streams.
- Use the `i_calibration_file` parameter available to point to the calibration file. Note: the camera name must start with `/`, so for example `/rgb`. See `depthai_ros_driver/config/calibration` for example calibration files. A `calibration.launch` file is provided to start up a ROS camera calibrator node in both monocular and stereo configurations.

Calibration file syntax (from `camera_info_manager`):
```yaml
- file:///full/path/to/local/file.yaml
- file:///full/path/to/videre/file.ini
- package://camera_info_manager/tests/test_calibration.yaml
- package://ros_package_name/calibrations/camera3.yaml
```
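Applied to the driver, pointing a stream at a custom calibration file could look like this (a sketch; the package and file name are illustrative):

```yaml
/oak:
  ros__parameters:
    rgb:
      i_calibration_file: package://my_robot_description/calibrations/rgb.yaml
```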
## Using external sources for NN inference or Stereo Depth

It is possible to use external image topics as input for NNs or depth calculation.

Example scenarios:

- We want to get segmentation or 2D detection output based on images published by a usb camera node. This can be achieved by setting the `rgb.i_simulate_from_topic` parameter to `true`. This creates a `sensor_msgs/Image` subscriber listening by default on the `/<node_name>/rgb/input` topic, which passes data into the pipeline on each callback. Topic names can be changed either by classic ROS topic remapping or by setting `rgb.i_simulated_topic_name` to a desired name. By default, the original sensor node still runs and publishes data; setting `rgb.i_disable_node` to true will prevent it from spawning. Check `det2d_usb_cam_overlay.launch.py` in `depthai_filters` to see it in action.
- Calculating depth - both the `left` and `right` sensor nodes can be set up as in the example above to calculate depth/disparity from external topics (see the sketch after this list). Note that for this to work properly, you need to specify:
  - `left.i_board_socket_id: 1`
  - `right.i_board_socket_id: 2`
  - The default stereo input size is 1280x720; in case of a different image size, set `stereo.i_set_input_size: true` and adjust `stereo.i_input_width` and `stereo.i_input_height` accordingly.
  - An external calibration file path, using the `camera.i_external_calibration_path` parameter. To get the calibration from the device, you can either set `camera.i_calibration_dump` to true or call the `save_calibration` service. The calibration will be saved to `/tmp/<mx_id>_calibration.json`.

An example can be seen in `stereo_from_rosbag.launch.py` in `depthai_ros_driver`.
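Put together, the depth-from-external-topics scenario could be configured like this (a minimal sketch; the calibration path is illustrative and would be produced by the dump described above):

```yaml
/oak:
  ros__parameters:
    camera:
      i_external_calibration_path: /tmp/mycam_calibration.json
    left:
      i_board_socket_id: 1
      i_simulate_from_topic: true  # subscribe on /<node_name>/left/input
      i_disable_node: true         # don't spawn the real left sensor node
    right:
      i_board_socket_id: 2
      i_simulate_from_topic: true  # subscribe on /<node_name>/right/input
      i_disable_node: true
```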
## Developers guide

For easier development inside an isolated workspace, one can use Visual Studio Code with the DevContainers plugin. To do that:

- Create a separate workspace
- Clone the repository into `src`
- Copy the `.devcontainer` directory into the main workspace directory
- Open the workspace directory in VSCode
## List of parameters

```yaml
/oak:
  ros__parameters:
    camera:
      i_calibration_dump: false
      i_enable_imu: true
      i_enable_ir: true
      i_external_calibration_path: ''
      i_floodlight_brightness: 0
      i_ip: ''
      i_laser_dot_brightness: 800
      i_mx_id: ''
      i_nn_type: spatial
      i_pipeline_dump: false
      i_pipeline_type: RGBD
      i_publish_tf_from_calibration: false
      i_restart_on_diagnostics_error: false
      i_tf_base_frame: oak
      i_tf_cam_pitch: '0.0'
      i_tf_cam_pos_x: '0.0'
      i_tf_cam_pos_y: '0.0'
      i_tf_cam_pos_z: '0.0'
      i_tf_cam_roll: '0.0'
      i_tf_cam_yaw: '0.0'
      i_tf_camera_model: ''
      i_tf_camera_name: oak
      i_tf_custom_urdf_location: ''
      i_tf_custom_xacro_args: ''
      i_tf_imu_from_descr: 'false'
      i_tf_parent_frame: oak-d-base-frame
      i_usb_port_id: ''
      i_usb_speed: SUPER_PLUS
    diagnostic_updater:
      period: 1.0
    imu:
      i_acc_cov: 0.0
      i_acc_freq: 400
      i_batch_report_threshold: 5
      i_enable_rotation: false
      i_get_base_device_timestamp: false
      i_gyro_cov: 0.0
      i_gyro_freq: 400
      i_mag_cov: 0.0
      i_max_batch_reports: 10
      i_message_type: IMU
      i_rot_cov: -1.0
      i_sync_method: LINEAR_INTERPOLATE_ACCEL
      i_update_ros_base_time_on_ros_msg: false
    left:
      i_add_exposure_offset: false
      i_board_socket_id: 1
      i_calibration_file: ''
      i_disable_node: false
      i_enable_feature_tracker: false
      i_enable_lazy_publisher: true
      i_enable_nn: false
      i_exposure_offset: 0
      i_fps: 30.0
      i_fsync_continuous: false
      i_fsync_trigger: false
      i_get_base_device_timestamp: false
      i_height: 720
      i_low_bandwidth: false
      i_low_bandwidth_quality: 50
      i_max_q_size: 30
      i_publish_topic: false
      i_resolution: 720P
      i_reverse_stereo_socket_order: false
      i_sensor_img_orientation: AUTO
      i_set_isp3a_fps: false
      i_simulate_from_topic: false
      i_simulated_topic_name: ''
      i_update_ros_base_time_on_ros_msg: false
      i_width: 1280
      r_exposure: 1000
      r_iso: 800
      r_set_man_exposure: false
    nn:
      i_board_socket_id: 0
      i_disable_resize: false
      i_enable_passthrough: false
      i_enable_passthrough_depth: false
      i_get_base_device_timestamp: false
      i_num_inference_threads: 2
      i_num_pool_frames: 4
      i_update_ros_base_time_on_ros_msg: false
    rgb:
      i_add_exposure_offset: false
      i_board_socket_id: 0
      i_calibration_file: ''
      i_disable_node: false
      i_enable_feature_tracker: false
      i_enable_lazy_publisher: true
      i_enable_nn: false
      i_enable_preview: false
      i_exposure_offset: 0
      i_fps: 30.0
      i_fsync_continuous: false
      i_fsync_trigger: false
      i_get_base_device_timestamp: false
      i_height: 720
      i_interleaved: false
      i_isp_den: 3
      i_isp_num: 2
      i_keep_preview_aspect_ratio: true
      i_low_bandwidth: false
      i_low_bandwidth_quality: 50
      i_max_q_size: 30
      i_output_isp: true
      i_preview_height: 300
      i_preview_size: 300
      i_preview_width: 300
      i_publish_topic: true
      i_resolution: 1080P
      i_reverse_stereo_socket_order: false
      i_sensor_img_orientation: AUTO
      i_set_isp3a_fps: false
      i_set_isp_scale: true
      i_simulate_from_topic: false
      i_simulated_topic_name: ''
      i_update_ros_base_time_on_ros_msg: false
      i_width: 1280
      r_exposure: 20000
      r_focus: 1
      r_iso: 800
      r_set_man_exposure: false
      r_set_man_focus: false
      r_set_man_whitebalance: false
      r_whitebalance: 3300
    right:
      i_add_exposure_offset: false
      i_board_socket_id: 2
      i_calibration_file: ''
      i_disable_node: false
      i_enable_feature_tracker: false
      i_enable_lazy_publisher: true
      i_enable_nn: false
      i_exposure_offset: 0
      i_fps: 30.0
      i_fsync_continuous: false
      i_fsync_trigger: false
      i_get_base_device_timestamp: false
      i_height: 720
      i_low_bandwidth: false
      i_low_bandwidth_quality: 50
      i_max_q_size: 30
      i_publish_topic: false
      i_resolution: 720P
      i_reverse_stereo_socket_order: false
      i_sensor_img_orientation: AUTO
      i_set_isp3a_fps: false
      i_simulate_from_topic: false
      i_simulated_topic_name: ''
      i_update_ros_base_time_on_ros_msg: false
      i_width: 1280
      r_exposure: 1000
      r_iso: 800
      r_set_man_exposure: false
    stereo:
      i_add_exposure_offset: false
      i_align_depth: true
      i_bilateral_sigma: 0
      i_board_socket_id: 0
      i_depth_filter_size: 5
      i_depth_preset: HIGH_ACCURACY
      i_disparity_width: DISPARITY_96
      i_enable_alpha_scaling: false
      i_enable_brightness_filter: false
      i_enable_companding: false
      i_enable_decimation_filter: false
      i_enable_disparity_shift: false
      i_enable_distortion_correction: false
      i_enable_lazy_publisher: true
      i_enable_spatial_filter: false
      i_enable_spatial_nn: false
      i_enable_speckle_filter: false
      i_enable_temporal_filter: false
      i_enable_threshold_filter: false
      i_exposure_offset: 0
      i_extended_disp: false
      i_get_base_device_timestamp: false
      i_height: 720
      i_left_rect_add_exposure_offset: false
      i_left_rect_enable_feature_tracker: false
      i_left_rect_exposure_offset: 0
      i_left_rect_low_bandwidth: false
      i_left_rect_low_bandwidth_quality: 50
      i_left_socket_id: 1
      i_low_bandwidth: false
      i_low_bandwidth_quality: 50
      i_lr_check: true
      i_lrc_threshold: 10
      i_max_q_size: 30
      i_output_disparity: false
      i_publish_left_rect: false
      i_publish_right_rect: false
      i_publish_synced_rect_pair: false
      i_publish_topic: true
      i_rectify_edge_fill_color: 0
      i_reverse_stereo_socket_order: false
      i_right_rect_add_exposure_offset: false
      i_right_rect_enable_feature_tracker: false
      i_right_rect_exposure_offset: 0
      i_right_rect_low_bandwidth: false
      i_right_rect_low_bandwidth_quality: 50
      i_right_socket_id: 2
      i_set_disparity_to_depth_use_spec_translation: false
      i_set_input_size: false
      i_socket_name: rgb
      i_spatial_nn_source: right
      i_stereo_conf_threshold: 240
      i_subpixel: false
      i_update_ros_base_time_on_ros_msg: false
      i_width: 1280
    use_sim_time: false
```