ZED2 Camera Integration Documentation

Overview

This document describes the integration of the ZED2 stereo camera into the kart, its usage through the ROS 2 wrapper, and the combination with YOLOv5 for cone detection.

Hardware: ZED2 Camera

The ZED2 camera by Stereolabs is a stereo vision camera capable of providing:

  • High-definition left and right stereo images
  • Depth sensing
  • 3D point clouds
  • Positional tracking (6DoF)
  • Integrated IMU sensors (accelerometer, gyroscope, magnetometer)
  • Environmental sensors (barometer, temperature sensor)

ROS 2 Integration

The ZED2 camera is integrated into the project using the official Stereolabs ZED ROS 2 Wrapper.

Installation Requirements

To install and run the ZED ROS 2 Wrapper with the ZED2 camera, the following dependencies and system configuration must be in place. First, install the NVIDIA TensorRT local repository package (the .deb downloaded from the NVIDIA developer site) and update the package index:

sudo dpkg -i nv-tensorrt-local-repo-ubuntu2404-10.10.0-cuda-12.9_1.0-1_amd64.deb
sudo apt update

If you encounter GPG key errors, follow these additional steps:

sudo cp /var/nv-tensorrt-local-repo-ubuntu2404-10.10.0-cuda-12.9/*.gpg /usr/share/keyrings/

sudo nano /etc/apt/sources.list.d/nv-tensorrt-local-repo-ubuntu2404-10.10.0-cuda-12.9.list

Replace the content of the file with:

deb [signed-by=/usr/share/keyrings/nv-tensorrt-local-CD20EDBE-keyring.gpg] file:///var/nv-tensorrt-local-repo-ubuntu2404-10.10.0-cuda-12.9 /
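
Alternatively, the same entry can be written without an interactive editor; this one-liner assumes the keyring and repository paths shown above:

echo "deb [signed-by=/usr/share/keyrings/nv-tensorrt-local-CD20EDBE-keyring.gpg] file:///var/nv-tensorrt-local-repo-ubuntu2404-10.10.0-cuda-12.9 /" | sudo tee /etc/apt/sources.list.d/nv-tensorrt-local-repo-ubuntu2404-10.10.0-cuda-12.9.list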

Then update again:

sudo apt update

This should resolve the key issues.

Finally, install the required TensorRT runtime libraries:

sudo apt-get install libnvinfer10 libnvinfer-dev libnvinfer-plugin-dev python3-libnvinfer
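
To confirm that the runtime libraries were installed, you can list the TensorRT packages known to dpkg (an optional sanity check):

dpkg -l | grep libnvinfer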

ZED ROS 2 Wrapper

Clone and build the zed-ros2-wrapper package in your existing ROS 2 workspace:

cd ~/ros2_ws/src
git clone https://github.com/stereolabs/zed-ros2-wrapper.git
cd ..
rosdep install --from-paths src --ignore-src -r -y
colcon build --symlink-install
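
After the build completes, source the workspace overlay in every terminal where you plan to run the ZED node (this assumes the ~/ros2_ws workspace used above):

source ~/ros2_ws/install/setup.bash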

Official repository:
https://github.com/stereolabs/zed-ros2-wrapper


Once all the dependencies are installed and the wrapper is successfully built, you should be able to launch the ZED2 ROS 2 node without issues.

Launching the Camera

The camera is launched using a provided launch file, typically:

ros2 launch zed_wrapper zed_camera.launch.py camera_model:=zed2
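
To verify that the node is running and publishing data, you can inspect the camera topics from a second terminal. The exact topic names depend on the configured camera name and namespace; with the defaults they typically look like /zed/zed_node/...:

ros2 topic list | grep zed
ros2 topic hz /zed/zed_node/left/image_rect_color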

Cone Detection with YOLOv5

In this project, YOLOv5 is used to perform real-time cone detection on images captured by the ZED2 camera. The ZED ROS 2 Wrapper supports custom object detection models through ONNX integration, allowing you to run your own trained detectors such as YOLOv5 directly on the GPU using TensorRT for real-time inference.

Exporting and Using a Custom YOLOv5 Model

If you have trained a YOLOv5 model (e.g., for cone detection), follow these steps to integrate it into the ZED wrapper:

  1. Export the model to ONNX format:
    You can do this with PyTorch and the YOLOv5 export tools (e.g., the export.py script from the YOLOv5 repository):
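
A minimal export command, assuming your trained weights are stored in a file such as best.pt (the file name and image size are illustrative):

cd yolov5
python export.py --weights best.pt --include onnx --imgsz 640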

This will generate a .onnx file.

  2. Enable object detection in the ZED wrapper by editing the configuration file:

Open your common_stereo.yaml (located in your ROS 2 workspace, inside zed-ros2-wrapper/zed_wrapper/config), and modify or add the following lines:

object_detection:
  od_enabled: true
  model: 'CUSTOM_YOLOLIKE_BOX_OBJECTS'
  custom_onnx_file: '$path to model'

First-Time Optimization

The first time you launch the node with your custom ONNX model, TensorRT will optimize the model for inference, which may take additional time (several seconds to minutes depending on the system).
Subsequent runs will be much faster, as the optimized engine will be cached and reused.


Once all dependencies are correctly installed and the YOLOv5 model is configured, you should be able to run real-time object detection with the ZED2 camera using ROS 2.
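
As a quick sanity check, you can launch the camera as shown above and inspect the detection output from a second terminal. The topic name below assumes the default camera name; use ros2 topic list to confirm the exact name on your setup:

ros2 topic list | grep obj_det
ros2 topic echo /zed/zed_node/obj_det/objects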