Deploying ROS2 and ORB-SLAM3 on RK3588 for Autonomous Gazebo Robot-Car Navigation: Environment Setup

Project Background and Goals

Why These Technologies?

Environment Preparation and Verification

Creating the Development Container (on the Desktop System)
Installing ROS2 Humble in the Container

Creating a Test Environment

Creating a ROS Node (Ball Motion Simulation)
Visualizing Data with RViz2

Deploying ORB-SLAM3

Build and Install Process
Changes Needed to Publish the Camera Pose
Testing the SLAM System
Obtaining Pose Information

Gazebo Simulation Environment

Installing the TurtleBot3 Simulation Packages
Running the Simulation

Full System Integration [TODO]
References

Project Background and Goals

In robotics development, Simultaneous Localization and Mapping (SLAM) is the key technology that gives a robot the ability to move autonomously. This project shows how to set up a ROS2 (Robot Operating System 2) environment on an RK3588 development board, integrate the state-of-the-art ORB-SLAM3 visual SLAM algorithm, and simulate a robot car in the Gazebo physics simulator.

Why These Technologies?

RK3588: a powerful ARM processor with enough compute to run SLAM algorithms
ROS2 Humble: the standard framework for robot development, providing communication middleware and tooling
ORB-SLAM3: one of the most advanced visual SLAM systems available, supporting monocular, stereo, and RGB-D cameras
Gazebo: a high-fidelity physics simulator for validating algorithms before real-world deployment

Environment Preparation and Verification

Creating the Development Container (on the Desktop System)

cat > run_ros.sh <<-'EOF'
#!/bin/bash
image_name="ubuntu:22.04"
echo $image_name
container_name="ros2_humble"
# Allow the container to access the X11 display
xhost +
if [ $(docker ps -a -q -f name=^/${container_name}$) ]; then
    echo "Container '$container_name' already exists."
else
    echo "Container '$container_name' does not exist. Creating..."
    docker run -id --privileged --net=host \
                    -v /etc/localtime:/etc/localtime:ro \
                    -v $PWD:/home -e DISPLAY=$DISPLAY -w /home \
                    -v /tmp/.X11-unix:/tmp/.X11-unix \
                    -e GDK_SCALE -e GDK_DPI_SCALE \
                    --name $container_name --hostname=$container_name $image_name /bin/bash
fi
docker start $container_name
docker exec -ti $container_name bash
EOF
bash run_ros.sh

Advantages of containerized development
A Docker container gives us an isolated development environment and keeps the host system clean. The --privileged and --net=host flags grant the container the hardware access it needs, and mounting -v /tmp/.X11-unix lets GUI applications inside the container render on the host display.

Installing ROS2 Humble in the Container

apt update
apt install x11-apps -y
xclock  # 测试GUI显示

apt install lsb-core -y
apt install locales -y
locale-gen en_US en_US.UTF-8
update-locale LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8
export LANG=en_US.UTF-8

apt install curl gnupg2 lsb-release vim -y
curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -o /usr/share/keyrings/ros-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu $(source /etc/os-release && echo $UBUNTU_CODENAME) main" | tee /etc/apt/sources.list.d/ros2.list > /dev/null

apt update
apt install ros-humble-desktop -y

apt install python3-argcomplete -y
apt install ros-dev-tools -y
apt install v4l-utils -y

# Install development tools
apt install python3-rosdep python3-rosinstall python3-rosinstall-generator python3-wstool build-essential -y

apt install ros-humble-vision-opencv ros-humble-message-filters -y

apt install ros-humble-tf2-geometry-msgs -y

# Install Python dependencies
apt install python3-pip -y

apt install ros-humble-sensor-msgs-py ros-humble-cv-bridge -y

apt install libepoxy-dev -y

# Initialize the ROS dependency system (rosdep)
rosdep init
rosdep update

source /opt/ros/humble/setup.bash
echo "127.0.0.1       ros2_humble" >> /etc/hosts

Why are these steps needed?
ROS2 depends on a specific system environment, including a correct locale and properly configured package sources. These commands make sure the system meets ROS2's baseline requirements and install the necessary communication tools and development packages.
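
The two environment requirements above (a UTF-8 locale, and a sourced Humble install) can be encoded in a small sanity-check helper. This is a hypothetical function, not part of ROS2; it only inspects the `LANG` and `AMENT_PREFIX_PATH` variables that the locale steps and `setup.bash` set:

```python
import os

def check_ros_env(env=None):
    """Return a list of problems found in a ROS 2 shell environment."""
    env = os.environ if env is None else env
    problems = []
    # ROS 2 requires a UTF-8 locale (hence the locale-gen/update-locale steps)
    if "UTF-8" not in env.get("LANG", ""):
        problems.append("LANG is not a UTF-8 locale")
    # setup.bash exports AMENT_PREFIX_PATH containing /opt/ros/humble
    if "/opt/ros/humble" not in env.get("AMENT_PREFIX_PATH", ""):
        problems.append("ROS 2 Humble not sourced (run: source /opt/ros/humble/setup.bash)")
    return problems
```

Run inside the container after sourcing, `check_ros_env()` should return an empty list.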

Creating a Test Environment

Creating a ROS Node (Ball Motion Simulation)

mkdir -p /home/ROS2
cd /home/ROS2
cat > v2x_node_top_view.py <<-'EOF'
import sys
import math

import cv2
import numpy as np
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, PointCloud2
from sensor_msgs.msg import Image as ImageRos
from sensor_msgs_py import point_cloud2
from std_msgs.msg import Header
from visualization_msgs.msg import Marker, MarkerArray

def imgmsg_to_cv2(img_msg):
    if img_msg.encoding != "bgr8":
        # This helper is hardcoded to the 'bgr8' encoding; extend it if you
        # actually need to support a different camera format.
        raise ValueError(f"Unsupported encoding: {img_msg.encoding}, expected 'bgr8'")
    dtype = np.dtype("uint8")  # hardcoded to 8 bits per channel
    dtype = dtype.newbyteorder('>' if img_msg.is_bigendian else '<')
    # Three channels of data; OpenCV works with BGR natively, so no reordering needed.
    image_opencv = np.ndarray(shape=(img_msg.height, img_msg.width, 3),
                    dtype=dtype, buffer=img_msg.data)
    # Swap if the byte order differs between the message and the system.
    if img_msg.is_bigendian == (sys.byteorder == 'little'):
        image_opencv = image_opencv.byteswap().newbyteorder()
    return image_opencv

def cv2_to_imgmsg(cv_image):
    img_msg = Image()
    img_msg.height = cv_image.shape[0]
    img_msg.width = cv_image.shape[1]
    img_msg.encoding = "bgr8"
    img_msg.is_bigendian = 0
    img_msg.data = cv_image.tobytes()  # tostring() is deprecated in NumPy
    img_msg.step = len(img_msg.data) // img_msg.height  # integer division: bytes per row
    return img_msg

# Generate a cylinder point cloud
def generate_cylinder(center_x, center_y, radius, height, num_points=500):
    points = []
    for _ in range(num_points):
        # Random angle and height
        theta = np.random.uniform(0, 2*np.pi)
        h = np.random.uniform(0, height)

        # Compute the point coordinates
        x = center_x + radius * np.cos(theta)
        y = center_y + radius * np.sin(theta)
        z = h

        # Append to the cloud (x, y, z + intensity)
        points.append([x, y, z, 0.0])

    return np.array(points, dtype=np.float32)

# Create a visualization Marker for a cylinder
def create_cylinder_marker(center_x, center_y, height, radius, id_num, frame_id="map", stamp=None):
    marker = Marker()
    marker.header.frame_id = frame_id
    if stamp:
        marker.header.stamp = stamp
    marker.ns = "cylinders"
    marker.id = id_num
    marker.type = Marker.CYLINDER
    marker.action = Marker.ADD

    # Position and size
    marker.pose.position.x = center_x
    marker.pose.position.y = center_y
    marker.pose.position.z = height/2  # the center sits at half height
    marker.pose.orientation.w = 1.0

    marker.scale.x = radius * 2
    marker.scale.y = radius * 2
    marker.scale.z = height

    # Color (RGBA)
    marker.color.r = 0.0
    marker.color.g = 1.0
    marker.color.b = 0.0
    marker.color.a = 0.5  # semi-transparent

    return marker

def create_ball_marker(x, y, z, id_num, frame_id="map", stamp=None):
    marker = Marker()
    marker.header.frame_id = frame_id
    if stamp:
        marker.header.stamp = stamp
    marker.ns = "ball"
    marker.id = id_num
    marker.type = Marker.SPHERE
    marker.action = Marker.ADD

    # Position and size
    marker.pose.position.x = x
    marker.pose.position.y = y
    marker.pose.position.z = z
    marker.pose.orientation.w = 1.0

    marker.scale.x = 0.2  # diameter
    marker.scale.y = 0.2
    marker.scale.z = 0.2

    # Color (RGBA)
    marker.color.r = 1.0
    marker.color.g = 0.0
    marker.color.b = 0.0
    marker.color.a = 1.0

    return marker

# Create a top-down camera image
def create_top_view_image(cylinder1, cylinder2, ball_pos, ball_history, img_width=640, img_height=480):
    # White background
    image = np.ones((img_height, img_width, 3), dtype=np.uint8) * 255

    # Coordinate mapping parameters
    scale = 100  # pixels per meter
    offset_x = img_width // 2
    offset_y = img_height // 2

    # Project 3D coordinates to 2D image coordinates (top view)
    def project_to_2d(x, y):
        u = int(offset_x + x * scale)
        v = int(offset_y + y * scale)
        return u, v

    # Draw the coordinate axes
    cv2.line(image, (offset_x, 0), (offset_x, img_height), (200, 200, 200), 1)
    cv2.line(image, (0, offset_y), (img_width, offset_y), (200, 200, 200), 1)
    cv2.putText(image, "X", (img_width - 20, offset_y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0), 1)
    cv2.putText(image, "Y", (offset_x + 10, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0), 1)

    # Draw the grid
    grid_size = 1.0  # grid spacing in meters
    for i in range(-5, 6):
        # Horizontal lines
        y_pos = i * grid_size
        _, v = project_to_2d(0, y_pos)
        cv2.line(image, (0, v), (img_width, v), (220, 220, 220), 1)

        # Vertical lines
        x_pos = i * grid_size
        u, _ = project_to_2d(x_pos, 0)
        cv2.line(image, (u, 0), (u, img_height), (220, 220, 220), 1)

        # Grid labels
        if i != 0:
            cv2.putText(image, f"{y_pos:.1f}", (offset_x + 5, v - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.4, (100, 100, 100), 1)
            cv2.putText(image, f"{x_pos:.1f}", (u + 5, offset_y + 15), cv2.FONT_HERSHEY_SIMPLEX, 0.4, (100, 100, 100), 1)

    # Draw cylinder 1 (a circle in the top view)
    cx, cy = project_to_2d(cylinder1[0], cylinder1[1])
    radius = int(cylinder1[3] * scale)
    cv2.circle(image, (cx, cy), radius, (0, 180, 0), -1)
    cv2.circle(image, (cx, cy), radius, (0, 100, 0), 2)

    # Draw cylinder 2
    cx, cy = project_to_2d(cylinder2[0], cylinder2[1])
    radius = int(cylinder2[3] * scale)
    cv2.circle(image, (cx, cy), radius, (0, 180, 0), -1)
    cv2.circle(image, (cx, cy), radius, (0, 100, 0), 2)

    # Draw the figure-8 trajectory
    if len(ball_history) > 1:
        for i in range(1, len(ball_history)):
            x1, y1 = ball_history[i-1]
            x2, y2 = ball_history[i]

            u1, v1 = project_to_2d(x1, y1)
            u2, v2 = project_to_2d(x2, y2)

            # Trajectory segment
            cv2.line(image, (u1, v1), (u2, v2), (200, 200, 255), 2)

    # Draw the trajectory points
    for pos in ball_history:
        u, v = project_to_2d(pos[0], pos[1])
        cv2.circle(image, (u, v), 2, (100, 100, 255), -1)

    # Draw the ball's current position
    bx, by = project_to_2d(ball_pos[0], ball_pos[1])
    radius = int(0.2 * scale)
    cv2.circle(image, (bx, by), radius, (0, 0, 255), -1)
    cv2.circle(image, (bx, by), radius, (100, 0, 0), 2)

    # Title and legend
    cv2.putText(image, "Top View: Ball Moving in Figure-8 Pattern", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 0), 2)
    cv2.putText(image, "Red Ball: Current Position", (10, img_height - 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
    cv2.putText(image, "Pink Line: Movement Path", (10, img_height - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (200, 100, 255), 1)

    # Scale bar
    scale_length = 1.0  # 1 meter
    u1, v1 = project_to_2d(-4.5, -4.0)
    u2, v2 = project_to_2d(-4.5 + scale_length, -4.0)
    cv2.line(image, (u1, v1), (u2, v2), (0, 0, 0), 2)
    cv2.putText(image, "1m", (u1 + 10, v1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0), 1)

    return image

class PcPublisher:
    def __init__(self, node, publisher_name="kitti_pcl", qos=10) -> None:
        self.pcl_pub = node.create_publisher(PointCloud2, publisher_name, qos)

    def publish(self, point_cloud, stamp, frame_id="map"):
        header = Header()
        header.stamp = stamp
        header.frame_id = frame_id
        # Build the PointCloud2 message (x, y, z + intensity)
        fields = [
            point_cloud2.PointField(name='x', offset=0, datatype=point_cloud2.PointField.FLOAT32, count=1),
            point_cloud2.PointField(name='y', offset=4, datatype=point_cloud2.PointField.FLOAT32, count=1),
            point_cloud2.PointField(name='z', offset=8, datatype=point_cloud2.PointField.FLOAT32, count=1),
            point_cloud2.PointField(name='intensity', offset=12, datatype=point_cloud2.PointField.FLOAT32, count=1)
        ]
        cloud_msg = point_cloud2.create_cloud(header, fields, point_cloud)
        self.pcl_pub.publish(cloud_msg)

class V2XNode(Node):
    def __init__(self):
        super().__init__('v2x_node')

        # Publishers
        self.pcl_pub = PcPublisher(self, 'v2x_pcl')
        self.marker_pub = self.create_publisher(MarkerArray, 'v2x_markers', 10)
        self.image_pub = self.create_publisher(ImageRos, 'v2x_top_view', 10)

        # Cylinder parameters
        self.cylinder_height = 2.0
        self.cylinder_radius = 0.5
        self.cylinder1_pos = (1.0, 0.0, self.cylinder_height, self.cylinder_radius)
        self.cylinder2_pos = (-1.0, 0.0, self.cylinder_height, self.cylinder_radius)

        # Generate the point clouds
        points1 = generate_cylinder(self.cylinder1_pos[0], self.cylinder1_pos[1],
                               self.cylinder_radius, self.cylinder_height)
        points2 = generate_cylinder(self.cylinder2_pos[0], self.cylinder2_pos[1],
                               self.cylinder_radius, self.cylinder_height)
        self.all_points = np.vstack((points1, points2))

        # Cylinder markers
        self.marker_array = MarkerArray()

        # Ball motion parameters
        self.ball_radius = 0.2
        self.ball_z = self.ball_radius  # height of the ball above the ground
        self.t = 0.0

        # Ball position history
        self.ball_history = []
        self.max_history = 200  # number of history points to keep

        self.get_logger().info("Starting simulation: two cylinders with ball moving in figure-8 pattern")

        # Timer driving the simulation at 10 Hz
        self.timer = self.create_timer(0.1, self.timer_callback)

    def timer_callback(self):
        # Current timestamp
        stamp = self.get_clock().now().to_msg()

        # Update the ball position (figure-8 trajectory)
        # Parametric equations: x = sin(t), y = sin(t)cos(t)
        ball_x = np.sin(self.t)
        ball_y = 0.5 * np.sin(self.t) * np.cos(self.t) * 2  # adjust the y amplitude

        # Record the ball position
        self.ball_history.append((ball_x, ball_y))
        if len(self.ball_history) > self.max_history:
            self.ball_history.pop(0)

        # Rebuild the cylinder markers (every tick)
        self.marker_array.markers = []
        self.marker_array.markers.append(
            create_cylinder_marker(
                self.cylinder1_pos[0], self.cylinder1_pos[1],
                self.cylinder_height, self.cylinder_radius, 0, "map", stamp
            )
        )
        self.marker_array.markers.append(
            create_cylinder_marker(
                self.cylinder2_pos[0], self.cylinder2_pos[1],
                self.cylinder_height, self.cylinder_radius, 1, "map", stamp
            )
        )

        # Ball marker
        ball_marker = create_ball_marker(ball_x, ball_y, self.ball_z, 0, "map", stamp)
        self.marker_array.markers.append(ball_marker)

        # Build and publish the top-view image
        camera_image = create_top_view_image(
            self.cylinder1_pos,
            self.cylinder2_pos,
            (ball_x, ball_y, self.ball_z),
            self.ball_history
        )
        image_msg = cv2_to_imgmsg(camera_image)
        image_msg.header.stamp = stamp
        self.image_pub.publish(image_msg)

        # Publish the point cloud and markers
        self.pcl_pub.publish(self.all_points, stamp)
        self.marker_pub.publish(self.marker_array)

        # Advance time
        self.t += 0.1
        if self.t > 2 * math.pi:
            self.t = 0

def main(args=None):
    rclpy.init(args=args)
    node = V2XNode()
    try:
        rclpy.spin(node)
    except KeyboardInterrupt:
        pass
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == '__main__':
    main()
EOF
python3 v2x_node_top_view.py &
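
The node's trajectory comes from the parametric curve x = sin(t), y = sin(t)·cos(t) (the node's `0.5 * sin(t) * cos(t) * 2` simplifies to the same expression, and sin(t)cos(t) = 0.5·sin(2t)). Since y oscillates at twice the frequency of x, the ball traces a figure-8. A standalone sketch of the same math:

```python
import math

def figure8(t):
    # x sweeps at angular frequency 1, y at frequency 2
    # (sin(t)*cos(t) == 0.5*sin(2t)), so y completes two lobes
    # per x cycle -- a figure-8 (1:2 Lissajous curve).
    x = math.sin(t)
    y = math.sin(t) * math.cos(t)
    return x, y

# Sample one full period, matching the node's t += 0.1 steps
path = [figure8(0.1 * k) for k in range(63)]
```

At t = 0 and t = π the curve passes through the origin (the crossing point of the 8); at t = π/2 it reaches the rightmost point (1, 0).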

Visualizing Data with RViz2


# Write a default RViz view configuration
mkdir -p /root/.rviz2
cat > /root/.rviz2/default.rviz <<-'EOF'
Visualization Manager:
  Class: ""
  Displays:
    - Alpha: 0.5
      Cell Size: 1
      Class: rviz_default_plugins/Grid
      Color: 160; 160; 164
      Enabled: true
      Line Style:
        Line Width: 0.029999999329447746
        Value: Lines
      Name: Grid
      Normal Cell Count: 0
      Offset:
        X: 0
        Y: 0
        Z: 0
      Plane: XY
      Plane Cell Count: 10
      Reference Frame: <Fixed Frame>
      Value: true
    - Alpha: 1
      Autocompute Intensity Bounds: true
      Autocompute Value Bounds:
        Max Value: 10
        Min Value: -10
        Value: true
      Axis: Z
      Channel Name: intensity
      Class: rviz_default_plugins/PointCloud2
      Color: 255; 255; 255
      Color Transformer: Intensity
      Decay Time: 0
      Enabled: true
      Invert Rainbow: false
      Max Color: 255; 255; 255
      Max Intensity: 0
      Min Color: 0; 0; 0
      Min Intensity: 0
      Name: PointCloud2
      Position Transformer: XYZ
      Selectable: true
      Size (Pixels): 3
      Size (m): 0.009999999776482582
      Style: Flat Squares
      Topic:
        Depth: 5
        Durability Policy: Volatile
        History Policy: Keep Last
        Reliability Policy: Reliable
        Value: /v2x_pcl
      Use Fixed Frame: true
      Use rainbow: true
      Value: true
    - Class: rviz_default_plugins/Image
      Enabled: true
      Max Value: 1
      Median window: 5
      Min Value: 0
      Name: Image
      Normalize Range: true
      Topic:
        Depth: 5
        Durability Policy: Volatile
        History Policy: Keep Last
        Reliability Policy: Reliable
        Value: /v2x_top_view
      Value: true
    - Class: rviz_default_plugins/MarkerArray
      Enabled: true
      Name: MarkerArray
      Namespaces:
        ball: true
        cylinders: true
      Topic:
        Depth: 5
        Durability Policy: Volatile
        History Policy: Keep Last
        Reliability Policy: Reliable
        Value: /v2x_markers
      Value: true
  Enabled: true
  Global Options:
    Background Color: 48; 48; 48
    Fixed Frame: map
    Frame Rate: 30
  Name: root
  Tools:
    - Class: rviz_default_plugins/Interact
      Hide Inactive Objects: true
    - Class: rviz_default_plugins/MoveCamera
    - Class: rviz_default_plugins/Select
    - Class: rviz_default_plugins/FocusCamera
    - Class: rviz_default_plugins/Measure
      Line color: 128; 128; 0
    - Class: rviz_default_plugins/SetInitialPose
      Topic:
        Depth: 5
        Durability Policy: Volatile
        History Policy: Keep Last
        Reliability Policy: Reliable
        Value: /initialpose
    - Class: rviz_default_plugins/SetGoal
      Topic:
        Depth: 5
        Durability Policy: Volatile
        History Policy: Keep Last
        Reliability Policy: Reliable
        Value: /goal_pose
    - Class: rviz_default_plugins/PublishPoint
      Single click: true
      Topic:
        Depth: 5
        Durability Policy: Volatile
        History Policy: Keep Last
        Reliability Policy: Reliable
        Value: /clicked_point
  Transformation:
    Current:
      Class: rviz_default_plugins/TF
  Value: true
  Views:
    Current:
      Class: rviz_default_plugins/Orbit
      Distance: 14.556740760803223
      Enable Stereo Rendering:
        Stereo Eye Separation: 0.05999999865889549
        Stereo Focal Distance: 1
        Swap Stereo Eyes: false
        Value: false
      Focal Point:
        X: -1.2142829895019531
        Y: 1.8122000694274902
        Z: -1.6165688037872314
      Focal Shape Fixed Size: true
      Focal Shape Size: 0.05000000074505806
      Invert Z Axis: false
      Name: Current View
      Near Clip Distance: 0.009999999776482582
      Pitch: 1.4197964668273926
      Target Frame: <Fixed Frame>
      Value: Orbit (rviz)
      Yaw: 0.9553983807563782
    Saved: ~
EOF
# Launch the visualization tool
rviz2

What RViz2 does
RViz is ROS's 3D visualization tool: it displays point clouds, robot models, markers, and sensor data in real time. With a sensible configuration, developers can monitor the state of a SLAM system at a glance.
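
The PointCloud2 data that RViz renders here is a flat binary buffer whose layout is declared by the node's PointField list: four little-endian float32 values (x, y, z, intensity) at byte offsets 0, 4, 8, and 12, i.e. 16 bytes per point. That packing can be illustrated with nothing but the standard library:

```python
import struct

POINT_STEP = 16  # four float32 fields: x, y, z, intensity

def pack_point(x, y, z, intensity=0.0):
    # '<ffff' = little-endian float32 values at offsets 0, 4, 8, 12,
    # matching the PointField list used by PcPublisher above
    return struct.pack("<ffff", x, y, z, intensity)

def unpack_point(buf, index=0):
    # Read back the point at the given index from a packed buffer
    return struct.unpack_from("<ffff", buf, index * POINT_STEP)
```

This is what `point_cloud2.create_cloud` does for you internally; seeing the raw layout helps when debugging a cloud that renders wrong in RViz.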

Deploying ORB-SLAM3

Build and Install Process

cd /home/ROS2
git clone https://github.com/zang09/ORB_SLAM3_ROS2.git orbslam3_ros2
git clone https://github.com/zang09/ORB-SLAM3-STEREO-FIXED.git ORB_SLAM3
git clone https://github.com/stevenlovegrove/Pangolin.git
git clone https://github.com/ros-drivers/usb_cam.git

# Build Pangolin
cd /home/ROS2/Pangolin/
git checkout v0.9.3

# Stop -Werror from turning warnings into build failures
sed -i '/-Werror/!b; /^[[:space:]]*#/!s/^/#/' CMakeLists.txt
rm build -rf
mkdir build && cd build
cmake ..
make -j4 && make install
ldconfig

# Build and install ORB_SLAM3
cd /home/ROS2/ORB_SLAM3
git checkout f8ac79153b103dd44f7ea3cf2a1a4aea10b18483

# Add a hook that saves the map points as a PLY file on shutdown
# (inserted at line 522 of src/System.cc; note the trailing backslashes
# required by sed's `i` command, and the escaped \n inside the C++ strings)
sed -i '522i\
    {\
        ofstream f("map.ply");\
        f << "ply\\nformat ascii 1.0\\ncomment ORB-SLAM3 map\\n";\
        f << "element vertex " << mpAtlas->GetAllMapPoints().size() << "\\n";\
        f << "property float x\\nproperty float y\\nproperty float z\\n";\
        f << "end_header\\n";\
\
        for(auto &mp : mpAtlas->GetAllMapPoints()) {\
            if(!mp->isBad()) {\
                Eigen::Vector3f pos = mp->GetWorldPos();\
                f << pos.x() << " " << pos.y() << " " << pos.z() << "\\n";\
            }\
        }\
        f.close();\
    }' src/System.cc

rm build -rf
rm Thirdparty/DBoW2/build -rf
rm Thirdparty/g2o/build -rf
rm Thirdparty/Sophus/build -rf
chmod +x build.sh
./build.sh
cd /home/ROS2/ORB_SLAM3/Thirdparty/Sophus/build
make install
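
The hook spliced into System.cc above dumps the map as an ASCII PLY file. The same format is easy to sketch in pure Python, which is handy for checking what a viewer such as MeshLab expects from the header (a hypothetical helper, not part of ORB-SLAM3):

```python
def write_ply(path, points):
    """Write a list of (x, y, z) points as an ASCII PLY file,
    mirroring the header emitted by the System.cc hook."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\ncomment ORB-SLAM3 map\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")
```

The vertex count in the header must match the number of point lines exactly, which is why the C++ hook sizes it from `GetAllMapPoints()` before iterating (it may overcount if bad points are skipped; the Python sketch avoids that by writing the list it is given).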

# Build orbslam3_ros2
cd /home/ROS2/orbslam3_ros2
git checkout humble

sed -i 's#"camera"#"/image_raw"#g' src/monocular/monocular-slam-node.cpp
sed -i 's#/opt/ros/foxy/lib/python3.8/site-packages/#/opt/ros/humble/lib/python3.10/site-packages/#g' CMakeLists.txt
sed -i 's#~/Install/ORB_SLAM/ORB_SLAM3#/home/ROS2/ORB_SLAM3#g' CMakeModules/FindORB_SLAM3.cmake

rm build -rf
colcon build --symlink-install --packages-select orbslam3

cd /home/ROS2/orbslam3_ros2/vocabulary
tar -xf ORBvoc.txt.tar.gz
source /home/ROS2/orbslam3_ros2/install/local_setup.bash

# Build and install usb_cam
cd /home/ROS2/usb_cam
git checkout 1091c5b3437d8ea4d04fa8c1593f648001daf539
rm build -rf
colcon build

# Persist the environment variables
echo "source /home/ROS2/usb_cam/install/setup.bash" >> ~/.bashrc
echo "source /opt/ros/humble/setup.bash" >> ~/.bashrc
echo "source /home/ROS2/orbslam3_ros2/install/local_setup.bash" >> ~/.bashrc
echo "export ROS_DOMAIN_ID=42"  >> ~/.bashrc
source ~/.bashrc

# Edit video_device, image_width, and image_height in /home/ROS2/usb_cam/config/params_1.yaml to match your camera

Changes Needed to Publish the Camera Pose

diff --git a/CMakeLists.txt b/CMakeLists.txt
index 6b4d3a4..bd920a5 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -23,6 +23,7 @@ find_package(message_filters REQUIRED)
 find_package(Sophus REQUIRED)
 find_package(Pangolin REQUIRED)
 find_package(ORB_SLAM3 REQUIRED)
+find_package(tf2_geometry_msgs REQUIRED)

 include_directories(
   include
@@ -39,7 +40,7 @@ add_executable(mono
   src/monocular/mono.cpp
   src/monocular/monocular-slam-node.cpp
 )
-ament_target_dependencies(mono rclcpp sensor_msgs cv_bridge ORB_SLAM3 Pangolin OpenCV)
+ament_target_dependencies(mono rclcpp sensor_msgs cv_bridge ORB_SLAM3 Pangolin OpenCV tf2_geometry_msgs)

 add_executable(rgbd
   src/rgbd/rgbd.cpp
diff --git a/CMakeModules/FindORB_SLAM3.cmake b/CMakeModules/FindORB_SLAM3.cmake
index 6ea6a89..b6f43ed 100644
--- a/CMakeModules/FindORB_SLAM3.cmake
+++ b/CMakeModules/FindORB_SLAM3.cmake
@@ -5,7 +5,7 @@
 #
 # To help the search ORB_SLAM3_ROOT_DIR environment variable as the path to ORB_SLAM3 root folder
 #  e.g. `set( ORB_SLAM3_ROOT_DIR=~/ORB_SLAM3) `
-set(ORB_SLAM3_ROOT_DIR "~/Install/ORB_SLAM/ORB_SLAM3")
+set(ORB_SLAM3_ROOT_DIR "/home/ROS2/ORB_SLAM3")

 # message(${ORB_SLAM3_ROOT_DIR})
 # message(${ORB_SLAM3_ROOT_DIR}/include)
diff --git a/src/monocular/monocular-slam-node.cpp b/src/monocular/monocular-slam-node.cpp
index 5f083e9..f5c0b78 100644
--- a/src/monocular/monocular-slam-node.cpp
+++ b/src/monocular/monocular-slam-node.cpp
@@ -4,15 +4,47 @@

 using std::placeholders::_1;

+geometry_msgs::msg::PoseStamped SE3fToPoseStamped(
+    const Sophus::SE3f& Tcw,
+    const std::string& frame_id,
+    const rclcpp::Time& stamp)
+{
+    geometry_msgs::msg::PoseStamped pose_msg;
+
+    // Fill in the message header
+    pose_msg.header.stamp = stamp;
+    pose_msg.header.frame_id = frame_id;  // usually "map" or "odom"
+
+    // Extract the translation
+    const Eigen::Vector3f& translation = Tcw.translation();
+    pose_msg.pose.position.x = translation.x();
+    pose_msg.pose.position.y = translation.y();
+    pose_msg.pose.position.z = translation.z();
+
+    // Extract the rotation and convert it to a quaternion
+    const Eigen::Quaternionf quat(Tcw.rotationMatrix());
+    pose_msg.pose.orientation.x = quat.x();
+    pose_msg.pose.orientation.y = quat.y();
+    pose_msg.pose.orientation.z = quat.z();
+    pose_msg.pose.orientation.w = quat.w();
+
+    return pose_msg;
+}
+
 MonocularSlamNode::MonocularSlamNode(ORB_SLAM3::System* pSLAM)
 :   Node("ORB_SLAM3_ROS2")
 {
     m_SLAM = pSLAM;
     // std::cout << "slam changed" << std::endl;
     m_image_subscriber = this->create_subscription<ImageMsg>(
-        "camera",
+        "/image_raw",
         10,
         std::bind(&MonocularSlamNode::GrabImage, this, std::placeholders::_1));
+    // Create the pose publisher
+    pose_publisher_ = this->create_publisher<geometry_msgs::msg::PoseStamped>(
+                   "/orb_slam3/camera_pose", 10);
+    tf_broadcaster_ = std::make_shared<tf2_ros::TransformBroadcaster>(this);
+
     std::cout << "slam changed" << std::endl;
 }

@@ -38,6 +70,22 @@ void MonocularSlamNode::GrabImage(const ImageMsg::SharedPtr msg)
         return;
     }

-    std::cout<<"one frame has been sent"<<std::endl;
-    m_SLAM->TrackMonocular(m_cvImPtr->image, Utility::StampToSec(msg->header.stamp));
+    //std::cout<<"one frame has been sent"<<std::endl;
+    //m_SLAM->TrackMonocular(m_cvImPtr->image, Utility::StampToSec(msg->header.stamp));
+    auto Tcw=m_SLAM->TrackMonocular(m_cvImPtr->image, Utility::StampToSec(msg->header.stamp));
+    if(m_SLAM->GetTrackingState()==2)
+    {
+        auto now = this->now();
+        auto pose_msg = SE3fToPoseStamped(Tcw, "map", now);  // convert Sophus::SE3f to PoseStamped
+        pose_publisher_->publish(pose_msg);  // publish the pose
+        // Optional: also broadcast the TF transform
+        geometry_msgs::msg::TransformStamped transform;
+        transform.header = pose_msg.header;
+        transform.child_frame_id = "camera_link";
+        transform.transform.translation.x = pose_msg.pose.position.x;
+        transform.transform.translation.y = pose_msg.pose.position.y;
+        transform.transform.translation.z = pose_msg.pose.position.z;
+        transform.transform.rotation = pose_msg.pose.orientation;
+        tf_broadcaster_->sendTransform(transform);
+    }
 }
diff --git a/src/monocular/monocular-slam-node.hpp b/src/monocular/monocular-slam-node.hpp
index 9f2f584..d74a8e3 100644
--- a/src/monocular/monocular-slam-node.hpp
+++ b/src/monocular/monocular-slam-node.hpp
@@ -3,7 +3,10 @@

 #include "rclcpp/rclcpp.hpp"
 #include "sensor_msgs/msg/image.hpp"
-
+#include <geometry_msgs/msg/pose_stamped.hpp>
+#include <tf2_geometry_msgs/tf2_geometry_msgs.h>
+#include <tf2_ros/transform_broadcaster.h>
+#include <sophus/se3.hpp>
 #include <cv_bridge/cv_bridge.h>

 #include "System.h"
@@ -30,6 +33,9 @@ private:
     cv_bridge::CvImagePtr m_cvImPtr;

     rclcpp::Subscription<sensor_msgs::msg::Image>::SharedPtr m_image_subscriber;
+
+    rclcpp::Publisher<geometry_msgs::msg::PoseStamped>::SharedPtr pose_publisher_;
+    std::shared_ptr<tf2_ros::TransformBroadcaster> tf_broadcaster_;
 };

 #endif

Why are these changes needed?
They convert the camera pose that the SLAM system computes internally into standard ROS2 messages, so that other nodes (such as the navigation stack) can consume the localization result.
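The heavy lifting in SE3fToPoseStamped is done by Eigen's quaternion constructor. The math it performs — rotation matrix to quaternion — can be sketched in plain Python using the standard Shepperd-style branching for numerical stability (returned in ROS order x, y, z, w):

```python
import math

def rotmat_to_quat(R):
    """Convert a 3x3 rotation matrix (row-major nested lists)
    to a quaternion (x, y, z, w), as geometry_msgs orders it."""
    tr = R[0][0] + R[1][1] + R[2][2]
    if tr > 0:
        s = math.sqrt(tr + 1.0) * 2
        w = 0.25 * s
        x = (R[2][1] - R[1][2]) / s
        y = (R[0][2] - R[2][0]) / s
        z = (R[1][0] - R[0][1]) / s
    elif R[0][0] > R[1][1] and R[0][0] > R[2][2]:
        # x is the largest diagonal term
        s = math.sqrt(1.0 + R[0][0] - R[1][1] - R[2][2]) * 2
        w = (R[2][1] - R[1][2]) / s
        x = 0.25 * s
        y = (R[0][1] + R[1][0]) / s
        z = (R[0][2] + R[2][0]) / s
    elif R[1][1] > R[2][2]:
        # y is the largest diagonal term
        s = math.sqrt(1.0 + R[1][1] - R[0][0] - R[2][2]) * 2
        w = (R[0][2] - R[2][0]) / s
        x = (R[0][1] + R[1][0]) / s
        y = 0.25 * s
        z = (R[1][2] + R[2][1]) / s
    else:
        # z is the largest diagonal term
        s = math.sqrt(1.0 + R[2][2] - R[0][0] - R[1][1]) * 2
        w = (R[1][0] - R[0][1]) / s
        x = (R[0][2] + R[2][0]) / s
        y = (R[1][2] + R[2][1]) / s
        z = 0.25 * s
    return (x, y, z, w)
```

For the identity matrix this yields (0, 0, 0, 1); a 90° rotation about z yields roughly (0, 0, 0.7071, 0.7071).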

Testing the SLAM System

# Start the USB camera node
ros2 run usb_cam usb_cam_node_exe --ros-args --params-file /home/ROS2/usb_cam/config/params_1.yaml

# Start monocular SLAM
ros2 run orbslam3 mono /home/ROS2/orbslam3_ros2/vocabulary/ORBvoc.txt /home/ROS2/orbslam3_ros2/config/monocular/TUM1.yaml

Obtaining Pose Information

# List the topics
ros2 topic list

# Echo the pose
ros2 topic echo /orb_slam3/camera_pose

Gazebo Simulation Environment

Installing the TurtleBot3 Simulation Packages

cd /home

# Install dependencies
apt install ros-humble-teleop-twist-keyboard -y
apt install ros-humble-gazebo-ros-pkgs -y
apt install ros-humble-turtlebot3-gazebo -y
apt install ros-humble-turtlebot3-teleop -y

# Fetch the sources and build
wget -O turtlebot3.repos https://raw.githubusercontent.com/ROBOTIS-GIT/turtlebot3/refs/heads/humble-devel/turtlebot3.repos
sed -i 's/ros2-devel/humble-devel/g' turtlebot3.repos
rm -rf src
mkdir src
vcs import src < turtlebot3.repos

rosdep update
rosdep install --from-paths src --ignore-src  -y
export MAX_JOBS=4
colcon build --symlink-install

Running the Simulation

# Terminal 1: launch the Gazebo world
source /home/install/setup.bash  
source /usr/share/gazebo/setup.sh
export TURTLEBOT3_MODEL=burger
export GAZEBO_MODEL_PATH=$GAZEBO_MODEL_PATH:`ros2 pkg prefix turtlebot3_gazebo`/share/turtlebot3_gazebo/models/ 
ros2 launch turtlebot3_gazebo empty_world.launch.py

# Terminal 2: keyboard teleoperation
source /home/install/setup.bash  
source /usr/share/gazebo/setup.sh
export TURTLEBOT3_MODEL=burger
ros2 run turtlebot3_teleop teleop_keyboard

The value of Gazebo simulation
Testing SLAM algorithms on a real robot can be expensive and risky. Gazebo provides a realistic physics simulation — with sensor-noise modeling, collision detection, and more — making it an ideal platform for developing and validating algorithms before deployment.

Full System Integration [TODO]

Once all the components are ready, we can assemble the complete autonomous navigation system:

Gazebo provides the simulation environment and sensor data
ORB-SLAM3 processes the visual data and outputs localization information
The ROS2 navigation stack plans paths based on the localization result
Control commands are sent to the simulated car for execution
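
None of this glue exists yet (hence the TODO), but the control end of the loop can be sketched as a minimal go-to-goal controller: take the (x, y, yaw) pose from localization, compare it with the goal, and emit linear/angular velocities for a differential-drive base such as the TurtleBot3. The function name and the gains here are illustrative assumptions, not part of any of the packages above:

```python
import math

def goto_goal(pose, goal, k_lin=0.5, k_ang=1.5):
    """pose = (x, y, yaw) from localization; goal = (gx, gy).
    Returns (v, w) velocities for a /cmd_vel-style interface."""
    x, y, yaw = pose
    gx, gy = goal
    dist = math.hypot(gx - x, gy - y)          # distance to goal
    heading = math.atan2(gy - y, gx - x)       # bearing toward goal
    # Wrap the heading error into [-pi, pi] so the robot turns the short way
    ang_err = (heading - yaw + math.pi) % (2 * math.pi) - math.pi
    return k_lin * dist, k_ang * ang_err
```

A real integration would wrap this in a node subscribing to /orb_slam3/camera_pose and publishing geometry_msgs/Twist, or simply hand the goal to the Nav2 stack, which implements far more robust planning and recovery behaviors.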

References

https://blog.csdn.net/Functioe/article/details/131848026
https://github.com/ros-drivers/usb_cam/tree/main
https://github.com/zang09/ORB-SLAM3-STEREO-FIXED
https://github.com/zang09/ORB_SLAM3_ROS2
https://blog.csdn.net/liiiuzy/article/details/136272727
https://www.cnblogs.com/xiaoaug/p/17790458.html
https://ros2-industrial-workshop.readthedocs.io/en/latest/_source/navigation/ROS2-Turtlebot.html
