Apollo-PETRv1 Demo Guide

Contents:

1. Background
2. Related Concepts
   2.1 3D Object Detection
   2.2 PETRv1 Model Overview
   2.3 Coordinate System Transformations
   2.4 The NuScenes Dataset
   2.5 3D Bounding Box Representation
   2.6 Camera Imaging Principles
3. Procedure
   3.1 Environment Setup
   3.2 Visualizing 3D Boxes from the NuScenes Dataset
   3.3 PETRv1 Inference and Visualization
4. Summary

1. Background

PETR (Position Embedding Transformation for multi-view 3D Object Detection) is a vision-only 3D object detection method that combines image features with 3D spatial position information to detect 3D objects across multiple camera views. This guide demonstrates how to visualize the 3D boxes in the NuScenes dataset, and how to run inference with the PETRv1 model and visualize the results.

Specifically, it covers:

- Drawing the 3D annotation boxes from the NuScenes dataset onto the LiDAR point cloud
- Generating the img2lidars matrices required by PETRv1 inference from the camera parameters provided by the NuScenes dataset
- Visualizing the inference results of the PETRv1 model

2. Related Concepts

2.1 3D Object Detection

3D object detection is a key technology in autonomous driving systems, aiming to identify and localize objects in three-dimensional space from sensor data. Unlike 2D detection, 3D detection must provide each object's precise 3D position, dimensions, and orientation.

2.2 PETRv1 Model Overview

PETRv1 is a multi-view, camera-based 3D detection method. By combining image features with 3D position encodings, it avoids the complex 3D-to-2D projection operations used by earlier methods. Its core idea is to encode 3D spatial position information into the image features, so that the model can perceive 3D structure directly.
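
To make this concrete, here is a minimal, illustrative numpy sketch of the idea (not the actual Paddle3D implementation; the feature-map size, depth range, and identity img2lidar matrix are placeholder assumptions): depth points sampled along each pixel ray are lifted into the LiDAR frame, and the resulting per-pixel 3D coordinates are what PETR feeds through an MLP to form its 3D position embedding.

import numpy as np

# Hypothetical sizes; PETRv1 samples a fixed number of depth bins per pixel.
H, W, D = 20, 50, 64                      # feature-map height/width, depth bins
img2lidar = np.eye(4)                     # placeholder; computed from calibration

# Build frustum points (u*d, v*d, d, 1) in image space, one depth column per pixel.
us, vs, ds = np.meshgrid(np.arange(W), np.arange(H), np.linspace(1, 60, D))
frustum = np.stack([us * ds, vs * ds, ds, np.ones_like(ds)], axis=-1)  # (H, W, D, 4)

# Lift every frustum point into the LiDAR frame.
pts_lidar = frustum.reshape(-1, 4) @ img2lidar.T                       # (H*W*D, 4)
coords_3d = pts_lidar[:, :3].reshape(H, W, D * 3)                      # per-pixel 3D code

# In PETR these (normalized) coordinates go through an MLP to become the
# 3D position embedding that is added to the 2D image features.
print(coords_3d.shape)  # (20, 50, 192)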

2.3 Coordinate System Transformations

In a multi-sensor system, understanding the transformations between the following coordinate systems is essential (a worked example follows the list):

- Image coordinate system: 2D pixel coordinates
- Camera coordinate system: 3D coordinates with the camera's optical center as origin
- LiDAR coordinate system: 3D coordinates with the LiDAR as origin
- Vehicle (ego) coordinate system: a unified frame centered on the vehicle
- Global coordinate system: world coordinates
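
As a small example of chaining these frames, the sketch below carries one point from a sensor frame to the global frame using NuScenes-style translation-plus-quaternion poses (the calibration values here are made up for illustration; real ones come from the calibrated_sensor and ego_pose records used in section 3.2):

import numpy as np
from pyquaternion import Quaternion

# Illustrative poses (not real calibration values).
sensor2ego_t = np.array([0.94, 0.0, 1.84])
sensor2ego_q = Quaternion(axis=[0, 0, 1], angle=0.0)
ego2global_t = np.array([600.0, 1600.0, 0.0])
ego2global_q = Quaternion(axis=[0, 0, 1], angle=np.pi / 2)

p_sensor = np.array([10.0, 2.0, 0.5])          # a point in the sensor frame

# Each hop is rotate-then-translate: p' = R @ p + t
p_ego = sensor2ego_q.rotation_matrix @ p_sensor + sensor2ego_t
p_global = ego2global_q.rotation_matrix @ p_ego + ego2global_t
print(p_global)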

2.4 The NuScenes Dataset

NuScenes is a large-scale autonomous driving dataset that includes (a short devkit example follows the list):

- 1000 driving scenes, each 20 seconds long
- Images from 6 cameras providing 360-degree coverage
- One 360-degree LiDAR point cloud
- Radar, GPS, and IMU data
- 3D bounding box annotations with position, size, orientation, and category
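
For a quick look at what the dataset exposes, this snippet uses the nuscenes-devkit (installed in section 3.1) to inspect the first sample of the mini split; it assumes the data was extracted to data/nuscenes as in the setup steps:

from nuscenes import NuScenes

nusc = NuScenes(version='v1.0-mini', dataroot='data/nuscenes', verbose=False)

sample = nusc.sample[0]                      # the first annotated keyframe
print(sample['data'].keys())                 # sensor channels: CAM_*, LIDAR_TOP, RADAR_*
print(len(sample['anns']), "annotations")    # number of 3D boxes in this sample
ann = nusc.get('sample_annotation', sample['anns'][0])
print(ann['category_name'], ann['translation'], ann['size'])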

2.5 3D Bounding Box Representation

In 3D space, a bounding box is typically represented by 7 parameters (a small sketch follows the list):

- Center coordinates (x, y, z)
- Dimensions (length, width, height)
- Yaw angle (rotation about the z axis)
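
As a sketch of how these 7 parameters expand into box geometry, the helper below (a hypothetical function, not taken from this tutorial's scripts) computes the 8 corners; note that corner-ordering conventions vary between libraries (the nuscenes Box class uses its own order):

import numpy as np

def box_corners(x, y, z, l, w, h, yaw):
    """Return the 8 corners of a 7-parameter 3D box centered at (x, y, z)."""
    # Axis-aligned corner offsets around the box center.
    dx, dy, dz = l / 2, w / 2, h / 2
    corners = np.array([[sx * dx, sy * dy, sz * dz]
                        for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
    # Rotate around the z axis by yaw, then translate to the center.
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    return corners @ R.T + np.array([x, y, z])

print(box_corners(10.0, 5.0, 0.9, 4.5, 1.9, 1.6, np.pi / 4))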

2.6 Camera Imaging Principles

A camera projects the 3D world onto a 2D image plane following the pinhole model. This process is described by an intrinsic matrix and an extrinsic matrix: the intrinsic matrix captures the camera's internal optical characteristics, while the extrinsic matrix captures the camera's position and orientation in the world frame.
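
A minimal numeric sketch of this projection, with illustrative intrinsics and a trivial extrinsic pose (all values are placeholders for illustration):

import numpy as np

# Illustrative intrinsics: focal lengths (fx, fy) and principal point (cx, cy).
K = np.array([[1266.4, 0.0, 816.3],
              [0.0, 1266.4, 491.5],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                       # camera orientation in the world frame
t = np.array([0.0, 0.0, 0.0])       # camera position in the world frame

p_world = np.array([2.0, 1.0, 10.0])   # a 3D point in front of the camera
p_cam = R.T @ (p_world - t)            # world -> camera coordinates
uv = K @ p_cam                         # perspective projection
uv = uv[:2] / uv[2]                    # divide by depth to get pixel coordinates
print(uv)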

3. Procedure

3.1 Environment Setup



# Start a Docker container that provides an isolated environment with GPU support
cd /home/apollo
docker run --gpus all --shm-size=128g -it -e NVIDIA_VISIBLE_DEVICES=all \
    --privileged --net=host \
    -v $PWD:/home -w /home \
    --rm registry.baidubce.com/paddlepaddle/paddle:2.4.2-gpu-cuda11.7-cudnn8.4-trt8.4 /bin/bash

# Install Paddle3D, a 3D deep learning framework built on PaddlePaddle
cd /home/
git clone https://github.com/PaddlePaddle/Paddle3D.git
cd /home/Paddle3D
pip3 install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
python3 setup.py install
pip3 install ortools

# Download the NuScenes mini dataset, a small sample split
wget -O /home/v1.0-mini.tgz https://www.nuscenes.org/data/v1.0-mini.tgz
mkdir -p data/nuscenes
tar -xf /home/v1.0-mini.tgz -C data/nuscenes

# Download the pretrained PETRv1 model
wget https://apollo-pkg-beta.cdn.bcebos.com/perception_model/petrv1.zip
unzip petrv1.zip

# Install the remaining dependencies
pip3 install nuscenes-devkit matplotlib open3d -i https://pypi.tuna.tsinghua.edu.cn/simple
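
Optionally (this check is an addition, not part of the original steps), a quick Python sanity check that PaddlePaddle was built with GPU support before moving on:

# Quick sanity check (optional): confirm PaddlePaddle sees the GPU.
import paddle
paddle.utils.run_check()
print(paddle.device.cuda.device_count(), "GPU(s) visible")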

3.2 Visualizing 3D Boxes from the NuScenes Dataset

The following code shows how to extract the 3D annotations from the NuScenes dataset and visualize them on the point cloud:


# Create the Python script
cat > 0_vis_box_from_annotation.py <<-'EOF'
import numpy as np
import open3d as o3d
from nuscenes import NuScenes
from nuscenes.utils.data_classes import LidarPointCloud, Box
from pyquaternion import Quaternion
import os
import shutil

# Initialize the NuScenes dataset interface
datapath = 'data/nuscenes'
nusc = NuScenes(version='v1.0-mini', dataroot=datapath, verbose=False)

# Obtain the transform from a given sensor to the top LiDAR
def obtain_sensor2top(nusc,
                      sensor_token,
                      l2e_t,
                      l2e_r_mat,
                      e2g_t,
                      e2g_r_mat,
                      sensor_type='lidar'):
    """获取从通用传感器到顶层激光雷达的变换信息"""
    # 获取传感器数据记录
    sd_rec = nusc.get('sample_data', sensor_token)
    # 获取校准传感器信息
    cs_record = nusc.get('calibrated_sensor', sd_rec['calibrated_sensor_token'])
    # 获取ego姿态信息
    pose_record = nusc.get('ego_pose', sd_rec['ego_pose_token'])
    data_path = str(nusc.get_sample_data_path(sd_rec['token']))  # absolute path
    if os.getcwd() in data_path:  # path from lyftdataset is absolute path
        data_path = data_path.split(f'{os.getcwd()}/')[-1]  # relative path
    # 构建传感器信息字典
    sweep = {
        'data_path': nusc.get('sample_data',
                              sd_rec['token'])['filename'],  # relative path
        'type': sensor_type,
        'sample_data_token': sd_rec['token'],
        'sensor2ego_translation': cs_record['translation'],
        'sensor2ego_rotation': cs_record['rotation'],
        'ego2global_translation': pose_record['translation'],
        'ego2global_rotation': pose_record['rotation'],
        'timestamp': sd_rec['timestamp']
    }
    # Extract rotation and translation
    l2e_r_s = sweep['sensor2ego_rotation']
    l2e_t_s = sweep['sensor2ego_translation']
    e2g_r_s = sweep['ego2global_rotation']
    e2g_t_s = sweep['ego2global_translation']

    # Rotation from this sensor to the top LiDAR
    l2e_r_s_mat = Quaternion(l2e_r_s).rotation_matrix
    e2g_r_s_mat = Quaternion(e2g_r_s).rotation_matrix
    R = (l2e_r_s_mat.T @ e2g_r_s_mat.T) @ (
        np.linalg.inv(e2g_r_mat).T @ np.linalg.inv(l2e_r_mat).T)

    # Translation from this sensor to the top LiDAR
    T = (l2e_t_s @ e2g_r_s_mat.T + e2g_t_s) @ (
        np.linalg.inv(e2g_r_mat).T @ np.linalg.inv(l2e_r_mat).T)
    T -= e2g_t @ (np.linalg.inv(e2g_r_mat).T @ np.linalg.inv(l2e_r_mat).T
                  ) + l2e_t @ np.linalg.inv(l2e_r_mat).T
    sweep['sensor2lidar_rotation'] = R.T   # points transform as: points @ R.T + T
    sweep['sensor2lidar_translation'] = T
    return sweep

# Build a 2D rotation matrix for an angle h (radians)
def _get_rot(h):
    return np.array([[np.cos(h), np.sin(h)],
                     [-np.sin(h), np.cos(h)]])


# Main program
sample_idx = 0  # use the first sample

sample = nusc.sample[sample_idx]
lidar_token = sample['data']['LIDAR_TOP']
sd_rec = nusc.get('sample_data', sample['data']['LIDAR_TOP'])
cs_record = nusc.get('calibrated_sensor',sd_rec['calibrated_sensor_token'])
pose_record = nusc.get('ego_pose', sd_rec['ego_pose_token'])
lidar_path, boxes, _ = nusc.get_sample_data(lidar_token)
lidar_data = nusc.get('sample_data', lidar_token)

# Build the info dict containing all required transform data

info = {
    'lidar_token': lidar_token,
    'lidar_path': lidar_path,
    'token': sample['token'],
    'sweeps': [],
    'cams': dict(),
    'lidar2ego_translation': cs_record['translation'],
    'lidar2ego_rotation': cs_record['rotation'],
    'ego2global_translation': pose_record['translation'],
    'ego2global_rotation': pose_record['rotation'],
    'timestamp': sample['timestamp'],
}

# Extract the LiDAR transforms
l2e_r = info['lidar2ego_rotation']
l2e_t = info['lidar2ego_translation']
e2g_r = info['ego2global_rotation']
e2g_t = info['ego2global_translation']
l2e_r_mat = Quaternion(l2e_r).rotation_matrix
e2g_r_mat = Quaternion(e2g_r).rotation_matrix

# Camera channel names (the 6 surround-view cameras)
camera_types = [
    'CAM_FRONT',
    'CAM_FRONT_RIGHT',
    'CAM_FRONT_LEFT',
    'CAM_BACK',
    'CAM_BACK_LEFT',
    'CAM_BACK_RIGHT',
]
# Collect transform info for each camera
for cam in camera_types:
    cam_token = sample['data'][cam]
    cam_path, _, cam_intrinsic = nusc.get_sample_data(cam_token)
    cam_info = obtain_sensor2top(nusc, cam_token, l2e_t, l2e_r_mat,
                                 e2g_t, e2g_r_mat, cam)
    cam_info.update(cam_intrinsic=cam_intrinsic)
    info['cams'].update({cam: cam_info})

# Image preprocessing parameters
bot_pct_lim = [0.0, 0.0]
H, W = 900, 1600  # original image size
fH, fW = 320, 800  # target image size
resize = max(fH / H, fW / W)
resize_dims = (int(W * resize), int(H * resize))
newW, newH = resize_dims
crop_h = int((1 - np.mean(bot_pct_lim)) * newH) - fH
crop_w = int(max(0, newW - fW) / 2)
crop = (crop_w, crop_h, crop_w + fW, crop_h + fH)
rotate = 0

# Compute the image data-augmentation (resize/crop) matrix
ida_rot = np.eye(2)
ida_tran = np.zeros(2)
ida_rot *= resize
ida_tran -= np.array(crop[:2])
A = _get_rot(rotate / 180 * np.pi)
b = np.array([crop[2] - crop[0], crop[3] - crop[1]]) / 2
b = np.matmul(A, -b) + b
ida_rot = np.matmul(A, ida_rot)
ida_tran = np.matmul(A, ida_tran) + b
ida_mat = np.eye(3)
ida_mat[:2, :2] = ida_rot
ida_mat[:2, 2] = ida_tran        

# Compute the camera intrinsic and extrinsic matrices
intrinsics = []
extrinsics = []
for cam in camera_types:
    cam_info=info['cams'][cam]
    lidar2cam_r = np.linalg.inv(cam_info['sensor2lidar_rotation'])
    lidar2cam_t = cam_info['sensor2lidar_translation'] @ lidar2cam_r.T
    lidar2cam_rt = np.eye(4)
    lidar2cam_rt[:3, :3] = lidar2cam_r.T
    lidar2cam_rt[3, :3] = -lidar2cam_t
    intrinsic = cam_info['cam_intrinsic']
    viewpad = np.eye(4)
    viewpad[:intrinsic.shape[0], :intrinsic.shape[1]] = intrinsic    
    viewpad[:3, :3] = ida_mat @ viewpad[:3, :3]
    
    intrinsics.append(viewpad)
    extrinsics.append(lidar2cam_rt)
   
# Compute the LiDAR-to-image transforms
lidar2img_pre = [intrinsics[i] @ extrinsics[i].T for i in range(6)]
img2lidar_post = []
for i in range(6):
    img2lidar_post.append(np.linalg.inv(lidar2img_pre[i]))

# Save the img2lidar matrices for later use
np.save("img2lidar.npy", np.asarray(img2lidar_post).astype(np.float32))

# Fetch and save the camera images
for name in camera_types:
    print(f"Fetching camera data: {name}...")
    cam_data = nusc.get('sample_data', sample['data'][name])
    image_path = os.path.join(nusc.dataroot, cam_data['filename'])
    print(image_path)
    shutil.copy(image_path, f"{name}.jpg")

# Save the LiDAR sweep
print(lidar_path)
shutil.copy(lidar_path, "LIDAR_TOP.pcd.bin")

# Load the point cloud
point_cloud = LidarPointCloud.from_file(lidar_path)
points = point_cloud.points.T  # (N, 4): each row is (x, y, z, intensity)

# Collect all annotations for this sample
annotations = []
for annotation_token in sample['anns']:
    annotation = nusc.get('sample_annotation', annotation_token)
    annotations.append(annotation)

# Create an Open3D point cloud object
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points[:, :3])  # xyz only

# Color the point cloud (grayscale, based on intensity)
if points.shape[1] >= 4:
    intensities = points[:, 3]
    intensity_colors = np.zeros((points.shape[0], 3))
    intensity_colors[:, 0] = intensities / np.max(intensities)  # red channel
    intensity_colors[:, 1] = intensities / np.max(intensities)  # green channel
    intensity_colors[:, 2] = intensities / np.max(intensities)  # blue channel
    pcd.colors = o3d.utility.Vector3dVector(intensity_colors)

# LiDAR calibration info
lidar_calibrated_sensor = nusc.get('calibrated_sensor', lidar_data['calibrated_sensor_token'])
lidar_translation = np.array(lidar_calibrated_sensor['translation'])
lidar_rotation = Quaternion(np.array(lidar_calibrated_sensor['rotation']))

# Ego pose info
lidar_ego_pose = nusc.get('ego_pose', lidar_data['ego_pose_token'])
ego_translation = np.array(lidar_ego_pose['translation'])
ego_rotation = Quaternion(np.array(lidar_ego_pose['rotation']))

# Color map for the different categories
category_colors = {
    'human.pedestrian.adult': [1, 0, 0],              # red
    'vehicle.car': [1, 0.5, 0],                       # orange
    'movable_object.trafficcone': [1, 1, 0],          # yellow
    'vehicle.bicycle': [0, 1, 0],                     # green
    'movable_object.barrier': [0, 1, 1],              # cyan
    'vehicle.truck': [0, 0, 1],                       # blue
    'vehicle.bus.rigid': [0.5, 0, 0.5],               # purple
    'vehicle.construction': [1, 0, 1],                # magenta
    'movable_object.pushable_pullable': [0.2, 0.8, 1]
}

# Default color for categories not in the map
default_color = [0.5, 0.5, 0.5]  # gray

# Collect the points that trace the annotation boxes
box_points = []
box_colors = []
box_labels = []  # per-point category label

# Number of points sampled per box edge
points_per_edge = 10

for annotation in annotations:
    # Build the box from the annotation
    rotation = Quaternion(annotation['rotation'])
    box = Box(
        annotation['translation'],
        annotation['size'],
        rotation,
        name=annotation['category_name'],
        token=annotation['token']
    )

    # Transform the box from the global frame to the ego frame
    box.translate(-ego_translation)
    box.rotate(ego_rotation.inverse)

    # Transform the box from the ego frame to the LiDAR frame
    box.translate(-lidar_translation)
    box.rotate(lidar_rotation.inverse)

    # Box corners as an (8, 3) array
    corners = box.corners().T

    # Box edges as pairs of corner indices
    edges = [
        [0, 1], [1, 2], [2, 3], [3, 0],  # bottom face
        [4, 5], [5, 6], [6, 7], [7, 4],  # top face
        [0, 4], [1, 5], [2, 6], [3, 7]   # vertical edges
    ]

    # Category color
    category_name = annotation['category_name']
    color = category_colors.get(category_name, default_color)

    # Sample several points along each edge
    for edge in edges:
        start_point = corners[edge[0]]
        end_point = corners[edge[1]]

        # Sample uniformly along the edge
        for i in range(points_per_edge):
            t = i / (points_per_edge - 1)  # parametric position in [0, 1]
            point = start_point + t * (end_point - start_point)
            box_points.append(point)
            box_colors.append(color)
            box_labels.append(category_name)  # keep the category label

# Add the box points to the point cloud
if box_points:
    # Merge the original cloud with the box points
    all_points = np.vstack([np.asarray(pcd.points), np.array(box_points)])

    # Merge the colors
    if pcd.has_colors():
        original_colors = np.asarray(pcd.colors)
        all_colors = np.vstack([original_colors, np.array(box_colors)])
    else:
        all_colors = np.array(box_colors)

    # Update the point cloud
    pcd.points = o3d.utility.Vector3dVector(all_points)
    pcd.colors = o3d.utility.Vector3dVector(all_colors)

# Save the point cloud (including box points) as a PCD file
output_file = "nuscenes_sample_with_boxes.pcd"
o3d.io.write_point_cloud(output_file, pcd)
print(f"Point cloud with box points saved to: {output_file}")

# Visualize the point cloud together with the box points
o3d.visualization.draw_geometries([pcd], window_name="NuScenes Point Cloud with Bounding Boxes")
EOF

# Run the visualization script
python3 0_vis_box_from_annotation.py

What the visualization shows:

CAM_FRONT: the front-facing camera image, showing the forward view of the scene

3D BOX: the point cloud with the 3D bounding boxes; boxes of different colors mark different object categories
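
To sanity-check the saved img2lidar.npy, one can invert a matrix back to lidar2img and project a LiDAR point into the corresponding resized image (a small sketch; the test point is arbitrary):

import numpy as np

# Load the matrices saved by 0_vis_box_from_annotation.py; index 0 is CAM_FRONT.
img2lidars = np.load("img2lidar.npy")        # shape (6, 4, 4)
lidar2img = np.linalg.inv(img2lidars[0])

p = np.array([10.0, 1.0, 0.0, 1.0])          # an arbitrary LiDAR-frame point, homogeneous
uvd = lidar2img @ p
u, v = uvd[0] / uvd[2], uvd[1] / uvd[2]      # perspective divide by depth
print(f"pixel ({u:.1f}, {v:.1f}), depth {uvd[2]:.1f} m")
# A point with positive depth and 0 <= u < 800, 0 <= v < 320 lands inside
# the resized CAM_FRONT image that the model consumes.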

3.3 PETRv1 Inference and Visualization

The following code shows how to run inference with the PETRv1 model and visualize the results:


# Create the inference script
cat > 1_infer_and_vis.py <<-'EOF'
import cv2
import numpy as np
import paddle
from paddle import inference
from pyquaternion import Quaternion
from nuscenes.utils.data_classes import Box
from nuscenes.utils.data_classes import LidarPointCloud
import open3d as o3d

# Load a Paddle inference predictor from the exported model files
def load_predictor(model_file, params_file, gpu_id=0):
    config = inference.Config(model_file, params_file)
    config.enable_use_gpu(1000, gpu_id)
    config.enable_memory_optim()
    config.disable_glog_info()
    config.switch_use_feed_fetch_ops(False)
    config.switch_ir_optim(True)
    predictor = inference.create_predictor(config)
    return predictor

# Normalize an image with the given mean and std
def imnormalize(img, mean, std, to_rgb=True):
    img = img.copy().astype(np.float32)
    mean = np.float64(mean.reshape(1, -1))
    stdinv = 1 / np.float64(std.reshape(1, -1))
    if to_rgb:
        cv2.cvtColor(img, cv2.COLOR_BGR2RGB, img)
    cv2.subtract(img, mean, img)
    cv2.multiply(img, stdinv, img)
    return img

# Compute resize and crop parameters for the target shape
def get_resize_crop_shape(img_shape, target_shape):
    H, W = img_shape
    fH, fW = target_shape
    resize = max(fH / H, fW / W)
    resize_shape = (int(W * resize), int(H * resize))
    newW, newH = resize_shape
    crop_h = int(newH) - fH
    crop_w = int(max(0, newW - fW) / 2)
    crop_shape = (crop_h, crop_w, crop_h + fH, crop_w + fW)
    return resize_shape, crop_shape

# Load and preprocess the camera images
def get_image(filenames):
    img = np.stack([cv2.imread(name) for name in filenames], axis=-1)
    imgs = [img[..., i] for i in range(img.shape[-1])]
    new_imgs = []
    target_shape = (320, 800)
    for i in range(len(imgs)):
        img_shape = imgs[i].shape[:2]
        resize_shape, crop_shape = get_resize_crop_shape(img_shape, target_shape)
        img = cv2.resize(imgs[i], resize_shape, interpolation=cv2.INTER_LINEAR)
        img = img[crop_shape[0]:crop_shape[2], crop_shape[1]:crop_shape[3], :]
        new_imgs.append(np.array(img).astype(np.float32))
    mean = np.array([103.530, 116.280, 123.675], dtype=np.float32)
    std = np.array([57.375, 57.120, 58.395], dtype=np.float32)
    new_imgs = [imnormalize(img, mean, std, False) for img in new_imgs]
    return np.array(new_imgs).transpose([0, 3, 1, 2])[np.newaxis, ...]

# Run model inference and post-process the results
def run(predictor, img):
    """Run inference and turn the raw outputs into Box objects."""
    input_names = predictor.get_input_names()
    input_tensor0 = predictor.get_input_handle(input_names[0])
    input_tensor1 = predictor.get_input_handle(input_names[1])
    num_cams = 6

    # Load the img2lidar matrices computed in section 3.2
    img2lidars = np.load("img2lidar.npy")
    img2lidars = np.array(img2lidars).reshape([num_cams, 4, 4]).astype('float32')

    # Feed the inputs
    input_tensor0.reshape([1, num_cams, 3, 320, 800])
    input_tensor0.copy_from_cpu(img)

    input_tensor1.reshape([num_cams, 4, 4])
    input_tensor1.copy_from_cpu(img2lidars)

    # Run inference
    predictor.run()

    # Collect the outputs
    outs = []
    output_names = predictor.get_output_names()
    for name in output_names:
        out = predictor.get_output_handle(name)
        out = out.copy_to_cpu()
        out = paddle.to_tensor(out)
        outs.append(out)

    # Each box is (x, y, z, w, l, h, yaw, vx, vy)
    # Parse the outputs: 3D boxes, confidence scores, class labels
    bboxes_3d, scores, labels = outs
    labels = labels.numpy()
    scores = scores.numpy()

    # Compute the gravity center of each box (z is the bottom center)
    def get_gravity_center(bboxes):
        bottom_center = bboxes[:, :3]
        gravity_center = paddle.zeros_like(bottom_center)
        gravity_center[:, :2] = bottom_center[:, :2]
        gravity_center[:, 2] = bottom_center[:, 2] + bboxes[:, 5] * 0.5
        return gravity_center

    # Box parameters
    box_gravity_center = get_gravity_center(bboxes_3d).numpy()
    box_dims = bboxes_3d[:, 3:6].numpy()
    box_yaw = bboxes_3d[:, 6].numpy()
    box_yaw = -box_yaw - np.pi / 2   # adjust the yaw convention

    # Build the list of Box objects
    box_list = []
    for i in range(len(bboxes_3d)):
        if scores[i] < 0.2:  # drop low-confidence detections
            continue

        # Orientation as a quaternion about the z axis
        quat = Quaternion(axis=[0, 0, 1], radians=box_yaw[i])
        velocity = (*bboxes_3d[i, 7:9].numpy(), 0.0)    # velocity vector

        # Build the box object
        box = Box(
            box_gravity_center[i],
            box_dims[i],
            quat,
            label=labels[i],
            score=scores[i],
            velocity=velocity)
        box_list.append(box)

    # Load the point cloud saved by the script in section 3.2
    point_cloud = LidarPointCloud.from_file('LIDAR_TOP.pcd.bin')
    points = point_cloud.points.T

    # Create an Open3D point cloud object
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points[:, :3])

    # Color the point cloud by intensity
    if points.shape[1] >= 4:
        intensities = points[:, 3]
        intensity_colors = np.zeros((points.shape[0], 3))
        intensity_colors[:, 0] = intensities / np.max(intensities)
        intensity_colors[:, 1] = intensities / np.max(intensities)
        intensity_colors[:, 2] = intensities / np.max(intensities)
        pcd.colors = o3d.utility.Vector3dVector(intensity_colors)

    # Color map for the different categories
    category_colors = {
        'human.pedestrian.adult': [1, 0, 0],              # red
        'vehicle.car': [1, 0.5, 0],                       # orange
        'movable_object.trafficcone': [1, 1, 0],          # yellow
        'vehicle.bicycle': [0, 1, 0],                     # green
        'movable_object.barrier': [0, 1, 1],              # cyan
        'vehicle.truck': [0, 0, 1],                       # blue
        'vehicle.bus.rigid': [0.5, 0, 0.5],               # purple
        'vehicle.construction': [1, 0, 1],                # magenta
        'movable_object.pushable_pullable': [0.2, 0.8, 1]
    }
    
    category_names=["vehicle.car",
                    "vehicle.truck",
                    "vehicle.construction",
                    "vehicle.bus.rigid",
                    "vehicle.truck",
                    "movable_object.barrier",
                    "motorcycle",
                    "vehicle.bicycle",
                    "human.pedestrian.adult",
                    "movable_object.trafficcone"]

    default_color = [0.5, 0.5, 0.5]  # gray

    # Collect the points that trace the predicted boxes
    box_points = []
    box_colors = []
    box_labels = []

    # Number of points sampled per box edge
    points_per_edge = 10
    
    for box in box_list:
        corners = box.corners().T

        # Box edges as pairs of corner indices
        edges = [
            [0, 1], [1, 2], [2, 3], [3, 0],  # bottom face
            [4, 5], [5, 6], [6, 7], [7, 4],  # top face
            [0, 4], [1, 5], [2, 6], [3, 7]   # vertical edges
        ]

        # Category color
        category_name = category_names[box.label]
        print(f"Detected category: {category_name}, score: {box.score:.2f}")
        color = category_colors.get(category_name, default_color)

        # Sample several points along each edge
        for edge in edges:
            start_point = corners[edge[0]]
            end_point = corners[edge[1]]
            # Sample uniformly along the edge
            for i in range(points_per_edge):
                t = i / (points_per_edge - 1)
                point = start_point + t * (end_point - start_point)
                box_points.append(point)
                box_colors.append(color)
                box_labels.append(category_name)

    # Add the box points to the point cloud
    if box_points:
        # Merge the original cloud with the box points
        all_points = np.vstack([np.asarray(pcd.points), np.array(box_points)])

        # Merge the colors
        if pcd.has_colors():
            original_colors = np.asarray(pcd.colors)
            all_colors = np.vstack([original_colors, np.array(box_colors)])
        else:
            all_colors = np.array(box_colors)

        # Update the point cloud
        pcd.points = o3d.utility.Vector3dVector(all_points)
        pcd.colors = o3d.utility.Vector3dVector(all_colors)

    # Save the point cloud (including box points) as a PCD file
    output_file = "pred_boxes.pcd"
    o3d.io.write_point_cloud(output_file, pcd)
    print(f"Point cloud with predicted box points saved to: {output_file}")
  
# Main entry point
def main():
    predictor = load_predictor("petrv1/petr_inference.pdmodel",
                               "petrv1/petr_inference.pdiparams")

    # The camera images saved by the script in section 3.2
    camera_channels = [
        'CAM_FRONT.jpg',
        'CAM_FRONT_RIGHT.jpg',
        'CAM_FRONT_LEFT.jpg',
        'CAM_BACK.jpg',
        'CAM_BACK_LEFT.jpg',
        'CAM_BACK_RIGHT.jpg'
    ]
    # Load and preprocess the images
    image = get_image(camera_channels)

    # Run inference
    run(predictor, image)

if __name__ == '__main__':
    main()
EOF

# Set the protobuf implementation and run the inference script
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python
python3 1_infer_and_vis.py
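
The script writes pred_boxes.pcd but does not open a viewer; a short snippet like the following can display it (assuming a machine with a display):

import open3d as o3d

# Display the inference result written by 1_infer_and_vis.py.
pcd = o3d.io.read_point_cloud("pred_boxes.pcd")
o3d.visualization.draw_geometries([pcd], window_name="PETRv1 Predictions")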

What the inference result shows:

3D BOX: the PETRv1 detections visualized as 3D bounding boxes; different colors mark different detected categories

4. Summary

This tutorial walked through 3D object detection with the PETRv1 model, covering environment setup, data preprocessing, model inference, and result visualization. From it you can learn:

- Environment setup: how to set up a PaddlePaddle and Paddle3D development environment
- Data processing: how to work with the NuScenes dataset and compute the transforms between the cameras and the LiDAR
- 3D annotation visualization: how to draw 3D annotation boxes on the point cloud
- Model inference: how to run 3D object detection with the PETRv1 model
- Result visualization: how to visualize the detections together with the original point cloud
