
UAV Obstacle Avoidance, Perception Part: Running VINS-Fusion with a ZED 2 on a Jetson Orin NX under Ubuntu 20.04

Device: Jetson Orin NX

OS: Ubuntu 20.04

Stereo camera: ZED 2

Result demo:

If the rosdep install --from-paths src --ignore-src -r -y step from the official instructions cannot reach the servers, it can be replaced with the fishros (小鱼) rosdepc tool.
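The usual rosdepc workflow (commands assumed from the fishros rosdepc tool, so double-check against its documentation):

# Install and initialise rosdepc, the drop-in replacement for rosdep
sudo pip3 install rosdepc
sudo rosdepc init
rosdepc update

# Then use it in place of rosdep inside the workspace
rosdepc install --from-paths src --ignore-src -r -y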

Install the calibration tools:

1. Calibrate the ZED 2 stereo pair with kalibr (this takes a while, roughly 1 hr).
2. Calibrate the IMU with imu_utils; install and build code_utils first and then imu_utils (both are easy to install, just follow their GitHub instructions).

Install kalibr:

sudo apt update
sudo apt-get install python3-setuptools python3-rosinstall ipython3 libeigen3-dev libboost-all-dev doxygen libopencv-dev ros-noetic-vision-opencv ros-noetic-image-transport-plugins ros-noetic-cmake-modules python3-software-properties software-properties-common libpoco-dev python3-matplotlib python3-scipy python3-git python3-pip libtbb-dev libblas-dev liblapack-dev libv4l-dev python3-catkin-tools python3-igraph libsuitesparse-dev 
# Installation takes a while, expect an hour or more
pip3 install wxPython
sudo pip3 install python-igraph --upgrade

mkdir -p ~/kalibr_ws/src
cd ~/kalibr_ws/src
git clone --recursive https://github.com/ori-drs/kalibr

cd ~/kalibr_ws
source /opt/ros/noetic/setup.bash
catkin init
catkin config --extend /opt/ros/noetic
catkin config --merge-devel # Necessary for catkin_tools >= 0.4.
catkin config --cmake-args -DCMAKE_BUILD_TYPE=Release
# The build takes a while, expect an hour or more
catkin build -DCMAKE_BUILD_TYPE=Release -j4
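Once the build finishes, source the workspace so that the kalibr tools are on the ROS package path (a small sanity check, assuming the workspace layout above):

source ~/kalibr_ws/devel/setup.bash
rospack find kalibr   # should print a path inside ~/kalibr_ws/src/kalibr if the build succeeded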

Record a bag of the stereo images and IMU for the calibration:

# In one terminal, start the ZED 2 node so that its topics are published
cd zed_ws
roslaunch zed_wrapper zed2.launch

# In three more terminals, throttle the IMU and the two image topics to the desired rates
rosrun topic_tools throttle messages /zed2/zed_node/imu/data_raw 200 /zed2/zed_node/imu/data_raw2
rosrun topic_tools throttle messages /zed2/zed_node/left/image_rect_color 20.0 /zed2/zed_node/left/image_rect_color2
rosrun topic_tools throttle messages /zed2/zed_node/right/image_rect_color 20.0 /zed2/zed_node/right/image_rect_color2

# In two more terminals, check the throttled image rates
rostopic hz /zed2/zed_node/left/image_rect_color2
rostopic hz /zed2/zed_node/right/image_rect_color2

# In another terminal, open live viewers to make sure the calibration target always stays fully inside both views
rosrun image_view image_view image:=/zed2/zed_node/left/image_rect_color &
rosrun image_view image_view image:=/zed2/zed_node/right/image_rect_color

# Finally, in one more terminal, record the bag
rosbag record -O Kalib_data_vga.bag /zed2/zed_node/imu/data_raw2 /zed2/zed_node/left/image_rect_color2 /zed2/zed_node/right/image_rect_color2

Stereo calibration commands (relative paths work best):

# With a time window
rosrun kalibr kalibr_calibrate_cameras --bag /home/nvidia/kalibr_ws/Kalib_data_vga.bag --topics /zed2/zed_node/left/image_rect_color2 /zed2/zed_node/right/image_rect_color2 --models pinhole-radtan pinhole-radtan --target /home/nvidia/kalibr_ws/april.yaml --bag-from-to 5 150 --show-extraction --approx-sync 0.04

# Without a time window
rosrun kalibr kalibr_calibrate_cameras --bag /home/nvidia/kalibr_ws/Kalib_data_vga.bag --topics /zed2/zed_node/left/image_rect_color2 /zed2/zed_node/right/image_rect_color2 --models pinhole-radtan pinhole-radtan --target /home/nvidia/kalibr_ws/april.yaml --show-extraction --approx-sync 0.04

# From inside the kalibr_ws folder, using relative paths for the yaml and the bag (preferred)
rosrun kalibr kalibr_calibrate_cameras --bag Kalib_data_vga.bag --topics /zed2/zed_node/left/image_rect_color2 /zed2/zed_node/right/image_rect_color2 --models pinhole-radtan pinhole-radtan --target april.yaml --bag-from-to 5 150 --show-extraction --approx-sync 0.04

The --bag-from-to 5 150 option restricts the time range read from the .bag file; the two numbers are the start and end times in seconds.
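The commands above also assume an AprilGrid description in april.yaml. A minimal sketch in kalibr's aprilgrid format is shown below; the tag counts, size and spacing are placeholders and must match the printed target:

# april.yaml: kalibr AprilGrid target description (placeholder values)
target_type: 'aprilgrid'
tagCols: 6          # number of tags per row
tagRows: 6          # number of tags per column
tagSize: 0.088      # edge length of one tag, in metres
tagSpacing: 0.3     # gap between tags, as a fraction of tagSize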

Errors while installing the dependencies:

Installing wxPython:

It is very slow and can take one to two hours, but it does install in the end. Use the command below and just wait.

pip3 install wxpython

python-igraph import error when running kalibr:

ImportError: /usr/local/lib/python3.8/dist-packages/igraph/../igraph.libs/libgomp-d22c30c5.so.1.0.0: cannot allocate memory in static TLS block

# Check which OpenMP libraries the igraph module links against
ldd /usr/local/lib/python3.8/dist-packages/igraph/_igraph.cpython-*.so | grep gomp

/usr/lib/gcc/aarch64-linux-gnu/10/libgomp.so (0x0000ffff81671000)
    libgomp-d22c30c5.so.1.0.0 => /usr/local/lib/python3.8/dist-packages/igraph/../igraph.libs/libgomp-d22c30c5.so.1.0.0 (0x0000ffff8161d000)

According to the ldd output, the igraph module loads two different libgomp builds:

  1. the system GCC 10 libgomp.so (at /usr/lib/gcc/aarch64-linux-gnu/10/libgomp.so), and
  2. the libgomp-d22c30c5.so.1.0.0 bundled with igraph.

This causes the conflict: both OpenMP runtimes are loaded at the same time, and the static TLS block is not large enough to hold both.

Delete or rename the libgomp bundled with igraph so that it is no longer loaded:

sudo mv /usr/local/lib/python3.8/dist-packages/igraph/../igraph.libs/libgomp-d22c30c5.so.1.0.0 /usr/local/lib/python3.8/dist-packages/igraph/../igraph.libs/libgomp-d22c30c5.so.1.0.0.bak

igraph can then no longer find its bundled copy and falls back to the system library.

Set LD_PRELOAD to the system library:

export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1

Run the kalibr command:

rosrun kalibr kalibr_calibrate_cameras \
--bag /home/nvidia/kalibr_ws/Kalib_data_vga.bag \
--topics /zed2/zed_node/left/image_rect_color2 /zed2/zed_node/right/image_rect_color2 \
--models pinhole-radtan pinhole-radtan \
--target /home/nvidia/kalibr_ws/april.yaml \
--show-extraction \
--approx-sync 0.04

Problems during the calibration run:

If this error appears:

[ERROR] [1749602137.037109]: [TargetViewTable]: Tried to add second view to a given cameraId & timestamp. Maybe try to reduce the approximate syncing tolerance.

This error points to a synchronization problem: at the same timestamp, more than one image frame is being mapped to the same camera ID. The cause is that the approximate-sync tolerance (--approx-sync 0.04) is too loose, so images taken at different times get matched to the same timestamp.

Lower the --approx-sync value, e.g. try 0.02 or 0.01.
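For example, the relative-path command from earlier with a tighter tolerance (everything else unchanged):

rosrun kalibr kalibr_calibrate_cameras --bag Kalib_data_vga.bag --topics /zed2/zed_node/left/image_rect_color2 /zed2/zed_node/right/image_rect_color2 --models pinhole-radtan pinhole-radtan --target april.yaml --bag-from-to 5 150 --show-extraction --approx-sync 0.02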

Another error:

[ERROR] [1749605435.840354]: Did not converge in maxIterations... restarting...
[ WARN] [1749605435.846113]: Optimization diverged possibly due to a bad initialization. (Do the models fit the lenses well?)
[ WARN] [1749605435.851867]: Restarting for a new attempt...

[Note] Never let the calibration target leave the camera's field of view; start and end the motion smoothly, and try to move the target into every corner of the image. Keep the camera viewers open to confirm that the target stays inside both views. A single calibration run takes a long time, so make absolutely sure the recorded bag is valid.

Error when saving the result file:

Processed 1978 images with 98 images used
Camera-system parameters:
  cam0 (/zed2/zed_node/left/image_rect_color2):
    type: <class 'aslam_cv.libaslam_cv_python.DistortedPinholeCameraGeometry'>
    distortion: [-0.02382763  0.01629181  0.00026011  0.00037113] +- [0.00108974 0.00094976 0.00027375 0.00033212]
    projection: [260.71076493 260.87946198 319.83263746 174.02627529] +- [0.78613123 0.77906412 0.43553872 0.33612348]
    reprojection error: [-0.000017, 0.000003] +- [0.181615, 0.159810]
  cam1 (/zed2/zed_node/right/image_rect_color2):
    type: <class 'aslam_cv.libaslam_cv_python.DistortedPinholeCameraGeometry'>
    distortion: [-0.03399342  0.03044787  0.00013933 -0.00001722] +- [0.00134309 0.00149905 0.00027161 0.00032987]
    projection: [260.94581834 261.09354315 321.15981684 173.98065983] +- [0.78872497 0.78202592 0.42413571 0.32633954]
    reprojection error: [0.000015, -0.000001] +- [0.147800, 0.136867]
  baseline T_1_0:
    q: [-0.00003511  0.00109749  0.00009766  0.99999939] +- [0.000853   0.00117251 0.0001395 ]
    t: [-0.11982954  0.00005466 -0.00037704] +- [0.00020272 0.00019293 0.0005048 ]

Traceback (most recent call last):
  File "/home/nvidia/kalibr_ws/src/kalibr/aslam_offline_calibration/kalibr/python/kalibr_common/ConfigReader.py", line 221, in writeYaml
    with open(filename, 'w') as outfile:
FileNotFoundError: [Errno 2] No such file or directory: 'camchain-/home/nvidia/kalibr_ws/Kalib_data_vga02.yaml'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/nvidia/kalibr_ws/devel/lib/kalibr/kalibr_calibrate_cameras", line 15, in <module>
    exec(compile(fh.read(), python_script, 'exec'), context)
  File "/home/nvidia/kalibr_ws/src/kalibr/aslam_offline_calibration/kalibr/python/kalibr_calibrate_cameras", line 447, in <module>
    main()
  File "/home/nvidia/kalibr_ws/src/kalibr/aslam_offline_calibration/kalibr/python/kalibr_calibrate_cameras", line 408, in main
    kcc.saveChainParametersYaml(calibrator, resultFile, graph)
  File "/home/nvidia/kalibr_ws/src/kalibr/aslam_offline_calibration/kalibr/python/kalibr_camera_calibration/CameraCalibrator.py", line 711, in saveChainParametersYaml
    chain.writeYaml()
  File "/home/nvidia/kalibr_ws/src/kalibr/aslam_offline_calibration/kalibr/python/kalibr_common/ConfigReader.py", line 224, in writeYaml
    self.raiseError( "Could not write configuration to {0}".format(self.yamlFile) )
  File "/home/nvidia/kalibr_ws/src/kalibr/aslam_offline_calibration/kalibr/python/kalibr_common/ConfigReader.py", line 234, in raiseError
    raise RuntimeError( "{0}{1}".format(header, message) )
RuntimeError: [CameraChainParameters Reader]: Could not write configuration to camchain-/home/nvidia/kalibr_ws/Kalib_data_vga02.yaml

This is an output-path problem. When saving the result, Kalibr builds the output name by prefixing the bag name with camchain-, and because an absolute bag path was given it tries to write to the non-existent path camchain-/home/nvidia/kalibr_ws/Kalib_data_vga02.yaml. The intended file is /home/nvidia/kalibr_ws/camchain-Kalib_data_vga02.yaml, so run the calibration from inside kalibr_ws with a relative bag path.

Calibration result:

 

Stereo calibration done! The pixel reprojection error is acceptable.

IMU calibration:

## Data collection
source devel/setup.bash
roslaunch zed_wrapper zed2.launch

## Record the IMU topic on its own
rosbag record -O zed-imu-calibrate.bag /zed2/zed_node/imu/data_raw

Run the IMU calibration:

source devel/setup.bash
roslaunch imu_utils zed_imu.launch

# In a new terminal, play the IMU bag back at 200x speed
rosbag play -r 200 /home/nvidia/imu_utils/zed_imu/zed-imu-calibrate.bag

Edit an imu.yaml by hand and fill the calibration results into it.
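A minimal sketch in kalibr's IMU yaml format, using the noise values that are reused later in this post's VINS config (replace them with your own imu_utils output):

# imu.yaml (zed_imu.yaml): kalibr IMU configuration
rostopic: /zed2/zed_node/imu/data_raw                  # raw IMU topic
update_rate: 200.0                                     # IMU rate in Hz
accelerometer_noise_density: 1.4402862002020933e-02   # acc_n from imu_utils
accelerometer_random_walk: 5.3890784193863061e-04     # acc_w from imu_utils
gyroscope_noise_density: 1.3752563738546138e-03       # gyr_n from imu_utils
gyroscope_random_walk: 4.5861836272840561e-05         # gyr_w from imu_utils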

Joint IMU + stereo calibration:

Prepare a stereo bag like the one recorded for the stereo calibration, together with the camera calibration yaml, the IMU calibration yaml, and the AprilGrid yaml:

Stereo bag: zed_data.bag

Camera calibration yaml: zed_cam.yaml

IMU calibration yaml: zed_imu.yaml

Calibration target yaml: april.yaml

Calibration steps:

cd kalibr_ws
source devel/setup.bash

# IMU + stereo
rosrun kalibr kalibr_calibrate_imu_camera --bag zed_data.bag --target april.yaml --cam zed_cam.yaml --imu zed_imu.yaml

It takes quite a while to load, so be patient!

Calibration result:

The error values for both cameras are around 0.2, slightly on the high side.

Build the yaml files for the ZED camera inside the VINS-Fusion config directory:

Create a new zed folder containing cam0.yaml, cam1.yaml and zed2_stereo_config.yaml.

cam0.yaml:

%YAML:1.0
---
model_type: PINHOLE
camera_name: camera
image_width: 640
image_height: 360
distortion_parameters:
   k1: 0
   k2: 0
   p1: 0
   p2: 0
projection_parameters:
   fx: 259.4430975003653
   fy: 259.67208178809966
   cx: 319.9435416269173
   cy: 173.83960536571925

cam1.yaml:

%YAML:1.0
---
model_type: PINHOLE
camera_name: camera
image_width: 640
image_height: 360
distortion_parameters:
   k1: 0
   k2: 0
   p1: 0
   p2: 0
projection_parameters:
   fx: 259.6992468904268
   fy: 259.9044534627534
   cx: 321.37630119603097
   cy: 173.84454900322726

The values in projection_parameters come from the intrinsics field of the camchain yaml produced by the stereo calibration.
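For reference, the relevant part of the kalibr camchain output looks roughly like this (kalibr's field names; the intrinsics shown are the ones copied into cam0.yaml above, and distortion_coeffs is omitted since the rectified images use zero distortion here):

# camchain-Kalib_data_vga.yaml (excerpt of the cam0 entry)
cam0:
  camera_model: pinhole
  intrinsics: [259.4430975003653, 259.67208178809966, 319.9435416269173, 173.83960536571925]  # fx, fy, cx, cy
  distortion_model: radtan
  resolution: [640, 360]
  rostopic: /zed2/zed_node/left/image_rect_color2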

 zed2_stereo_config.yaml:

%YAML:1.0

#common parameters
#support: 1 imu 1 cam; 1 imu 2 cam; 2 cam
imu: 1
num_of_cam: 2

# live camera
imu_topic: "/zed2/zed_node/imu/data_raw"
image0_topic: "/zed2/zed_node/left/image_rect_gray"
image1_topic: "/zed2/zed_node/right/image_rect_gray"

# recorded bag
#imu_topic: "/zed2/zed_node/imu/data_raw2"
#image0_topic: "/zed2/zed_node/left/image_rect_color2"
#image1_topic: "/zed2/zed_node/right/image_rect_color2"

output_path: "~"

cam0_calib: "cam0.yaml"
cam1_calib: "cam1.yaml"
image_width: 640
image_height: 360

# Extrinsic parameter between IMU and Camera.
estimate_extrinsic: 0   # 0  Have an accurate extrinsic parameters. We will trust the following imu^R_cam, imu^T_cam, don't change it.
                        # 1  Have an initial guess about extrinsic parameters. We will optimize around your initial guess.

body_T_cam0: !!opencv-matrix
   rows: 4
   cols: 4
   dt: d
   data: [0.00621782, 0.00255719, 0.9999774, 0.02442757,
          -0.99997099, -0.00438481, 0.00622899, 0.02442823,
          0.00440064, -0.99998712, 0.00252985, 0.00964505,
          0, 0, 0, 1]

body_T_cam1: !!opencv-matrix
   rows: 4
   cols: 4
   dt: d
   data: [0.00376341, 0.00237248, 0.9999901, 0.02559884,
          -0.99998414, -0.00418019, 0.00377331, -0.09545715,
          0.0041891, -0.99998845, 0.00235671, 0.01015661,
          0, 0, 0, 1]

#Multiple thread support
multiple_thread: 0

#feature tracker parameters
max_cnt: 150            # max feature number in feature tracking
min_dist: 30            # min distance between two features
freq: 10                # frequency (Hz) of publish tracking result. At least 10Hz for good estimation. If set 0, the frequency will be same as raw image
F_threshold: 1.0        # ransac threshold (pixel)
show_track: 1           # publish tracking image as topic
flow_back: 1            # perform forward and backward optical flow to improve feature tracking accuracy

#optimization parameters
max_solver_time: 0.04   # max solver iteration time (ms), to guarantee real time
max_num_iterations: 8   # max solver iterations, to guarantee real time
keyframe_parallax: 10.0 # keyframe selection threshold (pixel)

#imu parameters       The more accurate parameters you provide, the better performance
acc_n: 1.4402862002020933e-02   # accelerometer measurement noise standard deviation.
gyr_n: 1.3752563738546138e-03   # gyroscope measurement noise standard deviation.
acc_w: 5.3890784193863061e-04   # accelerometer bias random walk noise standard deviation.
gyr_w: 4.5861836272840561e-05   # gyroscope bias random walk noise standard deviation.
g_norm: 9.81007                 # gravity magnitude
# acc_n: 0.1          # accelerometer measurement noise standard deviation.
# gyr_n: 0.01         # gyroscope measurement noise standard deviation.
# acc_w: 0.001        # accelerometer bias random walk noise standard deviation.
# gyr_w: 0.0001       # gyroscope bias random walk noise standard deviation.
# g_norm: 9.81007     # gravity magnitude

#unsynchronization parameters
estimate_td: 0                      # online estimate time offset between camera and imu
td: 0.0                             # initial value of time offset. unit: s. read image clock + td = real image clock (IMU clock)

#loop closure parameters
load_previous_pose_graph: 0         # load and reuse previous pose graph; load from 'pose_graph_save_path'
pose_graph_save_path: "~/output/pose_graph/" # save and load path
save_image: 1                       # save image in pose graph for visualization purpose; you can close this function by setting 0

The two camera matrices come from the cam-to-IMU transforms obtained by the joint IMU and stereo calibration:

body_T_cam0: !!opencv-matrix
   rows: 4
   cols: 4
   dt: d
   data: [0.00621782, 0.00255719, 0.9999774, 0.02442757,
          -0.99997099, -0.00438481, 0.00622899, 0.02442823,
          0.00440064, -0.99998712, 0.00252985, 0.00964505,
          0, 0, 0, 1]

body_T_cam1: !!opencv-matrix
   rows: 4
   cols: 4
   dt: d
   data: [0.00376341, 0.00237248, 0.9999901, 0.02559884,
          -0.99998414, -0.00418019, 0.00377331, -0.09545715,
          0.0041891, -0.99998845, 0.00235671, 0.01015661,
          0, 0, 0, 1]

Note: what is needed is the cam-to-IMU matrix, i.e. the second of the two matrices.

Note: the extrinsic that kalibr reports in its camchain output is the imu0-to-cam0 transform, while the body_T_cam0 and body_T_cam1 entries that VINS-Fusion expects are cam-to-IMU, so that matrix has to be inverted. Alternatively, look at the generated result-imucam.txt file: it lists each transform together with its direction (for example "(imu0 to cam0)" is IMU to camera, and so on), so the cam-to-IMU matrices can be copied from there directly and the inversion step below can be skipped.
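If the inverse is needed, a minimal numpy sketch (with placeholder values rather than the actual calibration result) looks like this:

# invert_extrinsic.py: invert a 4x4 homogeneous transform from kalibr so it can
# be pasted into body_T_cam0 / body_T_cam1 of the VINS-Fusion config
import numpy as np

# transform reported by kalibr in the opposite direction to what VINS-Fusion wants
# (placeholder values; paste your own camchain-imucam matrix here)
T = np.array([[0.0, -1.0, 0.0, 0.01],
              [1.0,  0.0, 0.0, 0.02],
              [0.0,  0.0, 1.0, 0.03],
              [0.0,  0.0, 0.0, 1.00]])

T_inv = np.linalg.inv(T)   # for a rigid transform this equals [[R^T, -R^T t], [0, 1]]

np.set_printoptions(suppress=True, precision=8)
print(T_inv)               # copy these 16 numbers into the data: field of the config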

Then run VINS-Fusion:

Using a hand-written bash script:

#!/bin/bash
# run.sh

# Start RViz
gnome-terminal -- bash -c "source devel/setup.bash && roslaunch vins vins_rviz.launch"

# Start the VINS-Fusion node
sleep 5
gnome-terminal -- bash -c "source devel/setup.bash && rosrun vins vins_node src/VINS-Fusion/config/zed/zed2_stereo_config.yaml"

# Loop closure
sleep 5
gnome-terminal -- bash -c "source devel/setup.bash && rosrun loop_fusion loop_fusion_node src/VINS-Fusion/config/zed/zed2_stereo_config.yaml"

# Live camera
sleep 5
gnome-terminal -- bash -c "source devel/setup.bash && source /home/nvidia/zed_ws/devel/setup.bash && roslaunch zed_wrapper zed2.launch"

# Play a rosbag instead of the live camera (optional)
# sleep 5
# gnome-terminal -- bash -c "source devel/setup.bash && rosbag play /home/nvidia/data_set/MH_01_easy.bag"

# Keep the terminal open until you manually close it
echo "Press Enter to close the terminals"
read
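To use it, save run.sh in the root of the VINS-Fusion catkin workspace (the directory containing src/ and devel/), make it executable, and run it; the workspace path below is only an example:

cd ~/vins_ws        # hypothetical VINS-Fusion workspace root
chmod +x run.sh
./run.sh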

With that, you get the result shown below!

Reference blogs:

ZED2 camera calibration: stereo, IMU, and joint calibration (CSDN blog)

ZED stereo camera calibration and running VINS-Fusion (CSDN blog)

ZED2 camera and IMU joint calibration, and running VINS-Mono (CSDN blog)
