
Implementing Multi-Robot Pose Graph Optimization (MR-PGO) in GTSAM: A Worked Example

To implement multi-robot pose graph optimization (MR-PGO) in GTSAM, the core idea breaks down into the following parts:

  • Each robot has its own pose chain (odometry edges).
  • BetweenFactor<Pose3> connects the relative observations between different robots (inter-robot loop closures).
  • Add only one global anchor (prior) to remove the gauge freedom.
  • Use LM/Dogleg for a one-shot batch optimization, or iSAM2 for incremental optimization.
  • Use robust kernels on inter-robot edges to guard against outliers.

Below we start with the simple case of two robots (A and B; the approach extends to N robots).


1. Key Design Choices

  1. Variable naming and key space
    Use Symbol('a', i), Symbol('b', j), … to distinguish robots and avoid Key collisions:
using gtsam::Symbol;
Symbol A(size_t k) { return Symbol('a', k); }
Symbol B(size_t k) { return Symbol('b', k); }
  2. Factor types
  • Odometry: BetweenFactor<Pose3>(X_k, X_{k+1}, T_k_k1, noise)
  • Inter-robot loop closure: BetweenFactor<Pose3>(A_i, B_j, T_aibj, noise)
  • Anchor: PriorFactor<Pose3>(A(0), Pose3::Identity(), small_noise)

Anchoring a single node is enough (commonly A(0)); giving strong priors to several nodes over-constrains the graph. Related discussion can be found in other articles in this series.

  3. Robust kernels
    Inter-robot edges are more prone to mismatches, so the following is recommended:
auto base = noiseModel::Diagonal::Sigmas(sigmas6); // 6x1
auto huber = noiseModel::Robust::Create(noiseModel::mEstimator::Huber::Create(c), base);
  4. Optimizer
  • Batch: LevenbergMarquardtOptimizer
  • Incremental: ISAM2 (online fusion, real-time)

2. Runnable Example (Two Robots + Inter-Robot Loop Closure)

#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/LevenbergMarquardtOptimizer.h>
#include <gtsam/nonlinear/ISAM2.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/slam/PriorFactor.h>
#include <gtsam/geometry/Pose3.h>
#include <gtsam/geometry/Rot3.h>
#include <gtsam/linear/NoiseModel.h>
#include <gtsam/inference/Symbol.h>
#include <iostream>

using namespace gtsam;

inline Symbol A(size_t k){ return Symbol('a', k); }
inline Symbol B(size_t k){ return Symbol('b', k); }

int main() {
  NonlinearFactorGraph graph;

  // Noise models
  Vector6 odoSigmas; odoSigmas << 0.05,0.05,0.05, 0.02,0.02,0.02;   // [rx,ry,rz, tx,ty,tz]
  auto odoNoise = noiseModel::Diagonal::Sigmas(odoSigmas);
  Vector6 loopSigmas; loopSigmas << 0.1,0.1,0.1, 0.05,0.05,0.05;
  auto loopBase  = noiseModel::Diagonal::Sigmas(loopSigmas);
  auto loopNoise = noiseModel::Robust::Create(noiseModel::mEstimator::Huber::Create(1.0), loopBase);
  auto priorNoise = noiseModel::Diagonal::Sigmas(
      (Vector6() << 1e-6,1e-6,1e-6, 1e-6,1e-6,1e-6).finished());

  // 1) Anchor: prior on A(0) only
  graph.add(PriorFactor<Pose3>(A(0), Pose3(), priorNoise));

  // 2) Robot A odometry
  graph.add(BetweenFactor<Pose3>(A(0), A(1), Pose3(Rot3::RzRyRx(0,0,0.02), Point3(1.0, 0.0, 0.0)), odoNoise));
  graph.add(BetweenFactor<Pose3>(A(1), A(2), Pose3(Rot3::RzRyRx(0,0,0.02), Point3(1.0, 0.0, 0.0)), odoNoise));

  // 3) Robot B odometry
  graph.add(BetweenFactor<Pose3>(B(0), B(1), Pose3(Rot3::RzRyRx(0,0,-0.01), Point3(1.0, 0.0, 0.0)), odoNoise));
  graph.add(BetweenFactor<Pose3>(B(1), B(2), Pose3(Rot3::RzRyRx(0,0,-0.01), Point3(1.0, 0.0, 0.0)), odoNoise));

  // 4) Inter-robot loop closure (e.g. A(2) observes B(1))
  Pose3 A2_to_B1(Rot3::RzRyRx(0.0, 0.0, 0.1), Point3(0.05, -0.10, 0.0));
  graph.add(BetweenFactor<Pose3>(A(2), B(1), A2_to_B1, loopNoise));

  // Initial values (e.g. from each robot's integrated odometry)
  Values init;
  // Robot A initial values (slightly perturbed)
  init.insert(A(0), Pose3(Rot3::Ypr(0,0,0),    Point3(0,0,0)));
  init.insert(A(1), Pose3(Rot3::Ypr(0,0,0.02), Point3(1.0, 0.0, 0)));
  init.insert(A(2), Pose3(Rot3::Ypr(0,0,0.04), Point3(2.0, 0.0, 0)));
  // Robot B initial values (relative pose to A unknown; rough guess)
  init.insert(B(0), Pose3(Rot3::Ypr(0,0,0.1),  Point3(0.3,  -0.2, 0)));
  init.insert(B(1), Pose3(Rot3::Ypr(0,0,0.09), Point3(1.25, -0.18, 0)));
  init.insert(B(2), Pose3(Rot3::Ypr(0,0,0.08), Point3(2.25, -0.16, 0)));

  // Batch optimization
  LevenbergMarquardtParams params;
  params.setVerbosity("ERROR");
  LevenbergMarquardtOptimizer lm(graph, init, params);
  Values result = lm.optimize();

  std::cout << "=== Optimized Poses ===\n";
  result.at<Pose3>(A(0)).print("A0: ");
  result.at<Pose3>(A(1)).print("A1: ");
  result.at<Pose3>(A(2)).print("A2: ");
  result.at<Pose3>(B(0)).print("B0: ");
  result.at<Pose3>(B(1)).print("B1: ");
  result.at<Pose3>(B(2)).print("B2: ");

  // (Optional) incremental mode: iSAM2 online fusion
  // ISAM2 isam; isam.update(graph, init); Values inc = isam.calculateEstimate();
  return 0;
}

3. Engineering Notes (Pitfall Checklist)

  • Anchor only one global pose
    The usual practice is to anchor A(0). If B(0) also receives a strong prior, you create an inconsistency (the global frame is defined twice). If multiple anchors are truly required (e.g. GPS), inflate their noise appropriately or use soft constraints (a sketch follows below).
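    A minimal sketch of such a soft prior on B(0), reusing the graph, keys, and noise-model setup from the example in Section 2; the sigma values below are illustrative placeholders, not tuned numbers:

// Soft prior on B(0): deliberately large sigmas act as a weak hint rather than a second anchor.
// Sigma values are illustrative placeholders.
auto softPriorNoise = noiseModel::Diagonal::Sigmas(
    (Vector6() << 0.5, 0.5, 0.5, 2.0, 2.0, 2.0).finished()); // [rx,ry,rz, tx,ty,tz]
graph.add(PriorFactor<Pose3>(B(0), Pose3(Rot3::Ypr(0,0,0.1), Point3(0.3, -0.2, 0)), softPriorNoise));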

  • Use a robust kernel on inter-robot edges
    Inter-robot matches (loop closures) are more prone to mismatches. Huber, Cauchy, or Geman-McClure all work; Huber is the usual first choice (a small sketch follows).
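    For reference, all three kernels are wrapped through the same noiseModel::Robust interface; a small sketch reusing loopSigmas from Section 2 (the tuning constants are illustrative, not recommendations):

// Same base noise model, different m-estimators
auto base   = noiseModel::Diagonal::Sigmas(loopSigmas);
auto huber  = noiseModel::Robust::Create(noiseModel::mEstimator::Huber::Create(1.345), base);
auto cauchy = noiseModel::Robust::Create(noiseModel::mEstimator::Cauchy::Create(0.1), base);
auto gm     = noiseModel::Robust::Create(noiseModel::mEstimator::GemanMcClure::Create(1.0), base);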

  • Initial values matter
    For SE(3) optimization, a poor initial guess easily converges to a local minimum. Two strategies:

    • Coarsely pre-align using external odometry/VO/NDT;
    • Use RANSAC + ICP/TEASER++ for inter-robot initial alignment, then run the graph optimization.
  • Multi-sensor extrinsics
    Each robot usually carries several sensors (LiDAR/IMU/camera). The extrinsics can be treated as constants in the measurement prediction, or added to the graph as variables (with prior constraints) for joint calibration.

  • Time and asynchrony
    Inter-robot observations need careful timestamp alignment (or the latency must be modeled explicitly in the measurement model).

  • Scale and gravity (visual systems)
    Pure monocular vision suffers from scale ambiguity; in multi-robot fusion, the scale must be recovered from other sensors (IMU/odometry/GPS) or from inter-robot constraints.

  • Online incremental (iSAM2) tips (see the sketch after this list)

    • Whenever new factors and new nodes arrive, call isam.update(newGraph, newValues);
    • Periodically call result = isam.calculateEstimate();
    • To defer outlier rejection, first filter with a robust kernel plus a statistical threshold (such as the Mahalanobis anomaly check in Section 5) before adding edges to the main graph.
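    A minimal sketch of that per-step loop; numSteps, getNewFactors and getNewInitialValues are hypothetical front-end hooks, not GTSAM APIs:

ISAM2Params isamParams;
isamParams.relinearizeThreshold = 0.01;
isamParams.relinearizeSkip = 1;
ISAM2 isam(isamParams);

for (size_t step = 0; step < numSteps; ++step) {
  // New odometry factors and validated closures for this step (hypothetical front-end hook)
  NonlinearFactorGraph newFactors = getNewFactors(step);
  // Initial guesses only for keys that appear for the first time in this step (hypothetical hook)
  Values newValues = getNewInitialValues(step);
  isam.update(newFactors, newValues);
  if (step % 10 == 0) {
    Values current = isam.calculateEstimate(); // periodic full estimate, e.g. for publishing
  }
}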

4. Extending to N Robots

  • Key space: Symbol('a', i), Symbol('b', j), Symbol('c', k), …
  • Wrap a small utility function addRobotOdometry(graph, robot_id, seq, relPose, noise);
  • Inter-robot closures: a unified interface addInterRobotClosure(graph, rid1, idx1, rid2, idx2, T, noise) (both helpers are sketched after this list);
  • A single anchoring policy: anchor only the first frame of one robot (or use a global reference/GPS as a weak prior).
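A minimal sketch of the two helpers named above, using char robot IDs to match the Symbol convention; the signatures are a suggestion, not a fixed API:

#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/geometry/Pose3.h>
#include <gtsam/linear/NoiseModel.h>
#include <gtsam/inference/Symbol.h>

using namespace gtsam;

inline Symbol robotKey(char robot_id, size_t idx) { return Symbol(robot_id, idx); }

// Odometry edge between consecutive poses seq and seq+1 of a single robot
void addRobotOdometry(NonlinearFactorGraph& graph, char robot_id, size_t seq,
                      const Pose3& relPose, const SharedNoiseModel& noise) {
  graph.add(BetweenFactor<Pose3>(robotKey(robot_id, seq), robotKey(robot_id, seq + 1), relPose, noise));
}

// Loop closure between pose idx1 of robot rid1 and pose idx2 of robot rid2
void addInterRobotClosure(NonlinearFactorGraph& graph, char rid1, size_t idx1,
                          char rid2, size_t idx2,
                          const Pose3& T, const SharedNoiseModel& noise) {
  graph.add(BetweenFactor<Pose3>(robotKey(rid1, idx1), robotKey(rid2, idx2), T, noise));
}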

5. Coupling with Residual Anomaly Detection

The Mahalanobis residual check described earlier in this series can be applied directly to MR-PGO:

  • After each optimization round, iterate over the factors and flag inter-robot edges with 2*error(values) > chi2_{df,1-α} (see the sketch below);
  • Mark them as suspect loop closures and hold them back, down-weight them, or re-verify them (re-register with ICP, check descriptor consistency, etc.).
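The core of that check, reusing graph and the optimized result from the example in Section 2; the full implementation appears in outlier_check.cpp in Section 6:

#include <boost/math/distributions/chi_squared.hpp>

double alpha = 0.05;
for (size_t i = 0; i < graph.size(); ++i) {
  const auto& f = graph[i];
  if (!f || f->dim() == 0) continue;
  double chi2 = 2.0 * f->error(result);                   // error() is 0.5 * squared Mahalanobis distance
  boost::math::chi_squared dist(static_cast<double>(f->dim()));
  double thr = boost::math::quantile(dist, 1.0 - alpha);  // chi2_{df, 1-alpha}
  if (chi2 > thr) {
    // Suspect edge: hold back, down-weight, or re-verify (e.g. re-run ICP)
  }
}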

6. Enhanced Complete Code Example

Below is a template project that can be built directly with CMake. It contains:

  • Batch optimization (Levenberg-Marquardt) and online incremental (iSAM2) examples
  • Switchable robust kernels for inter-robot edges (Huber/Cauchy)
  • Factor-level Mahalanobis residual anomaly detection with CSV export
  • A PCL / TEASER++ loop-closure example (placeholder integration of point-cloud registration for loop-closure verification)

Note: the project aims for minimal dependencies and a clear structure, providing compilable core code plus placeholder hooks (TEASER++ must be installed separately).


Directory layout

gtsam_mr_pgo_template/
├─ CMakeLists.txt
├─ README.md
├─ include/
│  ├─ mr_pgo/mr_pgo.h
│  ├─ mr_pgo/outlier_check.h
│  └─ mr_pgo/csv_util.h
├─ src/
│  ├─ main.cpp
│  ├─ mr_pgo/mr_pgo.cpp
│  ├─ mr_pgo/outlier_check.cpp
│  └─ mr_pgo/csv_util.cpp
└─ examples/
   └─ run_example.sh

CMakeLists.txt

cmake_minimum_required(VERSION 3.10)
project(gtsam_mr_pgo_template LANGUAGES CXX)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_EXTENSIONS OFF)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

# Options
option(USE_TEASER "Enable TEASER++ integration (must be installed)" OFF)
option(USE_PCL "Enable PCL integration (must be installed)" OFF)

find_package(Eigen3 REQUIRED)
find_package(Boost REQUIRED COMPONENTS program_options)
find_package(GTSAM REQUIRED) # expects GTSAM config (gtsam-config.cmake)

if(USE_PCL)
  find_package(PCL REQUIRED)
endif()

if(USE_TEASER)
  find_package(teaserpp REQUIRED) # depends on TEASER++ providing a config package
endif()

include_directories(${GTSAM_INCLUDE_DIRS} ${EIGEN3_INCLUDE_DIRS} ${Boost_INCLUDE_DIRS})
add_compile_definitions($<$<CONFIG:Debug>:DEBUG>)

add_executable(mr_pgo
  src/main.cpp
  src/mr_pgo/mr_pgo.cpp
  src/mr_pgo/outlier_check.cpp
  src/mr_pgo/csv_util.cpp
)

target_include_directories(mr_pgo PRIVATE include)

# Link libraries
target_link_libraries(mr_pgo PRIVATE ${GTSAM_LIBRARIES} Eigen3::Eigen Boost::program_options)
if(USE_PCL)
  target_link_libraries(mr_pgo PRIVATE ${PCL_LIBRARIES})
endif()
if(USE_TEASER)
  target_link_libraries(mr_pgo PRIVATE teaserpp::teaser) # adjust according to TEASER++ export
endif()

# Install
install(TARGETS mr_pgo RUNTIME DESTINATION bin)

include/mr_pgo/mr_pgo.h

#pragma once

#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/nonlinear/ISAM2.h>
#include <gtsam/geometry/Pose3.h>
#include <gtsam/inference/Symbol.h>
#include <vector>

namespace mr_pgo {

using gtsam::Symbol;
using gtsam::Pose3;
using gtsam::NonlinearFactorGraph;
using gtsam::Values;

struct MRPGOOptions {
  double huber_k = 1.0;        // Huber parameter for robust kernel
  bool use_robust = true;
  bool use_isam2 = false;
  double prior_small = 1e-6;
  double outlier_alpha = 0.05; // significance level for chi2
};

// Build a demo multi-robot graph (two robots) - helper to construct graph + initial values
void buildDemoGraph(NonlinearFactorGraph& graph, Values& init);

// Run batch optimization (LM) and return optimized values
Values runBatchOptimize(const NonlinearFactorGraph& graph, const Values& init);

// Run incremental optimization using iSAM2 (applies all factors incrementally)
Values runISAM2(const NonlinearFactorGraph& graph, const Values& init);

} // namespace mr_pgo

include/mr_pgo/outlier_check.h

#pragma once

#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/Values.h>
#include <string>
#include <vector>

namespace mr_pgo {

struct FactorResidualReport {
  size_t index;
  std::string type_name;
  std::vector<gtsam::Key> keys;
  size_t dof;
  double chi2;
  double mahalanobis;
  double chi2_threshold;
  bool is_outlier;
};

std::vector<FactorResidualReport> analyzeFactorGraphMahalanobis(
    const gtsam::NonlinearFactorGraph& graph,
    const gtsam::Values& values,
    double alpha = 0.05);

// Export reports to CSV (path)
void exportReportsCSV(const std::string& path,
                      const std::vector<FactorResidualReport>& reports);

} // namespace mr_pgo

include/mr_pgo/csv_util.h

#pragma once
#include <string>
#include <vector>

namespace mr_pgo {

bool writeCSV(const std::string& path, const std::vector<std::vector<std::string>>& rows);

} // namespace mr_pgo

src/mr_pgo/mr_pgo.cpp

#include "mr_pgo/mr_pgo.h"
#include <gtsam/slam/PriorFactor.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/linear/NoiseModel.h>
#include <gtsam/nonlinear/LevenbergMarquardtOptimizer.h>
#include <gtsam/nonlinear/ISAM2.h>
#include <gtsam/nonlinear/ISAM2Params.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/geometry/Pose3.h>
#include <gtsam/geometry/Rot3.h>
#include <gtsam/inference/Symbol.h>

using namespace gtsam;

namespace mr_pgo {

inline Symbol A(size_t k) { return Symbol('a', k); }
inline Symbol B(size_t k) { return Symbol('b', k); }

void buildDemoGraph(NonlinearFactorGraph& graph, Values& init) {
  Vector6 odoSigmas; odoSigmas << 0.05,0.05,0.05, 0.02,0.02,0.02;
  auto odoNoise = noiseModel::Diagonal::Sigmas(odoSigmas);
  Vector6 loopSigmas; loopSigmas << 0.1,0.1,0.1, 0.05,0.05,0.05;
  auto loopNoise = noiseModel::Diagonal::Sigmas(loopSigmas);
  auto priorNoise = noiseModel::Diagonal::Sigmas(
      (Vector6() << 1e-6,1e-6,1e-6,1e-6,1e-6,1e-6).finished());

  // Prior on A0
  graph.add(PriorFactor<Pose3>(A(0), Pose3(), priorNoise));

  // A odometry
  graph.add(BetweenFactor<Pose3>(A(0), A(1), Pose3(Rot3::RzRyRx(0,0,0.02), Point3(1.0, 0.0, 0.0)), odoNoise));
  graph.add(BetweenFactor<Pose3>(A(1), A(2), Pose3(Rot3::RzRyRx(0,0,0.02), Point3(1.0, 0.0, 0.0)), odoNoise));

  // B odometry
  graph.add(BetweenFactor<Pose3>(B(0), B(1), Pose3(Rot3::RzRyRx(0,0,-0.01), Point3(1.0, 0.0, 0.0)), odoNoise));
  graph.add(BetweenFactor<Pose3>(B(1), B(2), Pose3(Rot3::RzRyRx(0,0,-0.01), Point3(1.0, 0.0, 0.0)), odoNoise));

  // Cross-robot closure (A2 -> B1)
  Pose3 A2_to_B1(Rot3::RzRyRx(0.0, 0.0, 0.1), Point3(0.05, -0.10, 0.0));
  graph.add(BetweenFactor<Pose3>(A(2), B(1), A2_to_B1, loopNoise));

  // Initial values
  init.insert(A(0), Pose3(Rot3::Ypr(0,0,0), Point3(0,0,0)));
  init.insert(A(1), Pose3(Rot3::Ypr(0,0,0.02), Point3(1.0, 0.0, 0)));
  init.insert(A(2), Pose3(Rot3::Ypr(0,0,0.04), Point3(2.0, 0.0, 0)));
  init.insert(B(0), Pose3(Rot3::Ypr(0,0,0.1), Point3(0.3, -0.2, 0)));
  init.insert(B(1), Pose3(Rot3::Ypr(0,0,0.09), Point3(1.25, -0.18, 0)));
  init.insert(B(2), Pose3(Rot3::Ypr(0,0,0.08), Point3(2.25, -0.16, 0)));
}

Values runBatchOptimize(const NonlinearFactorGraph& graph, const Values& init) {
  LevenbergMarquardtParams params;
  params.setVerbosity("ERROR");
  LevenbergMarquardtOptimizer opt(graph, init, params);
  return opt.optimize();
}

Values runISAM2(const NonlinearFactorGraph& graph, const Values& init) {
  ISAM2Params params;
  params.relinearizeThreshold = 0.01;
  params.relinearizeSkip = 1;
  ISAM2 isam(params);
  // In a real online system, factors would be pushed incrementally in small batches,
  // together with initial estimates for any new keys. For demonstration we add
  // all factors and initial values in a single update.
  isam.update(graph, init);
  return isam.calculateEstimate();
}

} // namespace mr_pgo

src/mr_pgo/outlier_check.cpp

#include "mr_pgo/outlier_check.h"
#include "mr_pgo/csv_util.h"
#include <boost/math/distributions/chi_squared.hpp>
#include <cxxabi.h>
#include <memory>
#include <iostream>
#include <cmath>
#include <algorithm>
#include <typeinfo>

namespace mr_pgo {

#if defined(__GNUG__)
inline std::string demangle(const char* name) {
  int status = 0;
  std::unique_ptr<char, void(*)(void*)> res{
      abi::__cxa_demangle(name, nullptr, nullptr, &status), std::free};
  return (status == 0 && res) ? std::string(res.get()) : std::string(name);
}
#else
inline std::string demangle(const char* name) { return std::string(name); }
#endif

std::vector<FactorResidualReport> analyzeFactorGraphMahalanobis(
    const gtsam::NonlinearFactorGraph& graph,
    const gtsam::Values& values,
    double alpha) {
  std::vector<FactorResidualReport> reports;
  for (size_t i = 0; i < graph.size(); ++i) {
    auto f = graph[i];
    if (!f) continue;
    if (!f->active(values)) continue;
    size_t dof = f->dim();
    if (dof == 0) continue;

    double e = f->error(values);      // 0.5 * squared Mahalanobis distance
    double chi2 = 2.0 * e;
    double maha = std::sqrt(std::max(chi2, 0.0));

    boost::math::chi_squared chi_dist(static_cast<double>(dof));
    double thr = boost::math::quantile(chi_dist, 1.0 - alpha);

    std::string tname = demangle(typeid(*f).name());
    std::vector<gtsam::Key> keys(f->keys().begin(), f->keys().end());
    reports.push_back({i, tname, keys, dof, chi2, maha, thr, chi2 > thr});
  }
  return reports;
}

void exportReportsCSV(const std::string& path,
                      const std::vector<FactorResidualReport>& reports) {
  std::vector<std::vector<std::string>> rows;
  rows.push_back({"index","type","dof","chi2","mahalanobis","threshold","is_outlier","keys"});
  for (const auto& r : reports) {
    std::string keystr;
    for (size_t i = 0; i < r.keys.size(); ++i) {
      keystr += std::to_string(r.keys[i]);
      if (i + 1 < r.keys.size()) keystr += ";";
    }
    rows.push_back({std::to_string(r.index), r.type_name, std::to_string(r.dof),
                    std::to_string(r.chi2), std::to_string(r.mahalanobis),
                    std::to_string(r.chi2_threshold), r.is_outlier ? "1" : "0", keystr});
  }
  if (!writeCSV(path, rows)) {
    std::cerr << "Failed to write CSV: " << path << std::endl;
  }
}

} // namespace mr_pgo

src/mr_pgo/csv_util.cpp

#include "mr_pgo/csv_util.h"
#include <fstream>

namespace mr_pgo {

bool writeCSV(const std::string& path, const std::vector<std::vector<std::string>>& rows) {
  std::ofstream ofs(path);
  if (!ofs) return false;
  for (const auto& row : rows) {
    for (size_t i = 0; i < row.size(); ++i) {
      ofs << row[i];
      if (i + 1 < row.size()) ofs << ",";
    }
    ofs << '\n';
  }
  ofs.close();
  return true;
}

} // namespace mr_pgo

src/main.cpp

#include <iostream>
#include "mr_pgo/mr_pgo.h"
#include "mr_pgo/outlier_check.h"

using namespace mr_pgo;
using namespace gtsam;

int main(int argc, char** argv) {
  NonlinearFactorGraph graph;
  Values init;

  // Build demo graph
  buildDemoGraph(graph, init);

  // Batch optimization
  Values result = runBatchOptimize(graph, init);

  // Analyze residuals
  double alpha = 0.05;
  auto reports = analyzeFactorGraphMahalanobis(graph, result, alpha);

  // Print a simple summary
  size_t outliers = 0;
  for (const auto& r : reports) if (r.is_outlier) ++outliers;
  std::cout << "Active factors: " << reports.size() << "  Outliers: " << outliers << std::endl;

  // Export CSV
  exportReportsCSV("factor_residuals.csv", reports);
  std::cout << "Exported factor_residuals.csv" << std::endl;

  // Optional: demonstrate iSAM2
  Values isam_est = runISAM2(graph, init);
  (void)isam_est;

  // Placeholder: TEASER++ or PCL based verification hooks can be added here
  return 0;
}
