7 Visual Odometry 1

Theory

7.1 The Feature Point Method

Visual Odometry

1. Concept: what is odometry?

  In an odometry problem, we want to measure the trajectory of a moving object, which can be done in many ways. For example, by mounting an encoder disc on a car's wheels we can measure how far the wheels have turned and thus estimate the car's position; alternatively, we can measure the car's velocity and acceleration and integrate over time to obtain its displacement. A device (hardware plus algorithms) that performs this kind of motion estimation is called an odometer (odometry).
2. Properties: what characterizes odometry?

  An important property of odometry is that it only concerns itself with motion over a local time span, most often the motion between two instants. When we sample time at some interval, we can estimate the object's motion within each interval. Since every such estimate is affected by noise, the estimation errors of earlier instants accumulate onto later motion estimates; this phenomenon is called drift.
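To see why these errors compound, a small worked relation (my notation, not from the book's text): write the estimated pose at time $k$ as the product of the estimated per-interval increments,

$$\hat{T}_k = \Delta \hat{T}_1 \, \Delta \hat{T}_2 \cdots \Delta \hat{T}_k .$$

A noisy factor $\Delta \hat{T}_i$ enters every later product $\hat{T}_j$, $j \ge i$, so per-interval errors are never revisited or corrected; they accumulate as drift.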
3. Concept: what is visual odometry?

  The goal of visual odometry (VO) is to estimate the camera's motion from the images it captures. The two main approaches are the feature point method and the direct method. The feature point method is currently the mainstream: it keeps working under heavy noise and fast camera motion, but the map it produces consists only of sparse feature points. The direct method needs no feature extraction and can build dense maps, but it suffers from high computational cost and poorer robustness.
Contents of this chapter
1 特征点如何提取并且匹配
2 利用对极约束方法从 2 D − 2 D 2D-2D 2D−2D匹配好的特征点估计相机运动
3 使用三角测量方法从 2 D − 2 D 2D-2D 2D−2D匹配估计一个点的空间位置
4 3 D − 2 D 3D-2D 3D−2D的 P n P PnP PnP问题的线性解法和 B o u n d A d j u s t m e n t Bound Adjustment BoundAdjustment解法
5 3 D − 3 D 3D-3D 3D−3D的 I C P ICP ICP问题的线性解法和 B o u n d A d j u s t m e n t Bound Adjustment BoundAdjustment解法


Common feature points

  1. SIFT (Scale-Invariant Feature Transform)
  2. SURF
  3. ORB, a representative real-time image feature

Extracting ORB features takes two steps:

FAST corner extraction
BRIEF descriptor computation

Improved FAST keypoints (Oriented FAST)

FAST is a corner detector that looks for places where the local pixel intensity changes sharply, and it is known for its speed. Its core idea: if a pixel differs markedly from its neighbors (much brighter or much darker), it is more likely to be a corner.
The improved FAST keypoint (Oriented FAST) adds scale and orientation information. Scale invariance comes from building an image pyramid and detecting corners on every level; the feature's orientation is given by the intensity centroid method, whose steps are as follows (a code sketch follows the formulas below):
Define the moments of an image patch $B$:

$$m_{pq} = \sum_{x,y \in B} x^p y^q I(x, y), \qquad p, q \in \{0, 1\}$$

The centroid of the image patch:

$$C = \left( \frac{m_{10}}{m_{00}}, \ \frac{m_{01}}{m_{00}} \right)$$

The feature point's orientation, given by the vector from the patch's geometric center $O$ to the centroid $C$:

$$\theta = \arctan \left( \frac{m_{01}}{m_{10}} \right)$$
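As a concrete illustration of these steps, a minimal sketch (my own helper, not the book's code; it assumes an 8-bit grayscale cv::Mat and a patch lying fully inside the image):

#include <opencv2/core/core.hpp>
#include <cmath>

// Orientation of the (2*half+1) x (2*half+1) patch centered at pixel (u, v),
// where u is the column (x) and v is the row (y), via the intensity centroid.
double patchOrientation(const cv::Mat &gray, int u, int v, int half = 15) {
  double m01 = 0, m10 = 0;
  for (int dy = -half; dy <= half; dy++) {
    for (int dx = -half; dx <= half; dx++) {
      double I = gray.at<uchar>(v + dy, u + dx); // intensity I(x, y)
      m10 += dx * I; // x-moment, taken relative to the patch center
      m01 += dy * I; // y-moment
    }
  }
  // angle of the vector from the patch center O to the intensity centroid C
  return std::atan2(m01, m10);
}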

BRIEF (Binary Robust Independent Elementary Features) is a binary descriptor: its descriptor vector consists of many 0s and 1s, where each bit encodes the intensity comparison between two pixels p and q near the keypoint, taking the value 1 if p > q and 0 otherwise.

The positions of p and q are picked at random, roughly following some probability distribution; in the end 128 such (p, q) pairs are selected, forming a 128-dimensional binary vector. A sketch of the idea follows.
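A minimal sketch of the comparison-and-matching idea (my own illustration, not ORB's actual implementation: real BRIEF uses one fixed, precomputed sampling pattern for all keypoints, ORB's rBRIEF additionally rotates that pattern by the keypoint's orientation, and samples here are assumed to stay inside the image):

#include <opencv2/core/core.hpp>
#include <bitset>
#include <random>

// Build a 128-bit BRIEF-like descriptor: bit i is 1 iff I(p) > I(q)
// for the i-th randomly chosen pixel pair (p, q) near the keypoint.
std::bitset<128> briefLike(const cv::Mat &gray, cv::Point2f kp, std::mt19937 &rng) {
  std::normal_distribution<float> offset(0.f, 5.f); // offsets drawn around the keypoint
  std::bitset<128> desc;
  for (int i = 0; i < 128; i++) {
    cv::Point p(cvRound(kp.x + offset(rng)), cvRound(kp.y + offset(rng)));
    cv::Point q(cvRound(kp.x + offset(rng)), cvRound(kp.y + offset(rng)));
    desc[i] = gray.at<uchar>(p) > gray.at<uchar>(q);
  }
  return desc;
}

// Matching two such descriptors reduces to the Hamming distance,
// i.e. the number of bits in which they differ.
int hamming(const std::bitset<128> &a, const std::bitset<128> &b) {
  return static_cast<int>((a ^ b).count());
}

This is why the practice code below creates a "BruteForce-Hamming" matcher: binary descriptors are compared by counting differing bits, which is much cheaper than a Euclidean distance.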

Feature matching


7.2 Practice: Feature Extraction and Matching

Compiling and running the code shows the extracted ORB features and the matches before and after filtering.

When debugging through KDevelop, note how the command-line arguments are passed.

The debugger also needs to be switched to GDB.


The code, with comments:


#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <chrono>

using namespace std;
using namespace cv;

int main(int argc, char **argv) {
  if (argc != 3) {
    cout << "usage: feature_extraction img1 img2" << endl;
    return 1;
  }
  //-- load the images
  Mat img_1 = imread(argv[1], CV_LOAD_IMAGE_COLOR); // CV_LOAD_IMAGE_COLOR is the OpenCV 2/3 flag; on OpenCV 4 use cv::IMREAD_COLOR
  Mat img_2 = imread(argv[2], CV_LOAD_IMAGE_COLOR);
  assert(img_1.data != nullptr && img_2.data != nullptr);

  //-- initialization
  std::vector<KeyPoint> keypoints_1, keypoints_2; // keypoints
  Mat descriptors_1, descriptors_2;  // descriptors
  Ptr<FeatureDetector> detector = ORB::create(); // Oriented FAST corner detector
  Ptr<DescriptorExtractor> descriptor = ORB::create(); // BRIEF descriptor extractor
  Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming"); // Hamming distance matcher

  //-- Step 1: detect Oriented FAST corner locations
  chrono::steady_clock::time_point t1 = chrono::steady_clock::now();
  detector->detect(img_1, keypoints_1);
  detector->detect(img_2, keypoints_2);

  //-- Step 2: compute BRIEF descriptors from the corner locations
  descriptor->compute(img_1, keypoints_1, descriptors_1);
  descriptor->compute(img_2, keypoints_2, descriptors_2);
  chrono::steady_clock::time_point t2 = chrono::steady_clock::now();
  chrono::duration<double> time_used = chrono::duration_cast<chrono::duration<double>>(t2 - t1);
  cout << "extract ORB cost = " << time_used.count() << " seconds. " << endl;

  Mat outimg1;
  drawKeypoints(img_1, keypoints_1, outimg1, Scalar::all(-1), DrawMatchesFlags::DEFAULT);
  imshow("ORB features", outimg1);

  //-- Step 3: match the BRIEF descriptors of the two images using the Hamming distance
  vector<DMatch> matches;  // unfiltered matches
  t1 = chrono::steady_clock::now();
  matcher->match(descriptors_1, descriptors_2, matches);
  t2 = chrono::steady_clock::now();
  time_used = chrono::duration_cast<chrono::duration<double>>(t2 - t1);
  cout << "match ORB cost = " << time_used.count() << " seconds. " << endl;

  //-- Step 4: filter the matched pairs
  // find the minimum and maximum descriptor distances
  auto min_max = minmax_element(matches.begin(), matches.end(),
                                [](const DMatch &m1, const DMatch &m2) { return m1.distance < m2.distance; });
  double min_dist = min_max.first->distance;
  double max_dist = min_max.second->distance;

  printf("-- Max dist : %f \n", max_dist);
  printf("-- Min dist : %f \n", min_dist);

  // a match is considered wrong when its descriptor distance exceeds twice the minimum distance;
  // since the minimum can be very small, an empirical floor of 30 is used.
  std::vector<DMatch> good_matches;  // filtered matches: Hamming distance at most twice the minimum
  for (int i = 0; i < descriptors_1.rows; i++) {
    if (matches[i].distance <= max(2 * min_dist, 30.0)) {
      good_matches.push_back(matches[i]);
    }
  }

  //-- Step 5: draw the matching results
  Mat img_match;
  Mat img_goodmatch;
  drawMatches(img_1, keypoints_1, img_2, keypoints_2, matches, img_match);
  drawMatches(img_1, keypoints_1, img_2, keypoints_2, good_matches, img_goodmatch);
  imshow("all matches", img_match);
  imshow("good matches", img_goodmatch);
  waitKey(0);

  return 0;
}

Hand-written ORB: I haven't fully understood this part yet, so I'm leaving the code, comments, and results as a gap here for now.

7.3 2D-2D: The Epipolar Problem

For the theory of 2D-2D epipolar geometry, 3D-2D, and triangulation, see this blog post:

视觉里程计1
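For quick reference, these are the standard relations that the practice code verifies, with $p_1, p_2$ a pair of matched pixels, $x_1, x_2$ their normalized camera coordinates, $K$ the intrinsics, and $t^{\wedge}$ the skew-symmetric matrix of $t$:

$$x_2^T \, t^{\wedge} R \, x_1 = 0, \qquad E = t^{\wedge} R, \qquad F = K^{-T} E K^{-1}, \qquad x_2^T E \, x_1 = p_2^T F \, p_1 = 0$$

Each "epipolar constraint = [...]" value printed in the next section is exactly $x_2^T t^{\wedge} R \, x_1$ for one match, and should be close to zero.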

7.4 Practice: Estimating Camera Motion from the Epipolar Constraint

Compiling and running the code gives:

[ INFO:0] Initialize OpenCL runtime...
-- Max dist : 95.000000 
-- Min dist : 7.000000 
一共找到了81组匹配点
fundamental_matrix is 
[5.435453065936294e-06, 0.0001366043242989641, -0.02140890086948122;
 -0.0001321142229824704, 2.339475702778057e-05, -0.006332906454396256;
 0.02107630352202776, -0.00366683395295285, 1]
essential_matrix is 
[0.01724015832721706, 0.328054335794133, 0.0473747783144249;
 -0.3243229585962962, 0.03292958445202408, -0.6262554366073018;
 -0.005885857752320116, 0.6253830041920333, 0.0153167864909267]
homography_matrix is 
[0.91317517918067, -0.1092435315821776, 29.95860009981271;
 0.02223560352310949, 0.9826008005061946, 6.50891083956826;
 -0.0001001560381023939, 0.0001037779436396116, 1]
R is 
[0.9985534106102478, -0.05339308467584758, 0.006345444621108698;
 0.05321959721496264, 0.9982715997131746, 0.02492965459802003;
 -0.007665548311697523, -0.02455588961730239, 0.9996690690694516]
t is 
[-0.8829934995085544;
 -0.05539655431450562;
 0.4661048182498402]
t^R=
[-0.02438126572381045, -0.4639388908753606, -0.06699805400667856;
 0.4586619266358499, -0.04656946493536188, 0.8856589319599302;
 0.008323859859529846, -0.8844251262060034, -0.0216612071874423]
epipolar constraint = [0.004334754136721797]
epipolar constraint = [-0.0002809243685121809]
epipolar constraint = [-0.001438247945977744]
epipolar constraint = [0.0003269033947393973]
epipolar constraint = [-0.0003553231638489529]
epipolar constraint = [0.001284545296795364]
epipolar constraint = [0.0007111119070243033]
epipolar constraint = [0.0005809963024551446]
epipolar constraint = [-0.0004569505410570683]
epipolar constraint = [-0.0001985091674428126]
epipolar constraint = [0.0009954466629851014]
epipolar constraint = [0.004183444398105557]
epipolar constraint = [0.0003301500278483499]
epipolar constraint = [0.000433468422895756]
epipolar constraint = [0.002166463717508241]
epipolar constraint = [0.0008612142972820036]
epipolar constraint = [0.006260134367574832]
epipolar constraint = [0.007343864270669354]
epipolar constraint = [0.0006997299583792263]
epipolar constraint = [-0.0002735148772005716]
epipolar constraint = [-5.272337012321437e-07]
epipolar constraint = [0.0007372565015377822]
epipolar constraint = [-0.0006697357792934122]
epipolar constraint = [0.003123484301720714]
epipolar constraint = [-0.001231690598807428]
epipolar constraint = [0.0002668695748940936]
epipolar constraint = [0.004005543462373876]
epipolar constraint = [0.0003056705066544624]
epipolar constraint = [-0.003108414268527718]
epipolar constraint = [-1.915351944714594e-06]
epipolar constraint = [0.0006933459945522302]
epipolar constraint = [-1.20842520190817e-05]
epipolar constraint = [0.001931054288970224]
epipolar constraint = [-0.001327411567348349]
epipolar constraint = [-0.001158918062891631]
epipolar constraint = [-0.001858904428330923]
epipolar constraint = [0.0001556314587936695]
epipolar constraint = [-0.002373723544446836]
epipolar constraint = [0.004805889122330598]
epipolar constraint = [-0.0009747832347852675]
epipolar constraint = [0.0006571155616519331]
epipolar constraint = [0.002337204394122716]
epipolar constraint = [0.004765516758509947]
epipolar constraint = [-0.001750870050808317]
epipolar constraint = [0.001092859392827487]
epipolar constraint = [0.0006769492505509164]
epipolar constraint = [0.0003307429644405918]
epipolar constraint = [0.001876564994448115]
epipolar constraint = [0.001832354276950641]
epipolar constraint = [7.744615696386736e-07]
epipolar constraint = [-0.0007964236477854686]
epipolar constraint = [0.0001236409258935089]
epipolar constraint = [1.386843227244028e-06]
epipolar constraint = [0.0001450355326295741]
epipolar constraint = [0.00109731208246224]
epipolar constraint = [-0.002227053499071974]
epipolar constraint = [0.002240603253167585]
epipolar constraint = [0.0002359213388878484]
epipolar constraint = [0.003035338951631217]
epipolar constraint = [0.002650788007581513]
epipolar constraint = [-0.0002311129052587763]
epipolar constraint = [0.003158726652554816]
epipolar constraint = [0.00318443380492206]
epipolar constraint = [-0.0003030993775385848]
epipolar constraint = [-0.004151424678597325]
epipolar constraint = [0.000962908812924268]
epipolar constraint = [0.00166825104315349]
epipolar constraint = [-0.001953002815440141]
epipolar constraint = [-0.0024677215713114]
epipolar constraint = [0.0008218799177506786]
epipolar constraint = [1.369310688559278e-06]
epipolar constraint = [-0.001891259735584946]
epipolar constraint = [-0.0001304333417016107]
epipolar constraint = [0.0002565573533065552]
epipolar constraint = [0.006010380518284446]
epipolar constraint = [-0.0004837802950156331]
epipolar constraint = [0.00547529124350349]
epipolar constraint = [0.001722398975726524]
epipolar constraint = [-0.0004945848903547823]
epipolar constraint = [-0.004259380631293275]
epipolar constraint = [-0.0007738612238716719]
*** 已完成 ***

The code, with comments (I don't fully understand some of the class usage yet; leaving a small gap):

#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>
// #include "extra.h" // use this if in OpenCV2

using namespace std;
using namespace cv;

/****************************************************
 * This program demonstrates how to use 2D-2D feature
 * matching to estimate camera motion.
 * **************************************************/

void find_feature_matches(
  const Mat &img_1, const Mat &img_2,
  std::vector<KeyPoint> &keypoints_1,
  std::vector<KeyPoint> &keypoints_2,
  std::vector<DMatch> &matches);

void pose_estimation_2d2d(
  std::vector<KeyPoint> keypoints_1,
  std::vector<KeyPoint> keypoints_2,
  std::vector<DMatch> matches,
  Mat &R, Mat &t);

// convert pixel coordinates to normalized camera coordinates
Point2d pixel2cam(const Point2d &p, const Mat &K);

int main(int argc, char **argv) {
  if (argc != 3) {
    cout << "usage: pose_estimation_2d2d img1 img2" << endl;
    return 1;
  }
  //-- load the images
  Mat img_1 = imread(argv[1], CV_LOAD_IMAGE_COLOR);
  Mat img_2 = imread(argv[2], CV_LOAD_IMAGE_COLOR);
  assert(img_1.data && img_2.data && "Can not load images!");

  vector<KeyPoint> keypoints_1, keypoints_2;
  vector<DMatch> matches;
  find_feature_matches(img_1, img_2, keypoints_1, keypoints_2, matches);
  cout << "一共找到了" << matches.size() << "组匹配点" << endl;

  //-- estimate the motion between the two images
  Mat R, t;
  pose_estimation_2d2d(keypoints_1, keypoints_2, matches, R, t);

  //-- verify E = t^R * scale
  Mat t_x =
    (Mat_<double>(3, 3) << 0, -t.at<double>(2, 0), t.at<double>(1, 0),
      t.at<double>(2, 0), 0, -t.at<double>(0, 0),
      -t.at<double>(1, 0), t.at<double>(0, 0), 0);

  cout << "t^R=" << endl << t_x * R << endl;

  //-- verify the epipolar constraint
  Mat K = (Mat_<double>(3, 3) << 520.9, 0, 325.1, 0, 521.0, 249.7, 0, 0, 1);
  for (DMatch m: matches) {
    Point2d pt1 = pixel2cam(keypoints_1[m.queryIdx].pt, K);
    Mat y1 = (Mat_<double>(3, 1) << pt1.x, pt1.y, 1);
    Point2d pt2 = pixel2cam(keypoints_2[m.trainIdx].pt, K);
    Mat y2 = (Mat_<double>(3, 1) << pt2.x, pt2.y, 1);
    Mat d = y2.t() * t_x * R * y1;
    cout << "epipolar constraint = " << d << endl;
  }
  return 0;
}

void find_feature_matches(const Mat &img_1, const Mat &img_2,
                          std::vector<KeyPoint> &keypoints_1,
                          std::vector<KeyPoint> &keypoints_2,
                          std::vector<DMatch> &matches) {
  //-- initialization
  Mat descriptors_1, descriptors_2;
  // used in OpenCV3
  Ptr<FeatureDetector> detector = ORB::create();
  Ptr<DescriptorExtractor> descriptor = ORB::create();
  // use this if you are in OpenCV2
  // Ptr<FeatureDetector> detector = FeatureDetector::create ( "ORB" );
  // Ptr<DescriptorExtractor> descriptor = DescriptorExtractor::create ( "ORB" );
  Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");
  //-- Step 1: detect Oriented FAST corner locations
  detector->detect(img_1, keypoints_1);
  detector->detect(img_2, keypoints_2);

  //-- Step 2: compute BRIEF descriptors from the corner locations
  descriptor->compute(img_1, keypoints_1, descriptors_1);
  descriptor->compute(img_2, keypoints_2, descriptors_2);

  //-- Step 3: match the BRIEF descriptors of the two images using the Hamming distance
  vector<DMatch> match;
  //BFMatcher matcher ( NORM_HAMMING );
  matcher->match(descriptors_1, descriptors_2, match);

  //-- Step 4: filter the matched pairs
  double min_dist = 10000, max_dist = 0;

  // find the minimum and maximum distances among all matches,
  // i.e. the distances between the most similar and the least similar pairs
  for (int i = 0; i < descriptors_1.rows; i++) {
    double dist = match[i].distance;
    if (dist < min_dist) min_dist = dist;
    if (dist > max_dist) max_dist = dist;
  }

  printf("-- Max dist : %f \n", max_dist);
  printf("-- Min dist : %f \n", min_dist);

  // a match is considered wrong when its descriptor distance exceeds twice the minimum distance;
  // since the minimum can be very small, an empirical floor of 30 is used.
  for (int i = 0; i < descriptors_1.rows; i++) {
    if (match[i].distance <= max(2 * min_dist, 30.0)) {
      matches.push_back(match[i]);
    }
  }
}

Point2d pixel2cam(const Point2d &p, const Mat &K) {
  return Point2d
    (
      (p.x - K.at<double>(0, 2)) / K.at<double>(0, 0),
      (p.y - K.at<double>(1, 2)) / K.at<double>(1, 1)
    );
}
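// Note: pixel2cam simply inverts the pinhole model. With intrinsics
//   K = [fx, 0, cx; 0, fy, cy; 0, 0, 1],
// a pixel (u, v) maps to the normalized camera coordinates
//   x = (u - cx) / fx,   y = (v - cy) / fy,
// i.e. the point's coordinates on the z = 1 plane of the camera frame.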

void pose_estimation_2d2d(std::vector<KeyPoint> keypoints_1,
                          std::vector<KeyPoint> keypoints_2,
                          std::vector<DMatch> matches,
                          Mat &R, Mat &t) {
  // camera intrinsics, TUM Freiburg2
  Mat K = (Mat_<double>(3, 3) << 520.9, 0, 325.1, 0, 521.0, 249.7, 0, 0, 1);

  //-- convert the matched keypoints to vector<Point2f>
  vector<Point2f> points1;
  vector<Point2f> points2;

  for (int i = 0; i < (int) matches.size(); i++) {
    points1.push_back(keypoints_1[matches[i].queryIdx].pt);
    points2.push_back(keypoints_2[matches[i].trainIdx].pt);
  }

  //-- compute the fundamental matrix
  Mat fundamental_matrix;
  fundamental_matrix = findFundamentalMat(points1, points2, CV_FM_8POINT);
  cout << "fundamental_matrix is " << endl << fundamental_matrix << endl;

  //-- compute the essential matrix
  Point2d principal_point(325.1, 249.7);  // camera principal point, calibrated for the TUM dataset
  double focal_length = 521;      // camera focal length, calibrated for the TUM dataset
  Mat essential_matrix;
  essential_matrix = findEssentialMat(points1, points2, focal_length, principal_point);
  cout << "essential_matrix is " << endl << essential_matrix << endl;

  //-- compute the homography matrix
  //-- the scene in this example is not planar, so the homography is of limited use here
  Mat homography_matrix;
  homography_matrix = findHomography(points1, points2, RANSAC, 3);
  cout << "homography_matrix is " << endl << homography_matrix << endl;

  //-- recover rotation and translation from the essential matrix.
  // this function is only provided since OpenCV 3
  recoverPose(essential_matrix, points1, points2, R, t, focal_length, principal_point);
  cout << "R is " << endl << R << endl;
  cout << "t is " << endl << t << endl;

}

7.5 Triangulation
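The key relation (in the book's notation): if $x_1, x_2$ are the normalized coordinates of a matched point in the two views and $s_1, s_2$ its depths, then

$$s_2 x_2 = s_1 R x_1 + t.$$

Left-multiplying both sides by $x_2^{\wedge}$ makes the left side vanish,

$$0 = s_1 x_2^{\wedge} R x_1 + x_2^{\wedge} t,$$

which can be solved for $s_1$ (and then $s_2$) in the least-squares sense. The practice code delegates this to cv::triangulatePoints, which solves an equivalent linear system built from the two projection matrices.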

7.6 Practice: Triangulation

Compiling and running the code draws each matched feature on both images, colored by its triangulated depth. The code, with comments (I don't fully understand some of the class usage yet; leaving a small gap):

#include <iostream>
#include <opencv2/opencv.hpp>
// #include "extra.h" // used in opencv2
using namespace std;
using namespace cv;

void find_feature_matches(
  const Mat &img_1, const Mat &img_2,
  std::vector<KeyPoint> &keypoints_1,
  std::vector<KeyPoint> &keypoints_2,
  std::vector<DMatch> &matches);

void pose_estimation_2d2d(
  const std::vector<KeyPoint> &keypoints_1,
  const std::vector<KeyPoint> &keypoints_2,
  const std::vector<DMatch> &matches,
  Mat &R, Mat &t);

void triangulation(
  const vector<KeyPoint> &keypoint_1,
  const vector<KeyPoint> &keypoint_2,
  const std::vector<DMatch> &matches,
  const Mat &R, const Mat &t,
  vector<Point3d> &points
);

/// for plotting
inline cv::Scalar get_color(float depth) {
  float up_th = 50, low_th = 10, th_range = up_th - low_th;
  if (depth > up_th) depth = up_th;
  if (depth < low_th) depth = low_th;
  return cv::Scalar(255 * depth / th_range, 0, 255 * (1 - depth / th_range));
}

// convert pixel coordinates to normalized camera coordinates
Point2f pixel2cam(const Point2d &p, const Mat &K);

int main(int argc, char **argv) {
  if (argc != 3) {
    cout << "usage: triangulation img1 img2" << endl;
    return 1;
  }
  //-- load the images
  Mat img_1 = imread(argv[1], CV_LOAD_IMAGE_COLOR);
  Mat img_2 = imread(argv[2], CV_LOAD_IMAGE_COLOR);

  vector<KeyPoint> keypoints_1, keypoints_2;
  vector<DMatch> matches;
  find_feature_matches(img_1, img_2, keypoints_1, keypoints_2, matches);
  cout << "一共找到了" << matches.size() << "组匹配点" << endl;

  //-- estimate the motion between the two images
  Mat R, t;
  pose_estimation_2d2d(keypoints_1, keypoints_2, matches, R, t);

  //-- triangulate
  vector<Point3d> points;
  triangulation(keypoints_1, keypoints_2, matches, R, t, points);

  //-- verify the reprojection relation between the triangulated points and the feature points
  Mat K = (Mat_<double>(3, 3) << 520.9, 0, 325.1, 0, 521.0, 249.7, 0, 0, 1);
  Mat img1_plot = img_1.clone();
  Mat img2_plot = img_2.clone();
  for (int i = 0; i < matches.size(); i++) {
    // first image
    float depth1 = points[i].z;
    cout << "depth: " << depth1 << endl;
    Point2d pt1_cam = pixel2cam(keypoints_1[matches[i].queryIdx].pt, K);
    cv::circle(img1_plot, keypoints_1[matches[i].queryIdx].pt, 2, get_color(depth1), 2);

    // second image
    Mat pt2_trans = R * (Mat_<double>(3, 1) << points[i].x, points[i].y, points[i].z) + t;
    float depth2 = pt2_trans.at<double>(2, 0);
    cv::circle(img2_plot, keypoints_2[matches[i].trainIdx].pt, 2, get_color(depth2), 2);
  }
  cv::imshow("img 1", img1_plot);
  cv::imshow("img 2", img2_plot);
  cv::waitKey();

  return 0;
}

void find_feature_matches(const Mat &img_1, const Mat &img_2,
                          std::vector<KeyPoint> &keypoints_1,
                          std::vector<KeyPoint> &keypoints_2,
                          std::vector<DMatch> &matches) {
  //-- initialization
  Mat descriptors_1, descriptors_2;
  // used in OpenCV3
  Ptr<FeatureDetector> detector = ORB::create();
  Ptr<DescriptorExtractor> descriptor = ORB::create();
  // use this if you are in OpenCV2
  // Ptr<FeatureDetector> detector = FeatureDetector::create ( "ORB" );
  // Ptr<DescriptorExtractor> descriptor = DescriptorExtractor::create ( "ORB" );
  Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");
  //-- Step 1: detect Oriented FAST corner locations
  detector->detect(img_1, keypoints_1);
  detector->detect(img_2, keypoints_2);

  //-- Step 2: compute BRIEF descriptors from the corner locations
  descriptor->compute(img_1, keypoints_1, descriptors_1);
  descriptor->compute(img_2, keypoints_2, descriptors_2);

  //-- Step 3: match the BRIEF descriptors of the two images using the Hamming distance
  vector<DMatch> match;
  // BFMatcher matcher ( NORM_HAMMING );
  matcher->match(descriptors_1, descriptors_2, match);

  //-- Step 4: filter the matched pairs
  double min_dist = 10000, max_dist = 0;

  // find the minimum and maximum distances among all matches,
  // i.e. the distances between the most similar and the least similar pairs
  for (int i = 0; i < descriptors_1.rows; i++) {
    double dist = match[i].distance;
    if (dist < min_dist) min_dist = dist;
    if (dist > max_dist) max_dist = dist;
  }

  printf("-- Max dist : %f \n", max_dist);
  printf("-- Min dist : %f \n", min_dist);

  // a match is considered wrong when its descriptor distance exceeds twice the minimum distance;
  // since the minimum can be very small, an empirical floor of 30 is used.
  for (int i = 0; i < descriptors_1.rows; i++) {
    if (match[i].distance <= max(2 * min_dist, 30.0)) {
      matches.push_back(match[i]);
    }
  }
}

void pose_estimation_2d2d(
  const std::vector<KeyPoint> &keypoints_1,
  const std::vector<KeyPoint> &keypoints_2,
  const std::vector<DMatch> &matches,
  Mat &R, Mat &t) {
  // camera intrinsics, TUM Freiburg2
  Mat K = (Mat_<double>(3, 3) << 520.9, 0, 325.1, 0, 521.0, 249.7, 0, 0, 1);

  //-- convert the matched keypoints to vector<Point2f>
  vector<Point2f> points1;
  vector<Point2f> points2;

  for (int i = 0; i < (int) matches.size(); i++) {
    points1.push_back(keypoints_1[matches[i].queryIdx].pt);
    points2.push_back(keypoints_2[matches[i].trainIdx].pt);
  }

  //-- compute the essential matrix
  Point2d principal_point(325.1, 249.7);        // camera principal point, calibrated for the TUM dataset
  int focal_length = 521;            // camera focal length, calibrated for the TUM dataset
  Mat essential_matrix;
  essential_matrix = findEssentialMat(points1, points2, focal_length, principal_point);

  //-- recover rotation and translation from the essential matrix.
  recoverPose(essential_matrix, points1, points2, R, t, focal_length, principal_point);
}

void triangulation(   // recover the feature points' spatial positions by triangulating with the camera pose obtained from epipolar geometry
  const vector<KeyPoint> &keypoint_1,
  const vector<KeyPoint> &keypoint_2,
  const std::vector<DMatch> &matches,
  const Mat &R, const Mat &t,
  vector<Point3d> &points) {
  Mat T1 = (Mat_<float>(3, 4) <<
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0);
  Mat T2 = (Mat_<float>(3, 4) <<
    R.at<double>(0, 0), R.at<double>(0, 1), R.at<double>(0, 2), t.at<double>(0, 0),
    R.at<double>(1, 0), R.at<double>(1, 1), R.at<double>(1, 2), t.at<double>(1, 0),
    R.at<double>(2, 0), R.at<double>(2, 1), R.at<double>(2, 2), t.at<double>(2, 0)
  );

  Mat K = (Mat_<double>(3, 3) << 520.9, 0, 325.1, 0, 521.0, 249.7, 0, 0, 1);
  vector<Point2f> pts_1, pts_2;
  for (DMatch m:matches) {
    // convert pixel coordinates to camera coordinates
    pts_1.push_back(pixel2cam(keypoint_1[m.queryIdx].pt, K));
    pts_2.push_back(pixel2cam(keypoint_2[m.trainIdx].pt, K));
  }

  Mat pts_4d;
  cv::triangulatePoints(T1, T2, pts_1, pts_2, pts_4d);

  // convert to non-homogeneous coordinates
  for (int i = 0; i < pts_4d.cols; i++) {
    Mat x = pts_4d.col(i);
    x /= x.at<float>(3, 0); // normalize
    Point3d p(
      x.at<float>(0, 0),
      x.at<float>(1, 0),
      x.at<float>(2, 0)
    );
    points.push_back(p);
  }
}

Point2f pixel2cam(const Point2d &p, const Mat &K) {
  return Point2f
    (
      (p.x - K.at<double>(0, 2)) / K.at<double>(0, 0),
      (p.y - K.at<double>(1, 2)) / K.at<double>(1, 1)
    );
}


7.7 3D-2D: PnP

7.7.1 Direct Linear Transform (DLT)

7.7.2 P3P

7.7.3 Solving PnP by Minimizing the Reprojection Error
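The objective minimized here, in the book's notation: given 3D points $P_i$, their observed pixels $u_i$, and the depth $s_i$ of each transformed point, the pose is

$$T^* = \arg\min_{T} \frac{1}{2} \sum_{i=1}^{n} \left\| u_i - \frac{1}{s_i} K T P_i \right\|_2^2$$

(homogeneous-to-Cartesian conversions left implicit), i.e. Bundle Adjustment over the pose alone, with the reprojection error as the residual.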

7.8 Practice: Solving PnP

Compiling and running the code gives:

[ INFO:0] Initialize OpenCL runtime...
-- Max dist : 95.000000 
-- Min dist : 7.000000 
一共找到了81组匹配点
3d-2d pairs: 77
solve pnp in opencv cost time: 0.0142977 seconds.
R=
[0.9979193252225095, -0.05138618904650329, 0.03894200717385431;
 0.05033852907733768, 0.9983556574295407, 0.0274228694479559;
 -0.04028712992732943, -0.02540552801471818, 0.998865109165653]
t=
[-0.1255867099750184;
 -0.007363525258815287;
 0.06099926588678122]
calling bundle adjustment by gauss newton
iteration 0 cost=44765.3537799
iteration 1 cost=431.695366816
iteration 2 cost=319.560037493
iteration 3 cost=319.55886789
pose by g-n: 
   0.997919325221  -0.0513861890122   0.0389420072614   -0.125586710123
  0.0503385290405    0.998355657431   0.0274228694545 -0.00736352527141
 -0.0402871300142  -0.0254055280183    0.998865109162   0.0609992659219
                0                 0                 0                 1
solve pnp by gauss newton cost time: 9.4266e-05 seconds.
calling bundle adjustment by g2o
iteration= 0	 chi2= 431.695367	 time= 2.3349e-05	 cumTime= 2.3349e-05	 edges= 77	 schur= 0
iteration= 1	 chi2= 319.560037	 time= 1.0797e-05	 cumTime= 3.4146e-05	 edges= 77	 schur= 0
iteration= 2	 chi2= 319.558868	 time= 1.0022e-05	 cumTime= 4.4168e-05	 edges= 77	 schur= 0
iteration= 3	 chi2= 319.558868	 time= 9.89402e-06	 cumTime= 5.4062e-05	 edges= 77	 schur= 0
iteration= 4	 chi2= 319.558868	 time= 9.83902e-06	 cumTime= 6.3901e-05	 edges= 77	 schur= 0
iteration= 5	 chi2= 319.558868	 time= 9.86701e-06	 cumTime= 7.3768e-05	 edges= 77	 schur= 0
iteration= 6	 chi2= 319.558868	 time= 9.80101e-06	 cumTime= 8.3569e-05	 edges= 77	 schur= 0
iteration= 7	 chi2= 319.558868	 time= 9.78299e-06	 cumTime= 9.3352e-05	 edges= 77	 schur= 0
iteration= 8	 chi2= 319.558868	 time= 9.785e-06	 cumTime= 0.000103137	 edges= 77	 schur= 0
iteration= 9	 chi2= 319.558868	 time= 9.765e-06	 cumTime= 0.000112902	 edges= 77	 schur= 0
optimization costs time: 0.000419886 seconds.
pose estimated by g2o =
  0.997919325223 -0.0513861890471  0.0389420071732  -0.125586709974
 0.0503385290779   0.998355657429  0.0274228694493  -0.007363525261
-0.0402871299268 -0.0254055280161   0.998865109166  0.0609992658862
               0                0                0                1
solve pnp by g2o cost time: 0.000538141 seconds.
*** 正常退出 ***

The code, with comments:

1. Direct solution with OpenCV

  chrono::steady_clock::time_point t1 = chrono::steady_clock::now();
  Mat r, t;
  solvePnP(pts_3d, pts_2d, K, Mat(), r, t, false); // call OpenCV's PnP solver; EPNP, DLS and other methods can be chosen
  Mat R;
  cv::Rodrigues(r, R); // r is in rotation-vector form; convert it to a matrix with the Rodrigues formula
  chrono::steady_clock::time_point t2 = chrono::steady_clock::now();
  chrono::duration<double> time_used = chrono::duration_cast<chrono::duration<double>>(t2 - t1);
  cout << "solve pnp in opencv cost time: " << time_used.count() << " seconds." << endl;

  cout << "R=" << endl << R << endl;
  cout << "t=" << endl << t << endl;

2. Hand-written pose estimation: first a Gauss-Newton PnP, then solving with g2o
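For reference, the 2x6 matrix J assembled in the loop below is the book's Jacobian of the reprojection error with respect to a left perturbation $\delta\xi$, where $P' = TP = (X', Y', Z')$ is the point in the camera frame:

$$\frac{\partial e}{\partial \delta \xi} = -\begin{bmatrix} \frac{f_x}{Z'} & 0 & -\frac{f_x X'}{Z'^2} & -\frac{f_x X' Y'}{Z'^2} & f_x + \frac{f_x X'^2}{Z'^2} & -\frac{f_x Y'}{Z'} \\ 0 & \frac{f_y}{Z'} & -\frac{f_y Y'}{Z'^2} & -f_y - \frac{f_y Y'^2}{Z'^2} & \frac{f_y X' Y'}{Z'^2} & \frac{f_y X'}{Z'} \end{bmatrix}$$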

void bundleAdjustmentGaussNewton(
  const VecVector3d &points_3d,
  const VecVector2d &points_2d,
  const Mat &K,
  Sophus::SE3d &pose) {
  typedef Eigen::Matrix<double, 6, 1> Vector6d;
  const int iterations = 10;
  double cost = 0, lastCost = 0;
  double fx = K.at<double>(0, 0);
  double fy = K.at<double>(1, 1);
  double cx = K.at<double>(0, 2);
  double cy = K.at<double>(1, 2);

  for (int iter = 0; iter < iterations; iter++) {
    Eigen::Matrix<double, 6, 6> H = Eigen::Matrix<double, 6, 6>::Zero();
    Vector6d b = Vector6d::Zero();

    cost = 0;
    // compute cost
    for (int i = 0; i < points_3d.size(); i++) {
      Eigen::Vector3d pc = pose * points_3d[i];
      double inv_z = 1.0 / pc[2];
      double inv_z2 = inv_z * inv_z;
      Eigen::Vector2d proj(fx * pc[0] / pc[2] + cx, fy * pc[1] / pc[2] + cy);

      Eigen::Vector2d e = points_2d[i] - proj;

      cost += e.squaredNorm();
      Eigen::Matrix<double, 2, 6> J;
      J << -fx * inv_z,
        0,
        fx * pc[0] * inv_z2,
        fx * pc[0] * pc[1] * inv_z2,
        -fx - fx * pc[0] * pc[0] * inv_z2,
        fx * pc[1] * inv_z,
        0,
        -fy * inv_z,
        fy * pc[1] * inv_z2,
        fy + fy * pc[1] * pc[1] * inv_z2,
        -fy * pc[0] * pc[1] * inv_z2,
        -fy * pc[0] * inv_z;

      H += J.transpose() * J;
      b += -J.transpose() * e;
    }

    Vector6d dx;
    dx = H.ldlt().solve(b);

    if (isnan(dx[0])) {
      cout << "result is nan!" << endl;
      break;
    }

    if (iter > 0 && cost >= lastCost) {
      // cost increase, update is not good
      cout << "cost: " << cost << ", last cost: " << lastCost << endl;
      break;
    }

    // update your estimation
    pose = Sophus::SE3d::exp(dx) * pose;
    lastCost = cost;

    cout << "iteration " << iter << " cost=" << std::setprecision(12) << cost << endl;
    if (dx.norm() < 1e-6) {
      // converge
      break;
    }
  }

  cout << "pose by g-n: \n" << pose.matrix() << endl;
}

3. Using g2o for BA optimization

// implement the vertex update and the edge error computation
class VertexPose : public g2o::BaseVertex<6, Sophus::SE3d> {
public:
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW;

  virtual void setToOriginImpl() override {
    _estimate = Sophus::SE3d();
  }

  /// left multiplication on SE3
  virtual void oplusImpl(const double *update) override {
    Eigen::Matrix<double, 6, 1> update_eigen;
    update_eigen << update[0], update[1], update[2], update[3], update[4], update[5];
    _estimate = Sophus::SE3d::exp(update_eigen) * _estimate;
  }

  virtual bool read(istream &in) override { return false; }   // not implemented

  virtual bool write(ostream &out) const override { return false; }  // not implemented
};

class EdgeProjection : public g2o::BaseUnaryEdge<2, Eigen::Vector2d, VertexPose> {
public:
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW;

  EdgeProjection(const Eigen::Vector3d &pos, const Eigen::Matrix3d &K) : _pos3d(pos), _K(K) {}

  virtual void computeError() override {
    const VertexPose *v = static_cast<VertexPose *> (_vertices[0]);
    Sophus::SE3d T = v->estimate();
    Eigen::Vector3d pos_pixel = _K * (T * _pos3d);
    pos_pixel /= pos_pixel[2];
    _error = _measurement - pos_pixel.head<2>();
  }

  virtual void linearizeOplus() override {
    const VertexPose *v = static_cast<VertexPose *> (_vertices[0]);
    Sophus::SE3d T = v->estimate();
    Eigen::Vector3d pos_cam = T * _pos3d;
    double fx = _K(0, 0);
    double fy = _K(1, 1);
    double cx = _K(0, 2);
    double cy = _K(1, 2);
    double X = pos_cam[0];
    double Y = pos_cam[1];
    double Z = pos_cam[2];
    double Z2 = Z * Z;
    _jacobianOplusXi
      << -fx / Z, 0, fx * X / Z2, fx * X * Y / Z2, -fx - fx * X * X / Z2, fx * Y / Z,
      0, -fy / Z, fy * Y / (Z * Z), fy + fy * Y * Y / Z2, -fy * X * Y / Z2, -fy * X / Z;
  }

  virtual bool read(istream &in) override { return false; }   // not implemented

  virtual bool write(ostream &out) const override { return false; }  // not implemented

private:
  Eigen::Vector3d _pos3d;
  Eigen::Matrix3d _K;
};

// assemble the graph optimization problem
void bundleAdjustmentG2O(
  const VecVector3d &points_3d,
  const VecVector2d &points_2d,
  const Mat &K,
  Sophus::SE3d &pose) {

  // build the graph optimization problem; configure g2o first
  typedef g2o::BlockSolver<g2o::BlockSolverTraits<6, 3>> BlockSolverType;  // pose is 6, landmark is 3
  typedef g2o::LinearSolverDense<BlockSolverType::PoseMatrixType> LinearSolverType; // linear solver type
  // gradient descent method; choose among GN, LM, DogLeg
  auto solver = new g2o::OptimizationAlgorithmGaussNewton(
    g2o::make_unique<BlockSolverType>(g2o::make_unique<LinearSolverType>()));
  g2o::SparseOptimizer optimizer;     // the graph model
  optimizer.setAlgorithm(solver);   // set the solver
  optimizer.setVerbose(true);       // enable verbose output

  // vertex
  VertexPose *vertex_pose = new VertexPose(); // camera vertex_pose
  vertex_pose->setId(0);
  vertex_pose->setEstimate(Sophus::SE3d());
  optimizer.addVertex(vertex_pose);

  // K
  Eigen::Matrix3d K_eigen;
  K_eigen <<
    K.at<double>(0, 0), K.at<double>(0, 1), K.at<double>(0, 2),
    K.at<double>(1, 0), K.at<double>(1, 1), K.at<double>(1, 2),
    K.at<double>(2, 0), K.at<double>(2, 1), K.at<double>(2, 2);

  // edges
  int index = 1;
  for (size_t i = 0; i < points_2d.size(); ++i) {
    auto p2d = points_2d[i];
    auto p3d = points_3d[i];
    EdgeProjection *edge = new EdgeProjection(p3d, K_eigen);
    edge->setId(index);
    edge->setVertex(0, vertex_pose);
    edge->setMeasurement(p2d);
    edge->setInformation(Eigen::Matrix2d::Identity());
    optimizer.addEdge(edge);
    index++;
  }

  chrono::steady_clock::time_point t1 = chrono::steady_clock::now();
  optimizer.setVerbose(true);
  optimizer.initializeOptimization();
  optimizer.optimize(10);
  chrono::steady_clock::time_point t2 = chrono::steady_clock::now();
  chrono::duration<double> time_used = chrono::duration_cast<chrono::duration<double>>(t2 - t1);
  cout << "optimization costs time: " << time_used.count() << " seconds." << endl;
  cout << "pose estimated by g2o =\n" << vertex_pose->estimate().matrix() << endl;
  pose = vertex_pose->estimate();
}


7.8.1 Solving the Pose with EPnP

7.8.2 Hand-Written Pose Estimation

7.8.3 BA Optimization with g2o

7.9 3D-3D: ICP

For the theory of 3D-3D ICP, see this blog post:

ICP算法与Kdtree

7.9.1 The SVD Method
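A summary of the derivation that pose_estimation_3d3d implements below (the standard SVD solution, as in the book). With centroids $p = \frac{1}{n}\sum_i p_i$, $p' = \frac{1}{n}\sum_i p'_i$ and de-centered points $q_i = p_i - p$, $q'_i = p'_i - p'$, compute

$$W = \sum_{i=1}^{n} q_i \, q_i'^{T}, \qquad W = U \Sigma V^T \ \text{(SVD)}, \qquad R = U V^T, \qquad t = p - R \, p'.$$

If $\det(R) < 0$, the SVD has produced a reflection rather than a rotation and the sign is flipped; the code handles exactly this case.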

7.9.2 The Nonlinear Optimization Method

7.10 Practice: Solving ICP

7.10.1 Practice: The SVD Method

Compiling and running the code gives:

[ INFO:0] Initialize OpenCL runtime...
-- Max dist : 95.000000 
-- Min dist : 7.000000 
一共找到了81组匹配点
3d-3d pairs: 75
W=  11.8688 -0.717698   1.89486
 -1.88065   3.83391  -5.78219
  3.25846  -5.82734   9.65267
U=  0.592295  -0.805658 -0.0101195
 -0.418171  -0.318113   0.850845
  0.688709   0.499719   0.525319
V=   0.64736  -0.761401 -0.0345329
 -0.388765  -0.368829   0.844291
  0.655581   0.533135   0.534772
ICP via SVD results: 
R = [0.9972065647956201, 0.05834273442898391, -0.04663895869192625;
 -0.05787745545449196, 0.998260122172866, 0.01126626067193237;
 0.04721511705620757, -0.008535444848613793, 0.9988482762174666]
t = [0.1379879629890433;
 -0.06551699470729988;
 -0.02981649388290575]
R_inv = [0.9972065647956201, -0.05787745545449196, 0.04721511705620757;
 0.05834273442898391, 0.998260122172866, -0.008535444848613793;
 -0.04663895869192625, 0.01126626067193237, 0.9988482762174666]
t_inv = [-0.1399866702492459;
 0.05709791102272541;
 0.03695589996443214]
calling bundle adjustment

The code, with comments:

 pose_estimation_3d3d(pts1, pts2, R, t);
  cout << "ICP via SVD results: " << endl;
  cout << "R = " << R << endl;
  cout << "t = " << t << endl;
  cout << "R_inv = " << R.t() << endl;
  cout << "t_inv = " << -R.t() * t << endl;

  cout << "calling bundle adjustment" << endl;

  bundleAdjustment(pts1, pts2, R, t);

  // verify p1 = R * p2 + t
  for (int i = 0; i < 5; i++) {
    cout << "p1 = " << pts1[i] << endl;
    cout << "p2 = " << pts2[i] << endl;
    cout << "(R*p2+t) = " <<
         R * (Mat_<double>(3, 1) << pts2[i].x, pts2[i].y, pts2[i].z) + t
         << endl;
    cout << endl;
  }
}

void find_feature_matches(const Mat &img_1, const Mat &img_2,
                          std::vector<KeyPoint> &keypoints_1,
                          std::vector<KeyPoint> &keypoints_2,
                          std::vector<DMatch> &matches) {
  //-- initialization
  Mat descriptors_1, descriptors_2;
  // used in OpenCV3
  Ptr<FeatureDetector> detector = ORB::create();
  Ptr<DescriptorExtractor> descriptor = ORB::create();
  // use this if you are in OpenCV2
  // Ptr<FeatureDetector> detector = FeatureDetector::create ( "ORB" );
  // Ptr<DescriptorExtractor> descriptor = DescriptorExtractor::create ( "ORB" );
  Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");
  //-- Step 1: detect Oriented FAST corner locations
  detector->detect(img_1, keypoints_1);
  detector->detect(img_2, keypoints_2);

  //-- Step 2: compute BRIEF descriptors from the corner locations
  descriptor->compute(img_1, keypoints_1, descriptors_1);
  descriptor->compute(img_2, keypoints_2, descriptors_2);

  //-- Step 3: match the BRIEF descriptors of the two images using the Hamming distance
  vector<DMatch> match;
  // BFMatcher matcher ( NORM_HAMMING );
  matcher->match(descriptors_1, descriptors_2, match);

  //-- Step 4: filter the matched pairs
  double min_dist = 10000, max_dist = 0;

  // find the minimum and maximum distances among all matches,
  // i.e. the distances between the most similar and the least similar pairs
  for (int i = 0; i < descriptors_1.rows; i++) {
    double dist = match[i].distance;
    if (dist < min_dist) min_dist = dist;
    if (dist > max_dist) max_dist = dist;
  }

  printf("-- Max dist : %f \n", max_dist);
  printf("-- Min dist : %f \n", min_dist);

  // a match is considered wrong when its descriptor distance exceeds twice the minimum distance;
  // since the minimum can be very small, an empirical floor of 30 is used.
  for (int i = 0; i < descriptors_1.rows; i++) {
    if (match[i].distance <= max(2 * min_dist, 30.0)) {
      matches.push_back(match[i]);
    }
  }
}

Point2d pixel2cam(const Point2d &p, const Mat &K) {
  return Point2d(
    (p.x - K.at<double>(0, 2)) / K.at<double>(0, 0),
    (p.y - K.at<double>(1, 2)) / K.at<double>(1, 1)
  );
}

void pose_estimation_3d3d(const vector<Point3f> &pts1,
                          const vector<Point3f> &pts2,
                          Mat &R, Mat &t) {
  Point3f p1, p2;     // center of mass
  int N = pts1.size();
  for (int i = 0; i < N; i++) {
    p1 += pts1[i];
    p2 += pts2[i];
  }
  p1 = Point3f(Vec3f(p1) / N);
  p2 = Point3f(Vec3f(p2) / N);
  vector<Point3f> q1(N), q2(N); // remove the center
  for (int i = 0; i < N; i++) {
    q1[i] = pts1[i] - p1;
    q2[i] = pts2[i] - p2;
  }

  // compute q1*q2^T
  Eigen::Matrix3d W = Eigen::Matrix3d::Zero();
  for (int i = 0; i < N; i++) {
    W += Eigen::Vector3d(q1[i].x, q1[i].y, q1[i].z) * Eigen::Vector3d(q2[i].x, q2[i].y, q2[i].z).transpose();
  }
  cout << "W=" << W << endl;

  // SVD on W
  Eigen::JacobiSVD<Eigen::Matrix3d> svd(W, Eigen::ComputeFullU | Eigen::ComputeFullV);
  Eigen::Matrix3d U = svd.matrixU();
  Eigen::Matrix3d V = svd.matrixV();

  cout << "U=" << U << endl;
  cout << "V=" << V << endl;

  Eigen::Matrix3d R_ = U * (V.transpose());
  if (R_.determinant() < 0) {
    R_ = -R_;
  }
  Eigen::Vector3d t_ = Eigen::Vector3d(p1.x, p1.y, p1.z) - R_ * Eigen::Vector3d(p2.x, p2.y, p2.z);

  // convert to cv::Mat
  R = (Mat_<double>(3, 3) <<
    R_(0, 0), R_(0, 1), R_(0, 2),
    R_(1, 0), R_(1, 1), R_(1, 2),
    R_(2, 0), R_(2, 1), R_(2, 2)
  );
  t = (Mat_<double>(3, 1) << t_(0, 0), t_(1, 0), t_(2, 0));
}

7.10.2 Practice: The Nonlinear Optimization Method

Compiling and running the code gives:

iteration= 0	 chi2= 1.815772	 time= 2.7424e-05	 cumTime= 2.7424e-05	 edges= 75	 schur= 0	 lambda= 0.000787	 levenbergIter= 1
iteration= 1	 chi2= 1.815193	 time= 1.3543e-05	 cumTime= 4.0967e-05	 edges= 75	 schur= 0	 lambda= 0.000524	 levenbergIter= 1
iteration= 2	 chi2= 1.815193	 time= 1.2344e-05	 cumTime= 5.3311e-05	 edges= 75	 schur= 0	 lambda= 0.000350	 levenbergIter= 1
iteration= 3	 chi2= 1.815193	 time= 1.2062e-05	 cumTime= 6.5373e-05	 edges= 75	 schur= 0	 lambda= 0.000233	 levenbergIter= 1
iteration= 4	 chi2= 1.815193	 time= 1.2225e-05	 cumTime= 7.7598e-05	 edges= 75	 schur= 0	 lambda= 0.000155	 levenbergIter= 1
iteration= 5	 chi2= 1.815193	 time= 1.7876e-05	 cumTime= 9.5474e-05	 edges= 75	 schur= 0	 lambda= 0.000829	 levenbergIter= 3
iteration= 6	 chi2= 1.815193	 time= 1.1872e-05	 cumTime= 0.000107346	 edges= 75	 schur= 0	 lambda= 0.001657	 levenbergIter= 1
optimization costs time: 0.000329077 seconds.

after optimization:
T=
   0.997207   0.0583427   -0.046639    0.137988
 -0.0578775     0.99826   0.0112663   -0.065517
  0.0472151 -0.00853546    0.998848  -0.0298169
          0           0           0           1
p1 = [-0.0374123, -0.830816, 2.7448]
p2 = [-0.0111479, -0.746763, 2.7652]
(R*p2+t) = [-0.04566297853353579;
 -0.7791822357027391;
 2.738045920497219]

p1 = [-0.243698, -0.117719, 1.5848]
p2 = [-0.299118, -0.0975683, 1.6558]
(R*p2+t) = [-0.2432120339103851;
 -0.126948636385738;
 1.610785991401685]

p1 = [-0.627753, 0.160186, 1.3396]
p2 = [-0.709645, 0.159033, 1.4212]
(R*p2+t) = [-0.6266796581943102;
 0.1503228882936767;
 1.354882969168255]

p1 = [-0.323443, 0.104873, 1.4266]
p2 = [-0.399079, 0.12047, 1.4838]
(R*p2+t) = [-0.3221508326762042;
 0.09455769417917799;
 1.432403439105304]

p1 = [-0.627221, 0.101454, 1.3116]
p2 = [-0.709709, 0.100216, 1.3998]
(R*p2+t) = [-0.6291763410391112;
 0.09137124101732745;
 1.334006553934159]

*** 正常退出 ***

The code, with comments:

class EdgeProjectXYZRGBDPoseOnly : public g2o::BaseUnaryEdge<3, Eigen::Vector3d, VertexPose> {
public:
  EIGEN_MAKE_ALIGNED_OPERATOR_NEW;

  EdgeProjectXYZRGBDPoseOnly(const Eigen::Vector3d &point) : _point(point) {}

  virtual void computeError() override {
    const VertexPose *pose = static_cast<const VertexPose *> ( _vertices[0] );
    _error = _measurement - pose->estimate() * _point;
  }

  virtual void linearizeOplus() override {
    VertexPose *pose = static_cast<VertexPose *>(_vertices[0]);
    Sophus::SE3d T = pose->estimate();
    Eigen::Vector3d xyz_trans = T * _point;
    _jacobianOplusXi.block<3, 3>(0, 0) = -Eigen::Matrix3d::Identity();
    _jacobianOplusXi.block<3, 3>(0, 3) = Sophus::SO3d::hat(xyz_trans);
  }

  bool read(istream &in) { return false; }   // not implemented

  bool write(ostream &out) const { return false; }  // not implemented

protected:
  Eigen::Vector3d _point;
};
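The linearizeOplus above uses the book's derivative of a transformed point with respect to a left perturbation $\delta\xi \in \mathfrak{se}(3)$:

$$\frac{\partial (Tp)}{\partial \delta\xi} = \begin{bmatrix} I & -(Tp)^{\wedge} \end{bmatrix}$$

Since the edge's error is $e = z - Tp$, the Jacobian picks up a minus sign, $\begin{bmatrix} -I & (Tp)^{\wedge} \end{bmatrix}$, which is exactly what the two block<3, 3> assignments set.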

Bugs and Debugging

Error: the build fails against the installed Sophus.

The templated Sophus library needs to be reinstalled here. The chapter 4 code was fairly simple, and I had used the non-templated version there, getting it to build by modifying header files and the like; but to make debugging the later chapters easier, I decided to switch to the templated version. Before installing, remember to delete the old Sophus files under /usr/local, as follows:

sudo rm -rf /usr/local/include/sophus
sudo rm -rf /usr/local/lib/libSophus.so
locate Sophus   # case matters
locate sophus   # check that nothing is left over

Installing the templated library:

git clone https://github.com/strasdat/Sophus.git
cd Sophus
mkdir build
cd build
cmake ..
make
sudo make install

Building Sophus then failed with an fmt error:

error: static assertion failed: Cannot format an argument. To make type T formattable provide a formatter<T> specialization: https://fmt.dev/latest/api.html#udt
test/core/CMakeFiles/test_so2.dir/build.make:75: recipe for target 'test/core/CMakeFiles/test_so2.dir/test_so2.cpp.o' failed
make[2]: *** [test/core/CMakeFiles/test_so2.dir/test_so2.cpp.o] Error 1
CMakeFiles/Makefile2:235: recipe for target 'test/core/CMakeFiles/test_so2.dir/all' failed
make[1]: *** [test/core/CMakeFiles/test_so2.dir/all] Error 2
Makefile:145: recipe for target 'all' failed

This is clearly an fmt problem: the latest fmt 8.1.1 needs to be installed (download link: fmt 8.1.1). PS: the old version does not need to be removed; just install the new one over it.
After this change Sophus installed normally, but compiling and running the ch7 code then reported a new bug, which again turned out to be fmt-related:

CMakeFiles/pose_estimation_3d2d.dir/pose_estimation_3d2d.cpp.o:在函数‘fmt::v8::appender fmt::v8::detail::write<char, fmt::v8::appender>(fmt::v8::appender, fmt::v8::basic_string_view<fmt::v8::type_identity<char>::type>, fmt::v8::basic_format_specs<char> const&, fmt::v8::detail::locale_ref) [clone .isra.1436]’中:
pose_estimation_3d2d.cpp:(.text+0x4971):对‘fmt::v8::detail::error_handler::on_error(char const*)’未定义的引用
CMakeFiles/pose_estimation_3d2d.dir/pose_estimation_3d2d.cpp.o:在函数‘unsigned long long fmt::v8::detail::width_checker<fmt::v8::detail::error_handler>::operator()<float, 0>(float) [clone .isra.370]’中:
pose_estimation_3d2d.cpp:(.text.unlikely+0xf):对‘fmt::v8::detail::error_handler::on_error(char const*)’未定义的引用
CMakeFiles/pose_estimation_3d2d.dir/pose_estimation_3d2d.cpp.o:在函数‘unsigned long long fmt::v8::detail::precision_checker<fmt::v8::detail::error_handler>::operator()<float, 0>(float) [clone .isra.379]’中:
pose_estimation_3d2d.cpp:(.text.unlikely+0x23):对‘fmt::v8::detail::error_handler::on_error(char const*)’未定义的引用
CMakeFiles/pose_estimation_3d2d.dir/pose_estimation_3d2d.cpp.o:在函数‘Sophus::SO3Base<Sophus::SO3<double, 0> >::normalize() [clone .part.1296]’中:
pose_estimation_3d2d.cpp:(.text.unlikely+0x9a):对‘fmt::v8::vprint(fmt::v8::basic_string_view<char>, fmt::v8::basic_format_args<fmt::v8::basic_format_context<fmt::v8::appender, char> >)’未定义的引用

For the solution, see this blog post:

视觉slam十四讲(第二版)之第七讲:pose_estimation_3d2d、pose_estimation_3d3d报错解决方案

After making all these changes, everything runs normally.
