0. Introduction

As mentioned earlier, R3LIVE is mainly made up of the cpp files behind two nodes. In the previous section we gave a brief introduction to r3live_LiDAR_front_end; now we need to take a detailed look at the /r3live_mapping node, which is the core of R3LIVE.

1. Main function entry

First, R3LIVE's main function lives in the r3live.cpp file. There is not much of interest in this file: it basically initializes a few basic parameters. The only thing worth our attention is the new R3LIVE() call, so let's look at r3live.hpp in detail.

Camera_Lidar_queue g_camera_lidar_queue;
MeasureGroup Measures;
StatesGroup g_lio_state;
std::string data_dump_dir = std::string("/mnt/0B3B134F0B3B134F/color_temp_r3live/");

int main(int argc, char **argv)
{
    printf_program("R3LIVE: A Robust, Real-time, RGB-colored, LiDAR-Inertial-Visual tightly-coupled state Estimation and mapping package");
    Common_tools::printf_software_version();
    Eigen::initParallel();
    ros::init(argc, argv, "R3LIVE_main");
    R3LIVE * fast_lio_instance = new R3LIVE();
    ros::Rate rate(5000);
    bool status = ros::ok();
    ros::spin();
}

2. Initialization and thread creation

The r3live.hpp file also contains a large amount of parameter initialization; what we mainly care about is the R3LIVE constructor. Its first part creates a series of topics for subscribing and publishing and reads a few parameters. This is done the same way as in most ROS code, so there is nothing special to explain.

        pubLaserCloudFullRes = m_ros_node_handle.advertise<sensor_msgs::PointCloud2>("/cloud_registered", 100);
        pubLaserCloudEffect = m_ros_node_handle.advertise<sensor_msgs::PointCloud2>("/cloud_effected", 100);
        pubLaserCloudMap = m_ros_node_handle.advertise<sensor_msgs::PointCloud2>("/Laser_map", 100);
        pubOdomAftMapped = m_ros_node_handle.advertise<nav_msgs::Odometry>("/aft_mapped_to_init", 10);
        pub_track_img = m_ros_node_handle.advertise<sensor_msgs::Image>("/track_img",1000);
        pub_raw_img = m_ros_node_handle.advertise<sensor_msgs::Image>("/raw_in_img",1000);
        m_pub_visual_tracked_3d_pts = m_ros_node_handle.advertise<sensor_msgs::PointCloud2>("/track_pts", 10);
        m_pub_render_rgb_pts = m_ros_node_handle.advertise<sensor_msgs::PointCloud2>("/render_pts", 10);
        pubPath = m_ros_node_handle.advertise<nav_msgs::Path>("/path", 10);

        pub_odom_cam = m_ros_node_handle.advertise<nav_msgs::Odometry>("/camera_odom", 10);
        pub_path_cam = m_ros_node_handle.advertise<nav_msgs::Path>("/camera_path", 10);
        std::string LiDAR_pointcloud_topic, IMU_topic, IMAGE_topic, IMAGE_topic_compressed;

        get_ros_parameter<std::string>(m_ros_node_handle, "/LiDAR_pointcloud_topic", LiDAR_pointcloud_topic, std::string("/laser_cloud_flat") );
        get_ros_parameter<std::string>(m_ros_node_handle, "/IMU_topic", IMU_topic, std::string("/livox/imu") );
        get_ros_parameter<std::string>(m_ros_node_handle, "/Image_topic", IMAGE_topic, std::string("/camera/image_color") );
        IMAGE_topic_compressed = std::string(IMAGE_topic).append("/compressed");
        if(1)
        {
            scope_color(ANSI_COLOR_BLUE_BOLD);
            cout << "======= Summary of subscribed topics =======" << endl;
            cout << "LiDAR pointcloud topic: " << LiDAR_pointcloud_topic << endl;
            cout << "IMU topic: " << IMU_topic << endl;
            cout << "Image topic: " << IMAGE_topic << endl;
            cout << "Image compressed topic: " << IMAGE_topic << endl;
             cout << "=======        -End-                =======" << endl;
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
        }

        sub_imu = m_ros_node_handle.subscribe(IMU_topic.c_str(), 2000000, &R3LIVE::imu_cbk, this, ros::TransportHints().tcpNoDelay());
        sub_pcl = m_ros_node_handle.subscribe(LiDAR_pointcloud_topic.c_str(), 2000000, &R3LIVE::feat_points_cbk, this, ros::TransportHints().tcpNoDelay());
        sub_img = m_ros_node_handle.subscribe(IMAGE_topic.c_str(), 1000000, &R3LIVE::image_callback, this, ros::TransportHints().tcpNoDelay());
        sub_img_comp = m_ros_node_handle.subscribe(IMAGE_topic_compressed.c_str(), 1000000, &R3LIVE::image_comp_callback, this, ros::TransportHints().tcpNoDelay());

        m_ros_node_handle.getParam("/initial_pose", m_initial_pose);
        m_pub_rgb_render_pointcloud_ptr_vec.resize(1e3);

Now let's look at the focus of this chapter: the threads. The following line fixes the thread count; below we go through all the threads used in the code.

 m_thread_pool_ptr = std::make_shared<Common_tools::ThreadPool>(6, true, false); // At least 5 threads are needed, here we allocate 6 threads.

2.1 LIO thread

In the same r3live.hpp file the author creates the LIO thread, which directly runs the LIO ESIKF computation.

// Start the LIO thread
        m_thread_pool_ptr->commit_task(&R3LIVE::service_LIO_update, this);

2.2 VIO thread: image processing thread

In the callbacks image_callback and image_comp_callback in r3live_vio.cpp, a dedicated image-handling thread, service_process_img_buffer, is started to process the raw image data.

m_thread_pool_ptr->commit_task( &R3LIVE::service_process_img_buffer, this );

2.3 VIO thread: VIO update thread

In r3live_vio.cpp, the first run triggers the main VIO thread, which directly runs the VIO ESIKF computation.

m_thread_pool_ptr->commit_task( &R3LIVE::service_VIO_update, this); // VIO thread

2.4 VIO thread: map publishing thread

Also in r3live_vio.cpp, the first run starts the RGB map publishing thread, which splits the RGB point cloud map into sub-clouds for publishing.

m_thread_pool_ptr->commit_task( &R3LIVE::service_pub_rgb_maps, this);

2.5 VIO thread: point cloud to image projection

The last thread is also in r3live_vio.cpp. Its main job is to project the active points of the point cloud map onto the image and to update the mean and variance of each map point's RGB value with the corresponding pixels via Bayesian iteration.

 m_render_thread = std::make_shared< std::shared_future< void > >( m_thread_pool_ptr->commit_task(
                    render_pts_in_voxels_mp, img_pose, &m_map_rgb_pts.m_voxels_recent_visited, img_pose->m_timestamp ) );
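
The per-point colour update applied here follows a Bayesian fusion: each RGB map point keeps a running mean and variance per colour channel, and every new pixel observation is fused in as a weighted average. The snippet below is only a conceptual sketch of that single-channel fusion rule with made-up names (ColorChannel, bayes_update_channel); it is not the code of render_pts_in_voxels_mp.

// Conceptual sketch (hypothetical names): fuse one new pixel observation into a
// map point's running colour estimate, one channel at a time.
struct ColorChannel
{
    double mean = 0.0; // current colour estimate
    double var  = 1e3; // current uncertainty (large before the first observation)
};

// obs     : pixel value observed in the current image
// obs_var : measurement noise assigned to this observation
inline void bayes_update_channel( ColorChannel &c, double obs, double obs_var )
{
    double k = c.var / ( c.var + obs_var );   // weight given to the new observation
    c.mean   = c.mean + k * ( obs - c.mean ); // weighted average of old estimate and new pixel
    c.var    = ( 1.0 - k ) * c.var;           // uncertainty shrinks after every fusion
}

The more observations a point accumulates, the smaller its variance becomes and the less a single new image can change its colour, which is what keeps the rendered map stable over time.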

In addition, there is another thread in pointcloud_rgbd.cpp, but it does not belong to the m_thread_pool_ptr thread pool. It belongs to the m_thread_service pool, which contains only one thread, service_refresh_pts_for_projection. The VIO thread invokes it via
m_map_rgb_pts.update_pose_for_projection( img_pose, -0.4 );
which updates the pose used for projection, so that the active LiDAR points can be projected onto the image.

3. LIO thread

In the previous section we went through all the threads. The one we are most familiar with should be the LIO thread, since its overall flow is essentially the same as FAST-LIO2. Let's take a rough look at the corresponding function, R3LIVE::service_LIO_update().

It first compares the timestamps at the head of the camera and LiDAR queues; if the camera timestamp is earlier, it waits for the VIO thread to finish processing those earlier images (a conceptual sketch of this ordering rule follows the snippet below).

 // Check whether LiDAR data is available for processing; the earlier image frames must be handled first, otherwise do not proceed
 while ( g_camera_lidar_queue.if_lidar_can_process() == false )
 {
     ros::spinOnce();
     std::this_thread::yield();
     std::this_thread::sleep_for( std::chrono::milliseconds( THREAD_SLEEP_TIM ) );
 }
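
The rule enforced by if_lidar_can_process boils down to comparing the front timestamps of the camera and LiDAR queues. The sketch below only illustrates that ordering rule with hypothetical names; it is not the actual implementation inside Camera_Lidar_queue.

// Hypothetical sketch of the ordering rule described above (not the real
// if_lidar_can_process): LiDAR data is only consumed once every camera frame
// with an earlier timestamp has been handed to the VIO thread.
bool lidar_can_process( double lidar_front_time, double camera_front_time, bool camera_queue_empty )
{
    if ( camera_queue_empty )
        return true;                               // no pending image, LiDAR may proceed
    return lidar_front_time <= camera_front_time;  // otherwise the older measurement goes first
}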

Then the following function packs the buffered IMU and LiDAR data into a measurement group (a simplified sketch of this packing follows the snippet).

 // Pack the LiDAR data and IMU data into MeasureGroup Measures
 if ( sync_packages( Measures ) == 0 )
 {
     continue;
 }
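
Conceptually, and in the same spirit as FAST-LIO, sync_packages takes the oldest LiDAR scan from its buffer and collects every IMU message whose timestamp falls inside that scan's time span. The sketch below shows the idea with simplified, made-up types; it is not the exact R3LIVE implementation.

#include <deque>
#include <vector>

// Purely illustrative types: one LiDAR scan with its time span, plus raw IMU samples.
struct ImuSample { double stamp; /* gyro / acc omitted */ };
struct LidarScan { double beg_time; double end_time; /* points omitted */ };
struct MeasGroup { LidarScan lidar; std::vector< ImuSample > imu; };

// Pack the oldest scan together with all IMU samples inside its time span.
bool simple_sync_packages( std::deque< LidarScan > &lidar_buf,
                           std::deque< ImuSample > &imu_buf,
                           MeasGroup &meas )
{
    if ( lidar_buf.empty() || imu_buf.empty() )
        return false;
    LidarScan scan = lidar_buf.front();
    // Wait until the IMU stream fully covers the scan interval.
    if ( imu_buf.back().stamp < scan.end_time )
        return false;
    meas.lidar = scan;
    meas.imu.clear();
    while ( !imu_buf.empty() && imu_buf.front().stamp < scan.end_time )
    {
        meas.imu.push_back( imu_buf.front() );
        imu_buf.pop_front();
    }
    lidar_buf.pop_front();
    return true;
}

If either buffer is empty, or the IMU data does not yet reach the end of the scan, the function returns false and the outer loop simply continues, which is what the continue statement above handles.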

The next step is to call the Process function, which compensates the point cloud motion distortion and predicts the system state using the IMU data. This corresponds to the UndistortPcl function in FAST-LIO2.

// Undistort the scan and propagate the state (the prediction step)
p_imu->Process( Measures, g_lio_state, feats_undistort );
g_camera_lidar_queue.g_noise_cov_acc = p_imu->cov_acc;
g_camera_lidar_queue.g_noise_cov_gyro = p_imu->cov_gyr;
StatesGroup state_propagate( g_lio_state );
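
Inside Process the state is propagated forward sample by sample with the IMU, and the same trajectory is then used to undistort the scan. The propagation itself is the usual discrete strap-down model; the sketch below writes out a single step with Eigen, with bias and noise handling omitted, purely to illustrate the prediction step rather than R3LIVE's exact code.

#include <Eigen/Dense>

// One discrete strap-down propagation step (biases and noise terms omitted).
// R, v, p   : rotation, velocity, position of the IMU in the world frame
// gyro, acc : bias-corrected IMU measurements; g : gravity expressed in the world frame
void propagate_one_step( Eigen::Matrix3d &R, Eigen::Vector3d &v, Eigen::Vector3d &p,
                         const Eigen::Vector3d &gyro, const Eigen::Vector3d &acc,
                         const Eigen::Vector3d &g, double dt )
{
    // Attitude increment: integrate the angular rate on SO(3).
    Eigen::Matrix3d dR = Eigen::Matrix3d::Identity();
    double ang = gyro.norm() * dt;
    if ( ang > 1e-9 )
        dR = Eigen::AngleAxisd( ang, gyro.normalized() ).toRotationMatrix();
    Eigen::Vector3d acc_w = R * acc + g;      // specific force rotated into the world frame
    p = p + v * dt + 0.5 * acc_w * dt * dt;   // position update uses the previous velocity
    v = v + acc_w * dt;
    R = R * dR;
}

The real code additionally propagates the state covariance alongside the nominal state, using IMU noise parameters such as the cov_acc and cov_gyr copied above.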

Next the local map boundary is updated and the current scan is downsampled. This is annotated in detail in our FAST-LIO2 walkthrough, so we won't go over it again here.

 lasermap_fov_segment();
 // Downsample
 downSizeFilterSurf.setInputCloud( feats_undistort );
 downSizeFilterSurf.filter( *feats_down );
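
Assuming, as in FAST-LIO, that downSizeFilterSurf is a pcl::VoxelGrid over the LiDAR point type, the snippet above only requires a leaf size configured beforehand. A minimal usage sketch follows; the leaf_size value is illustrative and would normally come from a ROS parameter.

#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>

// Minimal sketch of the voxel-grid downsampling step used above.
typedef pcl::PointXYZINormal PointType; // assumed LiDAR point type

void downsample_example( const pcl::PointCloud< PointType >::Ptr &feats_undistort,
                         pcl::PointCloud< PointType >::Ptr       &feats_down,
                         double leaf_size )
{
    pcl::VoxelGrid< PointType > downSizeFilterSurf;
    downSizeFilterSurf.setLeafSize( leaf_size, leaf_size, leaf_size ); // voxel edge length in metres
    downSizeFilterSurf.setInputCloud( feats_undistort );
    downSizeFilterSurf.filter( *feats_down ); // roughly one point kept per occupied voxel
}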

Then the downsampled feature cloud is used to build the ikd-tree.

 // If ikdtree.Root_Node is nullptr, the map has not been initialized yet
 if ( ( feats_down->points.size() > 1 ) && ( ikdtree.Root_Node == nullptr ) )
 {
     // std::vector<PointType> points_init = feats_down->points;
     // Build the ikd-tree
     ikdtree.set_downsample_param( filter_size_map_min );
     ikdtree.Build( feats_down->points );
     flg_map_initialized = true;
     continue;
 }

 if ( ikdtree.Root_Node == nullptr )
 {
     flg_map_initialized = false;
     std::cout << "~~~~~~~ Initialize Map iKD-Tree Failed! ~~~~~~~" << std::endl;
     continue;
 }
 int featsFromMapNum = ikdtree.size();

 int feats_down_size = feats_down->points.size();

Once the ikd-tree is available, the LiDAR measurements are used for the ESIKF state update.

if ( featsFromMapNum >= 5 ) // require at least 5 map points
{
    t1 = omp_get_wtime();

    if ( m_if_publish_feature_map )
    {
        // Clear ikdtree.PCL_Storage
        PointVector().swap( ikdtree.PCL_Storage );
        // Recursively flatten the tree into ikdtree.PCL_Storage
        ikdtree.flatten( ikdtree.Root_Node, ikdtree.PCL_Storage, NOT_RECORD );
        featsFromMap->clear();
        featsFromMap->points = ikdtree.PCL_Storage;

        sensor_msgs::PointCloud2 laserCloudMap;
        pcl::toROSMsg( *featsFromMap, laserCloudMap );
        laserCloudMap.header.stamp = ros::Time::now(); // ros::Time().fromSec(last_timestamp_lidar);
        // laserCloudMap.header.stamp.fromSec(Measures.lidar_end_time); // ros::Time().fromSec(last_timestamp_lidar);
        laserCloudMap.header.frame_id = "world";
        pubLaserCloudMap.publish( laserCloudMap );
    }

    std::vector< bool >               point_selected_surf( feats_down_size, true );
    std::vector< std::vector< int > > pointSearchInd_surf( feats_down_size );
    std::vector< PointVector >        Nearest_Points( feats_down_size );

    int  rematch_num = 0;
    bool rematch_en = 0;
    flg_EKF_converged = 0;
    deltaR = 0.0;
    deltaT = 0.0;
    t2 = omp_get_wtime();
    double maximum_pt_range = 0.0;
    // cout <<"Preprocess 2 cost time: " << tim.toc("Preprocess") << endl;
    // Start the iterations
    for ( iterCount = 0; iterCount < NUM_MAX_ITERATIONS; iterCount++ )
// ................

Here, the kf.update_iterated_dyn_share_modified(LASER_POINT_COV, solve_H_time) call from FAST-LIO is expanded into the large block of computation above. The main steps are summarized as follows:

  1. Publish the ikd-tree feature map on ROS (disabled by default).
  2. For every (downsampled) feature point, search the tree for its nearest-neighbour set (5 points), fit the nearest plane to that set (a PCA-style fit), and compute the point-to-plane residual, defined as the distance from the point to the plane; this corresponds to Eq. (12) of FAST-LIO (see the sketch after this list).
  3. From the points for which a plane was found, keep those satisfying the selection criteria and store each point together with its plane in new containers (with aligned indices).
  4. Compute the measurement Jacobian matrix and the measurement vector.
  5. Iterative Kalman filter update.
  6. Convergence check and covariance update.
  7. Update the maintained fixed-size map cubes.
  8. Transform the feature points of the new LiDAR frame (after the Kalman update) into the world frame and insert them into the ikd-tree.
  9. ICP iteration + Kalman update finished.

  Detailed code comments can be found at: https://github.com/qpc001/r3live_commit
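
The list above describes step 2 as a PCA-style plane fit; a common, equivalent-in-spirit implementation in the FAST-LIO family instead solves a small least-squares system n^T p_i = -1 over the neighbour set and uses the signed point-to-plane distance as the residual. The sketch below uses that formulation; names and the 0.1 m planarity threshold are illustrative.

#include <Eigen/Dense>
#include <cmath>
#include <vector>

// Sketch of step 2: fit a plane n^T p + d = 0 to the nearest-neighbour set and
// compute the signed point-to-plane residual for the query point.
bool plane_residual( const std::vector< Eigen::Vector3d > &neighbours, // e.g. the 5 nearest map points
                     const Eigen::Vector3d &query,                     // feature point in the world frame
                     double &residual )
{
    const int N = ( int ) neighbours.size();
    if ( N < 3 )
        return false;
    Eigen::MatrixXd A( N, 3 );
    Eigen::VectorXd b = Eigen::VectorXd::Constant( N, -1.0 );
    for ( int i = 0; i < N; i++ )
        A.row( i ) = neighbours[ i ].transpose();
    Eigen::Vector3d n = A.colPivHouseholderQr().solve( b ); // un-normalised normal, with d fixed to 1
    double inv_d = n.norm();
    n /= inv_d;
    double d = 1.0 / inv_d;
    // Reject the fit if any neighbour lies too far from the plane (patch not planar enough).
    for ( int i = 0; i < N; i++ )
        if ( std::fabs( n.dot( neighbours[ i ] ) + d ) > 0.1 )
            return false;
    residual = n.dot( query ) + d; // signed point-to-plane distance, cf. FAST-LIO Eq. (12)
    return true;
}

Points whose fit is rejected are simply excluded from the measurement (step 3), so only reliable planar correspondences enter the Kalman update.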

Finally, new points are appended to the RGB point cloud map, and laserCloudFullResColor, the map, odometry, path, etc. are published.

            if ( 1 ) // append point cloud to global map.
            {
                static std::vector< double > stastic_cost_time;
                Common_tools::Timer          tim;
                // tim.tic();
                // ANCHOR - RGB maps update
                // Wait for the RGB rendering thread to finish updating the map
                wait_render_thread_finish();
                // If recording MVS data (later used for mesh reconstruction)
                if ( m_if_record_mvs )
                {
                    // When recording MVS data, while adding points to the map also collect the most recently hit voxel points into pts_last_hitted
                    std::vector< std::shared_ptr< RGB_pts > > pts_last_hitted;
                    pts_last_hitted.reserve( 1e6 );
                    // Append RGB points to the global map (the same map used by the VIO)
                    m_number_of_new_visited_voxel = m_map_rgb_pts.append_points_to_global_map(
                        *laserCloudFullResColor, Measures.lidar_end_time - g_camera_lidar_queue.m_first_imu_time, &pts_last_hitted,
                        m_append_global_map_point_step );
                    m_map_rgb_pts.m_mutex_pts_last_visited->lock();
                    m_map_rgb_pts.m_pts_last_hitted = pts_last_hitted;  // most recently hit voxel points
                    m_map_rgb_pts.m_mutex_pts_last_visited->unlock();
                }
                else
                {
                    // Append RGB points to the global map
                    m_number_of_new_visited_voxel = m_map_rgb_pts.append_points_to_global_map(
                        *laserCloudFullResColor, Measures.lidar_end_time - g_camera_lidar_queue.m_first_imu_time, nullptr,
                        m_append_global_map_point_step );
                }
                stastic_cost_time.push_back( tim.toc( " ", 0 ) );
            }
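
append_points_to_global_map inserts the coloured points of the new scan into the voxel-organised global RGB map and returns how many voxels were newly visited (assigned to m_number_of_new_visited_voxel above). The sketch below only illustrates the general voxel-hashing idea with made-up types; it is not the actual Global_map data structure in pointcloud_rgbd.cpp.

#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Illustrative voxel-hashed map: each point is bucketed by its integer grid
// coordinates, and only the first point falling into an empty voxel is kept.
struct VoxelKey
{
    int64_t x, y, z;
    bool operator==( const VoxelKey &o ) const { return x == o.x && y == o.y && z == o.z; }
};
struct VoxelKeyHash
{
    size_t operator()( const VoxelKey &k ) const
    {
        return ( ( size_t ) k.x * 73856093u ) ^ ( ( size_t ) k.y * 19349669u ) ^ ( ( size_t ) k.z * 83492791u );
    }
};
struct SimpleRGBPoint { double xyz[ 3 ]; double rgb[ 3 ]; };

size_t append_points_sketch( std::unordered_map< VoxelKey, SimpleRGBPoint, VoxelKeyHash > &map,
                             const std::vector< SimpleRGBPoint > &new_pts, double voxel_size )
{
    size_t newly_occupied = 0;
    for ( const SimpleRGBPoint &pt : new_pts )
    {
        VoxelKey key{ ( int64_t ) std::floor( pt.xyz[ 0 ] / voxel_size ),
                      ( int64_t ) std::floor( pt.xyz[ 1 ] / voxel_size ),
                      ( int64_t ) std::floor( pt.xyz[ 2 ] / voxel_size ) };
        if ( map.find( key ) == map.end() ) // keep at most one point per voxel in this sketch
        {
            map[ key ] = pt;
            newly_occupied++;
        }
    }
    return newly_occupied;
}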

4. References

https://blog.csdn.net/handily_1/article/details/122360134

https://icode.best/i/77363847009772

https://zhuanlan.zhihu.com/p/480275319

https://blog.csdn.net/handily_1/article/details/124230917

https://vincentqin.tech/posts/r3live/

https://www.jianshu.com/p/a35fa8ac0803