Project Introduction

System Environment

  • Environment of the server rented on AutoDL:
    • TensorFlow 1.15.5
    • Python 3.8 (Ubuntu 18.04)
    • CUDA 11.4
    • GTX 1080 Ti

Setup Steps

Step 1: Clone the project code

git clone https://github.com/zgojcic/3DSmoothNet.git

Step 2: Install PCL

./install_pcl.sh

I made a few changes to the official shell script. Since I operate as root on the server, I removed all sudo commands, made the installation proceed non-interactively at the top of the script, and changed the download path for the PCL source code. The modified script is as follows:

#####################################################
# Install on the server container (already as root) #
#####################################################
# Answer "yes" by default during installation
echo "y" | apt-get install whatever
# Clone latest PCL
apt-get update
# apt-get install git  # not needed if git is already installed

mkdir ../Tools
cd ../Tools
# To clone pcl-1.8.1 instead, use:
# git clone --branch pcl-1.8.1 https://github.com/PointCloudLibrary/pcl.git pcl-trunk 
git clone https://github.com/PointCloudLibrary/pcl.git pcl-trunk
ln -s pcl-trunk pcl
cd pcl

# Install prerequisites
apt-get install g++
apt-get install cmake cmake-gui
apt-get install doxygen
apt-get install mpi-default-dev openmpi-bin openmpi-common
apt-get install libflann1.8 libflann-dev
apt-get install libeigen3-dev
apt-get install libboost-all-dev
apt-get install libvtk6-dev libvtk6.2 libvtk6.2-qt
#sudo apt-get install libvtk5.10-qt4 libvtk5.10 libvtk5-dev  # I'm not sure if this is necessary.
apt-get install 'libqhull*'
apt-get install libusb-dev
apt-get install libgtest-dev
apt-get install git-core freeglut3-dev pkg-config
apt-get install build-essential libxmu-dev libxi-dev
apt-get install libusb-1.0-0-dev graphviz mono-complete
apt-get install qt-sdk openjdk-9-jdk openjdk-9-jre
apt-get install phonon-backend-gstreamer
apt-get install phonon-backend-vlc
apt-get install libopenni-dev libopenni2-dev
apt-get install libflann-dev

# Compile and install PCL
mkdir release
cd release
cmake -DCMAKE_BUILD_TYPE=None -DBUILD_GPU=OFF -DBUILD_apps=ON -DBUILD_examples=ON ..
make -j16
make install

Step 3: Install the deep learning environment

pip install -r requirements.txt

Since the server already had glob and tensorflow-gpu 1.15.5, I commented out those two lines. When running the tests, open3d could not be imported; installing open3d rather than open3d-python solves this. My final requirements.txt is below (a quick import check follows the listing):

numpy==1.14.5
open3d
# open3d-python  (replaced by open3d above)
parse==1.12.0
scikit-learn==0.20.3
tensorboard==1.10
tqdm==4.31.1
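
After installing the dependencies, a quick sanity check (my own snippet, not part of the repo) confirms that open3d imports correctly and that the pipelines.registration namespace used later is available:

import open3d

# Print the installed version (0.15.2 in my case) and make sure the
# registration module lives under open3d.pipelines as expected.
print(open3d.__version__)
print(open3d.pipelines.registration.Feature)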

Step 4: Compile the project code

Add #include <boost/filesystem.hpp> at the top of main.cpp, then run the build (otherwise compilation fails):

cmake -DCMAKE_BUILD_TYPE=Release . && make

Note that the author does not create a separate build folder here; the build files are generated directly in the project root.

Test Results

My modified demo.py is listed in the appendix.

python ./demo.py

The terminal output is as follows:

File: ./data/demo/cloud_bin_0.ply
Number of Points: 258342
Size of the voxel grid: 0.3
Number of Voxels: 16
Smoothing Kernel: 1.75
Number of keypoints:1000

Starting SDV computation!
56 threads will be used!!
Saving Features to a CSV file:
./data/demo/sdv/cloud_bin_0.ply_0.150000_16_1.750000.csv

---------------------------------------------------------
LRF computation took 4025 miliseconds
SDV computation took 7079 miliseconds
---------------------------------------------------------
Config parameters successfully read in!! 

File: ./data/demo/cloud_bin_1.ply
Number of Points: 268977
Size of the voxel grid: 0.3
Number of Voxels: 16
Smoothing Kernel: 1.75
Number of keypoints:1000

Starting SDV computation!
56 threads will be used!!
Saving Features to a CSV file:
./data/demo/sdv/cloud_bin_1.ply_0.150000_16_1.750000.csv

---------------------------------------------------------
LRF computation took 4457 miliseconds
SDV computation took 7714 miliseconds
---------------------------------------------------------

Run mode "test" selected.
Loaded saved model ./models/32_dim/3DSmoothNet_32_dim.ckpt.
Loading test file: ./data/demo/sdv/cloud_bin_0.ply_0.150000_16_1.750000.csv
1000 features computed in 6.972553253173828 seconds.
Wrote file cloud_bin_0.ply_0.150000_16_1.750000_3DSmoothNet.npz
Loading test file: ./data/demo/sdv/cloud_bin_1.ply_0.150000_16_1.750000.csv
1000 features computed in 0.30720067024230957 seconds.
Wrote file cloud_bin_1.ply_0.150000_16_1.750000_3DSmoothNet.npz
Inference completed perform nearest neighbor search and registration
RegistrationResult with fitness=5.020000e-01, inlier_rmse=3.411006e-02, and correspondence_set size of 502
Access transformation to get result.

Since I set up the environment on a remote server with no display, I saved the point clouds to files and opened them in CloudCompare. The result is shown below (blue is the source cloud, red is the target cloud, and green is the source cloud after registration):
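
The appendix code already writes the three clouds to disk; to also bake in the colors used in the screenshot, a few extra lines (my own sketch against the open3d 0.15 API) can be appended at the end of demo.py:

# Color the clouds so they are easy to tell apart in CloudCompare:
# blue = source, red = target, green = source after registration.
reference_pc.paint_uniform_color([0, 0, 1])
test_pc.paint_uniform_color([1, 0, 0])

source_reg = copy.deepcopy(reference_pc)
source_reg.transform(result_ransac.transformation)
source_reg.paint_uniform_color([0, 1, 0])

open3d.io.write_point_cloud("source.pcd", reference_pc)
open3d.io.write_point_cloud("target.pcd", test_pc)
open3d.io.write_point_cloud("reg_result.pcd", source_reg)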

Common Issues

Problem 1: Installing glob3 fails. A quick import glob test shows that glob is already available in my environment (it is part of the Python standard library).

$ pip install glob3== # check the available versions of the package
Looking in indexes: http://mirrors.aliyun.com/pypi/simple, https://pypi.ngc.nvidia.com
ERROR: Could not find a version that satisfies the requirement glob3== (from versions: none)
ERROR: No matching distribution found for glob3==
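
Since glob ships with the Python standard library, the glob3 line can simply be dropped from requirements.txt; a one-line check (my own, not from the repo) confirms it works:

import glob

# glob is part of the standard library, so nothing needs to be installed.
print(glob.glob("./data/demo/*.ply"))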

Problem 2: No module named 'sklearn'. This happens because the earlier scikit-learn installation failed; install it manually:

pip install scikit-learn

Problem 3: Running demo.py raises the error below. Changing open3d.registration.Feature() to open3d.pipelines.registration.Feature() fixes it.

Traceback (most recent call last):
  File "./demo.py", line 87, in <module>
    ref = open3d.registration.Feature()
AttributeError: module 'open3d' has no attribute 'registration'

Problem 4: Running demo.py raises the errors below because the open3d API changed across versions. I have open3d==0.15.2 installed rather than open3d-python==0.3.0.0 (open3d-python is the legacy package for old open3d releases and targets older Python versions; my Python is 3.8, so I had to update the function calls instead). The required changes are (a compatibility sketch follows the error messages):

  • Change read_point_cloud to open3d.io.read_point_cloud
  • Change PointCloud() to open3d.geometry.PointCloud()
  • Change registration_ransac_based_on_feature_matching to open3d.pipelines.registration.registration_ransac_based_on_feature_matching
  • Change TransformationEstimationPointToPoint to open3d.pipelines.registration.TransformationEstimationPointToPoint
  • … (the remaining calls follow the same pattern)

The corresponding errors look like this:

    NameError: name 'read_point_cloud' is not defined
    NameError: name 'PointCloud' is not defined
    NameError: name 'Vector3dVector' is not defined
    NameError: name 'registration_ransac_based_on_feature_matching' is not defined
    NameError: name 'TransformationEstimationPointToPoint' is not defined
    NameError: name 'draw_geometries' is not defined
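
As an alternative to renaming every call, the old flat names can be re-exported once near the top of demo.py. This is my own sketch rather than anything from the 3DSmoothNet repo, and it assumes open3d >= 0.13, where the registration module moved under open3d.pipelines:

import open3d

# Re-create the old top-level names that the original demo.py expects,
# pointing them at the new module layout.
read_point_cloud = open3d.io.read_point_cloud
PointCloud = open3d.geometry.PointCloud
Vector3dVector = open3d.utility.Vector3dVector
draw_geometries = open3d.visualization.draw_geometries
Feature = open3d.pipelines.registration.Feature
registration_ransac_based_on_feature_matching = \
    open3d.pipelines.registration.registration_ransac_based_on_feature_matching
TransformationEstimationPointToPoint = \
    open3d.pipelines.registration.TransformationEstimationPointToPoint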
    

Appendix

demo.py

import tensorflow as tf
import copy
import numpy as np
import os
import subprocess
import open3d  # plain import is needed because the script uses the open3d. prefix throughout


def draw_registration_result(source, target, transformation):
    source_temp = copy.deepcopy(source)
    target_temp = copy.deepcopy(target)
    source_temp.paint_uniform_color([1, 0.706, 0])
    target_temp.paint_uniform_color([0, 0.651, 0.929])
    source_temp.transform(transformation)
    open3d.visualization.draw_geometries([source_temp, target_temp])


def execute_global_registration(
        source_down, target_down, reference_desc, target_desc, distance_threshold):

    result = open3d.pipelines.registration.registration_ransac_based_on_feature_matching(
            source_down, target_down, reference_desc, target_desc,
            False,
            distance_threshold,
            open3d.pipelines.registration.TransformationEstimationPointToPoint(False), 4,
            [open3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(0.9),
            open3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(distance_threshold)],
            open3d.pipelines.registration.RANSACConvergenceCriteria(4000000, 500))
    return result

# Optional ICP refinement (not called in this demo); updated to the
# open3d.pipelines.registration namespace like the rest of the script.
def refine_registration(source, target, source_fpfh, target_fpfh, voxel_size):
    distance_threshold = voxel_size * 0.4
    print(":: Point-to-plane ICP registration is applied on original point")
    print("   clouds to refine the alignment. This time we use a strict")
    print("   distance threshold %.3f." % distance_threshold)
    result = open3d.pipelines.registration.registration_icp(
            source, target, distance_threshold,
            result_ransac.transformation,
            open3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result

# Run the input parametrization
point_cloud_files = ["./data/demo/cloud_bin_0.ply", "./data/demo/cloud_bin_1.ply"]
keypoints_files = ["./data/demo/cloud_bin_0_keypoints.txt", "./data/demo/cloud_bin_1_keypoints.txt"]



for i in range(0,len(point_cloud_files)):
    args = "./3DSmoothNet -f " + point_cloud_files[i] + " -k " + keypoints_files[i] +  " -o ./data/demo/sdv/"
    subprocess.call(args, shell=True)

print('Input parametrization complete. Start inference')


# Run the inference as a shell command
args = "python main_cnn.py --run_mode=test --evaluate_input_folder=./data/demo/sdv/  --evaluate_output_folder=./data/demo"
subprocess.call(args, shell=True)

print('Inference completed perform nearest neighbor search and registration')


# Load the descriptors and estimate the transformation parameters using RANSAC
reference_desc = np.load('./data/demo/32_dim/cloud_bin_0.ply_0.150000_16_1.750000_3DSmoothNet.npz')
reference_desc = reference_desc['data']


test_desc = np.load('./data/demo/32_dim/cloud_bin_1.ply_0.150000_16_1.750000_3DSmoothNet.npz')
test_desc = test_desc['data']

# Save as open3d feature 
ref = open3d.pipelines.registration.Feature()
ref.data = reference_desc.T

test = open3d.pipelines.registration.Feature()
test.data = test_desc.T

# Load point cloud and extract the keypoints
reference_pc = open3d.io.read_point_cloud(point_cloud_files[0])
test_pc = open3d.io.read_point_cloud(point_cloud_files[1])

indices_ref = np.genfromtxt(keypoints_files[0])
indices_test = np.genfromtxt(keypoints_files[1])

reference_pc_keypoints = np.asarray(reference_pc.points)[indices_ref.astype(int),:]
test_pc_keypoints = np.asarray(test_pc.points)[indices_test.astype(int),:]


# Save as open3d point clouds
ref_key = open3d.geometry.PointCloud()
ref_key.points = open3d.utility.Vector3dVector(reference_pc_keypoints)

test_key = open3d.geometry.PointCloud()
test_key.points = open3d.utility.Vector3dVector(test_pc_keypoints)

result_ransac = execute_global_registration(ref_key, test_key,
            ref, test, 0.05)


# First plot the original state of the point clouds
draw_registration_result(reference_pc, test_pc, np.identity(4))


# Plot point clouds after registration
print(result_ransac)
draw_registration_result(reference_pc, test_pc,
            result_ransac.transformation)

# Save the registered point cloud results
source_reg = copy.deepcopy(reference_pc)
source_reg.transform(result_ransac.transformation)
open3d.io.write_point_cloud("reg_result.pcd", source_reg)
open3d.io.write_point_cloud("target.pcd", test_pc)
open3d.io.write_point_cloud("source.pcd", reference_pc)