Continuing from yesterday's summary: the recognized object's position had already been published successfully as a global variable. This post covers the final problem. Even though every earlier step was done, and this last step involves no substantial programming, the arm kept misbehaving: runs would error out, or the arm could not reach the preset position. This post summarizes that process, analyzes the possible causes of the failure, and closes with some practical advice on development and debugging.

As usual, a few reference links first.

1. A link shared in the comments under one of the earlier posts in this series. It looks like a grasping project too, and may well contain more. I was already on the final steps, so I never had time to study it, but I am listing it here in case it helps; it deserves a proper look later.

https://github.com/atenpas/gpd/tree/forward

2. On the callback function used in the tf point-transformation step:

https://github.com/hu7241/nao_slam_amcl/blob/master/voice_nav_script/gui.py (see the callback)
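As a companion to that reference, here is a minimal sketch of the same pattern as used in this project: transform a detected pose inside a subscriber callback. The topic name /recognized_object_array and the message layout are the ORK defaults and should be treated as assumptions (check with rostopic list):

#!/usr/bin/env python
import rospy
import tf
from geometry_msgs.msg import PoseStamped
from object_recognition_msgs.msg import RecognizedObjectArray

listener = None

def detection_callback(msg):
    # Transform the first detected object's pose into base_link.
    if not msg.objects:
        return
    ps = PoseStamped()
    ps.header.frame_id = msg.objects[0].pose.header.frame_id  # camera_rgb_optical_frame
    ps.header.stamp = rospy.Time(0)          # use the latest available transform
    ps.pose = msg.objects[0].pose.pose.pose  # RecognizedObject wraps a PoseWithCovarianceStamped
    try:
        transed = listener.transformPose("base_link", ps)
        rospy.loginfo("object in base_link: %s", transed.pose.position)
    except tf.Exception as e:
        rospy.logwarn("tf failed: %s", e)

if __name__ == '__main__':
    rospy.init_node('detection_tf_callback')
    listener = tf.TransformListener()
    rospy.Subscriber('/recognized_object_array', RecognizedObjectArray, detection_callback)
    rospy.spin()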

What this project taught me goes beyond hands-on ROS robotics: more valuable still were the development methods and tools picked up from the people around me.

1. Use a VPN when you need one. 2. Learn from the official documentation; it is more systematic. 3. Google things often (if your English is up to it). 4. Ask other people, not only about knowledge but also about tools and about how they develop and debug.

With that, on to the final summary.


Continuing from the previous post: after converting the detection to a point, publishing it as a global variable, and subscribing to it in the final Python program, I found that the received position could not be reached, with the error:

No motion plan found. No execution attempted.

There are many reasons a goal can fail to be reached; the likely causes are:

1. The target pose is outside the arm's workspace. The arm's reach is limited, so a target that is too far away cannot be planned to.

2. The detected position is inaccurate or outright wrong, due to limitations of the linemod algorithm. If this turned out to be the cause, fixing it would be painful: it would mean switching to a different detection algorithm, then re-tuning and re-integrating everything, at a large cost in time.

3. The detected position is correct, but the transformation corrupts it. This splits into two sub-cases:

(1) A frame problem in the TF_listener. During development I was often careless about which frame a coordinate was expressed in, sometimes using code for its effect without fully understanding it. In fact this was the actual root cause: linemod reports the object pose in camera_rgb_optical_frame, not camera_rgb_frame or camera_link. Be very careful here, otherwise the pose is wrong and you cannot tell which step broke it. (See the frame check right after this list.)

(2) A wrong camera-to-base transform. The debugging camera was merely placed in position rather than rigidly mounted, so vibration while the arm moves can invalidate the previously calibrated camera_link to base_link relation. Strictly this calls for a complete re-calibration, but the effect of the slightly-off transform can also be reduced by adding offsets.
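Before touching any code, a frame mix-up like 3(1) can be caught from the command line: print the fixed rotation between the two candidate frames and check which frame_id the detection actually carries (tf_echo is a standard tf tool; the topic name below is the ORK default and may differ in your setup):

#rosrun tf tf_echo camera_rgb_frame camera_rgb_optical_frame
#rostopic echo -n 1 /recognized_object_array/header/frame_id

By the ROS convention, the optical frame has z pointing out of the lens with x right and y down, i.e. it is rotated roughly 90 degrees against the body frame, so feeding an optical-frame pose into a transform that expects camera_link scrambles the axes.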

Weighing these causes, 3(2) and 1 seemed most likely. And while linemod has plenty of rough edges, its accuracy is sufficient for a grasping task. (A quick plan-only probe that isolates cause 1 is sketched below.)
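A quick way to isolate cause 1 is a plan-only probe: ask MoveIt to plan to the suspect pose without executing. If planning fails even for a pose the arm was physically moved to beforehand, the problem is the goal value, not the workspace. A sketch (group is the MoveGroupCommander from the control program; on this kinetic-era API, plan() returns a RobotTrajectory, newer versions return a tuple):

import geometry_msgs.msg

pose_goal = geometry_msgs.msg.Pose()
# ... fill pose_goal with the suspect target ...
group.set_planning_time(2.0)   # fail fast while probing
group.set_pose_target(pose_goal)
plan = group.plan()            # plan only, nothing moves
if not plan.joint_trajectory.points:
    print "no plan: goal likely outside the workspace or in collision"
group.clear_pose_targets()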

The debugging plan was:

1. Test with a single fixed point, using the OnePosetransform.cpp helper mentioned earlier. Subscribing to its topics in rviz makes the pose changes visible and helps narrow down the cause.

2. Move the arm to a fixed pose and record it; keep the orientation from that fixed pose, substitute the position of the transformed point, and publish the result so it can be checked in rviz.

3. Redo the calibration. That means not only the camera_link to base_link calibration but also the TF point conversion itself: understand what the example code really does and correct the frames. (A quick sanity check of the current calibration follows below.)
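For step 3, a cheap sanity check before redoing the whole calibration is to print the current camera_link to base_link transform and compare its translation against a tape measure of the physical setup:

#rosrun tf tf_echo base_link camera_link

If the printed translation is off by several centimeters from where the camera really sits, the calibration has drifted (for example through the vibration in 3(2)), and re-calibrating, or compensating with offsets, is justified.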

 

(Figure: the detected object position is not at the object's real location; the real position is at the RG2 gripper while the measured position floats above it. The measured and transformed positions coincide, which shows the transform step itself is correct.)

A single point had already been transformed and published successfully before. The core job of OnePosetransform.cpp is to cut out the intermediate tf pipeline: it transforms the camera-reported pose directly, hands the result to the final Python control program, and publishes the poses involved so they can be inspected in rviz.

(#rosrun robot_setup_tf tf_listener_OnePosetransform - checks whether a single transformed point coincides with the original; can be viewed in rviz)

#include <ros/ros.h>
#include "geometry_msgs/PoseStamped.h"
#include "geometry_msgs/Pose.h"
#include <tf/transform_listener.h>
#include <tf/transform_broadcaster.h>
 
geometry_msgs::PoseStamped real_pose;
geometry_msgs::PoseStamped transed_pose;
 
void transformPose(const tf::TransformListener& listener){
  // A fixed detected pose, expressed in the camera frame, that we want to
  // transform into base_link.
  // NOTE: linemod reports object poses in camera_rgb_optical_frame, NOT in
  // camera_rgb_frame or camera_link -- the commented lines below are the
  // frame names I tried first.
  real_pose.header.frame_id = "camera_rgb_optical_frame";
  // real_pose.header.frame_id = "camera_rgb_frame";
  // real_pose.header.frame_id = "camera_link";
  // use the most recent transform available
  real_pose.header.stamp = ros::Time();
  real_pose.pose.position.x = -0.0521919690073; 
  real_pose.pose.position.y = 0.0947469994426; 
  real_pose.pose.position.z = 0.928996682167;   
  real_pose.pose.orientation.w = 0.487851679325; 
  real_pose.pose.orientation.x = 0.518789410591; 
  real_pose.pose.orientation.y = -0.516209602356; 
  real_pose.pose.orientation.z = 0.47580036521; 
 
  try{
    listener.transformPose("base_link", real_pose, transed_pose);
 
    ROS_INFO("camera_rgb_optical_frame: (%.2f, %.2f, %.2f, %.2f, %.2f, %.2f, %.2f) -----> base_link: (%.4f, %.4f, %.4f, %.4f, %.4f, %.4f, %.4f) at time %.2f",
        real_pose.pose.position.x, real_pose.pose.position.y, real_pose.pose.position.z, real_pose.pose.orientation.x, real_pose.pose.orientation.y, real_pose.pose.orientation.z, real_pose.pose.orientation.w,
        transed_pose.pose.position.x, transed_pose.pose.position.y, transed_pose.pose.position.z, transed_pose.pose.orientation.x, transed_pose.pose.orientation.y, transed_pose.pose.orientation.z, transed_pose.pose.orientation.w, transed_pose.header.stamp.toSec());
  }
  catch(tf::TransformException& ex){
    ROS_ERROR("Received an exception trying to transform a point from \"camera_rgb_optical_frame\" to \"base_link\": %s", ex.what());
  }
}
 
int main(int argc, char** argv){
  ros::init(argc, argv, "tf_listener_Posetransform");
  ros::NodeHandle n;
 
  tf::TransformListener listener(ros::Duration(10));
 
  //we'll transform a point once every second
  ros::Timer timer = n.createTimer(ros::Duration(1.0), boost::bind(&transformPose, boost::ref(listener)));
  ros::Publisher my_publisher_realPose = n.advertise<geometry_msgs::PoseStamped>("real_pose", 1000);
  ros::Publisher my_publisher_transedPose = n.advertise<geometry_msgs::PoseStamped>("transed_pose", 1000);
  ros::Rate loop_rate(10);
  while (ros::ok())
  {
    my_publisher_transedPose.publish(transed_pose);
    my_publisher_realPose.publish(real_pose);
 
    ros::spinOnce();
    loop_rate.sleep();
  }
  return 0;
}

You can also see in the code that I experimented with whether a wrong frame name was what kept the pose from being reached (the test point is the detected object's position). I finally settled on camera_rgb_optical_frame, and the point then landed in the right place.

So the fixed point failed to sit on the can's actual position because real_pose.header.frame_id had to be "camera_rgb_optical_frame" rather than camera_link. The same figure confirms another fact: the transform itself is correct, since in rviz the transformed point transed_pose coincides with real_pose at the same location. With the frame fixed, causes 3(1) and 3(2) are ruled out. The pose can still occasionally jump somewhere abnormal right after the camera starts, but that has become rare.

Run tf_listener.cpp (the code is in the previous post and is not repeated here):

#rosrun robot_setup_tf tf_listener

You can see that the transformed point is published as well (on the topic transformPose).

Note the distinction here: transed_pose and real_pose are the fixed debugging poses published by OnePosetransform.cpp, while transformPose is published by tf_listener, the program that performs the actual tf conversion in the pipeline. If all three points coincide, the calibration chain is correct; a numeric version of this check is sketched after the next command. Run the ORK detection program:

#rosrun object_recognition_core detection -c `rospack find object_recognition_linemod`/conf/detection.ros.ork
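Instead of judging the overlap by eye in rviz, the coincidence can also be checked numerically. A minimal sketch, assuming (as in the previous post) that tf_listener publishes a geometry_msgs/PoseStamped on transformPose; the node name and the 'fixed'/'live' labels are mine:

#!/usr/bin/env python
import math
import rospy
from geometry_msgs.msg import PoseStamped

positions = {}

def make_cb(name):
    def cb(msg):
        positions[name] = msg.pose.position
        if len(positions) == 2:
            a, b = positions['fixed'], positions['live']
            d = math.sqrt((a.x-b.x)**2 + (a.y-b.y)**2 + (a.z-b.z)**2)
            # under roughly 2-3 cm the calibration can be considered consistent
            rospy.loginfo("fixed vs live transformed pose: %.3f m apart", d)
    return cb

if __name__ == '__main__':
    rospy.init_node('coincidence_check')
    rospy.Subscriber('transed_pose', PoseStamped, make_cb('fixed'))
    rospy.Subscriber('transformPose', PoseStamped, make_cb('live'))
    rospy.spin()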

With that, the errors in the tf-transform chain were fully tracked down and fixed, and the correct result appeared.

Next up was possible cause 1: the goal cannot be reached. Feed one transformed value directly into the final Python control program, still via:

#rosrun ur10_rg2_moveit_config new_moveit_controller.py

The designed target pose could not be reached. Comparing the transformed value with the preset fixed value showed that the z position was too small or otherwise wrong, most plausibly because of the camera vibration described in 3(2), so I added offsets:

 
    print "============ Press `Enter` to plan pepsi0"
    raw_input()
    pose_goal.orientation.x = -0.702487814174
    pose_goal.orientation.y = -0.01293060105
    pose_goal.orientation.z = 0.0153163491798
    pose_goal.orientation.w = 0.711413438122
    # hand-tuned offsets compensating for the drifted calibration
    pose_goal.position.x = transed_pose.x+0.05
    pose_goal.position.y = transed_pose.y-0.05   # earlier trials: 0.01, 0.05, 0.0119
    pose_goal.position.z = transed_pose.z+0.438
    group.set_pose_target(pose_goal)
    plan00=group.plan()
    print "============ Press `Enter` to go pepsi0"
    raw_input()
    plan = group.go(wait=True)

Note that transed_pose is already a global variable. During the experiments I used a fixed value (the value inside tf_listener is fixed anyway; writing it out explicitly just avoids unnecessary errors) and adjusted the +/- offsets step by step until the gripper could grasp the cola can.
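For reference, the global is filled in by a subscriber callback of roughly this shape (a sketch: the topic name transformPose matches the previous post, but the exact variable handling in the real listener may differ slightly):

import rospy
from geometry_msgs.msg import PoseStamped

transed_pose = (0, 0, 0)            # replaced by a geometry_msgs Point after the first message
transed_orientation = (0, 0, 0, 0)  # replaced by a Quaternion

def trans_callback(msg):
    global transed_pose, transed_orientation
    transed_pose = msg.pose.position
    transed_orientation = msg.pose.orientation

rospy.Subscriber('transformPose', PoseStamped, trans_callback)

This is also why the control program below can test transed_pose != (0, 0, 0): before the first message the global is still the initial tuple; afterwards it is a message object.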

This confirmed that the failure was not caused by the limitations of the linemod algorithm, and the grasping task was completed. The final Python control program for the arm is:

def go_to_pose_goal(self):
    
    group = self.group
 
    # keep the arm slow for safety while debugging near the real robot
    group.set_max_velocity_scaling_factor(0.06)
    group.set_max_acceleration_scaling_factor(0.01)
    
    pose_goal = geometry_msgs.msg.Pose()
 
    global transed_pose
    global transed_orientation
    # The globals start as plain tuples and are replaced by geometry_msgs
    # Point/Quaternion objects once the subscriber callback fires, so this
    # check detects whether a detection has arrived yet.
    if transed_pose != (0, 0, 0) and transed_orientation != (0, 0, 0, 0):
        print "===========TransPose-completed============="
        rospy.loginfo(rospy.get_caller_id() + ' heard transed_pose %s', transed_pose)
        rospy.loginfo(rospy.get_caller_id() + ' heard transed_orientation %s', transed_orientation)
    else:
        print "===========nothing detected!!!"
 
    print "~~~~~~~~~~~~~~start to compile UR10+RG2~~~~~~~~~~~~~~~~~~~~~~~"
 
    print "============ Press `Enter` to rg2-0"
    raw_input()
    # rospy.sleep(0.5)
    self.request.target_width.data=100.0
    resp1=self.val(self.request)
    # print "============ Press `Enter` to plan button0"
    # raw_input()
    # # rospy.sleep(0.5)
    # q = tf.transformations.quaternion_from_euler(math.radians(-90.0), 0.0, 0.0)
    # pose_goal.orientation.x = q[0]
    # pose_goal.orientation.y = q[1]
    # pose_goal.orientation.z = q[2]
    # pose_goal.orientation.w = q[3]
    # pose_goal.position.x = 0.6
    # pose_goal.position.y = -0.65
    # pose_goal.position.z = 1.40
 
    print "===========this is a test====================="
    # group.set_pose_target(pose_goal)
    # plan00=group.plan()
    # print "============ Press `Enter` to go button0"
    # raw_input()
    # # rospy.sleep(0.5)
    # plan = group.go(wait=True)
 
 
    print "============ Press `Enter` to plan pepsi0"
    raw_input()
    pose_goal.orientation.x = -0.702487814174
    pose_goal.orientation.y = -0.01293060105
    pose_goal.orientation.z = 0.0153163491798
    pose_goal.orientation.w = 0.711413438122
    # hand-tuned offsets (see above) compensating for the drifted calibration
    pose_goal.position.x = transed_pose.x+0.05
    pose_goal.position.y = transed_pose.y-0.05   # earlier trials: 0.01, 0.05, 0.0119
    pose_goal.position.z = transed_pose.z+0.438
    group.set_pose_target(pose_goal)
    plan00=group.plan()
    print "============ Press `Enter` to go pepsi0"
    raw_input()
    plan = group.go(wait=True)
 
    print "============ Press `Enter` to plan pepsi1"
    raw_input()
    pose_goal.orientation.x = -0.702487814174
    pose_goal.orientation.y = -0.01293060105
    pose_goal.orientation.z = 0.0153163491798
    pose_goal.orientation.w = 0.711413438122
    pose_goal.position.x = transed_pose.x+0.13
    pose_goal.position.y = transed_pose.y-0.03   # earlier trials: 0.01, 0.05, 0.0119
    pose_goal.position.z = transed_pose.z+0.399
    group.set_pose_target(pose_goal)
    plan00=group.plan()
    print "============ Press `Enter` to go pepsi1"
    raw_input()
    plan = group.go(wait=True)
    # print "============ Press `Enter` to plan pepsi1"
    # raw_input()
    # pose_goal.orientation.x = -0.536990182186
    # pose_goal.orientation.y = -0.458113016546
    # pose_goal.orientation.z = 0.469381574743
    # pose_goal.orientation.w = 0.530523275265
    # pose_goal.position.x = 1.05278668153
    # # 0.01 0.05 0.01
    # pose_goal.position.y = 0.0549727717948
    # pose_goal.position.z = 0.281837835693
    # group.set_pose_target(pose_goal)
    # plan01=group.plan()
    # print "============ Press `Enter` to go pepsi1"
    # raw_input()
    # plan = group.go(wait=True)
 
    # print "============ Press `Enter` to plan1"
    # # pose_goal.orientation.x = -0.591549509239
    # # pose_goal.orientation.y = 0.423087935108
    # # pose_goal.orientation.z = 0.369947290233
    # # pose_goal.orientation.w = 0.578104471298
    # # pose_goal.position.x = 1.17165389661
    # # pose_goal.position.y = 0.128525502785
    # # pose_goal.position.z = 0.341104864008
 
    # pose_goal.orientation.x = -0.697785749856
    # pose_goal.orientation.y = 0.130047030626
    # pose_goal.orientation.z = 0.06811187155
    # pose_goal.orientation.w = 0.701101697385
    # pose_goal.position.x = 1.30709340958
    # pose_goal.position.y = 0.121683281188
    # pose_goal.position.z = 0.194759575987
    # group.set_pose_target(pose_goal)
    # plan1=group.plan()
    # print "============ Press `Enter` to go1"
    # raw_input()
    # plan = group.go(wait=True)
 
 
 
 
    # print "============ Press `Enter` to plan2"
    # raw_input()
    # # pose_goal.orientation.x = -0.691356864871
    # # pose_goal.orientation.y = 0.17258390508
    # # pose_goal.orientation.z = 0.113781808587
    # # pose_goal.orientation.w = 0.692310754747
    # # pose_goal.position.x = 1.18815572137
    # # pose_goal.position.y = 0.128383418654 
    # # pose_goal.position.z = 0.178566985603
    # pose_goal.orientation.x = -0.701759134111
    # pose_goal.orientation.y = 0.0816647083548
    # pose_goal.orientation.z = 0.0196895921354
    # pose_goal.orientation.w = 0.707444211979
    # pose_goal.position.x = 1.46766476156
    # pose_goal.position.y = 0.118317513539
    # pose_goal.position.z = 0.194995276671
    # group.set_pose_target(pose_goal)
    # plan2=group.plan()
    # print "============ Press `Enter` to go2"
    # raw_input()
    # plan = group.go(wait=True)
 
 
    print "============ Press `Enter` to rg2"
    raw_input()
    # rospy.sleep(0.5)
    self.request.target_width.data=73.0
    resp1=self.val(self.request)
 
 
 
    print "============ Press `Enter` to plan pepsi2"
    raw_input()
    pose_goal.orientation.x = -0.702080879273
    pose_goal.orientation.y = 0.0230291512084
    pose_goal.orientation.z = -0.0353026755348
    pose_goal.orientation.w = 0.710848660584
    pose_goal.position.x = 0.979028034434
    pose_goal.position.y = 0.131569260388
    pose_goal.position.z = 1.10972751302
    group.set_pose_target(pose_goal)
    plan3=group.plan()
    print "============ Press `Enter` to go pepsi2"
    raw_input()
    plan = group.go(wait=True)
 
    print "============ Press `Enter` to plan Tochair0"
    raw_input()
    pose_goal.orientation.x = -0.484546258297
    pose_goal.orientation.y = -0.508480168279
    pose_goal.orientation.z = 0.525418382276
    pose_goal.orientation.w = 0.480206586381
    pose_goal.position.x = 0.999753371747
    pose_goal.position.y = 0.534242702225+0.01
    pose_goal.position.z = 0.599301582776-0.01
    group.set_pose_target(pose_goal)
    plan3=group.plan()
    print "============ Press `Enter` to go Tochair0"
    raw_input()
    plan = group.go(wait=True)
 
    print "============ Press `Enter` to Tochair1"
    raw_input()
    # rospy.sleep(0.5)
    self.request.target_width.data=100.0
    resp1=self.val(self.request)
 
    print "============ Press `Enter` to plan Tochair2"
    raw_input()
    pose_goal.orientation.x = -0.484546258297
    pose_goal.orientation.y = -0.508480168279
    pose_goal.orientation.z = 0.525418382276
    pose_goal.orientation.w = 0.480206586381
    pose_goal.position.x = 0.999753371747
    pose_goal.position.y = 0.534242702225-0.1
    pose_goal.position.z = 0.599301582776-0.015
    group.set_pose_target(pose_goal)
    plan3=group.plan()
    print "============ Press `Enter` to go Tochair2"
    raw_input()
    plan = group.go(wait=True)
    print "============ Press `Enter` to plan Tochair3"
    raw_input()
    pose_goal.orientation.x = 6.79752193197e-05
    pose_goal.orientation.y = 6.79780791568e-05
    pose_goal.orientation.z = 0.707089856963
    pose_goal.orientation.w = 0.707123698471
    pose_goal.position.x = 0.125088661163
    pose_goal.position.y = 0.451141001514
    pose_goal.position.z = 1.90037518571
    group.set_pose_target(pose_goal)
    plan3=group.plan()
    print "============ Press `Enter` to go Tochair3"
    raw_input()
    plan = group.go(wait=True)
 
    # print "============ Press `Enter` to plan button1"
    # raw_input()
    # # rospy.sleep(0.5)
    # q = tf.transformations.quaternion_from_euler(math.radians(-90.0), 0.0, 0.0)
    # pose_goal.orientation.x = q[0]
    # pose_goal.orientation.y = q[1]
    # pose_goal.orientation.z = q[2]
    # pose_goal.orientation.w = q[3]
    # pose_goal.position.x = 0.65
    # pose_goal.position.y = -0.65
    # pose_goal.position.z = 1.40
    # group.set_pose_target(pose_goal)
    # plan00=group.plan()
    # print "============ Press `Enter` to go button1"
    # raw_input()
    # # rospy.sleep(0.5)
    # plan = group.go(wait=True)
 
 
 
    # print "============ Press `Enter` to plan button2"
    # raw_input()
    # # rospy.sleep(0.5)
    # q = tf.transformations.quaternion_from_euler(math.radians(-90.0), 0.0, 0.0)
    # pose_goal.orientation.x = q[0]
    # pose_goal.orientation.y = q[1]
    # pose_goal.orientation.z = q[2]
    # pose_goal.orientation.w = q[3]
    # pose_goal.position.x = 0.6
    # pose_goal.position.y = -0.65
    # pose_goal.position.z = 1.40
    # group.set_pose_target(pose_goal)
    # plan00=group.plan()
    # print "============ Press `Enter` to go button2"
    # raw_input()
    # # rospy.sleep(0.5)
    # plan = group.go(wait=True)
 
 
 
    group.stop()
    group.clear_pose_targets()
 
    
    current_pose = self.group.get_current_pose().pose
    print self.group.get_current_pose()
 
 
    return all_close(pose_goal, current_pose, 0.01)
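Since the plan/confirm/execute pattern repeats for every waypoint, the program could be shortened with a small helper; a sketch (the function name is mine, not part of the original program):

def plan_and_go(group, pose_goal, label):
    # Prompt, plan, prompt again, then execute a single pose target.
    print "============ Press `Enter` to plan %s" % label
    raw_input()
    group.set_pose_target(pose_goal)
    group.plan()
    print "============ Press `Enter` to go %s" % label
    raw_input()
    return group.go(wait=True)

Each waypoint block above then collapses to building a Pose and calling plan_and_go(group, pose_goal, "pepsi0").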


The test video:

https://pan.baidu.com/s/1H4Ul8tlpuHkyjuyL1qttjQ

It really should be on YouTube; I will upload it there later.

The run sequence for the whole pipeline:

#roslaunch realsense_camera zr300_nodelet_rgbd.launch 
#roslaunch ur_control ur10_control.launch robot_ip:=192.168.2.24
#roslaunch easy_handeye static_transform_publisher.launch 
#rosrun object_recognition_core detection -c `rospack find object_recognition_linemod`/conf/detection.ros.ork
(#rosrun rqt_reconfigure rqt_reconfigure)
#roslaunch ur10_rg2_moveit_config demo.launch
 
#rosrun robot_setup_tf tf_broadcaster
#rosrun robot_setup_tf tf_listener 
(#rosrun ur10_rg2_moveit_config my_pose_print.py - prints the current gripper pose)
(#rosrun robot_setup_tf tf_listener_OnePosetransform - checks whether a single transformed point coincides; can be viewed in rviz)
(#rosrun ur10_rg2_moveit_config listener.py - prints the detected object position)
Robot control:
#rosrun ur10_rg2_moveit_config moveit_controller.py
//with constraints added:
#rosrun ur10_rg2_moveit_config new_moveit_controller.py


That completes the summary of the whole project: three months in all, built up from zero. Next I will consolidate these programs and shift my focus to algorithmic innovation, with an eye on publishing a paper. Onward! ↖(^ω^)↗

--------------------------------------------------Update 2019.7.19--------------------------------------------------------------

For the past six months I have been preparing a paper and studying for IELTS, so I have not logged in to CSDN to reply. Hoping the 8.24 IELTS goes well; keep at it. Once IELTS and my applications are done, I will summarize the work from the first two and a half months of this year: 3D object recognition, 6D pose estimation, and detection of overlapping objects.

Also, many people have reached me by email, and I have answered there. For questions, please email: 1701210376@pku.edu.cn. I still will not be replying on CSDN for the time being.

-------------------------------------------------------------------------------Update 2020.3.2-------------------------------------------------------------

Questions can go to the email above. The final paper, code link, and implementation steps are given in the concluding post of this series.

The code for the robot-control part of the system:

Link: https://pan.baidu.com/s/1c-c087wvqdmnfyIizfd-AA

Extraction code: js1q
