Dynamic Path Tracking of Industrial Robots With High Accuracy Using Photogrammetry Sensor

Tingting Shu, Sepehr Gharaaty, WenFang Xie, Senior Member, IEEE, Ahmed Joubair, and Ilian A. Bonev, Senior Member, IEEE
Abstract—In this paper, a practical dynamic path tracking (DPT) scheme for industrial robots is presented. The DPT scheme is a position-based visual servoing approach that realizes three-dimensional dynamic path tracking by correcting the robot movement in real time. In the traditional task-implementation mode for industrial robots, task planning and implementation are taught manually, and hence the task accuracy largely depends on the repeatability of the robot. The proposed DPT scheme can execute preplanned tasks automatically and improve the tracking accuracy with eye-to-hand photogrammetry measurement feedback. Moreover, an adaptive Kalman filter is proposed to obtain smooth pose estimates and to reduce the influence of image noise, vibration, and other uncertain disturbances. Owing to the high repeatability of the photogrammetry sensor, the proposed DPT scheme can achieve high path tracking accuracy. The developed DPT scheme can be seamlessly integrated with the industrial robot controller and improves the robot's accuracy without retrofitting the joints with high-end encoders. Using the C-track 780 from Creaform as the photogrammetry sensor, experimental tests on a Fanuc M20-iA with the developed DPT scheme demonstrate that the tracking accuracy is significantly improved (±0.20 mm for position and ±0.10 deg for orientation).
Index Terms—Dynamic path tracking (DPT), industrial robots, robot accuracy, visual servoing.
I. INTRODUCTION
In industrial manufacturing, many tasks, such as cutting, milling, and lathing, are expected to be performed automatically by robots. According to the standard
process specifications in the aerospace industry (e.g., Airbus [1]), the desired accuracy of robot manipulation for manufacturing is around ±0.20 mm. However, due to mechanical tolerances and deflection in the robot structure, the typical difference between a virtual robot in simulation and a real robot can be 8–15 mm [2], which is inadequate for the precision requirements of many potential applications. The relatively low accuracy of current robots is therefore the main obstacle in industrial manufacturing applications. In particular, it hinders the use of advanced task planning techniques that integrate offline simulation and computer-aided-design-based methods.
In the past decades, many researchers have tried to improve the accuracy of industrial robots. One direct method is to install secondary high-accuracy encoders at each robot joint [3], [4]. Since the encoder installation must be customized for each robot and high-accuracy encoders tend to be expensive, this solution is costly and sometimes infeasible for all the joints. The other solutions may be classified into two main categories: 1) static calibration [5]–[7]; and 2) dynamic position or pose (i.e., position and orientation) adjustment [8]–[10]. Static calibration is mostly based on kinematic models; sometimes a partial dynamic model, such as joint or link compliance or backlash, may be involved as well. Because static calibration considers only time-invariant factors, it can increase the robot accuracy to a limited extent under static or quasi-static conditions. In robot applications where high-precision tracking of various trajectories is demanded, dynamic continuous strategies for enhancing real-time tracking accuracy become necessary. Visual servoing is the main approach to meet this demand.
Visual servoing for industrial robots is based on visual measurement feedback of the end-effector or reference objects. It is a multidisciplinary research topic involving computer vision, robot kinematics, nonlinear dynamics, control theory, and real-time systems. According to the features extracted as controller feedback, visual servoing can be classified into three categories:
1) position-based visual servoing (PBVS) [8], [11];
2) image-based visual servoing [12];
3) hybrid visual servoing [13].
According to the location of the vision system, the configuration of visual servoing is classified as either eye-to-hand or eye-in-hand. In PBVS, the actual pose of the related object in Cartesian space is estimated from the feature points on the image plane. Correspondingly, the control input for the
robot is based on the error between the current pose and the desired pose of the end-effector. In this paper, since dynamic path tracking (DPT) aims at correcting the end-effector's pose in industrial applications by using a photogrammetry sensor, an eye-to-hand PBVS scheme is adopted.
In PBVS, the pose measurements are computed from the current image information of the feature points, which tends to include distortion, blur, and other uncertain noise. In recent decades, the Kalman filter has been widely applied in robot vision for optimal or suboptimal estimation [14], [15], and has started to be used in coordinate measuring machines (CMMs) [16]. Normally, a linear dynamic process model is required, and a priori statistical information about the process and measurement noise is assumed to be known. However, the mechanism of an industrial robot is highly nonlinear, and the pose estimation from image information captured by eye-to-hand cameras is a nonlinear computation. In order to apply the Kalman filter to estimate the pose of industrial robots, an extended Kalman filter (EKF) has been employed by many researchers [8], [17]. A comprehensive comparison of various Kalman filter schemes for vision systems is carried out in [17]. However, the optimality of the EKF depends on the accuracy of the dynamic process model, which is hard to obtain. Recently, the adaptive Kalman filter (AKF) has attracted increasing attention from researchers attempting to overcome these problems and to adapt to complicated situations without knowing the exact system model and precise a priori information about the noise [14], [18]. For industrial robots, the process and measurement noise are time variant because the robot moves at different speeds, in various configurations, and along different trajectories. The Kalman filter method for metrology tools with application to CMMs in [16] does not consider this time-varying effect. In this paper, an effective AKF is proposed to achieve smooth pose estimation, considering both the noise characteristics of the photogrammetry sensor and the vibration during robot movement. The proposed AKF adjusts the measurement-noise covariance according to the current velocity of the industrial robot and the process-noise covariance by using an adaptive method.
A considerable amount of research has been dedicated to using visual information to improve robot accuracy. For example, an off-line compensation method is presented in [19], which measures the sources of robot error and applies them to predict the reference path off-line by using a Nikon K-Series optical CMM. Although it is reported that the robot accuracy can be increased to within 0.2 mm, this method needs experts to carry out extensive tests to predict the reference path for each different path and robot. An on-line compensation method that does not need off-line reference path prediction is therefore desirable. In [20], a feed-forward torque control cascaded with an iterative learning control (ILC) is proposed to enhance the path accuracy of industrial robots. The experimental results on a KUKA KR 6-2 are provided only as position errors, and the maximum error on a single axis can be up to 2.0 mm, which clearly cannot satisfy the aforementioned accuracy requirement for manufacturing in the aerospace industry. Moreover, both position and orientation accuracy, i.e., pose accuracy, are sought after by industrial manufacturers. In [21], an ILC combined with a PD feedback control is applied to improve the tracking performance of an industrial robot. The accuracy is demonstrated in joint space, and the maximum error of the joint angle is larger than 0.1 deg. In addition, pose accuracy is not equivalent to joint-angle accuracy; the two are related through the kinematic model, which contains both geometric and nongeometric errors.

Fig. 1. Definition and relation of the coordinate reference frames in the workspace.
In this paper, motivated by the industrial need for high accuracy, we propose the DPT scheme to improve the accuracy of industrial robots on-line, up to 0.2 mm in distance error and 0.05 deg in orientation error, and to adapt to different paths automatically. The DPT scheme can be seamlessly integrated with the robot controller and a low-cost CMM photogrammetry sensor, i.e., the C-track 780 from Creaform. The stability of the DPT scheme is proved by using Lyapunov stability theory. The successful application on an industrial Fanuc M20-iA using the C-track 780 as the visual measurement sensor demonstrates the effectiveness of the proposed DPT scheme.
The rest of this paper is organized as follows. In Section II, the workspace description and problem statement are provided. Section III presents the complete process of obtaining the pose estimate of the end-effector from the dual-camera image information. Then, Section IV describes an adaptive Kalman filter that reduces the noise in the pose estimate. Section V introduces the basic control configuration of the DPT scheme, the tracking control-law design, and the stability analysis. Simulation results on a PUMA 560 are provided to prove the robustness of the DPT scheme. The experimental results on the Fanuc M20-iA are presented in Section VI. Finally, concluding remarks and future work are summarized in Section VII.
II. WORKSPACE DESCRIPTION AND PROBLEM STATEMENT
A. Workspace Description
Consider a 6-DOF industrial robot with an end-effector as in Fig. 1. Generally, the default tool frame is defined at the center of the flange of Joint 6, P0. A tool frame F_E with origin at the tool center point (TCP) can be defined by offsetting the default tool frame to the TCP with the known relative position information. Assume that the end-effector can reach anywhere in the reachable workspace W_R of the robot. In the workspace, a user frame F_UR for the robot can be defined with respect to the base frame F_B.
On another note, when a task is given, the task space W_T (W_T ⊂ W_R) can be defined. In the task space, a user frame
F_UT for the task can be defined. Correspondingly, the path planned for this task can be represented in F_UT. The task path is used as the desired trajectory that the TCP should follow precisely.
Moreover, in terms of the three-dimensional (3-D) visual measurement instrument (VMI), there is a default sensor frame F_S whose origin is at the center of the VMI. The user frame F_UT can be calibrated by using the VMI so that it is represented in F_S. In order to locate the TCP during any movement, the end-effector should be in the field-of-view (FOV) of the VMI. The pose of the TCP in F_S is represented by the pose of the tool frame F_E in F_S. Therefore, the homogeneous transformation matrix ^{UT}_{S}H from F_UT to F_S and ^{S}_{E}H from F_S to F_E at any time can be obtained. The homogeneous transformation matrix ^{UT}_{E}H from F_UT to F_E can be obtained as follows:

$$^{UT}_{E}H = {}^{UT}_{S}H \, {}^{S}_{E}H. \tag{1}$$
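To make the frame chaining in (1) concrete, here is a minimal sketch of composing 4 × 4 homogeneous transforms; the rotations and translations below are illustrative placeholders, not calibrated values from the paper's setup.

```python
import numpy as np

def homogeneous(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = t
    return H

# Hypothetical example values standing in for calibrated/measured transforms.
H_UT_S = homogeneous(np.eye(3), np.array([100.0, 0.0, 50.0]))    # ^{UT}_S H
H_S_E  = homogeneous(np.eye(3), np.array([0.0, -20.0, 300.0]))   # ^S_E H

# Eq. (1): chain the two transforms to get ^{UT}_E H.
H_UT_E = H_UT_S @ H_S_E
print(H_UT_E)
```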
In order to set up a relation between F_UR and F_UT, another equivalent user frame F_UEq is introduced. Ideally, there is only translation and no rotation between F_UEq and F_UR in the workspace. The main difference is that F_UR is represented in F_B while F_UEq is represented in F_S. F_UEq in F_S can be defined by using three reference points. The definition and relation of the abovementioned coordinate frames are illustrated in Fig. 1.
B. Problem Statement
When industrial robots are designated for cutting, painting, and fiber placement in the aerospace industry, the corresponding finite task path with a start point and an end point should be planned in advance. The robots are expected to move from the start point to the end point by strictly following the task path with high accuracy.
Traditionally, the task path is planned in the robot user frame F_UR by using the teach pendant to record the point nodes or by inputting the point nodes manually. Between the position nodes, the path is produced by one of several interpolation methods available in the robot controller. In this paper, the task path defined in the task user frame F_UT is continuously differentiable and independent of the robot controller, so that the task path can be preplanned in the task space W_T. Any task path P in F_UT can be described as follows:

$$p_x = x(\eta), \quad p_y = y(\eta), \quad p_z = z(\eta)$$
$$p_\gamma = \gamma(\eta), \quad p_\beta = \beta(\eta), \quad p_\alpha = \alpha(\eta) \tag{2}$$

where η ∈ [0, 1] is a normalized variable. η = 0 corresponds to the start point of P while η = 1 corresponds to the end point of P. For 0 < η < 1, the continuous position (p_x, p_y, p_z) forms the geometric profile of P while (p_γ, p_β, p_α) is the orientation of F_E in F_UT. P defined in (2) serves as the desired path. In order to carry out the task, the control objective is to control the TCP to track (p_x, p_y, p_z) along the desired path with (p_γ, p_β, p_α) satisfying the error tolerance.
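As an illustration of the parameterization (2), the following sketch defines a hypothetical task path: a straight line between two made-up endpoints with a fixed orientation, expressed as a function of the normalized variable η.

```python
import numpy as np

def task_path(eta: float) -> np.ndarray:
    """Desired pose (px, py, pz, p_gamma, p_beta, p_alpha) at eta in [0, 1].

    Hypothetical example: a straight line from A to B with fixed orientation.
    """
    A = np.array([300.0, -100.0, 400.0])      # start position (mm), made-up
    B = np.array([500.0,  150.0, 400.0])      # end position (mm), made-up
    position = A + eta * (B - A)
    orientation = np.array([0.0, 90.0, 0.0])  # (gamma, beta, alpha) in deg, fixed
    return np.concatenate([position, orientation])

print(task_path(0.0))   # start point, eta = 0
print(task_path(0.5))   # midpoint
print(task_path(1.0))   # end point, eta = 1
```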
III. POSE ESTIMATION OF THE END-EFFECTOR
The end-effector pose estimation is the localization of the tool frame F_E with respect to the sensor frame F_S. This is achieved by first mapping F_E in F_S and then projecting the feature points onto the image plane. Since a dual-camera sensor, the C-track 780, is employed as the visual tracking instrument, the pose estimation principle for binocular vision is presented in this section. The C-track 780 provides continuous image acquisition and transmission in real time. Moreover, it can track a reference model consisting of a rigid set of reflectors, which serve as the feature points. It is assumed that there are n feature points, n > 3, on the rigid end-effector, and the homogeneous coordinates of each feature point in the sensor frame are denoted as ^S P_i = (x_i, y_i, z_i, 1), i = 1...n, while the projection coordinates of each feature point on the image plane of the jth camera are represented as ^C P_{ij} = (u_{ij}, v_{ij}, 1), i = 1...n and j = 1, 2, where j is the index of the dual cameras. The perspective projection can be given as follows [22]:

$$^C P_{ij} = {}^C X_j \, {}^S P_i, \qquad {}^C X_j = B_j \, {}^C_S H_j \tag{3}$$

where j = 1, 2; ^C X_j is the projection matrix of each camera; B_j is the camera matrix, integrating the intrinsic parameters of the jth camera; and ^C_S H_j is the homogeneous transformation matrix from the sensor frame F_S to the jth camera frame. When the dual-camera sensor is calibrated, B_j and ^C_S H_j are known. Therefore, when ^C P_{ij} is obtained in the image plane, ^S P_i can be computed.
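The projection in (3) can be sketched as follows; the camera matrix B_j and the sensor-to-camera transform ^C_S H_j below are made-up stand-ins for the values a real calibration of the dual-camera sensor would provide.

```python
import numpy as np

# Hypothetical intrinsics (3x4) and extrinsics (4x4); real values come from calibration.
B_j = np.array([[800.0,   0.0, 320.0, 0.0],
                [  0.0, 800.0, 240.0, 0.0],
                [  0.0,   0.0,   1.0, 0.0]])
H_j = np.eye(4)                        # ^C_S H_j: sensor frame -> camera frame

X_j = B_j @ H_j                        # ^C X_j, the 3x4 projection matrix in (3)

P_i = np.array([100.0, -50.0, 1500.0, 1.0])  # ^S P_i, homogeneous point in F_S
p = X_j @ P_i
u, v = p[0] / p[2], p[1] / p[2]        # normalize to pixel coordinates (u_ij, v_ij)
print(u, v)
```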
Ideally, ^S P_i computed from each camera should be the same. However, due to distortion, calibration errors, and other noise, there is a difference between the two results for ^S P_i from the dual cameras. Triangulation is the main way to reconcile this difference [23]. In order to ensure that a matching pair of points, ^C P_{i1} and ^C P_{i2}, meet in space, the following constraint should be satisfied:

$$^C P_{i1}^{T} \, G \, {}^C P_{i2} = 0 \tag{4}$$

where G is the fundamental matrix, which can be computed when the two camera projection matrices, ^C X_1 and ^C X_2, are given. Due to the uncertainty of image processing, (4) may not be satisfied exactly. According to the optimal correction principle of Kanatani [24], the objective function is

$$\min_{\hat{{}^C P}_{i1}^{T} G \, \hat{{}^C P}_{i2} = 0} \Big[ d\big({}^C P_{i1}, \hat{{}^C P}_{i1}\big) + d\big({}^C P_{i2}, \hat{{}^C P}_{i2}\big) \Big] \tag{5}$$

where min(·) represents the minimization subject to the constraint \hat{^C P}_{i1}^T G \hat{^C P}_{i2} = 0, d(·,·) denotes the Euclidean distance, and \hat{^C P}_{i1} and \hat{^C P}_{i2} are the estimates of ^C P_{i1} and ^C P_{i2}, respectively. As a result, \hat{^C P}_{i1} and \hat{^C P}_{i2} can be obtained by the following formulas:

$$\hat{{}^C P}_{i1} = {}^C P_{i1} - \frac{\big({}^C P_{i1},\, G\,{}^C P_{i2}\big)\, V G\,{}^C P_{i2}}{\omega}$$
$$\hat{{}^C P}_{i2} = {}^C P_{i2} - \frac{\big({}^C P_{i1},\, G\,{}^C P_{i2}\big)\, V G^{T}\,{}^C P_{i1}}{\omega}$$
$$\omega = \big(G\,{}^C P_{i2},\, V G\,{}^C P_{i2}\big) + \big(G^{T}{}^C P_{i1},\, V G^{T}{}^C P_{i1}\big) \tag{6}$$

where, in this paper, the inner product of two vectors a and b is denoted as (a, b), and the projection matrix V = diag(1, 1, 0). Then, by (3), the coordinates of the ith feature point in the sensor frame, ^S P_i, can be obtained.
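A sketch of the correction formulas in (6), assuming the fundamental matrix G is already available from the calibrated projection matrices; the function applies one first-order correction step toward the epipolar constraint (4).

```python
import numpy as np

def optimal_correction(p1: np.ndarray, p2: np.ndarray, G: np.ndarray):
    """Kanatani-style first-order correction moving (p1, p2) toward p1^T G p2 = 0.

    p1, p2: homogeneous image points (u, v, 1); G: 3x3 fundamental matrix.
    """
    V = np.diag([1.0, 1.0, 0.0])   # projects out the homogeneous coordinate
    e = p1 @ G @ p2                # epipolar residual (^C P_i1, G ^C P_i2)
    omega = (G @ p2) @ (V @ G @ p2) + (G.T @ p1) @ (V @ G.T @ p1)
    p1_hat = p1 - e * (V @ G @ p2) / omega
    p2_hat = p2 - e * (V @ G.T @ p1) / omega
    return p1_hat, p2_hat
```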
After the position information of all the feature points on the end-effector is prepared, the pose estimation of the end-effector can be developed. Suppose the n feature points on the rigid end-effector are fixed and known from the definition of the tool frame F_E, and their homogeneous coordinates are denoted as ^E P_i = (^E x_i, ^E y_i, ^E z_i, 1). The current pose of F_E in F_S is denoted as (x_c, y_c, z_c, γ_c, β_c, α_c), where (x_c, y_c, z_c) represents the origin position of F_E in F_S, while the orientation (γ_c, β_c, α_c) represents the Euler-angle rotation of F_E in F_S. With the pose (x_c, y_c, z_c, γ_c, β_c, α_c) of F_E in F_S, the homogeneous transformation matrix ^S_E H from F_E to F_S can be formulated as follows:

$$^S_E H = \begin{bmatrix} R(\gamma_c, \beta_c, \alpha_c) & (x_c, y_c, z_c)^{T} \\ 0\;\;0\;\;0 & 1 \end{bmatrix} \tag{7}$$

where R(γ_c, β_c, α_c) is the rotation matrix from F_E to F_S. Correspondingly, the transformation equation of the ith feature point can be written as

$$^S P_i = {}^S_E H \, {}^E P_i \tag{8}$$

which can be unfolded into three nonlinear equations with six unknown variables (x_c, y_c, z_c, γ_c, β_c, α_c) as follows:

$$x_i = x_c + C\gamma_c C\beta_c\,{}^E x_i + (C\gamma_c S\beta_c S\alpha_c - S\gamma_c C\alpha_c)\,{}^E y_i + (C\gamma_c S\beta_c C\alpha_c + S\gamma_c S\alpha_c)\,{}^E z_i$$
$$y_i = y_c + S\gamma_c C\beta_c\,{}^E x_i + (S\gamma_c S\beta_c S\alpha_c + C\gamma_c C\alpha_c)\,{}^E y_i + (S\gamma_c S\beta_c C\alpha_c - C\gamma_c S\alpha_c)\,{}^E z_i$$
$$z_i = z_c - S\beta_c\,{}^E x_i + C\beta_c S\alpha_c\,{}^E y_i + C\beta_c C\alpha_c\,{}^E z_i \tag{9}$$

where, in this paper, Ca = cos(a) and Sa = sin(a). In order to solve (9) for (x_c, y_c, z_c, γ_c, β_c, α_c), at least three noncollinear feature points are required [8]. However, as indicated in [25], at least four coplanar feature points are necessary for a unique solution, while additional noncoplanar feature points can be used to improve the estimation accuracy under measurement noise. Since the number of feature points on the end-effector, n, is more than three, (x_c, y_c, z_c, γ_c, β_c, α_c) can be determined uniquely. By using the proprietary software VXelements provided by Creaform, the end-effector is defined as a tracking model built from selected reflectors on the surface of the end-effector. The position and orientation of the tracking model with respect to the sensor frame can be acquired, recorded, or displayed simultaneously. Therefore, the computation to obtain the pose of the end-effector is carried out by VXelements.
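In the paper's setup this computation is performed by VXelements. As an illustration of the underlying estimation problem, the sketch below recovers a rigid pose from point correspondences with the standard SVD-based (Kabsch) least-squares method, one common alternative to solving (9) iteratively; it is not the proprietary algorithm.

```python
import numpy as np

def pose_from_points(P_E: np.ndarray, P_S: np.ndarray) -> np.ndarray:
    """Estimate ^S_E H from n >= 3 noncollinear correspondences.

    P_E: n x 3 feature points in the tool frame F_E.
    P_S: n x 3 measured points in the sensor frame F_S.
    Returns a 4x4 homogeneous transform (least-squares, Kabsch method).
    """
    cE, cS = P_E.mean(axis=0), P_S.mean(axis=0)
    U, _, Vt = np.linalg.svd((P_E - cE).T @ (P_S - cS))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard reflections
    R = Vt.T @ D @ U.T
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = cS - R @ cE
    return H
```

The Euler angles (γ_c, β_c, α_c) of (7) can then be extracted from the resulting rotation matrix R if needed.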
IV. ADAPTIVE KALMAN FILTER FOR SMOOTHING THE ESTIMATED POSE OF THE END-EFFECTOR
Noise is inevitable in the image information from the 3-D VMI. Moreover, the movement of the end-effector is highly likely to cause vibration, blur, and distortion in the images. Therefore, in this section, an adaptive Kalman filter is presented to smooth the estimated pose data of the end-effector.
The sampling interval of the VMI is denoted as T_s. For convenience, k instead of kT_s represents the current time instant and k−1 is the previous time instant. Suppose that the current state vector of the industrial robot is represented as

$$\rho_{k,k} = \big(x(k), y(k), z(k), \gamma(k), \beta(k), \alpha(k), \dot{x}(k), \dot{y}(k), \dot{z}(k), \dot{\gamma}(k), \dot{\beta}(k), \dot{\alpha}(k)\big)^{T} \tag{10}$$

i.e., the pose and velocity of the end-effector at the current time instant. The current measurement vector obtained from Section III is denoted as o_k = (x_c, y_c, z_c, γ_c, β_c, α_c)^T. Then, the adaptive Kalman filter method is given by the following recursive equations.
First, the prediction equations are given as follows:

$$\hat{\rho}_{k,k-1} = A \hat{\rho}_{k-1,k-1}$$
$$W_{k,k-1} = A W_{k-1,k-1} A^{T} + Q_{k-1} \tag{11}$$

where A is the 12 × 12 state transition matrix, which is applied to the previous state \hat{ρ}_{k−1,k−1} to obtain the current predicted state \hat{ρ}_{k,k−1}; W_{k,k−1} is the current prediction of the error covariance matrix, which is a measure of the accuracy of the state estimate, while W_{k−1,k−1} is the previous error covariance matrix; and Q_{k−1} is the process noise covariance computed from the information at time instant k−1. All the diagonal elements of A are 1, and A_{i,i+6} (i = 1...6) are equal to the sampling interval T_s, since these entries propagate the velocity states into the pose states over one sampling interval. \hat{ρ}_{0,0} and W_{0,0} are initialized by using static samples taken before the path tracking movement. Second, the Kalman filter gain D_k is computed as follows:

$$D_k = W_{k,k-1} \big(W_{k,k-1} + \Omega_{k-1}\big)^{-1} \tag{12}$$

where Ω_{k−1} is the previous measurement noise covariance. Third, the estimation update is formulated as follows:

$$\hat{\rho}_{k,k} = \hat{\rho}_{k,k-1} + D_k \big(o_k - \hat{\rho}_{k,k-1}\big)$$
$$W_{k,k} = W_{k,k-1} - D_k W_{k,k-1} \tag{13}$$

where \hat{ρ}_{k,k} is produced as the optimal pose estimate of the end-effector at time instant k.
The proper selection and updating of Q_k and Ω_k are critical for the accuracy of the filtered pose. Q_k and Ω_k are symmetric positive definite matrices. When the robot is stationary, Ω_k can easily be obtained from the root-mean-square error of static measurements from the VMI. However, Ω_k is a time-varying matrix when the robot is moving, because higher velocities aggravate the delay, blur, vibration, and other uncertain factors.
A new effective method is proposed to update the matrix Ω_k based on the velocity of the TCP as follows:

$$\Delta\Omega_k = \operatorname{diag}\big[\mu_1 |\dot{x}(k)|,\, \mu_2 |\dot{y}(k)|,\, \mu_3 |\dot{z}(k)|,\, \mu_4 |\dot{\gamma}(k)|,\, \mu_5 |\dot{\beta}(k)|,\, \mu_6 |\dot{\alpha}(k)|\big], \qquad \Omega_k = \Omega_0 + \Delta\Omega_k \tag{14}$$

where diag[·] represents a diagonal matrix whose elements are listed in the brackets (the absolute values of the velocities keep ΔΩ_k positive semidefinite); Ω_0 is the measurement noise covariance matrix when the TCP is stationary at the start point before the motion begins; and μ_i, i = 1,...,6, are constant weights determining the influence of the velocity, estimated by analyzing stationary measurements of the C-track in the workspace.
Fig. 2. Basic configuration of dynamic path tracking control.
An adaptive recursive method is presented to optimize Q_k so as to compensate for the prediction error and uncertain dynamic disturbances [18], [26]. The prediction error \hat{Δ}_k can be estimated adaptively as follows:

$$\hat{\Delta}_k = \hat{\rho}_{k,k} - A \hat{\rho}_{k-1,k-1}. \tag{15}$$

Additionally, let ΔW_k = A W_{k−1,k−1} A^T − W_{k,k}. Then, Q_k can be computed recursively as follows:

$$\bar{\Delta}_k = \bar{\Delta}_{k-1} + \frac{1}{N}\big(\hat{\Delta}_k - \hat{\Delta}_{k-N}\big)$$
$$Q_k = Q_{k-1} + \frac{1}{N-1}\Big(\big(\hat{\Delta}_k - \bar{\Delta}_k\big)\big(\hat{\Delta}_k - \bar{\Delta}_k\big)^{T} - \big(\hat{\Delta}_{k-N} - \bar{\Delta}_k\big)\big(\hat{\Delta}_{k-N} - \bar{\Delta}_k\big)^{T}\Big) + \frac{1}{N(N-1)}\big(\hat{\Delta}_k - \hat{\Delta}_{k-N}\big)\big(\hat{\Delta}_k - \hat{\Delta}_{k-N}\big)^{T} + \frac{1}{N}\big(\Delta W_{k-N} - \Delta W_k\big) \tag{16}$$

where N is the length of the past-measurement memory used for updating Q_k. Moreover, in order to ensure that Q_k is positive definite, the diagonal elements are reset to their absolute values.
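The filter loop of (11)–(14) can be sketched as follows, with the state ordered as in (10). For dimensional consistency the sketch introduces an explicit 6 × 12 measurement matrix C selecting the pose states, which the notation above leaves implicit; all covariances, weights, and the sampling interval are placeholder values, and the recursive Q_k update (16) is omitted for brevity.

```python
import numpy as np

class AdaptiveKF:
    """Pose/velocity Kalman filter with velocity-dependent measurement noise.

    A minimal sketch of eqs. (11)-(14); the Q adaptation of (16) is omitted.
    """

    def __init__(self, Ts: float, rho0: np.ndarray, W0: np.ndarray,
                 Q: np.ndarray, Omega0: np.ndarray, mu: np.ndarray):
        self.A = np.eye(12)
        self.A[np.arange(6), np.arange(6) + 6] = Ts        # pose += Ts * velocity
        self.C = np.hstack([np.eye(6), np.zeros((6, 6))])  # measure pose states only
        self.rho, self.W = rho0, W0
        self.Q, self.Omega0, self.mu = Q, Omega0, mu

    def step(self, o_k: np.ndarray) -> np.ndarray:
        # Prediction, eq. (11)
        rho_pred = self.A @ self.rho
        W_pred = self.A @ self.W @ self.A.T + self.Q
        # Velocity-dependent measurement noise, eq. (14)
        Omega = self.Omega0 + np.diag(self.mu * np.abs(rho_pred[6:]))
        # Gain and update, eqs. (12)-(13), written with the explicit C matrix
        D = W_pred @ self.C.T @ np.linalg.inv(self.C @ W_pred @ self.C.T + Omega)
        self.rho = rho_pred + D @ (o_k - self.C @ rho_pred)
        self.W = W_pred - D @ self.C @ W_pred
        return self.rho[:6]   # filtered pose (x, y, z, gamma, beta, alpha)
```

As described above, ρ̂_{0,0} and W_{0,0} would be initialized from static samples taken before the path tracking movement begins.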
V. DYNAMIC PATH TRACKING CONTROL
Since the task path P is planned neither in F_UR nor in F_B, it is not directly compatible with the robot controller. Besides, only a finite number of separate points, not continuous geometric curves, can be imported into the robot controller. Most importantly, the limited accuracy and repeatability of the robot cannot guarantee that the TCP will track the task path accurately under the robot controller alone.

A. Basic Control Structure
In this paper, a PBVS scheme is presented to realize 3-D task path tracking by correcting the movement of the end-effector in real time. The basic control configuration consists of four parts, as shown in Fig. 2. The first part includes the robot controller and the robot. Normally, each industrial robot comes with its proprietary robot controller; only through the robot controller can the robot parameters be accessed and robot movement commands be issued. The second part is the visual measurement part, which includes the 3-D visual measurement instrument (VMI) and the signal filtering that estimates the pose feedback information. The third part is the path analyzer, which computes the pose of the closest point on the desired task path according to the current TCP pose and decides the next path step. The fourth part is the path tracking control, which includes the control algorithm that produces the control input for the robot controller based on the current pose error. In Fig. 2, all the functions in the left dashed-line box are implemented as software modules running on the computer, which is connected to the physical devices in the right dashed-line box.

Fig. 3. Path analyzing method of dynamic path tracking control.
B. Pose Error Computation in Equivalent User Frame
When the end-effector is in the FOV of the 3-D VMI, the current TCP pose information in F_S can be obtained. Assume η_1 and η_2 are both in (0, 1) and η_2 > η_1. If Δη = η_2 − η_1 is small enough, the curve segment on the task path P described in (2) can be regarded approximately as a straight line segment. During each path segment Δη, the pose information (x(η_2), y(η_2), z(η_2), γ(η_2), β(η_2), α(η_2)) of P at η_2 is the current desired pose. The path analyzing method shown in Fig. 3 calculates the pose error and decides the compensation input of the next step. In this paper, a fixed control interval is used, denoted as T_c, with T_c = λΔη, where λ is positive and related to the tracking speed. At the end of each T_c, the current TCP pose and velocity estimate, denoted as \hat{ρ}_{k,k}, is obtained from the output of the Kalman filter.
Given the line segment between (η_1, η_2) and the current pose, it is simple to obtain the pose p_d(k) of the closest point on the line to the current point. Moreover, p_c(k) is used to denote the pose part of \hat{ρ}_{k,k}. Therefore, the pose error ν(k) = (ν_x(k), ν_y(k), ν_z(k), ν_γ(k), ν_β(k), ν_α(k)) can be computed as follows:

$$\nu(k) = p_d(k) - p_c(k). \tag{17}$$
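One plausible reading of the closest-point step behind (17) is an orthogonal projection of the current TCP position onto the local line segment, with the desired orientation interpolated at the same parameter; the sketch below implements that assumption and ignores angle wrap-around.

```python
import numpy as np

def pose_error(p_eta1: np.ndarray, p_eta2: np.ndarray, p_c: np.ndarray) -> np.ndarray:
    """Pose error (17) against the segment between the poses at eta_1 and eta_2.

    Each pose is (x, y, z, gamma, beta, alpha). The current position is projected
    onto the position segment; the desired orientation is interpolated at the
    same parameter. Angles are assumed far from wrap-around for simplicity.
    """
    d = p_eta2[:3] - p_eta1[:3]
    s = np.clip((p_c[:3] - p_eta1[:3]) @ d / (d @ d), 0.0, 1.0)  # segment parameter
    p_d = p_eta1 + s * (p_eta2 - p_eta1)   # closest pose on the segment
    return p_d - p_c                        # nu(k) = p_d(k) - p_c(k)
```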
Since the pose error ν(k) is represented in F_UT in the task space, it is completely different from the robot user frame F_UR. In Section II-A, an equivalent user frame F_UEq was introduced. Both F_UT and F_UEq are defined in F_S. If ν(k) is transformed into F_UEq as a vector ς(k) = (ς_x(k), ς_y(k), ς_z(k), ς_γ(k), ς_β(k), ς_α(k)), the translational and rotational errors in F_UEq are the same as those in F_UR. Suppose the homogeneous transformation matrices of F_UT and F_UEq in F_S are denoted as ^S_{UT}H and ^S_{UEq}H, respectively. The homogeneous transformation matrix of F_UT in F_UEq is ^{UEq}_{UT}H. The translational part of