IEEE/ASME TRANSACTIONS ON MECHATRONICS, VOL. 23, NO. 3, JUNE 2018 1159
Dynamic Path Tracking of Industrial Robots With
High Accuracy Using Photogrammetry Sensor
Tingting Shu , Sepehr Gharaaty, WenFang Xie , Senior Member, IEEE, Ahmed Joubair,
and Ilian A. Bonev , Senior Member, IEEE
Abstract—In this paper, a practical dynamic path tracking (DPT) scheme for industrial robots is presented. The DPT scheme is a position-based visual servoing scheme that realizes three-dimensional dynamic path tracking by correcting the robot movement in real time. In the traditional task-implementation mode for industrial robots, task planning and implementation are taught manually, and hence the task accuracy largely depends on the repeatability of the robot. The proposed DPT scheme can execute preplanned tasks automatically and improve the tracking accuracy with eye-to-hand photogrammetry measurement feedback. Moreover, an adaptive Kalman filter is proposed to obtain smooth pose estimation and to reduce the influence of image noise, vibration, and other uncertain disturbances. Owing to the high repeatability of the photogrammetry sensor, the proposed DPT scheme can achieve high path tracking accuracy. The developed DPT scheme can be seamlessly integrated with the industrial robot controller and improves the robot's accuracy without retrofitting the robot with high-end encoders. Using a C-track 780 from Creaform as the photogrammetry sensor, experimental tests on a Fanuc M20-iA robot with the developed DPT scheme demonstrate that the tracking accuracy is significantly improved (±0.20 mm for position and ±0.10 deg for orientation).
Index Terms—Dynamic path tracking (DPT), industrial
robots, robot accuracy, visual servoing.
I. INTRODUCTION
IN INDUSTRIAL manufacturing, many tasks, such as
cutting, milling, and lathing, are expected to be performed
automatically by robots. According to the standard
Manuscript received March 21, 2017; revised October 18, 2017 and
February 5, 2018; accepted March 13, 2018. Date of publication March
30, 2018; date of current version June 12, 2018. Recommended by Tech-
nical Editor Prof. Hong Qiao. This work was supported in part by the Nat-
ural Sciences and Engineering Research Council of Canada (NSERC),
in part by the Collaborative Research and Development (CRD) program,
and in part by the Consortium for Research and Innovation in Aerospace
in Quebec (CRIAQ). (Corresponding author: WenFang Xie.)
T. Shu is with the Department of Mechanical, Industrial & Aerospace Engineering, Faculty of Engineering and Computer Science, Concordia University, Montreal, QC H3G 1M8, Canada.
S. Gharaaty and W. Xie are with Concordia University, Montreal, QC, Canada (e-mail: encs.concordia.ca).
A. Joubair and I. A. Bonev are with the École de technologie supérieure, Montreal, QC H3C 1K3, Canada (e-mail: ahmed.joubair.).
Color versions of one or more of the figures in this paper are available
online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TMECH.2018.2821600
process specifications in the aerospace industry (e.g., Airbus [1]),
the desired accuracy of robot manipulation for manufacturing is
around ±0.20 mm. However, due to mechanical tolerances and
deflection in the robot structure, the typical discrepancy between
a virtual robot in simulation and the real robot can be 8–15 mm [2],
which is inadequate to meet the precision requirements of many
potential applications. Therefore, the relatively low accuracy
of current robots is the main problem for industrial manufacturing
applications. In particular, it poses a critical obstacle to the
use of advanced task planning techniques that integrate offline
simulation and computer-aided-design-based methods.
In the past decades, many researchers have tried to improve
the accuracy of industrial robots. One direct method is to install
secondary high-accuracy encoders at each robot joint [3], [4].
Since the encoder installation must be customized for each robot
and high-accuracy encoders tend to be expensive, this solution is
costly and sometimes infeasible for all the joints. The other
solutions may be classified into two main categories: 1) static
calibration [5]–[7]; and 2) dynamic position or pose (i.e., position
and orientation) adjustment [8]–[10]. Static calibration is mostly
based on kinematic models; sometimes a partial dynamic model, such
as joint or link compliance or backlash, may be involved as well.
Because static calibration considers only time-invariant factors,
it can increase the robot accuracy to a limited extent under
static or quasi-static conditions. In robot applications where
high-precision tracking of various trajectories is demanded,
dynamic continuous strategies for enhancing real-time tracking
accuracy become necessary. Hence, visual servoing is the main
approach to meet this demand.
Visual servoing for industrial robots is based on the visual
measurement feedback of the end-effector or reference objects.
Visual servoing is a multidisciplinary research topic involv-
ing computer vision, robotic kinematics, nonlinear dynamics,
control theory, as well as real-time systems. According to the
features extracted as feedback for the controller, visual servoing
can be classified into three categories:
1) position-based visual servoing (PBVS) [8], [11];
2) image-based visual servoing [12];
3) hybrid visual servoing [13].
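As an illustration of the first category, a PBVS controller compares the measured Cartesian pose of the end-effector with the desired pose on the path and commands a proportional correction. The following minimal sketch is an assumption for illustration, not the authors' implementation; the gain `lam`, the pose representation `[x, y, z, rx, ry, rz]`, and the example numbers are all hypothetical:

```python
import numpy as np

def pose_error(measured, desired):
    """Cartesian pose error between two poses given as [x, y, z, rx, ry, rz]:
    translation difference plus a small-angle orientation difference."""
    dp = desired[:3] - measured[:3]   # position error (e.g., in mm)
    dr = desired[3:] - measured[3:]   # orientation error (small angles, in deg)
    return np.concatenate([dp, dr])

def pbvs_step(measured, desired, lam=0.5):
    """One proportional PBVS correction: a fraction `lam` of the pose error
    is sent to the robot controller as an incremental Cartesian command."""
    return lam * pose_error(measured, desired)

# Hypothetical example: pose measured by the sensor vs. desired pose on the path
measured = np.array([100.0, 50.0, 20.0, 0.0, 0.0, 1.0])
desired  = np.array([100.5, 50.0, 19.8, 0.0, 0.0, 0.0])
print(pbvs_step(measured, desired))  # incremental correction toward the target
```

Iterating such a step at the sensor's measurement rate drives the pose error toward zero, which is the essence of the correction loop that a DPT scheme layers on top of the robot controller.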
According to the location of the vision system, the configuration
of visual servoing is classified as either eye-to-hand
or eye-in-hand. In PBVS, the actual pose of the related object
in Cartesian space is estimated according to the feature points
on the image plane. Correspondingly, the control input for the
1083-4435 © 2018 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.