Abstract

This paper proposes a novel discrete-time terminal sliding mode controller (DTSMC) coupled with an asynchronous multirate sensor fusion estimator for rigid-link flexible-joint (RLFJ) manipulator tracking control. A camera is employed as an external sensor to observe the RLFJ manipulator’s state, which cannot be directly obtained from the encoders because of the gear mechanisms and flexible joints. The extended Kalman filter- (EKF-) based asynchronous multirate sensor fusion method deals with the slow sampling rate and the latency of the camera by using the motor encoders to recover the missing information between two visual samples. In the proposed control scheme, a novel sliding mode surface is presented by taking advantage of both the estimation error and the tracking error. It is proved theoretically that the proposed controller achieves convergence for tracking control. Simulation and experimental studies are included to validate the effectiveness of the proposed approach.

1. Introduction

In many high-performance robotic manipulator applications, positioning of the end effector (and/or links) is critical, as the ultimate goal is to track the desired trajectory. However, achieving high accuracy and dynamic performance is made more challenging by the nonlinear flexibilities found in rigid-link flexible-joint (RLFJ) manipulators. Some researchers have designed observers to estimate the states in the robot model, since link positions/velocities are typically not measured in industrial robot systems. Nicosia and Tomei [1, 2] designed a controller combined with an observer that estimates motor positions or/and motor velocities for RLFJ manipulators. Dixon et al. [3] developed a global adaptive output-feedback tracking controller for the RLFJ dynamics, which is based on a link velocity filter. Global output-feedback methods are not easily implemented in real systems because they require link position measurements, which are not frequently available. A controller was designed in [4, 5] by using a neural network (NN) observer to estimate link and motor positions/velocities and dynamic parameters, but NN observer-based controllers do not take advantage of the motor positions. The Kalman filter (KF) and extended Kalman filter (EKF) have been utilized to estimate the positions of joints or/and end effectors, the driving torque, and the dynamic parameters of manipulators [6]. Lightcap and Banks [7] designed an EKF-RLFJ controller by using an EKF to estimate link and motor positions/velocities. García et al. [8] proposed compliant robot motion controllers by using an EKF to fuse wrist force sensors, accelerometers, and joint position sensors. EKF-based sensor fusion was presented by Jassemi-Zargani and Necsulescu [9] to estimate the acceleration for operational space control of a robotic manipulator. However, these reported EKF-based control methods do not discuss the case of asynchronous measurements from multirate sensors for RLFJ manipulators.

Observer-based sliding mode control (SMC) is one of the most important approaches to handle systems with uncertainties and nonlinearities [10]. For the RLFJ manipulator system, observer-based SMC has been widely studied since the states of the manipulator (e.g., the acceleration and velocity of the joints) need not always be measured directly [11]. Terminal SMC (TSMC) is used in rigid manipulator control (e.g., robust TSMC and finite-time control) since it has superior properties compared with conventional SMC, such as better tracking precision and fast error convergence [12–14]. In particular, the singularity problem of TSMC was addressed in [15, 16]. However, most of these works use the continuous-time dynamic model of the manipulator, whereas discrete-time models are ubiquitous in real digital control systems, and discrete-time SMC has proved advantageous in digital electronics, computer control, and robotic systems. Corradini and Orlando [17] presented a robust discrete-time SMC coupled with an uncertainty estimator designed for planar robotic manipulators. However, joint flexibilities are not considered in those controller designs.

To remedy such limitations, this paper proposes a novel controller, AMSFE-DTSMC, which is implemented based on DTSMC coupled with an asynchronous multirate sensor fusion estimator. The robotic multirate sensor unit contains vision and non-vision-based sensors whose sampling rates and processing times are different. In the proposed scheme, the delayed slow-sampling vision measurement is treated as a kind of periodic “out-of-sequence” measurement (OOSM) [18], which is used to update the non-vision-based state estimate in an EKF-based asynchronous multirate sensor fusion algorithm. Using the position and velocity estimates from the sensor fusion estimator, the DTSMC is designed with a novel sliding surface that considers both the estimation error and the tracking error. The main contributions of this work are summarized as follows.
(i) We propose a novel tracking control scheme, AMSFE-DTSMC, which is based on the DTSMC coupled with the sensor fusion estimator for an RLFJ manipulator. The sliding surface of AMSFE-DTSMC is designed by utilizing both the estimation error and the tracking control error.
(ii) We construct an asynchronous multirate measurement model for the robotic sensors and design a sensor fusion algorithm to fuse such asynchronous multirate data for robotic state estimation.

This paper is organized as follows. Section 2 gives the problem formulation. In Section 3, the multirate sensor data fusion algorithm is presented. Section 4 designs a novel DTSMC for tracking control. Simulation and experimental studies are presented in Section 5. Section 6 concludes the paper.

2. Problem Formulation

In this paper, a robotic manipulator system is considered with a sensor unit including joint motor encoders and cameras fixed in the workspace. The tracking control scheme for RLFJ manipulators can be developed by using the robotic state estimate obtained via multisensor fusion. The state of the robot can be estimated by these sensors directly and indirectly; however, a single sensor alone has limitations in obtaining precise information. To fuse asynchronous multirate data from the different sensors, the dynamic and sensor models are formulated in this section.

2.1. Discrete Rigid-Link Flexible-Joint Robot Model

The discrete dynamic model of an n-link RLFJ manipulator can be obtained by the minimization of the action functional suggested by Nicosia [2] as follows: where and denote the position and velocity of the link and motor angles at time , respectively; is the sampling interval; is the invertible inertia matrix, which satisfies ; represents the centrifugal, Coriolis, and gravitational forces; , and are constant, diagonal, positive-definite matrices representing the joint stiffness, motor inertia, and motor viscous friction, respectively; the joint deflection is defined as the difference between the motor and link positions; denotes the motor torque; the unknown or varying dynamic parameter in the robotic model is defined as , which satisfies ; and the dynamic uncertainties of the links and motors are modeled with the random variables and .
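For context, the discrete model in (1a)–(1d) can be viewed as a discretization of the standard (Spong-type) continuous-time RLFJ dynamics. The following is a sketch in generic notation, where $q$ denotes the link positions and $\theta$ the motor positions; these symbols are illustrative and may differ from the paper's notation:

```latex
% Standard rigid-link flexible-joint dynamics (Spong model), generic notation
\begin{aligned}
M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) &= K(\theta - q), \\
J\ddot{\theta} + B\dot{\theta} + K(\theta - q) &= \tau,
\end{aligned}
```

where $M(q)$ is the inertia matrix; $C(q,\dot{q})\dot{q}$ and $g(q)$ collect the centrifugal/Coriolis and gravity terms; and $K$, $J$, $B$ are the diagonal joint stiffness, motor inertia, and viscous friction matrices, matching the roles described above. A forward-Euler discretization with sampling interval $T$ then yields difference equations of the form (1a)–(1d).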

Define a state vector

The dynamics in equations (1a)–(1d) can be transformed into a state-space representation: where

2.2. Measurement Model

The observation vector is given by where are the position and velocity of the motor angles and represents the position of the end effector in image space, observed by a fixed camera.

By using the standard pinhole camera model, the mapping from Cartesian space to image space is given by where denotes the position of the end effector in Cartesian space and is a transformation function from the task space to the image space. From the forward kinematics, the position relationship between the robotic joints and the end effector is described by the following transformation: where is the position of the end effector at the th time step and is a transformation function from the joint space to the task space.

From equations (6) and (7), the joint position can be observed by a fixed camera: where is the mapping from the joint space to the image space. With the random noise , the measurement equation is given by where is the measurement of the state from the different sensors, denotes the camera measurement, and represents the measurement from the motors. We assume that the process noise and measurement noise are sampled from independent and identically distributed white Gaussian noise which satisfies the following equations at time :

The measurement of the vision sensor at time is obtained from the visual image taken at time step , where denotes the delay time. The relation between the different sensors’ sampling rates is given by where and denote the sampling rates of the motor encoders and the visual sensors, respectively, and is a positive integer.

We show the sampling rate difference between the vision and non-vision measurements in Figure 1, where the step lags of the vision measurements are illustrated. According to the characteristics of the visual sensor, we treat the vision measurements as periodic -step lag out-of-sequence measurements.
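The periodic lagged sampling pattern can be illustrated with a short sketch; the vision period n = 5 (in encoder steps) and the lag d = 2 are illustrative assumptions, not the paper's values:

```python
# Illustrative asynchronous multirate sampling pattern: encoders are read
# every step, while a vision frame is captured every n-th step and only
# arrives d steps later (a periodic d-step-lag out-of-sequence measurement).
n, d = 5, 2          # assumed vision period (in encoder steps) and lag
horizon = 20

events = []
for k in range(horizon):
    meas = ["encoder"]                      # encoder sampled at every step
    if k >= d and (k - d) % n == 0:
        # the frame captured at step k-d finishes processing and arrives now
        meas.append(f"vision(frame@{k - d})")
    events.append((k, meas))

for k, meas in events[:8]:                  # show the first few steps
    print(k, meas)
```

At steps 2, 7, 12, … the filter would perform the re-estimation step with the lagged frame; at all other steps only the encoder update runs.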

Remark 1. The velocity of the joints (or/and the end effector) can also be observed by vision sensors, , with the Jacobian matrix mapping from the joint space to the image space. is the image Jacobian matrix, and denotes the Jacobian matrix from the joint space to the task space. However, the measurement of velocity is always impaired by the noisy image data, the slow sampling rate, and the dynamic uncertainties in the RLFJ manipulator system. Therefore, the velocity measurement in the image space is not utilized in the measurement model.

Remark 2. Assume that the camera can cover the entire workspace of the robot. With prior knowledge of the motion planning of the robot, it is assumed that there is a one-to-one mapping from the image space to the joint space in the real robotic system.

3. Asynchronous Multirate Sensor Fusion Estimation

According to the system model described in Section 2, the robotic link position can be estimated by using the multisensor system. For asynchronous multirate sensors, the sensor fusion method is designed to use the late measurements to update the current estimated state and obtain a more accurate estimate in two steps:
Step 1: when the vision measurement is unavailable, the robotic state is estimated using the non-vision-based sensors, which keep the estimate real-time by recovering the missing information between two vision samples.
Step 2: when the delayed vision measurement arrives, at every vision sample at time , the state is re-estimated to cope with the limitations of the other sensors in absolute position measurement.

3.1. Estimation by Using Non-Vision Sensor Measurements

From Figure 1, before the th vision frame is available, we have the estimation of the state at time using the non-vision sensor measurements: where represents all motor encoder measurements up to time . Using the extended Kalman filter (EKF), the estimate at time via the motor encoder measurement is given as follows: where , and are Jacobian matrices and is the correction gain vector. According to the above equations, the state and covariance estimates are updated with the fast-sampling-rate measurements.
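The fast-rate (encoder-only) recursion is a standard predict/correct step. For clarity the following sketch is written as a linear Kalman update, whereas the paper's EKF evaluates F and H as Jacobians of the nonlinear model at the current estimate; all names and matrices here are illustrative:

```python
import numpy as np

# Minimal sketch of the fast-rate (encoder-only) filter step: predict with
# the motion model, then correct with the latest encoder measurement z.
def fast_rate_update(x, P, z, F, H, Q, R):
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Correct
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

For the RLFJ model, x would stack the link/motor positions and velocities, z the encoder readings, and Q, R the process and measurement noise covariances from Section 2.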

3.2. Re-Estimation via Vision Sensor Measurements

Suppose that at the time the vision measurement at time is obtained. A new estimation is calculated using the information about the delayed th vision measurement . is defined with :

The delayed measurement observed by the vision sensor is used to correct the accumulated estimation errors caused by the fast-sampling-rate sensors. The equations for updating the estimate with the delayed vision measurement are given by where represents the cross covariance between and , is the covariance of , which denotes the estimate of the measurement at time , and with .

Using the EKF, , and are obtained by assuming that the function is invertible. Define , which denotes the backward transition function that estimates the state back from to . Since the previous state is not affected by the present input signal , we give the state relationship between and by where the process noise and covariance are calculated by

The estimation in equation (19) can be determined by

To estimate the process noise as in equations (13)–(17), we have

Then, and are obtained as follows: where the covariances and are derived as follows:

3.3. Summary of the Fusion Estimate Method

The state of a RLFJ manipulator is estimated using the indirect measurements from asynchronous multirate sensors. The sensor fusion estimate can be implemented in practical applications using a switching mechanism in accordance with the sampling time. As shown in Figure 1, the update equations are chosen at the different sampling times. The state estimation can be given by
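The switching mechanism can be sketched as follows. For brevity, the vision-arrival branch here uses a simple replay (re-filtering) from the lagged step with scalar toy gains, rather than the paper's cross-covariance OOSM update, so all names, gains, and models are illustrative:

```python
from collections import deque

def run_fusion(encoder_meas, vision_arrivals, d, gain_enc=0.5, gain_vis=0.8):
    """encoder_meas[k]: encoder reading at step k (scalar toy model).
    vision_arrivals: dict mapping arrival step k -> measurement of the
    state as it was at step k - d (the lagged vision frame)."""
    history = deque()      # past estimates, replayed when a frame arrives
    x = 0.0
    estimates = []
    for k, z in enumerate(encoder_meas):
        if k in vision_arrivals and len(history) >= d:
            # Step 2: roll back to the lagged step, apply the vision frame,
            # then replay the encoder corrections up to the present.
            x = history[-d]
            x += gain_vis * (vision_arrivals[k] - x)
            for z_old in encoder_meas[k - d + 1:k + 1]:
                x += gain_enc * (z_old - x)
        else:
            # Step 1: fast-rate encoder correction only
            x += gain_enc * (z - x)
        history.append(x)
        estimates.append(x)
    return estimates
```

For example, run_fusion(readings, {5: z_vision}, d=2) applies encoder-only corrections everywhere except at step 5, where the lagged vision frame re-anchors the estimate before the encoder corrections are replayed.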

Remark 3. The exponential convergence of the sensor fusion estimate can be proved in a similar way to that presented in [19], which gives a more detailed stability analysis.

4. Discrete-Time Terminal Sliding Mode Controller Design

In this section, the discrete-time terminal sliding mode tracking controller based on fusion estimation (AMSFE-DTSMC) is presented for rigid-link flexible-joint manipulators whose state is estimated by the sensor fusion method described in the previous section. The controller is designed using both the position and velocity of the link from the sensor fusion estimate. To design the novel controller, the model in (1a)–(1d) can be written as where denotes the variable that includes the dynamic parameters of the links and motors.

In order to formulate the tracking control, define the tracking error and estimation error at time as where is the desired position and denotes the estimated position.

Define the reference velocities for tracking and estimation: where denotes the estimate of and and are constant diagonal matrices.

Define the filtered variables including the estimation error:

Consider the discrete terminal sliding surface as follows: where is a positive constant diagonal parameter matrix and , in which and are positive odd integers satisfying . Motivated by the reaching law presented by Gao et al. in [20], we use the following reaching law for exponential discrete sliding mode control: where is the signum function, , and .
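Gao's reaching law referenced above is commonly written, in generic notation (the symbols here are illustrative and may differ from the paper's), as:

```latex
% Gao's exponential reaching law for discrete-time SMC (generic notation)
s(k+1) = (1 - qT)\,s(k) - \varepsilon T\,\mathrm{sgn}\bigl(s(k)\bigr),
\qquad q > 0, \quad \varepsilon > 0, \quad 1 - qT > 0,
```

which drives $s$ exponentially toward the origin and then keeps it within a quasi-sliding band whose width is determined by $\varepsilon T$.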

Since the system states cannot be measured directly, the parameters that contain the variables , and need to be estimated in the controller design. Assuming that the estimation errors and uncertainties are bounded, we have where denotes ; and represent the estimates of the dynamic parameters; and and are bounded variables including the estimation error and system uncertainties, which satisfy where and are constants.

Theorem 1. Consider the rigid-link flexible-joint manipulator system described by equations (29a) and (29b) and the discrete sliding manifold described by equation (33); by using the reaching law in equation (34), a stable control law is designed as where and are positive diagonal matrices which satisfy the following inequalities:

Proof. Substituting the control law (37) into the rigid-link flexible-joint system equations (29a) and (29b), the error dynamics are obtained:
For simplicity, define , which is invertible, and . Substituting (40) and (41) into (34) yields
Stability conditions for discrete sliding mode control are given by Sarpturk [21]:
Combining (42) and (43a), we have
If , , and , the lower bound of the sliding gain can be given as
Employing and , which satisfy , we can obtain as follows:
If , then , and can be calculated by
Employing given by equation (46), we have
where is obtained by adopting according to equation (46). Therefore, equation (46) satisfies the Sarpturk stability condition (43a).
Assuming , the Sarpturk condition (43b) is checked by combining it with equation (42):
If , , we have
Else if , , we have
A sampling time which satisfies the following equation guarantees the condition in (43b):
From (52), we know that the stability conditions of the DTSMC given by (43a) and (43b) are guaranteed by the chosen parameters and defined in (38) and (39). Theorem 1 is proved.
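For reference, the Sarpturk stability conditions invoked as (43a) and (43b) above can be stated in generic notation as:

```latex
% Sarpturk conditions for discrete-time sliding mode stability
\bigl[s(k+1) - s(k)\bigr]\,\mathrm{sgn}\bigl(s(k)\bigr) < 0, \qquad
\bigl[s(k+1) + s(k)\bigr]\,\mathrm{sgn}\bigl(s(k)\bigr) > 0.
```

Taken together, the two inequalities are equivalent to $|s(k+1)| < |s(k)|$, i.e., the sliding variable strictly decreases in magnitude at every sampling instant, which is what the parameter bounds in (38) and (39) enforce.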

5. Simulation and Experimental Studies

5.1. Simulation Study

This section presents the results obtained from the simulation of the proposed control scheme on the two-link RLFJ manipulator shown in Figure 2. In the simulation, the aim is to make the RLFJ manipulator track the desired trajectories and from the initial position . The robotic dynamic parameters are given by where and are given in Table 1. For , and denote the mass and length of the th link. .

In order to demonstrate the influence of the estimation on tracking performance, a DTSMC controller without estimation is also simulated. The DTSMC control law is given by with a sliding surface which is different from the proposed sliding surface.

Parameters of the proposed controller and the DTSMC are selected as , , , and . is chosen as a constant model parameter whose initial value is , , , , and . In the simulation, as shown in Figure 3, we assume that the end effector is observed by a fixed camera whose delayed measurements are used directly to calculate the joint position. The delay time of the slow measurement is . The process and measurement noise are chosen as and . The initial joint positions (joint 1, joint 2) for tracking control are given by , and for the fusion estimator by . The initial velocity for tracking control is given by , and for estimation by .

The estimation errors of position and velocity are plotted in Figure 4. The estimation errors of the parameter are shown in Figure 5. The position tracking of the proposed method for the two links is shown in Figures 6 and 7, where the comparative result of the DTSMC without the fusion estimator is also plotted. In conclusion, the simulation results clearly indicate that the proposed approach guarantees the convergence of the tracking errors and achieves better tracking accuracy.

5.2. Experimental Study

To validate the applicability of the proposed control scheme, a single-link flexible-joint manipulator with a fixed camera is employed as the experimental plant, as shown in Figure 8. The aim is to make the end effector move along the desired trajectory , with , and the initial position of the joint is . The parameters of the single-link robot are measured offline: and ; the measurement errors are and .

To observe the state of the end effector, the calibrated camera is fixed perpendicular to the robot motion plane. The coordinate relation between Cartesian space and image space is shown in Figure 9. Camera measurements are obtained from the image sequences shown in Figure 10. We define the position in the image coordinate frame as given in Figure 11, and the position in the Cartesian frame is . By using the pinhole camera model, we calculate the mapping from the joint space to the image space in equation (14) as follows: where is the perpendicular distance between the camera and the robot motion plane and denotes the focal length of the camera. In the experiment, the camera parameters are , pixels, and . According to the mapping , the joint position can be obtained by calculating the inverse circular trigonometric functions or . The delayed vision measurements are converted to joint measurements, which are shown in Figure 12, where the encoder measurements are also plotted. The position estimate of the joint is calculated by using the proposed fusion estimate method, shown as the red solid line in Figure 12. To show the performance of the fusion estimate method in detail, Figure 13 shows the error of the joint position estimation.
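The forward mapping and its inverse can be sketched for the single-link case as follows; the parameter values f, Z, and L are illustrative placeholders, not the calibrated values reported above:

```python
import math

# Sketch of the joint-space -> image-space mapping and its inverse for a
# single-link planar robot viewed by a perpendicular pinhole camera.
def joint_to_image(q, f=0.008, Z=1.0, L=0.3):
    """Project the end effector at joint angle q (rad) onto the image plane.
    f: focal length, Z: camera-to-plane distance, L: link length (all assumed)."""
    x, y = L * math.cos(q), L * math.sin(q)   # end effector in Cartesian space
    return f * x / Z, f * y / Z               # pinhole projection

def image_to_joint(u, v):
    """Recover q from the image point; valid given the one-to-one mapping
    assumed in Remark 2."""
    return math.atan2(v, u)
```

Recovering q via atan2 corresponds to the inverse circular trigonometric functions mentioned above and resolves the quadrant unambiguously, relying on the one-to-one mapping assumed in Remark 2.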

To validate the performance of the fusion estimate-based DTSMC, comparative experiments are implemented. The proposed controller and a DTSMC without the estimator are employed in this test. Parameters are selected as , , , and , ; , , and . According to the comparative position tracking performances given in Figure 14, it is obvious that the proposed controller provides superior behavior.

6. Conclusion

A novel RLFJ manipulator tracking controller, AMSFE-DTSMC, is proposed in this paper based on DTSMC coupled with asynchronous multirate sensor fusion. The states of the manipulator are estimated by an EKF-based sensor fusion algorithm which combines asynchronous multirate measurements from visual and non-vision-based sensors. Compared with the non-vision measurements, the visual measurements are treated as periodic out-of-sequence measurements, which are used to re-estimate the state. With the state estimate, the DTSMC is designed by using a novel sliding surface that includes both the tracking error and the estimation error. By using the Sarpturk inequalities, the boundedness of the controlled variables is proved. The effectiveness of the proposed approach is shown in simulation and experimental studies.

Data Availability

No data were used to support this study.

Disclosure

An earlier version of the multirate sensor fusion-based DTSMC was presented at the 10th International Conference on Control and Automation (ICCA) [12]; however, a completely new controller is designed in this paper by using a new sliding surface that includes both the tracking error and the estimation error, and the effectiveness of the proposed approach is validated in both simulation and experimental studies.

Conflicts of Interest

The authors declare that they have no conflicts of interest.