Abstract

This paper is devoted to multiobjective optimization techniques for a class of hybrid optimal control problems in mechanical systems. We study general nonlinear hybrid control systems described by boundary-value problems associated with hybrid-type Euler-Lagrange or Hamilton equations. The variational structure of the corresponding solutions makes it possible to reduce the original “mechanical” problem to an auxiliary multiobjective programming reformulation. This approach motivates possible applications of theoretical and computational results from multiobjective optimization to the original dynamic optimization problem. We derive first-order optimality conditions for optimal control problems governed by hybrid mechanical systems and also discuss some conceptual algorithms.

1. Introduction

Hybrid and switched systems have been extensively studied in the past decade, both in theory and in practice [1–10]. In particular, driven by engineering requirements, there has been increasing interest in the optimal control (OC) of these dynamical systems [1–3, 6, 8, 9, 11–14]. In this paper, we investigate a specific type of hybrid system, namely hybrid systems of mechanical nature, and the corresponding hybrid optimal control problems. The class of problems discussed in this work concerns hybrid systems in which discrete transitions are triggered by the continuous dynamics; the control objective is to minimize a cost functional, the control parameters being the usual control inputs. Recently, there has been considerable effort to develop theoretical and computational frameworks for complex control problems, and of particular importance is the ability to operate such systems in an optimal manner. In many real-world applications a controlled mechanical system constitutes the main modelling framework and is a strongly nonlinear dynamical system of high order [15–17]. Moreover, the majority of applied optimal control problems governed by sophisticated mechanical systems are problems of hybrid nature. Most real-world mechanical control problems are too complex to allow an analytical solution, so computational algorithms are indispensable. A number of results on numerical methods for optimal control problems are scattered in the literature; one can find a fairly complete review in [1, 2, 11, 12, 18–20]. The aim of our investigation is to use the variational structure of the solution to the two-point boundary-value problem for the controllable hybrid-type Euler-Lagrange or Hamilton equation and to propose a new computational algorithm for OC problems in mechanics. We consider an optimal control problem in mechanics in a general setting and reduce the initial OC problem to an auxiliary multiobjective optimization problem with constraints. This optimization problem provides a basis for a possible numerical treatment of the original problem.

The outline of our paper is as follows. Section 2 contains some necessary technical facts from conventional and hybrid mechanics. In Section 3 we formulate and study our main OC problem for hybrid mechanical systems. Section 4 deals with the variational analysis of the optimization problem under consideration; we also briefly discuss the computational aspects of the proposed approach. Section 5 summarizes our contribution.

2. Preliminaries

Let us consider the variational problem (2.1) associated with the Lagrangian functions of a sequence of uncontrolled mechanical systems. The Lagrangians are indexed by a finite set of indices, called the set of possible locations of the given hybrid system, and every Lagrangian is a continuously differentiable function. The problem also involves the characteristic functions of the time intervals attached to the locations. Note that the full time interval is assumed to be partitioned into disjoint subintervals of the above type by a sequence of switching times. We refer to [1–3, 6, 8, 9, 11, 13, 14] for some concrete examples of hybrid systems of the above type. We consider hybrid mechanical systems that can be represented by generalized configuration coordinates; the time derivatives of these coordinates are the so-called generalized velocities. We assume that the Lagrangians are twice continuously differentiable convex functions. The necessary optimality conditions for the variational problem (2.1) describe the dynamics of a mechanical system with variable structure; in this case the system is free from external influences or forces. These optimality conditions for (2.1) can be written in the form of the second-order Euler-Lagrange equations (2.2) (see [21]), valid in every location. The Hamilton Principle (see, e.g., [21]) gives a variational description of the solution of the two-point boundary-value problem (2.2).
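For concreteness, a problem of the type described above can be sketched in a standard notation (the symbols $L_i$, $q$, $\beta_{[t_{i-1},t_i]}$, $t_i$, $r$, $c_0$, and $c_f$ below are our assumed notation, not necessarily that of the original formulas) as
\[
\min_{q(\cdot)} \; \sum_{i=1}^{r} \int_{0}^{t_f} \beta_{[t_{i-1},t_i]}(t)\, L_i\bigl(t, q(t), \dot q(t)\bigr)\, dt,
\qquad q(0) = c_0, \quad q(t_f) = c_f,
\]
where $0 = t_0 < t_1 < \dots < t_r = t_f$ are the switching times and $\beta_{[t_{i-1},t_i]}(\cdot)$ is the characteristic function of $[t_{i-1}, t_i]$. The associated second-order Euler-Lagrange equations then take, in every location $i$, the familiar form
\[
\frac{d}{dt}\,\frac{\partial L_i}{\partial \dot q}\bigl(t, q(t), \dot q(t)\bigr)
- \frac{\partial L_i}{\partial q}\bigl(t, q(t), \dot q(t)\bigr) = 0,
\qquad t \in [t_{i-1}, t_i].
\]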

For hybrid mechanical systems with controlled Lagrangians we can also consider the equations of motion (2.3), where the control function belongs to a set of admissible controls. A standard example of an admissible control set is a set of bounded functions whose values are restricted componentwise by given constants; in this specific case we deal with the corresponding set of admissible controls in (2.3). Note that every controlled Lagrangian now depends directly on the control function. Let us assume that the controlled Lagrangians are twice continuously differentiable and that every admissible control is a continuously differentiable function. For a fixed admissible control we obtain, in every location, the above hybrid mechanical system with the control substituted into the Lagrangian. It is also assumed that the Lagrangians are strongly convex with respect to the generalized velocities; this natural convexity condition is a direct consequence of the classical representation of the kinetic energy of a conventional mechanical system. Under the above-mentioned assumptions, the two-point boundary-value problem (2.3) has a solution for every admissible control [22]. We assume, moreover, that every equation of type (2.3) has a unique absolutely continuous solution in every location and for every admissible control; the full solution of the boundary-value problem (2.3) associated with an admissible control is then assembled from these restrictions. We call (2.3) the hybrid Euler-Lagrange control system. Note that the complete trajectory of the hybrid Euler-Lagrange control system introduced above is not necessarily an absolutely continuous function on the full time interval.
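With the assumed notation $L_i(t, q, \dot q, u)$ for the controlled Lagrangians, $u(\cdot) \in U$ for the admissible controls, and $\alpha > 0$ for a convexity constant, the strong convexity assumption and the hybrid Euler-Lagrange control system (2.3) can be sketched as
\[
\Bigl\langle \frac{\partial^2 L_i}{\partial \dot q^2}(t, q, \dot q, u)\,\xi,\; \xi \Bigr\rangle \;\ge\; \alpha\,\|\xi\|^2
\qquad \text{for all } \xi,
\]
\[
\frac{d}{dt}\,\frac{\partial L_i}{\partial \dot q}\bigl(t, q(t), \dot q(t), u(t)\bigr)
- \frac{\partial L_i}{\partial q}\bigl(t, q(t), \dot q(t), u(t)\bigr) = 0,
\quad t \in [t_{i-1}, t_i], \qquad q(0) = c_0, \quad q(t_f) = c_f.
\]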

Example 2.1. We consider a variable linear mass-spring system attached to a moving frame; this is a generalization of the corresponding system from [17]. The considered control is the velocity of the frame, and the masses of the system vary with the location. The kinetic energy depends directly on the control, and therefore so does the Lagrangian of every location; the potential energy is determined by the elasticity coefficient of the spring.
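One possible concrete form of the data of Example 2.1 (the symbols $m_i$, $k$, $q$, and $u$ are our assumed notation) is the family of Lagrangians
\[
L_i(q, \dot q, u) \;=\; \tfrac{1}{2}\, m_i\,(\dot q + u)^2 \;-\; \tfrac{1}{2}\, k\, q^2,
\]
where $m_i$ is the mass attached to location $i$, $k$ is the elasticity coefficient of the spring, and the frame velocity $u$ enters the kinetic term.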

Note that some important controlled mechanical systems have Lagrangian functions of the following type (see, e.g., [17]), in which the control enters linearly through a work term. In this special case the equations of motion are easily obtained, and the control function is interpreted here as an external force.
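A typical instance of such a Lagrangian (with assumed notation: a symmetric positive definite mass matrix $M_i$, a potential $V_i$, and a control $u$ acting as a generalized force) is
\[
L_i(q, \dot q, u) \;=\; \tfrac{1}{2}\,\dot q^{\,T} M_i\, \dot q \;-\; V_i(q) \;+\; q^{T} u,
\]
for which the Euler-Lagrange equations reduce to the forced form
\[
M_i\, \ddot q \;+\; \nabla V_i(q) \;=\; u .
\]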

Let us now consider the Hamiltonian reformulation of the Euler-Lagrange control system (2.3). For every location we introduce the generalized momenta and define the Hamiltonian function as the Legendre transform of the corresponding Lagrangian. In the case of hyperregular Lagrangians (see, e.g., [21]) the Legendre transform is a diffeomorphism in every location. Using the introduced Hamiltonians and the vector of generalized momenta, we can rewrite system (2.3) in the Hamilton-type form (2.10). Under the above-mentioned assumptions, the boundary-value problem (2.10) has a solution for every admissible control. We call (2.10) a Hamilton control system. The main advantage of (2.10) in comparison with (2.3) is that (2.10) immediately constitutes a control system in standard state space form, with the generalized coordinates and momenta as state variables (in physics usually called the phase variables).
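With the assumed notation $p$ for the generalized momenta and $H_i$ for the Hamiltonian of location $i$, these constructions take the standard form
\[
p = \frac{\partial L_i}{\partial \dot q}(t, q, \dot q, u),
\qquad
H_i(t, q, p, u) = \sup_{\dot q}\,\bigl\{\langle p, \dot q\rangle - L_i(t, q, \dot q, u)\bigr\},
\]
and system (2.3) can then be rewritten as
\[
\dot q(t) = \frac{\partial H_i}{\partial p}\bigl(t, q(t), p(t), u(t)\bigr),
\qquad
\dot p(t) = -\,\frac{\partial H_i}{\partial q}\bigl(t, q(t), p(t), u(t)\bigr),
\qquad t \in [t_{i-1}, t_i],
\]
with the boundary conditions on $q$ inherited from (2.3).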

Consider again the system from Example 2.1 and compute the associated generalized momenta and Hamiltonians. The maximization procedure in the Legendre transform determines the velocity as a function of the momentum and the control, and the Hamilton equations can then be written in explicit form. Note that for the Lagrangians of the special type discussed above we obtain the associated Hamilton functions directly in terms of the Legendre transform of the corresponding uncontrolled Lagrangian.
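Under the Lagrangian form assumed after Example 2.1 (again with our notation $m_i$, $k$, $q$, $p$, $u$), this computation gives, for every location $i$,
\[
p = m_i\,(\dot q + u), \qquad \dot q = \frac{p}{m_i} - u,
\qquad
H_i(q, p, u) = \frac{p^2}{2 m_i} \;-\; p\,u \;+\; \tfrac{1}{2}\, k\, q^2,
\]
so that the explicit Hamilton equations read
\[
\dot q = \frac{p}{m_i} - u, \qquad \dot p = -\,k\, q .
\]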

3. Optimal Control Processes Governed by Hybrid Mechanical Systems

Let us formally introduce the class of optimal control problems investigated in this paper, namely problem (3.1): minimize a continuous and convex objective functional over the set of admissible controls subject to the hybrid Euler-Lagrange dynamics (2.3). We have assumed that the boundary-value problems (2.3) have unique solutions and that the optimization problem (3.1) also has a solution; an optimal solution of (3.1) is then an optimal admissible control together with the corresponding trajectory. Note that we can also use the associated Hamiltonian-type representation of the initial optimal control problem (3.1). We mainly focus our attention on the application of direct numerical algorithms to the hybrid optimization problem (3.1). A great number of works are devoted to direct and indirect numerical methods for conventional and hybrid OC problems (see [11, 18–20] and references therein). Evidently, an OC problem involving ordinary differential equations can be formulated in various ways as an optimization problem in a suitable function space and solved by standard numerical algorithms (e.g., by applying a first-order method [1, 18]).
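A problem of this class can be written, for instance with an assumed integral cost (the integrand $f_0$ and the symbol $J$ are our notation), as
\[
\min_{u(\cdot)\,\in\, U} \; J(u) := \int_{0}^{t_f} f_0\bigl(t, q(t), u(t)\bigr)\, dt
\qquad \text{subject to the hybrid Euler-Lagrange control system (2.3)},
\]
where $q(\cdot)$ denotes the (piecewise absolutely continuous) trajectory generated by the admissible control $u(\cdot)$.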

Example 3.1. Using the Euler-Lagrange control system from Example 2.1, we now examine an optimal control problem whose cost involves given (location-dependent) coefficients. The solution of the associated boundary-value problem can be written explicitly in every location, up to constants determined by the boundary and switching conditions. For a specific choice of the coefficients, the Hybrid Maximum Principle (see [2]) implies that an admissible control taking extreme values of the admissible control set is an optimal solution of the given optimal control problem; this conclusion is also consistent with the Bauer Maximum Principle (see, e.g., [23]). For this optimal control we can compute the corresponding optimal trajectory explicitly. The optimal trajectory obtained in this way is not an absolutely continuous function: it exhibits a jump at the switching time, and the optimal dynamics is of impulsive nature. On the other hand, all restrictions of this trajectory to the individual time intervals are absolutely continuous functions.

As we can see from the above example, an optimal trajectory is not necessarily an absolutely continuous function on the full time interval. The Hybrid Maximum Principle mentioned above guarantees the absolute continuity of trajectories only on the time intervals associated with the corresponding locations; in general, an optimal hybrid system of mechanical nature can have jumps in the state. We refer to [24] for some theoretical and computational details related to impulsive hybrid systems. Finally, note that a wide family of classical impulsive control systems can be described by conventional controllable Euler-Lagrange or Hamilton equations (see [25]); thus impulsive hybrid mechanical systems can also be incorporated into the above modelling framework (2.3)–(3.1).

4. The Variational Approach to OC Problems and Some Computational Methods

An effective numerical procedure, as a rule, exploits the specific structure of the problem under consideration. Our aim is to use the variational structure of the optimal control problem (3.1) from Section 3. The boundary values of the trajectory in every location are determined by the switching mechanism of the concrete hybrid system under consideration; we refer to [1, 2, 8] for some possible switching rules defined for various classes of hybrid control systems. We now present an immediate consequence of the classical Hamilton Principle from analytical mechanics.

Theorem 4.1. Let all Lagrangians be strongly convex functions with respect to the generalized velocities. Assume that every boundary-value problem from (2.3) has a unique solution for every admissible control. Then a piecewise absolutely continuous function is a solution of the sequence of boundary-value problems (2.3) if and only if its restriction to every location minimizes the corresponding action functional over the curves satisfying the prescribed boundary values of that location.
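In the assumed notation introduced above, this variational characterization of the restriction $q^i(\cdot)$ of the trajectory to $[t_{i-1}, t_i]$, for a fixed admissible control $u(\cdot)$, can be sketched as
\[
q^i(\cdot) \;\in\; \operatorname*{argmin}_{q(\cdot)} \;\int_{t_{i-1}}^{t_i} L_i\bigl(t, q(t), \dot q(t), u(t)\bigr)\, dt,
\qquad q(t_{i-1}) = c_{i-1}, \quad q(t_i) = c_i,
\]
where $c_{i-1}$ and $c_i$ are the boundary values determined by the switching mechanism.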

For an admissible control function we now introduce, for every location, two functionals that will be used in the auxiliary multiobjective reformulation below. Generally, we regard every restriction of a trajectory to a location interval as an element of the corresponding Sobolev space $W^{1,\infty}$, that is, the space of absolutely continuous functions with essentially bounded derivatives. Let us now give a variational interpretation of the admissible solutions of the sequence of problems (2.3).

Theorem 4.2. Let all Lagrangians be strongly convex functions with respect to the generalized velocities. Assume that every boundary-value problem from (2.3) has a unique solution for every admissible control. Then a piecewise absolutely continuous function is a solution of the sequence of problems (2.3) if and only if every restriction of this function to a location interval minimizes the action functional introduced above over the corresponding Sobolev space, subject to the boundary values of that location.

Proof. Let the restriction of the trajectory to a given time interval be the unique solution of the corresponding partial problem (2.3). Using the Hamilton Principle in every location, we obtain the variational relations (4.4). Conversely, if condition (4.4) is satisfied, then the assembled piecewise function is a solution of the sequence of boundary-value problems (2.3). This completes the proof.

Theorems 4.1 and 4.2 make it possible to express the initial optimal control problem (3.1) as a multiobjective optimization problem over the set of admissible controls and generalized coordinates, either in the Lagrangian setting (4.6) or in the equivalent Hamiltonian setting (4.7). The auxiliary minimization problems (4.6) and (4.7) are multiobjective optimization problems (see, e.g., [26, 27]). The set of restrictions is a convex set, and since every Lagrangian is a convex function, the corresponding action functional is also convex. If the cost functional is convex as well, then we deal with a convex multiobjective minimization problem (4.6) (or (4.7)).
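One possible form of a bi-objective reformulation of the type (4.6), in our assumed notation with $J$ the cost functional of (3.1) and $T$ the aggregated action, is
\[
\min_{u \in U,\; q}\;\bigl(\, J(u, q),\; T(u, q)\,\bigr),
\qquad
T(u, q) := \sum_{i=1}^{r}\int_{t_{i-1}}^{t_i} L_i\bigl(t, q(t), \dot q(t), u(t)\bigr)\,dt,
\]
subject to the restrictions $q^i \in W^{1,\infty}([t_{i-1}, t_i])$ and the prescribed boundary values at the switching times; minimality of the action component replaces the differential-equation constraints (2.3).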

The variational representation of the solution of the two-point boundary-value problem (2.3) eliminates the differential equations from consideration. The minimization problems (4.6) and (4.7) provide a basis for numerical algorithms for the initial optimal control problem (3.1). The auxiliary optimization problem (4.6) has two objective functionals. For (4.6) we now introduce the Lagrange function [27], which involves the distance function to the feasible set; this distance function is associated with the Cartesian product of the admissible control set and the admissible trajectory sets. Recall that a feasible point is called weak Pareto optimal for the multiobjective problem (4.7) if there is no feasible point at which every objective functional takes a strictly smaller value. A necessary condition for a feasible point to be a weak Pareto optimal solution of (4.7) in the sense of the Karush-Kuhn-Tucker (KKT) conditions is that, for every sufficiently large penalty parameter, there exist nonnegative multipliers, not all zero, such that zero belongs to the generalized gradient of the associated Lagrange function; this is condition (4.13). We refer to [27] for further theoretical details. If the cost functional is convex, then the necessary condition (4.13) is also sufficient for a feasible point to be a weak Pareto optimal solution of (4.7). Since an optimal pair of the original problem belongs to the set of all weak Pareto optimal solutions of problem (4.6), conditions (4.13) are also satisfied for this optimal pair.
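A generic condition of this kind for a bi-objective problem $\min_{x \in S}\,(J_1(x), J_2(x))$ with locally Lipschitz objectives can be sketched as follows (the symbols $J_1$, $J_2$, $S$, $\lambda_1$, $\lambda_2$, $\mu$, and $x^{*}$ are our assumed notation, and $\partial$ denotes the Clarke generalized gradient): there exist multipliers and a sufficiently large $\mu > 0$ such that
\[
0 \;\in\; \lambda_1\, \partial J_1(x^{*}) \;+\; \lambda_2\, \partial J_2(x^{*}) \;+\; \mu\, \partial d_S(x^{*}),
\qquad \lambda_1, \lambda_2 \ge 0, \quad \lambda_1 + \lambda_2 = 1,
\]
where $d_S(x) := \inf_{s \in S}\|x - s\|$ is the distance function to the feasible set.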

It is necessary to stress that developing necessary optimality conditions for properly Pareto optimal (efficient) solutions is a challenging issue, and a number of theoretical papers on multiobjective optimization are devoted to this type of Pareto solutions; one can find a fairly complete review in [28]. Note that the formulation of the necessary optimality conditions (4.13) involves the Clarke generalized gradient of the Lagrange function. On the other hand, there are more refined necessary optimality conditions based on the concept of the Mordukhovich limiting subdifferential [28]. The use of the above-mentioned Clarke approach is motivated here by the availability of the corresponding powerful software packages.

When solving constrained optimization problems on the basis of necessary optimality conditions, one is often faced with a technical difficulty, namely, the possible degeneracy of the Lagrange multiplier associated with the objective functional [28, 29]. Various supplementary conditions (constraint qualifications) have been proposed under which it is possible to assert that the Lagrange multiplier rule holds in “normal” form, that is, that the first Lagrange multiplier is not equal to zero. In this case we call the corresponding minimization problem regular. Examples of such constraint qualifications are the well-known Slater (regularity) condition for classical convex programming and the Mangasarian-Fromovitz regularity conditions for general nonlinear optimization problems; we refer to [28, 29] for details. In the case of a multiobjective optimization problem the corresponding regularity conditions can be given in the form of the so-called KKT constraint qualification (see [27] for details). In the following, we assume that problems (4.6) and (4.7) are regular.

Recall that discrete approximation techniques have been recognized as a powerful tool for solving optimal control problems [1, 19, 20]. Our aim is to use a discrete approximation of (4.6) and to obtain a finite-dimensional auxiliary optimization problem. Let a sufficiently large positive integer be given and consider a (possibly nonequidistant) partition of every time interval whose maximal step size tends to zero as the number of grid points increases. On this grid we consider the corresponding finite-dimensional optimization problem (4.15), whose objective functions are discrete variants of the objective functionals from (4.6); the admissible control set is replaced by a correspondingly discretized set, and the trajectories are approximated by suitable discrete functions. Note that the initial continuous optimization problem can also be presented in a similar discrete manner: for example, we can introduce the (Euclidean) spaces of piecewise constant trajectories and piecewise constant control functions. The Banach space of trajectories and the Hilbert space of controls are then replaced by appropriate finite-dimensional spaces.
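For instance, on an equidistant grid $t_k = t_{i-1} + kh$ with $h = (t_i - t_{i-1})/N$ (the symbols $h$, $q_k$, $u_k$, and $f_0$ below are our assumed notation), discrete variants of the action and cost functionals can be obtained by combining forward differences with Riemann sums:
\[
T_i^{N}(u, q) := h \sum_{k=0}^{N-1} L_i\Bigl(t_k,\, q_k,\, \frac{q_{k+1}-q_k}{h},\, u_k\Bigr),
\qquad
J_i^{N}(u, q) := h \sum_{k=0}^{N-1} f_0\bigl(t_k,\, q_k,\, u_k\bigr),
\]
where $q_k \approx q(t_k)$ and $u_k \approx u(t_k)$.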

The discrete optimization problem (4.15) approximates the infinite-dimensional optimization problem (4.6). We assume that the set of all weak Pareto optimal solutions of the discrete problem (4.15) is nonempty and, similarly to the initial optimization problem (4.6), that the discrete problem (4.15) is regular. If the cost functional is convex, then the discrete multiobjective optimization problem (4.15) is also a convex problem. Analogously to the continuous case (4.6) or (4.7), we can write the corresponding KKT optimality conditions for the finite-dimensional optimization problem over the discrete variables. The necessary optimality conditions for the discretized problem (4.15) reduce the finite-dimensional multiobjective optimization problem to a system of nonlinear equations, which can be solved by gradient-based or Newton-like methods (see, e.g., [18]).
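As a purely illustrative sketch, and not the algorithm developed in this paper, the following Python fragment solves a weighted-sum scalarization of a discrete bi-objective problem of the type (4.15) for a single location. The Lagrangian, the quadratic cost integrand, the grid, the control bounds, and all numerical values are assumptions introduced only for this example.

# A minimal numerical sketch (assumed data): weighted-sum scalarization of a
# discrete bi-objective problem of type (4.15) for one location with a
# force-controlled mass-spring Lagrangian L = 0.5*m*qdot**2 - 0.5*k*q**2 + q*u
# and cost integrand u**2.
import numpy as np
from scipy.optimize import minimize

m, k = 1.0, 1.0            # assumed mass and spring stiffness
t0, tf, N = 0.0, 1.0, 50   # assumed time horizon and number of grid intervals
q0, qf = 0.0, 1.0          # prescribed boundary configurations
h = (tf - t0) / N

def split(z):
    # Decision vector z = (interior configurations q_1..q_{N-1}, controls u_0..u_{N-1}).
    q = np.concatenate(([q0], z[:N - 1], [qf]))
    u = z[N - 1:]
    return q, u

def J1(z):
    # Discretized cost functional: Riemann sum of u^2.
    _, u = split(z)
    return h * np.sum(u ** 2)

def J2(z):
    # Discretized action: forward differences for qdot, Riemann sum for L.
    q, u = split(z)
    qdot = np.diff(q) / h
    L = 0.5 * m * qdot ** 2 - 0.5 * k * q[:-1] ** 2 + q[:-1] * u
    return h * np.sum(L)

w1, w2 = 1.0, 1.0          # scalarization weights
z_init = np.zeros(2 * N - 1)
bounds = [(None, None)] * (N - 1) + [(-1.0, 1.0)] * N   # box constraints on the controls only
result = minimize(lambda z: w1 * J1(z) + w2 * J2(z), z_init,
                  method="L-BFGS-B", bounds=bounds)
q_opt, u_opt = split(result.x)
print(result.fun, q_opt[:3], u_opt[:3])

Varying the weights w1 and w2 produces different candidate weak Pareto points of the discrete problem; a KKT-based treatment, as described above, would replace this scalarization by the corresponding system of nonlinear equations.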

Finally, note that the proposed numerical approach uses necessary optimality conditions, namely the KKT conditions, for the discrete variant (4.15) of the initial optimization problem (4.6). It is well known that some necessary optimality conditions for discrete systems, for example the discrete version of the classical Pontryagin Maximum Principle, fail to hold in the absence of restrictive assumptions. For a constructive numerical treatment of the discrete optimization problem it is therefore necessary to apply suitable modifications of the conventional optimality conditions. For instance, in the case of discrete optimal control problems one can use the so-called Approximate Maximum Principle, which is specially designed for discrete approximations of general OC problems [30].

5. Concluding Remarks

In this paper we propose new theoretical and computational approaches to some classes of hybrid OC problems motivated by general mechanical systems. Using a variational structure of these nonlinear Euler-Lagrange or Hamilton-type dynamical systems, one can formulate an auxiliary equivalent problem of multiobjective optimization. This problem and the corresponding theoretical and numerical optimization techniques can provide a basis for a numerical treatment of the given hybrid OC problem.

The proofs of our results and the main numerical concepts use some differentiability and convexity assumptions. These restrictive conditions are motivated by the “classical” structure of the mechanical hybrid systems under consideration. On the other hand, modern variational analysis proceeds without such restrictive smoothness assumptions; we refer to [28, 30] for theoretical details. Evidently, nonsmooth variational techniques and numerical algorithms can be considered as a possible mathematical tool for the analysis of discontinuous and impulsive (nonsmooth) hybrid mechanical systems.

Finally, note that the theoretical approach and the conceptual numerical aspects presented in this paper need to be extended by adequate implementable algorithms and computational schemes. Moreover, some general classes of hybrid OC problems with additional state and/or mixed constraints can be taken into consideration; in this case one needs to choose a suitable discretization procedure for the complete constrained OC problem and to use sophisticated necessary optimality conditions. It also seems possible to apply the theoretical and computational ideas presented in this paper to some practically motivated nonlinear hybrid and switched OC problems in mechanics, for example, to optimization problems of robot dynamics. It is natural to expect that applications of the hybrid-type OC methodology to sophisticated mechanical objects can yield a more detailed and precise description of the dynamical behavior of these controllable systems of mechanical nature.