Numerical Methods and Applications in Biomechanical Modeling 2014
Comparative Sensitivity Analysis of Muscle Activation Dynamics
Abstract
We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties; other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treat initial conditions as parameters and to calculate second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method to identify particularly low sensitivities and detect superfluous parameters, while an experimenter could use it to identify particularly high sensitivities and improve parameter estimation. Hatze’s nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac’s linear model. Unlike Zajac’s model, however, Hatze’s model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze’s model that combines best with a particular muscle force-length relation.
1. Introduction
Scientific knowledge is gained by an interplay between quantitative real-world measurements of physical, chemical, or biological phenomena and the development of mathematical models for understanding the dynamical processes behind them. In general, such phenomena are determined as spatiotemporal patterns of physical measures (state variables). Modelling consists of distinguishing the surrounding world from the system that yields the phenomena and of formulating a mathematical description of the system, a model, that can predict values of the state variables. The calculations depend on model parameters and often on given measured input variables. By changing parameter values and analysing the resulting changes in the values of the state variables, the model may then be used as a predictive tool, and its validity can be verified. If the mathematical model description is moreover derived from first principles, the model has the potential to explain the phenomena in a causal sense.
Calculating the sensitivities of a model’s predicted output, that is, of the system’s state variables, with respect to model parameters is a means of eliminating redundancy and indeterminacy from models and thus helps to identify valid models. Sensitivity analyses can be helpful both in model-based experimental approaches and in purely theoretical work. A modelling theoretician could be looking for parameters to which all state variables are insensitive; such parameters might be superfluous. An experimenter may inspect the model that represents his working hypothesis and analyse which of the model’s state variables are specifically sensitive to a selected parameter. The experimenter would then have to measure exactly this state variable to identify the value of the selected parameter.
In a biomechanical study, Scovil and Ronsky [1] applied sensitivity analysis to examine the dynamics of a mechanical multibody system: a runner’s skeleton coupled to muscle activation-contraction dynamics. They calculated specific sensitivity coefficients in three slightly different ways. A sensitivity coefficient is the difference quotient obtained by dividing the change in a state variable by the change in a model parameter value, evaluated in a selected system state [2]. The corresponding partial derivative may simply be called a “sensitivity,” and a sensitivity function is the time evolution of a sensitivity [2]. Lehman and Stark [2] had earlier proposed a more general and unified approach than Scovil and Ronsky [1], which allows systematically calculating the sensitivities of any dynamical system described in terms of ordinary differential equations. As an example of sensitivity functions, Lehman and Stark [2] applied their proposed method to a muscle-driven model of saccadic eye movement. By calculating a percentage change in a state variable value per percentage change in a parameter value, all sensitivities can be made comprehensively comparable, even across models.
A sensitivity as defined so far is of first order. Methodologically, we aim to go a step beyond, namely, to calculate second-order sensitivities. These measures quantify how much the sensitivity of a state variable with respect to one model parameter depends on changing another parameter; analysing them reveals the strength of the parameters’ interdependent influence on model dynamics. In addition to this so-called local sensitivity analysis, we take the whole parameter variability into account by calculating global sensitivities according to Chan et al. [3] and Saltelli and Chan [4]. This approach allows translating the impact of one parameter on a state variable into the parameter’s importance, by fully comprising its interdependent influence in combination with all other parameters’ sensitivities.
In this study, we apply sensitivity analysis to models that predict how the activity of a muscle (its chemical state) changes when the muscle is stimulated by neural signals (electrical excitation). Such models are used for simulations of muscle contractions coupled to activation dynamics, and models for coupled muscular dynamics are often part of neuromusculoskeletal models of biological movement systems. In particular, we rate two specific model variants of activation dynamics formulated by Zajac [5] and by Hatze [6]. As a first result, we present an example of a simplified version of the Zajac [5] model in which sensitivity functions can in fact be calculated in closed form. Subsequently, we calculate the sensitivities numerically with respect to all model parameters in both models, aiming at an increased understanding of the influence of changes in model parameters on the solutions of the underlying ordinary differential equations (ODEs). Additionally, we discuss which of the two models may be physiologically more accurate. The arguments come from a mixture of three different aspects: sensitivity analysis, others’ experimental findings, and an additional attempt to best fit different combinations of activation dynamics and force-length relations of the contractile element (CE) in a muscle to known data on shifts in optimal CE length with muscle activity [7].
2. Two Models for Muscle Activation Dynamics
Macroscopically, a muscle fibre, or an assembly thereof, a muscle belly, is often mapped mathematically onto a one-dimensional massless thread called the “contractile component” or “contractile element” (CE) [8–12]. Its absolute length may be normalised to the optimal fibre length. In macroscopic muscle models, the CE muscle force is usually modelled as a function of a force-(CE-)length relation, a force-(CE-)velocity relation, and the (CE) activity. Commonly, the muscle activity represents the number of attached cross-bridges within the muscle, normalised to the maximum number available. It can also be considered as the concentration of bound ions in the muscle sarcoplasma relative to its physiological maximum. A basic-activity parameter represents the minimum activity that is assumed to occur without any stimulation [6].
We analyse two different formulations of muscle activation dynamics, that is, the time evolution of muscle activity. One formulation was suggested by Zajac [5], which we modified slightly to take the basic activity into account, together with an appropriate initial condition (1). In this context, the (electrical) stimulation of the muscle acts as a parameter for controlling muscle dynamics. It represents the output of the nervous system’s dynamics applied to the muscle, which in turn interacts with the skeleton, the body mass distribution, the external environment, and therefore the nervous system in a feedback loop. Electromyographic (EMG) signals can be seen as a compound of such neural stimulations, collected in a finite volume (being the input to a number of muscle fibres) over a frequency range and coming from a number of (moto)neurons. One model parameter denotes the activation time constant, and another the ratio of activation to deactivation time constants (the deactivation boost).
An alternative formulation of muscle activation dynamics was introduced by Hatze [6] (2). We divided the original equation from Hatze [6] by the parameter representing the maximum concentration of free ions in the muscle sarcoplasma, so that the corresponding concentration takes normalised values. The activity is finally calculated from the normalised concentration by a nonlinear function (3), and the basic-activity parameter is shifted to the accordingly renormalised function (4). Two cases of the exponent in this function have been suggested by Hatze [13], depending on CE length, and both have been applied in the literature [7, 8, 14, 15]. By substituting (2) and (3) and resubstituting the inverse of (3) afterwards, Hatze’s formulation of activation dynamics can be transformed into a nonlinear differential equation directly in terms of the activity (5), with a corresponding initial condition.
The solutions of both formulations of activation dynamics, (1) and (5), can now be directly compared by integrating them with the same initial condition and the same stimulation.
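For illustration, both formulations can be integrated side by side. The sketch below assumes the commonly cited forms of the two dynamics (Zajac's linear ODE with a deactivation boost; Hatze's first-order free-ion dynamics with a nonlinear activity read-out); all parameter names and values are illustrative assumptions rather than the paper's calibrated set.

```python
import numpy as np
from scipy.integrate import solve_ivp

def zajac_rhs(t, a, u, tau, beta):
    """Zajac-type linear activation dynamics with deactivation boost beta."""
    return (u - a * (beta + (1.0 - beta) * u)) / tau

def hatze_rhs(t, gamma, u, m):
    """Hatze-type dynamics of the normalised free-ion concentration gamma."""
    return m * (u - gamma)

def hatze_activity(gamma, l_ce, nu, q0=0.005):
    """Nonlinear, length-dependent activity read-out (length scaling set to 1)."""
    x = (l_ce * gamma) ** nu
    return (q0 + x) / (1.0 + x)

# Illustrative values: full stimulation step from a low initial activity.
u, tau, beta, m, nu = 1.0, 0.01, 0.5, 10.0, 3.0
t_eval = np.linspace(0.0, 0.1, 200)
sol_z = solve_ivp(zajac_rhs, (0.0, 0.1), [0.005], args=(u, tau, beta),
                  t_eval=t_eval, rtol=1e-8)
sol_h = solve_ivp(hatze_rhs, (0.0, 0.1), [0.005], args=(u, m),
                  t_eval=t_eval, rtol=1e-8)
a_hatze = hatze_activity(sol_h.y[0], l_ce=1.0, nu=nu)
# Both activities rise towards saturation, but along different trajectories.
```

Note that the two trajectories can be compared pointwise because both are evaluated on the same time grid with the same stimulation and initial condition.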
3. Local First and Second Order Sensitivity of ODE Systems regarding Their Parameters
We consider a system of ordinary, first-order initial value problems (IVPs) (6), comprising the vector of state variables, the vector of right-hand sides of the ODE, and the set of parameters on which the ODE depends; the vector of initial conditions (7) is abbreviated separately.
The first-order sensitivity of the solution with respect to the parameter set is defined as the matrix (8) of partial derivatives of the state variables with respect to the parameters. We simplify the notation but keep the dependencies in mind. Because the solution might only be obtained numerically rather than as a closed-form expression, we apply the well-known theory of sensitivity analysis as stated in Vukobratovic [16], Dickinson and Gelinas [17], Lehman and Stark [2], and Zivari-Piran [18]. Differentiating (8) with respect to the parameters and applying the chain rule yields an ODE (10) for the first-order solution sensitivity: in short, the time derivative of the sensitivity matrix equals the Jacobian matrix of the right-hand sides times the sensitivity matrix, plus the matrix of partial derivatives of the right-hand sides with respect to the parameters; the initial sensitivity matrix consists of zeros only.
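The first-order sensitivity ODE described above can be integrated alongside the state. A minimal sketch for the simplest linear case, da/dt = (u - a)/tau (a reduced Zajac-type dynamics; parameter names and values are illustrative assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp

u, tau, a0 = 1.0, 0.01, 0.005

def augmented_rhs(t, z):
    a, s_u, s_tau = z                      # state + sensitivities w.r.t. u, tau
    f = (u - a) / tau
    df_da = -1.0 / tau                     # Jacobian of the right-hand side
    return [f,
            df_da * s_u + 1.0 / tau,            # dS_u/dt   = J*S_u   + df/du
            df_da * s_tau - (u - a) / tau**2]   # dS_tau/dt = J*S_tau + df/dtau

# Sensitivities start at zero, as required by the sensitivity ODE.
sol = solve_ivp(augmented_rhs, (0.0, 0.05), [a0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 0.05, 100), rtol=1e-10, atol=1e-12)
a, s_u, s_tau = sol.y
# Closed-form check for this linear case: da/du = 1 - exp(-t/tau).
assert np.allclose(s_u, 1.0 - np.exp(-sol.t / tau), atol=1e-5)
```

The same augmentation applies unchanged to nonlinear right-hand sides; only the Jacobian and the parameter derivatives need to be supplied.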
By analogy, the second-order sensitivity of the solution with respect to the parameter set is defined as a tensor of second partial derivatives, assuming that the prerequisites of Schwarz’s theorem (symmetry of the second derivatives) are fulfilled throughout. Differentiating the first-order sensitivity ODE with respect to the parameters and applying the chain rule leads to the corresponding second-order sensitivity ODE (14). For purposes beyond the aim of this paper, a condensed notation introducing the concept of tensor (or Kronecker) products as in Zivari-Piran [18] may be helpful. For a practical implementation in MATLAB see Bader and Kolda [19].
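Differentiating once more, a second-order sensitivity can be propagated the same way. A sketch for the same reduced linear dynamics da/dt = (u - a)/tau, tracking the mixed second-order sensitivity with respect to stimulation and time constant (all names and values are illustrative assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp

u, tau, a0 = 1.0, 0.01, 0.005

def rhs(t, z):
    a, s_u, s_tau, s_ut = z
    f_a = -1.0 / tau                                  # df/da
    return [(u - a) / tau,
            f_a * s_u + 1.0 / tau,                    # first order w.r.t. u
            f_a * s_tau - (u - a) / tau**2,           # first order w.r.t. tau
            # second order (u, tau): J*S_ut + (d2f/da dtau)*S_u + d2f/du dtau
            f_a * s_ut + s_u / tau**2 - 1.0 / tau**2]

sol = solve_ivp(rhs, (0.0, 0.05), [a0, 0.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 0.05, 200), rtol=1e-10, atol=1e-12)
s_ut = sol.y[3]
# Closed-form check: d2a/(du dtau) = -(t/tau**2)*exp(-t/tau).
ref = -(sol.t / tau**2) * np.exp(-sol.t / tau)
```

The mixed sensitivity quantifies how the influence of the stimulation itself depends on the chosen time constant, which is exactly the interdependence the second-order analysis is meant to expose.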
Furthermore, if an initial condition (see (7)) is considered as another parameter, we can derive a separate sensitivity differential equation by rewriting (6) in its integral form. Differentiating this integral equation with respect to the initial condition, and differentiating again, results in a homogeneous ODE (17) for each component of the corresponding sensitivity.
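Treating the initial condition as a parameter, the resulting sensitivity ODE is homogeneous, with unit initial sensitivity. A sketch for the reduced linear dynamics da/dt = (u - a)/tau (illustrative values):

```python
import numpy as np
from scipy.integrate import solve_ivp

u, tau, a0 = 1.0, 0.01, 0.005

def rhs(t, z):
    a, s_a0 = z
    return [(u - a) / tau,
            (-1.0 / tau) * s_a0]   # homogeneous: no direct parameter term

# The sensitivity to the initial condition starts at 1, not 0.
sol = solve_ivp(rhs, (0.0, 0.05), [a0, 1.0],
                t_eval=np.linspace(0.0, 0.05, 100), rtol=1e-10, atol=1e-12)
s_a0 = sol.y[1]
# Closed-form check: da/da0 = exp(-t/tau), the "memory" of the initial value.
```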
The parameters of our analysed models are supposed to represent physiological processes and therefore bear physical dimensions. For example, the rate parameters are frequencies measured in Hz, whereas the maximum free ion concentration is measured in mol/L; the corresponding sensitivities bear the respective inverse dimensions (note that each of our models consists of a single ODE, so we do not need a second index). Normalisation provides a comprehensive comparison between all sensitivities, even across models. For any parameter, the value fixed for a specific simulation is a natural choice; for any state variable, we chose its current value at each point in time of the corresponding ODE solution. Hence, we normalise each sensitivity by multiplying it with the ratio of parameter value to state variable value, obtaining the relative sensitivity (18). A relative sensitivity thus quantifies the percentage change in a state variable value per percentage change in a parameter value; second-order sensitivities are normalised accordingly. It can be shown that this method is valid and mathematically equivalent to another common method in which the whole model is nondimensionalised a priori [20]. A non-normalised model formulation has the additional advantage of usually allowing more immediate and transparent access for experimenters. In the remainder of this paper, we always present and discuss relative sensitivity values normalised in this way.
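The normalisation can be sketched as follows, using the closed-form solution of the reduced linear dynamics as an example (parameter names and values are illustrative assumptions):

```python
import numpy as np

def relative_sensitivity(p_value, y, s):
    """Scale a raw sensitivity dy/dp by p/y(t): percent change per percent change.
    y and s are arrays of the state and of dy/dp over time (y must be nonzero)."""
    return (p_value / y) * s

# For a(t) = u + (a0 - u)*exp(-t/tau) the sensitivity da/du = 1 - exp(-t/tau).
u, tau, a0 = 1.0, 0.01, 0.005
t = np.linspace(1e-4, 0.05, 100)
a = u + (a0 - u) * np.exp(-t / tau)
s_u = 1.0 - np.exp(-t / tau)
rel_u = relative_sensitivity(u, a, s_u)
# rel_u tends to 1 for t >> tau: a 1% change in u eventually changes a by 1%.
```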
In our models, only a single state variable occurs, so (10) and (14) simplify accordingly (no summation).
4. Variance-Based Global Sensitivity Analysis
The differential sensitivity analysis above is called a local method because it does not take the physiological range of parameter values into account. Additionally factoring in such ranges characterises the so-called global methods. The main idea behind most global methods is to include a statistical component that scans the whole parameter space, combining the percentage change in a state variable value per percentage change in a parameter value with the variability of all of the parameters. The parameter space can be seen as a cuboid whose dimension equals the number of parameters, bounded in each dimension by the minimal and maximal parameter values. We can now fix a certain point in this cuboid and calculate the local gradient of the solution with respect to the parameters there. The volume of the star-shaped region that is explored by changing only one parameter at a time, and that lies within a ball around the chosen point, vanishes in comparison to the volume of the whole cuboid as the number of parameters increases [21]. For an overview of the numerous methods like ANOVA, FAST, regression, or Sobol’s indexing, the reader is referred to Saltelli and Chan [4] and Frey et al. [22].
In this section we sketch just the main idea of the variance-based sensitivity analysis approach as presented in Chan et al. [3], which is based on Sobol’s indexing. We chose this method because of its transparency and low computational cost. It aims at calculating two measures of the sensitivity of a state variable with respect to a parameter: the variance-based sensitivity (VBS) function and the total sensitivity index (TSI) function. The VBS functions give a normalised first-order sensitivity quite similar to the relative sensitivity from the previous section but include the parameter range. The TSI functions additionally include higher-order sensitivities and provide a measure of the interdependencies of parameter influences.
A recipe for calculating the VBS and TSI functions is as follows. First, set boundaries for all model parameters, either by model assumptions or by literature reference, thus fixing the parameter cuboid. Second, generate two sets of sample points suited to represent the underlying probability distribution of each parameter, in our case the uniform distribution. Third, for each parameter generate sets of new sample points in which every component is taken from the first sample set, except for the component of the selected parameter, which is replaced by the corresponding component of the second sample set, and vice versa. Fourth, evaluate the model (6) at all of these sample points, resulting in a family of solutions.
For this family perform the following calculations.
(1) Compute the variance of the family of all solutions as a function of time. This variance function indicates the general model output variety throughout the whole parameter range.
(2) For every parameter, compute the variance of the family of solutions in which solely the value of that parameter is changed. Each such variance is a function of time.
(3) For every parameter, compute the variance of the family of solutions in which the value of that parameter is fixed, whereas all other parameter values are changed.
Note that the computations in Chan et al. [3] are done using Monte Carlo integrals as an approximation. The VBS and TSI functions can finally be calculated from these variances; the normalisation entails additional properties (see [3, Figure 1]). In other words, the VBS function gives the normalised global first-order sensitivity of the solution with respect to a parameter in relation to the model output range. Accordingly, the TSI function quantifies the relative impact of the variability in a parameter on the model output, factoring in the interdependent influence in combination with all other parameters’ sensitivities. Chan et al. [3] suggested denoting this value as the “importance” of a parameter.
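The recipe above can be sketched with Monte Carlo estimators. The sketch below uses Jansen's estimator variants of the Sobol indices (a common choice, not necessarily the exact formulas of Chan et al. [3]) on a cheap closed-form model output; bounds, sample sizes, and the evaluation time are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
t_star = 0.01                         # fixed evaluation time of the "solution"
lo = np.array([0.0, 0.005, 0.001])    # illustrative bounds for (u, tau, a0)
hi = np.array([1.0, 0.05, 0.05])

def model(x):
    """Closed-form model output a(t*) = u + (a0 - u)*exp(-t*/tau)."""
    u, tau, a0 = x[:, 0], x[:, 1], x[:, 2]
    return u + (a0 - u) * np.exp(-t_star / tau)

n, k = 20000, 3
A = lo + (hi - lo) * rng.random((n, k))   # first sample set
B = lo + (hi - lo) * rng.random((n, k))   # second sample set
fA, fB = model(A), model(B)
V = np.var(np.concatenate([fA, fB]))      # total output variance

S, ST = np.empty(k), np.empty(k)
for i in range(k):
    AB = A.copy()
    AB[:, i] = B[:, i]                    # swap in column i from B
    fAB = model(AB)
    S[i] = (V - 0.5 * np.mean((fB - fAB) ** 2)) / V   # first-order (VBS-like)
    ST[i] = 0.5 * np.mean((fA - fAB) ** 2) / V        # total-effect (TSI-like)
# Stimulation u dominates this output; ST >= S holds for every parameter.
```

For a time-resolved analysis as in the paper, the same estimators are simply evaluated at every point of the time grid instead of at a single t*.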
5. An Analytical Example for Local Sensitivity Analysis including a Link between Zajac’s and Hatze’s Formulations
By further simplifying Zajac’s formulation of activation dynamics (1), assuming a deactivation boost of one (activation and deactivation time constants are equal) and a vanishing basic activity, we obtain a linear ODE (22) for this specific case, which is equivalent to Hatze’s equation (2) modelling the time evolution of the free ion concentration. By analysing this specific case, we aim at making the sensitivity analysis method described above more transparent for the reader. Solving (22) yields an exponential solution (23), depending on just two parameters, the stimulation (control parameter) and the time constant of activation (internal parameter), in addition to the initial value. The solution has practically reached its final value after a few multiples of the time constant.
Although we already know the solution (23) in closed form, we apply the more generally applicable, implicit methods (10) and (17) to determine the derivatives of the solution with respect to the parameters (the sensitivities). Hence, for the transparency of our method, we calculate the gradient of the right-hand side of ODE (22) and insert these partial derivatives into (10) and (17). Solving the respective three ODEs for the three parameters (stimulation, time constant, and initial value) and normalising according to (18) gives the relative sensitivities as functions of time, (25)–(27) (see Figure 1).
A straightforward result is that the time constant has its maximum effect on the solution (Figure 1) at a time equal to the time constant itself. In case of a step in stimulation, this sensitivity vanishes in the initial situation and exponentially approaches zero again after a few further multiples of the typical period. Note that the sensitivity with respect to the time constant is negative, which means that an increase in the time constant decelerates activation: for a fixed initial value, the solution value at a given point in time decreases if the time constant is increased. After a step in stimulation, the time in which the solution bears some memory of its initial value equals the period of being non-sensitive to any further step in stimulation (compare (25) to (27)). After a few time constants, both of these sensitivities have decayed to a small fraction of their peak values.
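These closed-form statements can be checked numerically. The sketch below assumes the exponential solution a(t) = u + (a0 - u)exp(-t/tau) of this section (parameter values are illustrative):

```python
import numpy as np

u, tau, a0 = 1.0, 0.01, 0.0
t = np.linspace(0.0, 0.1, 100001)
a = u + (a0 - u) * np.exp(-t / tau)

# Raw sensitivity to tau: da/dtau = (a0 - u)*(t/tau**2)*exp(-t/tau);
# its magnitude is largest exactly at t = tau.
s_tau = (a0 - u) * (t / tau**2) * np.exp(-t / tau)
t_peak = t[np.argmax(np.abs(s_tau))]

# Memory of the initial value: da/da0 = exp(-t/tau) decays on the same scale;
# after three time constants only about 5 percent of it remains.
s_a0 = np.exp(-t / tau)
```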
6. The Numerical Approach and Results
Typically, biological dynamics are represented by nonlinear ODEs. The linear ODE used for describing activation dynamics in the Zajac [5] case (1) is therefore more of an exception, for which a closed-form solution can be given: equation (23) is such an example, as shown in the previous section for the reduced case of non-boosted deactivation (22).
In general, however, nonlinear ODEs used in biomechanical modelling, such as the Hatze [6] case (5) for describing activation dynamics, can only be solved numerically. Any explicit formulation of a model in terms of ODEs nevertheless allows providing the partial derivatives of the right-hand sides with respect to the model parameters in closed form. Fortunately, this is exactly what is required by the sensitivity analysis approach presented in Section 3, in particular in (10).
As an application of this approach, we now present a comparison of both formulations of activation dynamics. The example indicates that the approach may be of general value, because it is common practice in biomechanical modelling to (i) formulate the ODEs in closed form and (ii) integrate the ODEs numerically. Adding further sensitivity ODEs for the model parameters then becomes an inexpensive enhancement of the procedure used to solve the problem anyway.
For the two different activation dynamics [5, 6], the respective parameter sets (28) and (29) consist of all model parameters, including the initial conditions. The numerical solutions of these ODEs were computed within the MATLAB environment (The MathWorks, Natick, USA; version R2013b), using a pre-implemented numerical solver, a Runge-Kutta algorithm of order 5 (for details see [23]).
6.1. Results for Zajac’s Activation Dynamics: Sensitivity Functions
We simulated activation dynamics for the parameter set (28), leaving two of the values constant and varying the other three (initial condition, stimulation, and deactivation boost). The time courses of the relative sensitivities with respect to all parameters are plotted in Figure 2; the left and right columns correspond to the two deactivation-boost cases. Pairs of the initial condition and stimulation values are specified in the legend of Figure 2, with increasing values of both parameters from top to bottom.
Figure 2: panels (a)–(h).
6.1.1. Relative Sensitivity
Solutions are insensitive to the choice of the basic activity, except if both the initial activity and the stimulation (the latter approximating the final activity) are very low, near the basic activity itself.
6.1.2. Relative Sensitivity
The memory (influence on the solution) of the initial value is lost after a few activation time constants, almost independently of all other parameters. This loss of memory is slower than in the extreme case analysed in Section 5 (Figure 1), in which the influence (relative sensitivity) of the lowest possible initial value on the most rapidly increasing solution (maximum possible final value) is lost earlier.
6.1.3. Relative Sensitivity
The influence of the time constant on the solution is reduced with decreasing difference between initial and final activity values (compare maximum values in Figures 1 and 2) and, regardless of the deactivation boost, with jointly raised levels of initial activity and stimulation, the latter determining the final activity value. When deactivation is slower than activation (right column in Figure 2), the sensitivity to the time constant is higher than in the case of equal time constants, both in its maximum amplitude and for longer times after the step in stimulation, especially at low activity levels (upper rows in Figure 2).
6.1.4. Relative Sensitivity
Across all parameters, the solution is in general most sensitive to the stimulation. However, the influence of the deactivation boost parameter is usually comparable; in some situations, this also applies to the activation time constant (see below). For equal activation and deactivation time constants (Figure 2, left), the solution becomes a little less sensitive to the stimulation with decreasing activity level, which reflects that the final solution value is not determined by the stimulation alone but just as much by other parameters. If deactivation is much slower than activation (Figure 2, right), we find the opposite: the more the activity level rises, the less the stimulation determines the solution. Additionally, the stimulation somehow competes with both the deactivation boost and the time constant (see further below). Using the term “compete” illustrates the idea that any single parameter should have an individual interest in influencing the dynamics as much as possible in order not to be considered superfluous.
6.1.5. Relative Sensitivity
Sensitivity with respect to the deactivation boost generally decreases with increasing activity and stimulation levels. It vanishes at maximum stimulation.
6.1.6. Relative Sensitivities Compared
At submaximal stimulation levels, the final solution value is determined to almost the same degree by the stimulation and the deactivation boost, yet with opposite tendencies (positive sensitivity to the former, negative to the latter). As explained, both parameters compete for their impact on the final solution value. Only at maximum stimulation (lowest row in Figure 2) is this parameter competition resolved in favour of the stimulation; in this specific case, the deactivation boost does not influence the solution at all. For equal time constants, the competition about influencing the solution is intermittently but only slightly biased by the time constant, whose sensitivity peaks at comparably low magnitude. This influence comes likewise intermittently at the cost of the deactivation boost’s influence, whose absolute sensitivity rises a little more slowly than that of the stimulation. In the case of slower deactivation, this competition becomes more differentiated and spread out in time: again at submaximal stimulation and activity levels, the absolute sensitivities of stimulation, deactivation boost, and time constant are of comparable magnitude, making all three parameters compete to comparable degrees for an impact on the solution over an extended period after the step in stimulation.
6.2. Results for Hatze’s Activation Dynamics: Sensitivity Functions
We also simulated activation dynamics, now for the parameter set (29), leaving four of the values constant and again varying three others (initial condition, stimulation, and the exponent governing the nonlinearity), keeping in mind that the eighth parameter is assumed to depend on this exponent. Time courses of the relative sensitivities with respect to all parameters are plotted in Figure 3; the left and right columns correspond to the two exponent cases. The same pairs of initial condition and stimulation values (increasing from top to bottom; see legend of Figure 3) are used as in Section 6.1 (Figure 2).
Figure 3: panels (a)–(h).
Hatze’s activation dynamics (5) is nonlinear, unlike Zajac’s activation dynamics (1). This nonlinearity manifests particularly in a changeful influence of the exponent parameter. Additionally, the rate parameter of Hatze’s dynamics is only roughly comparable to the inverse of the exponential time constant in Zajac’s linear activation dynamics.
6.2.1. Relative Sensitivity
In Zajac’s linear differential equation (1), the time constant establishes a distinct time scale independent of all other parameters. The rate parameter in Hatze’s activation dynamics (5) is just formally equivalent to the reciprocal of such a time constant: its sensitivity does not peak stringently at one characteristic time but rather diffusely over an extended interval, in both exponent cases. At first this is not surprising, because the scaling factor in Hatze’s dynamics is a product of several quantities rather than the rate parameter alone. However, the rate parameter does not fix an invariant time scale for Hatze’s nonlinear differential equation. This fact becomes particularly prominent at extremely low activity levels in one exponent case (Figure 3, left, top row) and up to moderately submaximal activity levels in the other (Figure 3, right, top two rows). Here, the sensitivity is negative, which means that increasing the rate parameter results in less steeply increasing activity. This observation is counterintuitive to identifying the parameter with the reciprocal of a time constant. Moreover, the exponent does not linearly scale the time behaviour, because sensitivity peaks do not occur systematically earlier in one exponent case as compared to the other.
6.2.2. Relative Sensitivity
The loss of the memory of the initial condition confirms the analysis of time behaviour above. At high activity levels (Figure 3, bottom row), Hatze’s activation dynamics loses memory at identical time horizons regardless of the exponent value, seemingly more slowly for the higher exponent at intermediate levels (Figure 3, two middle rows), and clearly faster at very low levels (Figure 3, top row). The rate parameter still roughly determines the time horizon in which the memory of the initial condition is lost and the influence of all other parameters is continuously switched on from zero.
6.2.3. Relative Sensitivity
As in Zajac’s dynamics, the solution is generally only sensitive to the basic activity at very low stimulation levels (Figure 3, top row). At such levels, one of the two exponent cases shows the peculiarity that the solution becomes strikingly insensitive to any parameter other than the basic activity (and the initial condition). The time evolution of the solution is then more or less determined by just these minimum and initial activities, with the rate parameter determining the approximate switching time horizon between both. The length dependency, constituting a crucial property of Hatze’s activation dynamics, is practically suppressed in this case at very low activities and stimulations. In the other exponent case, the length sensitivity remains on a low but still significant level of about a fourth of the three dominating quantities.
6.2.4. Relative Sensitivity
The sensitivity with respect to the exponent is extraordinarily high at low activities and stimulations in both exponent cases (Figure 3, second row from top), and additionally at extremely low levels in one case (Figure 3, left, top row). At moderately submaximal levels (Figure 3, third row from top), the solution is influenced with an already inverted tendency (the sensitivity changes sign to positive) after about one characteristic time horizon in one case, whereas at these levels the solution is practically insensitive to the exponent in the other. At high levels (Figure 3, bottom row), there is no change in the character of the time evolution of the solution for either exponent value: the degree of nonlinearity is unimportant, because the time evolution and the ranking of all other sensitivities are hardly influenced by it. In both cases, the rise in activity is quickened by increasing the exponent (positive sensitivity), as opposed to low activity and stimulation levels, where rises in activity are slowed down (negative sensitivity; see above).
6.2.5. The Remaining Relative Sensitivities
Of all the remaining parameters, namely, the stimulation, the scaled maximum free ion concentration, the relative CE length, and the pole of the length dependency in Hatze’s activation dynamics, the latter has the lowest influence on the solution. The influence characters of all four parameters are yet completely identical: their sensitivities are always positive and coupled by fixed scaling ratios, because all of them occur within just one product on the right-hand side of (5). Two of these sensitivities are identical, while one is the highest of the four. Except at very low activity (where the basic activity plays a dominating role), and except for the generally changeful influence of the exponent, these are the four parameters that dominate the solution after an initial phase in which the initial activity determines its evolution. The rate parameter does not have a strong direct influence on the solution; as stated above, it defines the approximate time horizon in which the initial condition’s influence is lost and all other parameters’ influence is switched on from zero.
6.3. Variance-Based Sensitivity (VBS) and Total Sensitivity Indices (TSI) for Zajac’s and Hatze’s Activation Dynamics
Table 1 pools the lower and upper boundaries used in our calculations for every parameter of both models. We refer to Hatze [24], Zajac [5], or Günther et al. [11] for the traceability of our choices. The left-hand side of Figure 4 shows the VBS functions of every parameter of Zajac’s model. The plotted functions can be compared to the previously computed relative first-order sensitivity functions from Figure 2: at first sight they look equal, with only slight differences in the duration of influence and in peak amplitudes. The calculated VBS functions in the Zajac case thus show what would be expected intuitively: a VBS represents a parameter’s mean influence averaged over its range of values. Additionally, we plotted the sum of all first-order sensitivities. This sum indicates what amount of the total variance is covered by first-order sensitivities: the closer the sum is to 1, the smaller the impact of the second- and higher-order sensitivities.

Figure 4: panels (a) and (b).
The right-hand side of Figure 4 shows the TSI functions of every parameter of Zajac’s model. Generally, there are only minor deviations of the TSI functions from their VBS counterparts. That is, the influence of none of the parameters is significantly enhanced by an interdependent effect in combination with other parameters. According to both analyses, there are just four globally important parameters that govern the system’s state throughout the whole examined solution space: the initial condition within a typical time horizon after a step in stimulation, the new stimulation level determining activity thereafter, the deactivation boost with smaller impact than the stimulation, and the time constant determining the time horizon itself.
The left-hand side of Figure 5 shows the functions of every parameter in of Hatze's model. Very similarly to the Zajac case, the calculated seem to represent, to a high degree, a parameter's mean influence averaged over its range of values (compare Figure 3). As in the Zajac case, there are four globally important parameters according to both the and analyses. Compared to Zajac's model, the interdependent effect in combination with other parameters (: right-hand side of Figure 5) is more pronounced for two parameters: the importance of both the stimulation and the CE length is distinctly higher than their first-order effects as expressed by the functions. Furthermore, the time horizon within which the initial condition has an aftereffect in response to a step in is globally a little longer in than in the local sensitivity analysis (Figure 3). In addition, the time horizon of is clearly enhanced by interdependencies with other parameters (: right-hand side of Figure 5).
Altogether, the versus analysis substantiates the local first- and second-order sensitivity analyses: for one thing, Hatze's model is more inert against steps in stimulation than Zajac's model; for another, the dynamics described by Hatze's model incorporates stronger nonlinear coupling effects from combinations of parameters than Zajac's model. These latter effects are seen in more detail when looking at local sensitivities, that is, when analysing just small and selected volumes of the parameter space . In turn, and provide a broad but coarse overview of the first- and higher-order sensitivities of all parameters.
7. Consequences, Discussion, and Conclusions
7.1. A Bottom Line for Comparing Zajac's and Hatze's Activation Dynamics: Second-Order Sensitivities
At first sight, Zajac's activation dynamics [5] is more transparent because it is descriptive in the sense that it captures the physiological behaviour of activity rise and fall in an apparently simple way. To this end it utilises a linear differential equation with well-known properties, allowing for a closed-form solution. It needs only four parameters to describe the ion influx into the muscle as a response to electrical stimulation: the stimulation itself as a control parameter, the time constant for an exponential response to a step increase in stimulation, a third parameter (the deactivation boost) biasing both the rise and fall times, and the saturation value of activity, which in turn depends on and on the basic activity as the fourth parameter. The smaller is (deactivation slower than activation), the faster the very activity level is reached at which saturation would occur for . Saturation for occurs at a level that is higher than . Altogether, in Zajac's as compared to Hatze's activation dynamics, the outcome of setting a control parameter value , with regard to how fast and at which level the activity saturates, seems easier for a controller to handle.
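The rise and fall behaviour described above can be reproduced in a few lines. The sketch below integrates one common textbook form of Zajac's linear activation ODE with explicit Euler; the parameter values are illustrative choices, not the paper's fitted values.

```python
import numpy as np

# Sketch of a Zajac-type linear activation ODE (one common textbook form):
#   tau * da/dt = u - (beta + (1 - beta) * u) * a
# Parameter values are illustrative; beta = tau_act / tau_deact < 1 means
# deactivation is slower than activation.
tau, beta = 1.0 / 40.0, 1.0 / 3.0

def simulate(u, a0, t_end=0.3, dt=1e-4):
    a, out = a0, []
    for _ in range(int(t_end / dt)):
        a += dt * (u - (beta + (1.0 - beta) * u) * a) / tau
        out.append(a)
    return np.array(out)

rise = simulate(u=1.0, a0=0.005)   # step up: activation with rate 1/tau
fall = simulate(u=0.0, a0=1.0)     # step down: slower rate beta/tau

# For u = 1 the activity saturates at u / (beta + (1 - beta) * u) = 1
print(rise[-1], fall[-1])
```

The saturation level reached for a submaximal constant stimulation follows the same formula, u divided by beta + (1 - beta) * u, which is higher than u for beta < 1, matching the qualitative statement above.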
Poorer controllability of Hatze's activation dynamics [6] may be expected from its nonlinearity, its higher number of parameters, and their interdependent influence on the model dynamics. Additionally, Hatze's formulation depends on the CE length , which makes the mutual coupling of activation with contraction dynamics more interwoven. So, at first sight, Hatze's dynamics seems a less manageable construct for a controller that has to deal with a muscle as the biological actuator. Regarding the nonlinearity exponent , solution sensitivity further depends non-monotonically on the activity level, partly even with the strongest influence, partly without any influence. We also found that the solution is more sensitive to its parameters , , and than Zajac's activation dynamics is to any of its parameters.
This higher complexity of Hatze's dynamics becomes even more evident when analysing the second-order sensitivities (see (13) or (19) for their relative values). They express how a first-order sensitivity changes upon variation of any other model parameter. In other words, they are a measure of model entanglement and complexity. Here, we found that the highest values amongst all relative second-order sensitivities in Zajac's activation dynamics are about () and (). In Hatze's activation dynamics, the highest relative second-order sensitivities are those with respect to or (in particular for , , and themselves), with maximum values between about (, ) and (, , , at submaximal activity). That is, they are an order of magnitude higher than in Zajac's activation dynamics.
Yet, we have to acknowledge that Hatze’s activation dynamics contains crucial physiological features that go beyond Zajac’s description.
7.2. A Plus for Hatze’s Approach: Length Dependency
It has been established that the length dependency of activation dynamics is both physiological [7] and functionally vital [15], because it largely contributes to low-frequency muscle stiffness. It has also been verified that Hatze's model approach provides a good approximation of experimental data [7]. In that study, was used without comparing to the case. There seem to be arguments in favour of from a mathematical point of view. In particular, the less changeful scaling of the activation dynamics' characteristics down to very low activity and stimulation levels, at which some CE length sensitivity remains, seems to be an advantage over the case. Up to this point, we have argued solely mathematically. It is, however, physiological reality that is eventually aimed at. We therefore repeated the model fit done by Kistemaker et al. [7], now allowing for variation in and in the force-length relation.
7.3. An Optimal Parameter Set for Hatze's Activation Dynamics Plus CE Force-Length Relation
Sensitivity analysis allows rating Hatze's approach as an entangled construct. Additionally, Kistemaker et al. [7] decided to choose without giving a reason for discarding . It further seems that they did not perform an algorithmic optimisation across various submaximal stimulation levels to find a muscle parameter set that best fits the known shifts in the optimal, submaximal CE length at which isometric force peaks. Accordingly, it seemed worth performing such an optimisation, because generally depends on length and activity , and the latter may be additionally biased by an -dependent capability for building up cross-bridges at a given level of free ions in the sarcoplasm, as formulated in Hatze's approach: . Thus, a shift in optimal CE length with changing can occur, depending on the specific choices of both the length dependency of activation (see (3) and (4)) and the CE's force-length relation .
Consequently, we searched for optimal parameter sets of Hatze's activation dynamics in combination with two different force-length relations : either a parabola [7] or bell-shaped curves [11, 25]. For a given optimal CE length [26] representing a rat gastrocnemius muscle and three fixed exponent values in Hatze's activation dynamics (all other parameters as given in Section 2), we thus determined Hatze's constant and the width parameters of the two different force-length relations ( in Kistemaker et al. [7] and van Soest and Bobbert [9], and in Mörl et al. [25], resp.) by an optimisation approach. The objective function to be minimised was the sum, over five stimulation levels , of the squared differences between the values predicted by the model and those derived from experiments (see Table 2 in Kistemaker et al. [7]). Note that applies in the isometric situation (see (2) and compare (3)). Further note that experimental data for muscle contractions at very low stimulation levels are still missing in the literature: the lowest analysed level available to Kistemaker et al. [7] was , that is, comparable to the second rows from the top in Figures 2 and 3.
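The mechanism behind the fitted shift can be illustrated numerically. The sketch below uses toy stand-ins, entirely our own choices and not Hatze's equations (3) and (4) or the fitted parameter values, for a saturating, length-dependent activation and a bell-shaped force-length relation, and locates the isometric optimum by a simple grid argmax.

```python
import numpy as np

l = np.linspace(0.6, 1.4, 801)   # relative CE length grid

# Toy stand-ins (illustrative, NOT the paper's equations (3)-(4)):
# a saturating, length-dependent activation and a bell-shaped
# force-length relation; all constants are arbitrary choices.
def activation(l, u, nu=3.0, k=5.0):
    x = (u * k * l) ** nu
    return x / (1.0 + x)

def force_length(l, width=0.45):
    return np.exp(-((l - 1.0) / width) ** 2)

def optimal_length(u):
    f = activation(l, u) * force_length(l)   # isometric force over the grid
    return l[np.argmax(f)]

# Lower activity shifts the isometric optimum to longer CE lengths
print(optimal_length(1.0), optimal_length(0.3))
```

Because the toy activation increases with length, the product of the two curves peaks above the pure force-length optimum at submaximal stimulation, reproducing qualitatively the experimentally observed shift that the fit targets.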
The optimisation results are summarised in Table 2. The higher the value, the smaller the optimisation error. The predicted width values or , respectively, decrease along with the error. Yet we would tend to exclude the case, because the predicted width values seem unrealistically low when compared to published values from other sources (e.g., [9, 25]). Furthermore, decreases with when using the parabola model for , whereas it saturates between and for the bell-shaped model. The bell-shaped model shows the most realistic in the case (). Fitting the same model to other contraction modes of the muscle [25], a value of had been found. In contrast, when using the parabola model, realistic values between and are predicted by our optimisation for .

When comparing the optimised parameter values across all start values of the widths, across all values, and across both model functions, we find that the resulting optimal parameter sets are more consistent for the bell-shaped than for the parabola function. The bell-shaped force-length relation generally gives a better fit. For each single value, the corresponding optimisation error is smaller when comparing realistic, published and values that may correspond to each other ( [9] and [25]). Additionally, the error values from our optimisation are generally smaller than the corresponding value calculated from Table 2 in Kistemaker et al. [7] ().
In a nutshell, the most realistic model for the isometric force at submaximal activity levels is the combination of Hatze's approach for activation dynamics with and a bell-shaped curve for the force-length relation with . As a side effect, we predict that the parameter value , being a weighting factor of the first addend in the compact formulation of Hatze's activation dynamics (5), should be reduced by about 40% () as compared to the value originally published in Hatze [13] ().
7.4. A Generalised Method for Calculating Parameter Sensitivities
The findings in the last section were initiated by thoroughly comparing two different biomechanical models of muscular activation using a systematic sensitivity analysis, as introduced by Dickinson and Gelinas [17] and by Lehman and Stark [2]. Starting from the latter formulation, Scovil and Ronsky [1] calculated specific parameter sensitivities for muscular contractions. They applied three variants of this method.
Method 1 applies to state variables that are explicitly known to the modeller, as in, for example, an eye model [2], a musculoskeletal model for running that includes a Hill-type muscle model [1], or the activation models analysed in our study. Scovil and Ronsky [1] calculated the change in the value of a state variable, averaged over time, per finite change in a parameter value, both normalised to their respective unperturbed values. They thus calculated just one (mean) sensitivity value for a finite time interval (e.g., a running cycle) rather than time-continuous sensitivity functions.
Method 2: whereas Dickinson and Gelinas [17] and Lehman and Stark [2] had introduced the full approach for calculating such sensitivity functions, Scovil and Ronsky [1] distorted this approach by suggesting that the partial derivative of the right-hand side of an ODE, that is, of the rate of change of a state variable, with respect to a model parameter would be a “model sensitivity.” The distortion becomes explicitly obvious from our formulation: this partial derivative is just one of the two addends that contribute to the rate of change of the sensitivity function (10), rather than defining the sensitivity of the state variable itself (i.e., of the solution of the ODE) with respect to a model parameter (8).
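The difference between the bare partial derivative and the full sensitivity function can be made concrete. Assuming a Zajac-type linear activation ODE as the model (our illustration, with illustrative parameter values), the sketch below integrates the sensitivity ODE of the direct method alongside the state and cross-checks the result against a finite difference of the solution itself.

```python
import numpy as np

# Direct method (sensitivity ODE) for a Zajac-type activation ODE
#   da/dt = f(a) = (u - c * a) / tau,   c = beta + (1 - beta) * u.
# The sensitivity s = da/dtau obeys
#   ds/dt = (df/da) * s + df/dtau,   s(0) = 0,
# i.e. the partial derivative df/dtau alone is NOT the sensitivity;
# it is only one of the two addends on the right-hand side.
u, tau, beta, a0 = 1.0, 0.025, 1.0 / 3.0, 0.005
c = beta + (1.0 - beta) * u

def integrate(tau, t_end=0.1, dt=1e-5):
    a, s = a0, 0.0
    for _ in range(int(t_end / dt)):
        f = (u - c * a) / tau
        s += dt * ((-c / tau) * s + (-f / tau))   # df/da * s + df/dtau
        a += dt * f
    return a, s

_, s_direct = integrate(tau)

# Cross-check against a central finite difference of the solution itself
h = 1e-6
a_plus, _ = integrate(tau + h)
a_minus, _ = integrate(tau - h)
s_fd = (a_plus - a_minus) / (2.0 * h)
print(s_direct, s_fd)
```

The two estimates agree closely, while the raw partial derivative df/dtau, evaluated along the trajectory, would give a different function entirely.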
Method 3: Scovil and Ronsky [1] had also asked for calculating the influence of, for example, a parameter of the activation dynamics (like the time constant) on an arbitrary joint angle, that is, on a variable that quantifies the overall output of a coupled dynamical system. Of course, the time constant does not explicitly appear in the mechanical differential equation for the acceleration of this very joint angle, which renders method 2 inapplicable. The conclusion in Scovil and Ronsky [1] was to apply method 1. Here, the potential of our formulation comes particularly to the fore. It enables calculating the time-continuous sensitivity of all components of the coupled solution, that is, of any state variable . This is because all effects of a parameter change are in principle reflected within any single state variable, and the time evolution of a sensitivity according to (10) takes this into account.
In this paper, we have further worked out the sensitivity function approach by Lehman and Stark [2], presenting the differential equations for the sensitivity functions in more detail for those modellers who want to apply the method. Furthermore, we enhanced the approach by Lehman and Stark [2] to also calculate the sensitivities of the state variables with respect to their initial conditions (17). This should be helpful not only in biomechanics but also, for example, in meteorology when predicting the behaviour of storms [28]. Since initial conditions are often known only approximately, while their relative sensitivities start at the value , their influence should be traced to verify how their uncertainty propagates during a simulation. In the case of muscle activation dynamics, the sensitivities and , respectively, decreased rapidly to zero: initial activity loses its effect on the solution early, well before steady state is reached.
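For a linear activation ODE this initial-condition sensitivity is even available in closed form, which makes the rapid decay explicit. A minimal sketch with illustrative parameter values:

```python
import numpy as np

# For the linear Zajac-type ODE  tau * da/dt = u - c * a  (c constant in
# time) the sensitivity to the initial condition has the closed form
#   s(t) = d a(t) / d a0 = exp(-c * t / tau),
# so it starts at the relative value 1 and decays exponentially to zero.
# Parameter values are illustrative.
u, tau, beta = 1.0, 0.025, 1.0 / 3.0
c = beta + (1.0 - beta) * u

t = np.linspace(0.0, 0.2, 201)
s = np.exp(-c * t / tau)
print(s[0], s[-1])   # starts at 1, practically zero well before steady state
```

The decay rate c / tau is exactly the effective rate constant of the activation dynamics itself, so the memory of the initial condition vanishes on the same time scale on which the new activity level builds up.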
Furthermore, we included a second-order sensitivity analysis, which is not only helpful for an enhanced understanding of the parameter influence but is also part of mathematical optimisation techniques [29]. The values of can be interpreted either as the relative sensitivity of the sensitivity with respect to another parameter (and vice versa: with respect to ) or as the curvature of the graph of the solution in the dimensional solution-parameter space. The latter may help to connect the results to the field of mathematical optimisation, in which the second derivative (Hessian) of a function is often included in objective functions to find optimal parameter sets.
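Where a closed-form solution exists, such mixed second derivatives can be checked by finite differences. The sketch below is our illustration with a Zajac-type closed-form solution, illustrative parameter values, and a generic normalisation in the spirit of relative sensitivities (not necessarily identical to the paper's (13)/(19)).

```python
import numpy as np

# Closed-form solution of a Zajac-type activation ODE at time t
# (illustrative values, submaximal stimulation u = 0.5):
#   a(t) = a_inf + (a0 - a_inf) * exp(-c * t / tau),  c = beta + (1-beta)*u
u, a0, t = 0.5, 0.005, 0.05

def a_of(tau, beta):
    c = beta + (1.0 - beta) * u
    a_inf = u / c
    return a_inf + (a0 - a_inf) * np.exp(-c * t / tau)

def mixed_second(tau, beta, h=1e-5, k=1e-5):
    # central finite-difference estimate of d^2 a / (d tau d beta)
    return (a_of(tau + h, beta + k) - a_of(tau + h, beta - k)
            - a_of(tau - h, beta + k) + a_of(tau - h, beta - k)) / (4 * h * k)

tau, beta = 0.025, 1.0 / 3.0
d2 = mixed_second(tau, beta)
# generic relative (dimensionless) second-order sensitivity
r2 = tau * beta / a_of(tau, beta) * d2
print(d2, r2)
```

Such a finite-difference estimate is also a cheap consistency check for second-order sensitivity ODEs when, unlike here, no closed-form solution is available.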
7.5. Insights into Global Methods
Some additional conclusions can be drawn from global sensitivity analysis, in particular from comparing results in Section 6.3 to those based on local sensitivity analysis (Sections 6.1, 6.2, and 7.1).
For Zajac's activation dynamics, global analysis confirms local analysis in stating that there are no significant second- or higher-order sensitivities, with the slight exception of the phase of rapid change in activity after a step in stimulation. An experimenter who wants to measure the activation time constant can exclude the influence of potentially slower deactivation processes () by starting from high activity levels (Figure 2, bottom). It should be kept in mind, though, that the build-up of activity to the new level is not solely determined by but might be biased by parameters other than , because peaks during the build-up phase (Figure 4, right).
In Hatze's activation dynamics, the higher-order sensitivities play a clearly more significant role, even in the near-steady-state case (Figure 5: stronger deviation from 1 of both and ). When arguing in terms of controllability of the models in Section 7.1, we speculated that Zajac's dynamics might be easier to control than Hatze's dynamics. Notwithstanding, Figure 5 shows that the stimulation is the most important control factor, with an even higher importance than in Zajac's formulation.
Another result, unapparent at first sight, is the importance of . From a strictly local point of view we concluded that this parameter should have the same sensitivity as , since both are formally equivalent multipliers in Hatze's ODE (see relative sensitivities in Figure 3). However, the importance of is significantly smaller than that of , in fact almost negligible. The different global variabilities of their values provide an explanation. The parameter in the product has a clearly lower relative variability than , measured as the maximum percentage deviation from the respective mean value. The parameter thus acts as an amplifier for . Similarly, the parameter has a relatively small variability throughout the literature. So, although its differential sensitivity is quite large, is found to have a low importance for the model output. For the latter fact there is yet another reason. In Section 6.2, we emphasised that has a very changeful influence on the solutions, depending on the activity level. Additionally, its influence depends strongly on other parameters like the length and (see the end of Section 7.1). Its strong influence in some situations and configurations is thus hidden by global averaging.
This demonstrates that the findings of global sensitivity analysis must be treated with caution, because the whole dynamics of a system is condensed into a single function averaged over the whole parameter range. Without local analyses of the solution space, as exemplified in Sections 6.1 and 6.2, crucial features of its topology might be lost when relying solely on global analysis.
Symbols
:  Contractile element (CE) length; value: time-dependent 
:  Contraction velocity; value: first time derivative of 
:  Optimal CE length; value: muscle-specific 
:  Relative CE length; value: (dimensionless) 
:  Maximum isometric force of the CE; value: muscle-specific 
:  Neural muscle stimulation; value: time-dependent; here: a fixed parameter 
:  Muscle activity (bound concentration); value: time-dependent 
:  Basic activity according to Hatze [13]; value: 0.005 
:  Activity according to Hatze [6]; value: time- and length-dependent 
:  Initial condition for Hatze's activation ODE; value: mutable 
:  Activity according to Zajac [5]; value: time-dependent 
:  Initial condition for Zajac's activation ODE; value: mutable 
:  Activation time constant in Zajac [5]; value: here: 1/40 s 
:  Deactivation time constant in Zajac [5]; value: here: 1/40 s or 3/40 s 
:  Corresponding deactivation boost [5]; value: 
:  Exponent in Hatze's formulation; value: 2 or 3 
:  Activation frequency constant in Hatze [6]; value: range: (1/s); here: (1/s) 
:  Maximal concentration in Hatze [24]; value: mol/L 
:  Representation of the free concentration [6, 13]; value: time-dependent 
:  Length dependency of Hatze's [24] activation dynamics; value: 
:  Pole in Hatze's length dependency function; value: 
:  Factor in van Soest [8] and Hatze [6]; value: L/mol () or L/mol () 
:  Merging of and ; value: ; here: () or () 
:  Model parameter set; value: . 
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
Michael Günther was supported by “Berufsgenossenschaft Nahrungsmittel und Gastgewerbe, Geschäftsbereich Prävention, Mannheim” (BGN) and Deutsche Forschungsgemeinschaft (DFG: SCHM2392/51), both granted to Syn Schmitt.
References
 C. Y. Scovil and J. L. Ronsky, “Sensitivity of a Hill-based muscle model to perturbations in model parameters,” Journal of Biomechanics, vol. 39, no. 11, pp. 2055–2063, 2006.
 S. L. Lehman and L. W. Stark, “Three algorithms for interpreting models consisting of ordinary differential equations: sensitivity coefficients, sensitivity functions, global optimization,” Mathematical Biosciences, vol. 62, no. 1, pp. 107–122, 1982.
 K. Chan, A. Saltelli, and S. Tarantola, “Sensitivity analysis of model output: variance-based methods make the difference,” in Proceedings of the 29th Conference on Winter Simulation (WSC '97), S. Andradóttir, K. Healy, D. Withers, and B. Nelson, Eds., pp. 261–268, IEEE Computer Society, Washington, DC, USA, 1997.
 A. Saltelli and K. S. E. Chan, Sensitivity Analysis, John Wiley & Sons, New York, NY, USA, 1st edition, 2000.
 F. E. Zajac, “Muscle and tendon: properties, models, scaling, and application to biomechanics and motor control,” Critical Reviews in Biomedical Engineering, vol. 17, no. 4, pp. 359–411, 1989.
 H. Hatze, “A myocybernetic control model of skeletal muscle,” Biological Cybernetics, vol. 25, no. 2, pp. 103–119, 1977.
 D. A. Kistemaker, A. J. van Soest, and M. F. Bobbert, “Length-dependent [Ca^{2+}] sensitivity adds stiffness to muscle,” Journal of Biomechanics, vol. 38, no. 9, pp. 1816–1821, 2005.
 A. van Soest, Jumping from structure to control: a simulation study of explosive movements [Ph.D. thesis], Vrije Universiteit, Amsterdam, The Netherlands, 1992.
 A. J. van Soest and M. F. Bobbert, “The contribution of muscle properties in the control of explosive movements,” Biological Cybernetics, vol. 69, no. 3, pp. 195–204, 1993.
 G. Cole, A. van den Bogert, W. Herzog, and K. Gerritsen, “Modelling of force production in skeletal muscle undergoing stretch,” Journal of Biomechanics, vol. 29, pp. 1091–1104, 1996.
 M. Günther, S. Schmitt, and V. Wank, “High-frequency oscillations as a consequence of neglected serial damping in Hill-type muscle models,” Biological Cybernetics, vol. 97, no. 1, pp. 63–79, 2007.
 D. F. B. Haeufle, M. Günther, A. Bayer, and S. Schmitt, “Hill-type muscle model with serial damping and eccentric force-velocity relation,” Journal of Biomechanics, vol. 47, no. 6, pp. 1531–1536, 2014.
 H. Hatze, Myocybernetic Control Models of Skeletal Muscle, University of South Africa, 1981.
 D. A. Kistemaker, A. J. van Soest, and M. F. Bobbert, “Is equilibrium point control feasible for fast goal-directed single-joint movements?” Journal of Neurophysiology, vol. 95, no. 5, pp. 2898–2912, 2006.
 D. A. Kistemaker, A. J. van Soest, and M. F. Bobbert, “A model of open-loop control of equilibrium position and stiffness of the human elbow joint,” Biological Cybernetics, vol. 96, no. 3, pp. 341–350, 2007.
 R. Tomović and M. Vukobratović, General Sensitivity Theory, Elsevier, New York, NY, USA, 1962.
 R. P. Dickinson and R. J. Gelinas, “Sensitivity analysis of ordinary differential equation systems—a direct method,” Journal of Computational Physics, vol. 21, no. 2, pp. 123–143, 1976.
 H. ZivariPiran, Efficient simulation, accurate sensitivity analysis and reliable parameter estimation for delay differential equations [Ph.D. thesis], University of Toronto, Toronto, Canada, 2009.
 B. W. Bader and T. G. Kolda, “Algorithm 862: MATLAB tensor classes for fast algorithm prototyping,” ACM Transactions on Mathematical Software, vol. 32, no. 4, pp. 635–653, 2006.
 O. Scherzer, Mathematische Modellierung—Vorlesungsskript, Universität Wien, 2009.
 A. Saltelli and P. Annoni, “How to avoid a perfunctory sensitivity analysis,” Environmental Modelling and Software, vol. 25, no. 12, pp. 1508–1517, 2010.
 H. C. Frey, A. Mokhtari, and T. Danish, “Evaluation of selected sensitivity analysis methods based upon application to two food safety process risk models,” Tech. Rep., Computational Laboratory for Energy, North Carolina State University, 2003.
 H. Benker, Differentialgleichungen mit MATHCAD und MATLAB, vol. 1, Springer, Berlin, Germany, 2005.
 H. Hatze, “A general myocybernetic control model of skeletal muscle,” Biological Cybernetics, vol. 28, no. 3, pp. 143–157, 1978.
 F. Mörl, T. Siebert, S. Schmitt, R. Blickhan, and M. Günther, “Electromechanical delay in Hill-type muscle models,” Journal of Mechanics in Medicine and Biology, vol. 12, no. 5, Article ID 1250085, 2012.
 T. Siebert, O. Till, and R. Blickhan, “Work partitioning of transversally loaded muscle: experimentation and simulation,” Computer Methods in Biomechanics and Biomedical Engineering, vol. 17, no. 3, pp. 217–229, 2014.
 B. Roszek, G. C. Baan, and P. A. Huijing, “Decreasing stimulation frequency-dependent length-force characteristics of rat muscle,” Journal of Applied Physiology, vol. 77, no. 5, pp. 2115–2124, 1994.
 R. H. Langland, M. A. Shapiro, and R. Gelaro, “Initial condition sensitivity and error growth in forecasts of the 25 January 2000 East Coast snowstorm,” Monthly Weather Review, vol. 130, no. 4, pp. 957–974, 2002.
 M. Sunar and A. D. Belegundu, “Trust region methods for structural optimization using exact second order sensitivity,” International Journal for Numerical Methods in Engineering, vol. 32, no. 2, pp. 275–293, 1991.
Copyright
Copyright © 2015 Robert Rockenfeller et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.