The Method of Multiple Scales

This page describes the dynamical systems to which the Method of Multiple Scales (MMS) applies, how the method is applied, how steady state responses can be derived, and how their stability can be evaluated.

Dynamical systems of interest

Systems considered are typically composed of \(N\) coupled nonlinear equations of the form

\[\begin{split}\begin{cases} \ddot{x}_0 + \omega_0^2 x_0 & = f_0(\boldsymbol{x}, \dot{\boldsymbol{x}}, \ddot{\boldsymbol{x}}, t), \\ & \vdots \\ \ddot{x}_{N-1} + \omega_{N-1}^2 x_{N-1} & = f_{N-1}(\boldsymbol{x}, \dot{\boldsymbol{x}}, \ddot{\boldsymbol{x}}, t). \end{cases}\end{split}\]

The \(x_i(t)\) (\(i=0,...,N-1\)) are the oscillators’ coordinates, and represent the degrees of freedom (dof) of the system.

\[\boldsymbol{x}(t)^\intercal = [x_0(t), x_1(t), \cdots, x_{N-1}(t)]\]

is the vector containing all the oscillators’ coordinates (\(^\intercal\) denotes the transpose), \(\omega_i\) are their natural frequencies, \(t\) is the time, \(\dot{(\bullet)} = \textrm{d}(\bullet)/\textrm{d}t\) denotes a time-derivative. \(f_i\) are functions which can contain:

  • Weak linear terms in \(x_i\), \(\dot{x}_i\), or \(\ddot{x}_i\).

  • Weak linear coupling terms involving \(x_j\), \(\dot{x}_j\), or \(\ddot{x}_j\) with \(j \neq i\).

  • Weak nonlinear terms. Taylor expansions are performed to approximate nonlinear terms as polynomial nonlinearities.

  • Forcing terms, which can be:

    • Hard (appearing at leading order) or weak (small).

    • Primarily harmonic, e.g., \(F \cos(\omega t)\), where \(F\) and \(\omega\) are the forcing amplitude and frequency, respectively.

    • Modulated by any function (constant, linear, or nonlinear) to model parametric forcing (e.g., \(x_i(t) F \cos(\omega t)\)).

Internal resonance relations among oscillators can be specified in a second step by expressing the \(\omega_i\) as a function of a reference frequency. See Dynamical_system. Detuning can also be introduced during this step.

Nonlinear system.

Illustration of a system of coupled nonlinear oscillators that can be studied using oscilate. \(f_{\text{nl},i}(x_i, \dot{x}_i, \ddot{x}_i)\), \(g_{\text{nl},i}(\boldsymbol{x}, \dot{\boldsymbol{x}}, \ddot{\boldsymbol{x}})\) and \(f_{\text{f},i}(x_i, \dot{x}_i, \ddot{x}_i)\) represent the intrinsic nonlinearity of oscillator \(i\), the nonlinear couplings with other oscillators, and the parametric forcing coefficient, respectively. A unit mass is considered for each oscillator, and \(\omega_i^2\) and \(c_i\) are the stiffness (squared natural frequency) and (weak) viscous damping coefficients. Note that this illustration represents neither weak linear terms nor complex coupled parametric forcing, both of which can be accounted for in the oscilate package.

Application of the MMS

Asymptotic series and time scales

The starting point is to introduce asymptotic series and multiple time scales in the initial dynamical system. The solution for oscillator \(i\) is sought as a series expansion up to order \(N_e\) (for a leading order term \(\epsilon^0 = 1\)). This expansion takes the form

\[x_i(t) = x_{i,0}(t) + \epsilon x_{i,1}(t) + \epsilon^2 x_{i,2}(t) + \cdots + \epsilon^{N_e} x_{i,N_e}(t) + \mathcal{O}(\epsilon^{N_e+1}).\]

Time scales are introduced as follows:

\[t_0 = t, \; t_1 = \epsilon t, \; t_2 = \epsilon^2 t, \cdots, t_{N_e} = \epsilon^{N_e} t,\]

where \(t_0\) is the fast time, i.e. the time used to describe the oscillations, while \(t_1, \; t_2,\; \cdots,\; t_{N_e}\) are slow times, associated with the amplitude and phase variations of the solutions over time. In addition, the chain rule gives

\[\begin{split}\begin{aligned} \dfrac{\textrm{d}(\bullet)}{\textrm{d}t} & = \sum_{i=0}^{N_e} \epsilon^{i} \dfrac{\partial(\bullet)}{\partial t_i} + \mathcal{O}(\epsilon^{N_e+1}), \\ \dfrac{\textrm{d}^2(\bullet)}{\textrm{d}t^2} & = \sum_{j=0}^{N_e}\sum_{i=0}^{N_e} \epsilon^{i+j} \dfrac{\partial}{\partial t_j}\dfrac{\partial(\bullet)}{\partial t_i} + \mathcal{O}(\epsilon^{N_e+1}). \end{aligned}\end{split}\]

The asymptotic series and time scales are introduced using asymptotic_series() and time_scales(), respectively.
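These manipulations can be reproduced directly with SymPy. The following sketch (independent of the oscilate helpers; all names are illustrative) builds a first-order expansion for a single coordinate and recovers the order-by-order content of the second time derivative given by the chain rule above:

```python
import sympy as sp

# Minimal multiple-scales sketch for one coordinate, truncated at N_e = 1:
# x(t) = x0(t0, t1) + eps * x1(t0, t1)
eps = sp.symbols('epsilon', positive=True)
t0, t1 = sp.symbols('t_0 t_1')
x0 = sp.Function('x_0')(t0, t1)
x1 = sp.Function('x_1')(t0, t1)

x = x0 + eps * x1  # asymptotic series

def Dt(expr):
    """Chain rule d/dt = D_0 + eps * D_1 (higher orders truncated below)."""
    return sp.diff(expr, t0) + eps * sp.diff(expr, t1)

ddx = sp.expand(Dt(Dt(x)))

# Collect the second time derivative order by order in eps
order0 = ddx.coeff(eps, 0)  # D_0^2 x_0
order1 = ddx.coeff(eps, 1)  # D_0^2 x_1 + 2 D_0 D_1 x_0
```

The order-\(\epsilon\) coefficient reproduces the cross-derivative term \(2\,\textrm{D}_0\textrm{D}_1 x_0\) that later generates the secular contributions.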

Scaling

The construction of the MMS system requires a scaling of the parameters. Most scalings are passed to the MMS through the sub_scaling parameter. However, the natural frequencies also need to be scaled, as they can contain both a leading order term and a detuning term. The natural frequencies \(\omega_i\) are defined as functions of the reference frequency \(\omega_{\textrm{ref}}\) through the ratio_omega_osc optional parameter, which is then used in oscillators_frequencies(). This makes it possible to define internal resonance relations among the oscillators. If these internal resonances are not perfect, detunings can be introduced through the detunings optional parameter; these must be scaled and included in the sub_scaling parameter.

To write the MMS system it is convenient to introduce the leading order natural frequencies

\[\omega_{i,0} = r_i \omega_{\textrm{ref}},\]

where \(r_i\) stands for ratio_omega_osc[i].

The multiple scales system

Introducing the asymptotic series, the time scales and the scaled parameters in the initial dynamical system (see Dynamical_system) results in \(N_e+1\) dynamical systems, each appearing at a different order of \(\epsilon\). Denoting the time scale derivatives as

\[\textrm{D}_i(\bullet) = \partial (\bullet) / \partial t_i,\]

introducing the vector of time scales

\[\boldsymbol{t}^\intercal = [t_0, t_1, \cdots, t_{N_e}],\]

where \(^\intercal\) denotes the transpose, and the vectors of asymptotic coordinates

\[\boldsymbol{x}_i(\boldsymbol{t})^\intercal = [x_{0,i}(\boldsymbol{t}), x_{1,i}(\boldsymbol{t}), \cdots, x_{N-1, i}(\boldsymbol{t})],\]

which contains all the asymptotic terms of order \(i\), the MMS equations can be written as

\[\begin{split}\begin{aligned} & \epsilon^0 \rightarrow \; \begin{cases} \textrm{D}_0^2 x_{0,0} + \omega_{0,0}^2 x_{0,0} & = f_{0,0}(t_0, t_1), \\ & \vdots \\ \textrm{D}_0^2 x_{N-1,0} + \omega_{N-1,0}^2 x_{N-1,0} & = f_{N-1,0}(t_0, t_1), \end{cases} \\[15pt] & \epsilon^1 \rightarrow \; \begin{cases} \textrm{D}_0^2 x_{0,1} + \omega_{0,0}^2 x_{0,1} & = f_{0,1} (\boldsymbol{x}_0, \textrm{D}_0 \boldsymbol{x}_0, \textrm{D}_0^2 \boldsymbol{x}_0, \textrm{D}_1 \boldsymbol{x}_0, \textrm{D}_0\textrm{D}_1 \boldsymbol{x}_0, t_0, t_1), \\ & \vdots \\ \textrm{D}_0^2 x_{N-1,1} + \omega_{N-1,0}^2 x_{N-1,1} & = f_{N-1,1}(\boldsymbol{x}_0, \textrm{D}_0 \boldsymbol{x}_0, \textrm{D}_0^2 \boldsymbol{x}_0, \textrm{D}_1 \boldsymbol{x}_0, \textrm{D}_0\textrm{D}_1 \boldsymbol{x}_0, t_0, t_1), \end{cases} \\[15pt] & \epsilon^2 \rightarrow \; \begin{cases} \textrm{D}_0^2 x_{0,2} + \omega_{0,0}^2 x_{0,2} & = f_{0,2} (\boldsymbol{x}_0, \cdots, \textrm{D}_0 \textrm{D}_2 \boldsymbol{x}_0, \boldsymbol{x}_1, \cdots, \textrm{D}_0\textrm{D}_1 \boldsymbol{x}_1, t_0, t_1), \\ & \vdots \\ \textrm{D}_0^2 x_{N-1,2} + \omega_{N-1,0}^2 x_{N-1,2} & = f_{N-1,2}(\boldsymbol{x}_0, \cdots, \textrm{D}_0 \textrm{D}_2 \boldsymbol{x}_0, \boldsymbol{x}_1, \cdots, \textrm{D}_0\textrm{D}_1 \boldsymbol{x}_1, t_0, t_1), \end{cases} \\[10pt] & \hspace{3cm} \vdots \\[10pt] & \epsilon^{N_e} \rightarrow \; \begin{cases} \textrm{D}_0^2 x_{0,N_e} + \omega_{0,0}^2 x_{0,N_e} & = f_{0,N_e} (\boldsymbol{x}_0, \cdots, \textrm{D}_0 \textrm{D}_{N_e} \boldsymbol{x}_0, \cdots, \textrm{D}_0\textrm{D}_1 \boldsymbol{x}_{N_e-1}, t_0, t_1), \\ & \vdots \\ \textrm{D}_0^2 x_{N-1,N_e} + \omega_{N-1,0}^2 x_{N-1,N_e} & = f_{N-1,N_e}(\boldsymbol{x}_0, \cdots, \textrm{D}_0 \textrm{D}_{N_e} \boldsymbol{x}_0, \cdots, \textrm{D}_0\textrm{D}_1 \boldsymbol{x}_{N_e-1}, t_0, t_1). \end{cases} \end{aligned}\end{split}\]

Consider oscillator \(i\) at order \(j\):

  • The left-hand side term represents a harmonic oscillator of frequency \(\omega_{i,0}\) oscillating with respect to the fast time \(t_0\).

  • The right-hand side term \(f_{i,j}\) is analogous to a forcing generated by all combinations of terms that appear on oscillator \(i\)’s equation at order \(\epsilon^j\).

This can involve lower order terms \(x_{i,\ell}, \; \ell < j\), coupling terms \(x_{k, \ell}, \; k \neq i,\; \ell < j\), their derivatives and cross-derivatives with respect to the time scales, and physical forcing terms. The latter are responsible for the dependency on \(t_0,\; t_1\). The reason why slower time scales are not involved is explained in the following.

Function \(f_{i,j}\) tends to become increasingly complex as the order increases, because the initial equations generate more higher order terms than lower order ones.

This operation is performed using compute_EqO().
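As an illustrative example (a plain SymPy sketch of what such an order-collection automates, not the compute_EqO() interface), consider a single oscillator with weak damping and a weak cubic nonlinearity, \(\ddot{x} + \omega_0^2 x = -\epsilon(2\mu\dot{x} + \alpha x^3)\). Substituting the asymptotic series and collecting powers of \(\epsilon\) yields the first two systems:

```python
import sympy as sp

# Illustrative damped Duffing oscillator, truncated at N_e = 1
eps, w0, mu, alpha = sp.symbols('epsilon omega_0 mu alpha', positive=True)
t0, t1 = sp.symbols('t_0 t_1')
x0 = sp.Function('x_0')(t0, t1)
x1 = sp.Function('x_1')(t0, t1)

x = x0 + eps * x1

def Dt(expr):
    # Chain rule d/dt = D_0 + eps * D_1
    return sp.diff(expr, t0) + eps * sp.diff(expr, t1)

# Residual of  x'' + w0^2 x + eps*(2 mu x' + alpha x^3) = 0
residual = sp.expand(Dt(Dt(x)) + w0**2*x + eps*(2*mu*Dt(x) + alpha*x**3))

eq_order0 = residual.coeff(eps, 0)  # D_0^2 x_0 + w0^2 x_0
eq_order1 = residual.coeff(eps, 1)  # D_0^2 x_1 + w0^2 x_1 + 2 D_0 D_1 x_0 + 2 mu D_0 x_0 + alpha x_0^3
```

Setting each coefficient to zero reproduces the \(\epsilon^0\) and \(\epsilon^1\) systems above, with the order-\(\epsilon\) terms of the original equation acting as the right-hand side forcing \(f_{i,1}\).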

Note that internal resonance relations can be given through the ratio_omega_osc optional parameter, which is then used in oscillators_frequencies().

Frequency of interest

The response of \(x_i\) will be analysed at a frequency \(\omega\), defined as

\[\omega = \omega_{\textrm{MMS}} + \epsilon \sigma,\]

where \(\omega_{\textrm{MMS}}\) is the central MMS frequency, controlled through the ratio_omegaMMS optional parameter and expressed in terms of omega_ref, and \(\sigma\) is a detuning about that frequency. If the forced response is studied, \(\omega\) corresponds to the forcing frequency. If the free response is studied, \(\omega\) corresponds to the frequency of free oscillations, which generates the backbone curve of the forced response. Note that \(\omega t = \omega_{\textrm{MMS}} t_0 + \sigma t_1\). This is the reason why the forcing only involves these two time scales in the right-hand side functions of the MMS system.

Iteratively solving the MMS system

The multiple scales system can be solved iteratively, by successively solving the systems of equations at each order.

Leading order solution

The leading order solution for oscillator \(i\) must satisfy

\[\textrm{D}_0^2 x_{i,0} + \omega_{i,0}^2 x_{i,0} = f_{i,0}(t_0, t_1).\]

It is sought as

\[x_{i,0}(\boldsymbol{t}) = x_{i,0}^\textrm{h}(\boldsymbol{t}) + x_{i,0}^\textrm{p}(t_0, t_1),\]

where \(x_{i,0}^\textrm{h}(\boldsymbol{t})\) and \(x_{i,0}^\textrm{p}(t_0, t_1)\) are the leading order homogeneous and particular solutions, respectively.

It is now convenient to introduce the slow times vector

\[\boldsymbol{t}_s^\intercal = [t_1, \cdots, t_{N_e}].\]

This way, one can express the leading order solutions as

\[\begin{split}\begin{cases} x_{i,0}^\textrm{h}(\boldsymbol{t}) & = A_i(\boldsymbol{t}_s) e^{\textrm{j} \omega_{i,0} t_0} + cc = 2|A_i(\boldsymbol{t}_s)| \cos(\omega_{i,0} t_0 + \arg{A_i(\boldsymbol{t}_s)}), \\ x_{i,0}^\textrm{p}(t_0, t_1) & = B_i e^{\textrm{j} \omega t} + cc = B_i e^{\textrm{j} (\omega_{\textrm{MMS}} t_0 + \sigma t_1)} + cc = 2|B_i| \cos(\omega_{\textrm{MMS}} t_0 + \sigma t_1 + \arg{B_i}), \end{cases}\end{split}\]

where \(A_i\) is a slow time-dependent complex amplitude to be determined while \(B_i\) is a time-independent function of the forcing parameters. \(cc\) denotes the complex conjugate. Note that in most situations, forcing does not appear at leading order (i.e. forcing is weak), so \(B_i=0\).

In the following it will be convenient to use the notations

\[\begin{split}\begin{split} \boldsymbol{A}(\boldsymbol{t}_s)^\intercal & = [A_0(\boldsymbol{t}_s), A_1(\boldsymbol{t}_s), \cdots, A_{N-1}(\boldsymbol{t}_s)], \\ \boldsymbol{B}^\intercal & = [B_0, B_1, \cdots, B_{N-1}]. \end{split}\end{split}\]

The leading order solutions are defined in sol_order_0().
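As a quick check (an illustrative SymPy sketch with no hard forcing, i.e. \(B_i = 0\)), the homogeneous ansatz indeed satisfies the leading order equation for any slow-time amplitude \(A\):

```python
import sympy as sp

w0 = sp.symbols('omega_0', positive=True)
t0, t1 = sp.symbols('t_0 t_1')
A = sp.Function('A')(t1)  # slow-time complex amplitude, unknown at this stage

# Homogeneous leading order ansatz  x_h = A e^{j w0 t0} + cc
x0h = A*sp.exp(sp.I*w0*t0) + sp.conjugate(A)*sp.exp(-sp.I*w0*t0)

# Residual of the order-eps^0 equation  D_0^2 x + w0^2 x = 0
residual = sp.expand(sp.diff(x0h, t0, 2) + w0**2*x0h)
```

The residual vanishes identically: the fast-time oscillation is fixed at this order, while \(A(t_1)\) remains free and is determined later by the secular conditions.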

Higher order solutions

Once the leading order solutions are computed, they can be injected into the \(1^\textrm{st}\) higher order equations, where they (and their derivatives) appear as forcing terms, potentially together with physical forcing. The \(1^\textrm{st}\) higher order equation for oscillator \(i\) is

\[\textrm{D}_0^2 x_{i,1} + \omega_{i,0}^2 x_{i,1} = f_{i,1}(\boldsymbol{x}_0, \textrm{D}_0 \boldsymbol{x}_0, \textrm{D}_0^2 \boldsymbol{x}_0, \textrm{D}_1 \boldsymbol{x}_0, \textrm{D}_0\textrm{D}_1 \boldsymbol{x}_0, t_0, t_1).\]

The forcing terms that involve oscillations at \(\omega_{i,0}\) would force the oscillator at its natural frequency. Moreover, damping is always weak in the MMS, so damping terms of the form \(c \textrm{D}_0 x_{i,1}\) do not appear at this order. The aforementioned forcing terms would thus lead to unbounded solutions, which is unphysical. These forcing terms, called secular terms, must therefore be eliminated. For instance, the \(1^\textrm{st}\) higher order equation for oscillator \(i\) with the secular terms cancelled is

\[\textrm{D}_0^2 x_{i,1} + \omega_{i,0}^2 x_{i,1} = \bar{f}_{i,1}(\boldsymbol{x}_0, \textrm{D}_0 \boldsymbol{x}_0, \textrm{D}_0^2 \boldsymbol{x}_0, \textrm{D}_1 \boldsymbol{x}_0, \textrm{D}_0\textrm{D}_1 \boldsymbol{x}_0, t_0, t_1),\]

where \(\bar{f}_{i,1}\) is \(f_{i,1}\) with the secular terms cancelled, i.e. without terms oscillating at \(\omega_{i,0}\). After cancellation of the secular terms, each oscillator’s equation can be solved as a forced harmonic oscillator with \(t_0\) as the independent variable.

Note that only the particular solutions are considered when solving higher order terms, i.e.

\[x_{i,1}(\boldsymbol{t}) = \underbrace{x_{i,1}^\textrm{h}(\boldsymbol{t})}_{=0} + x_{i,1}^\textrm{p}(t_0, t_1).\]

This choice can be justified if one assumes that initial conditions are of leading order. Though this is questionable, it is assumed here.

The higher order solutions \(x_{i,1}(\boldsymbol{t})\) are expressed as a function of the leading order unknown amplitudes \(\boldsymbol{A}(\boldsymbol{t}_s)\), their slow time derivatives \(\textrm{D}_i\boldsymbol{A}(\boldsymbol{t}_s), \; i=1, ..., N_e\), and forcing terms if any (including the hard forcing amplitudes \(\boldsymbol{B}\)).

This process is repeated successively at each order, i.e. the computed solutions are introduced in the next higher order system of equations, secular terms are cancelled and the next higher order solutions are computed.

The secular terms are identified in secular_analysis() and the higher order solutions are computed in sol_higher_order(). Note that sol_higher_order() is applied to equations with only \(t_0\) as the independent variable so as to allow the use of dsolve(). This is enforced using system_t0(), which temporarily ignores the dependency of \(\boldsymbol{A}(\boldsymbol{t}_s)\) on the slow time scales.
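For a damped Duffing oscillator, \(\ddot{x} + \omega_0^2 x = -\epsilon(2\mu\dot{x} + \alpha x^3)\), the secular term at order \(\epsilon^1\) can be extracted by hand with SymPy (a hedged sketch of what secular_analysis() automates, not its interface): substitute the leading order solution into the order-\(\epsilon^1\) right-hand side and collect the coefficient of \(e^{\textrm{j}\omega_0 t_0}\):

```python
import sympy as sp

w0, mu, alpha = sp.symbols('omega_0 mu alpha', positive=True)
t0, t1 = sp.symbols('t_0 t_1')
A = sp.Function('A')(t1)

# Leading order solution  x_0 = A e^{j w0 t0} + cc
x0 = A*sp.exp(sp.I*w0*t0) + sp.conjugate(A)*sp.exp(-sp.I*w0*t0)

# Order-eps^1 forcing: -(2 D_0 D_1 x_0 + 2 mu D_0 x_0 + alpha x_0^3)
rhs = -(2*sp.diff(x0, t0, t1) + 2*mu*sp.diff(x0, t0) + alpha*x0**3)

# Secular term: coefficient of exp(j w0 t0) in the expanded right-hand side
secular = sp.expand(rhs).coeff(sp.exp(sp.I*w0*t0))
# -> -2 j w0 (D_1 A + mu A) - 3 alpha A^2 conj(A)
```

Setting this coefficient to zero is the secular condition: it yields \(\textrm{D}_1 A\) as a function of \(A\), while the remaining non-resonant terms (here, the \(e^{3\textrm{j}\omega_0 t_0}\) harmonic) feed the particular solution at this order.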

Secular analysis

At this stage, the solutions are all expressed in terms of the unknown amplitudes \(\boldsymbol{A}(\boldsymbol{t}_s)\) and their slow time derivatives \(\textrm{D}_1 A_i(\boldsymbol{t}_s),\; \cdots,\; \textrm{D}_{N_e} A_i(\boldsymbol{t}_s)\). These can be obtained from the elimination of the secular terms (called secular conditions), as described below.

The \(i^\textrm{th}\) MMS equation at \(1^\textrm{st}\) higher order involves the slow time derivative \(\textrm{D}_1 A_i(\boldsymbol{t}_s)\), which appears in the secular term. It comes from the chain rule

\[\dfrac{\textrm{d}^2 x_i(t)}{\textrm{d}t^2} = \dfrac{\partial^2 x_{i,0}(\boldsymbol{t})}{\partial t_0^2} + \epsilon \dfrac{\partial^2 x_{i,1}(\boldsymbol{t})}{\partial t_0^2} + 2 \epsilon \dfrac{\partial^2 x_{i,0}(\boldsymbol{t})}{\partial t_0 \partial t_1} + \mathcal{O}(\epsilon^2).\]

In addition, the \(\textrm{D}_1 A_j(\boldsymbol{t}_s),\; j\neq i\) do not appear in the \(i^\textrm{th}\) MMS equation as couplings among oscillators are weak. It is thus possible to use the secular conditions in order to express the \(\textrm{D}_1 A_i(\boldsymbol{t}_s)\) as a function of \(\boldsymbol{A}(\boldsymbol{t}_s)\).

This process can be done successively at each order to obtain the system of complex modulation (or slow time) equations

\[\begin{split}\begin{cases} \textrm{D}_1 A_i(\boldsymbol{t}_s) & = f_{A_i}^{(1)}(\boldsymbol{A}, t_1), \\ & \vdots \\ \textrm{D}_{N_e} A_i(\boldsymbol{t}_s) & = f_{A_i}^{(N_e)}(\boldsymbol{A}, t_1). \end{cases}\end{split}\]

\(f_{A_i}^{(j)}(\boldsymbol{A}, t_1)\) are functions governing the modulation of \(A_i\) with respect to the slow time \(t_j\). Note the dependency of \(f_{A_i}^{(j)}\) on \(t_1\) due to the possible presence of forcing. The complex modulation equations are derived in secular_analysis().

The above system of \(1^\textrm{st}\) order PDEs can theoretically be solved to obtain the complex amplitudes \(\boldsymbol{A}\). However, this approach is not the preferred one because:

  • It is more convenient to deal with real variables than complex ones to get a physical meaning from the analysis,

  • It is more convenient to deal with autonomous systems (without the explicit \(t_1\)-dependency) than nonautonomous ones,

  • The PDEs are complex-valued.

The first two points can be achieved by introducing new coordinates, as described hereafter.

Modulation equations

Polar coordinates

As discussed previously, it is more convenient to deal with real variables than complex ones. This can be done by introducing the polar coordinates \(a_i\) and \(\phi_i\) for oscillator \(i\) such that

\[A_i(\boldsymbol{t}_s) = \dfrac{1}{2} a_i(\boldsymbol{t}_s) e^{\textrm{j} \phi_i(\boldsymbol{t}_s)}.\]

\(a_i\) and \(\phi_i\) correspond to the amplitude and phase of the leading order solution for oscillator \(i\), respectively. With these new coordinates, and introducing

\[\begin{split}\begin{aligned} \boldsymbol{a}(\boldsymbol{t}_s)^\intercal & = [a_0(\boldsymbol{t}_s), a_1(\boldsymbol{t}_s), \cdots, a_{N-1}(\boldsymbol{t}_s)], \\ \boldsymbol{\phi}(\boldsymbol{t}_s)^\intercal & = [\phi_0(\boldsymbol{t}_s), \phi_1(\boldsymbol{t}_s), \cdots, \phi_{N-1}(\boldsymbol{t}_s)], \end{aligned}\end{split}\]

the modulation equations on the \(A_i(\boldsymbol{t}_s)\) can be split into real and imaginary terms, leading to

\[\begin{split}\begin{aligned} & \epsilon^1 \rightarrow \; \begin{cases} \textrm{D}_1 a_i & = \hat{f}_{a_i}^{(1)}(\boldsymbol{a}, \boldsymbol{\phi}, t_1), \\ a_i \textrm{D}_1 \phi_i & = \hat{f}_{\phi_i}^{(1)}(\boldsymbol{a}, \boldsymbol{\phi}, t_1), \end{cases} \\[5pt] & \hspace{3cm} \vdots \\[5pt] & \epsilon^{N_e} \rightarrow \; \begin{cases} \textrm{D}_{N_e} a_i & = \hat{f}_{a_i}^{(N_e)}(\boldsymbol{a}, \boldsymbol{\phi}, t_1), \\ a_i \textrm{D}_{N_e} \phi_i & = \hat{f}_{\phi_i}^{(N_e)}(\boldsymbol{a}, \boldsymbol{\phi}, t_1). \end{cases} \end{aligned}\end{split}\]

The \(\epsilon^j \rightarrow\) markers indicate, for each oscillator \(i\), a system of 2 equations originating from the secular analysis at order \(j\). As one has

\[\textrm{D}_j A_i = \textrm{D}_j \dfrac{1}{2} a_i e^{\textrm{j} \phi_i} = \dfrac{1}{2} \textrm{D}_j \left( a_i \right) e^{\textrm{j} \phi_i} + \textrm{j} \dfrac{1}{2} a_i e^{\textrm{j} \phi_i} \textrm{D}_j \left( \phi_i \right),\]

it is convenient to pre-multiply the modulation equations on the \(A_i(\boldsymbol{t}_s)\) by \(e^{-\textrm{j} \phi_i(\boldsymbol{t}_s)}\) or even \(\gamma e^{-\textrm{j} \phi_i(\boldsymbol{t}_s)}\) with, for instance, \(\gamma = 2\). This avoids the presence of \(\cos(\phi_i)\), \(\sin(\phi_i)\) and many \(1/2\) terms in the modulation equations. The modulation functions on polar coordinates are therefore defined as

\[\begin{split}\begin{cases} \hat{f}_{a_i}^{(j)}(\boldsymbol{a}, \boldsymbol{\phi}, t_1) & = \Re\left[ 2 e^{-\textrm{j} \phi_i(\boldsymbol{t}_s)} f_{A_i}^{(j)}(\boldsymbol{A}, t_1) \right], \\ \hat{f}_{\phi_i}^{(j)}(\boldsymbol{a}, \boldsymbol{\phi}, t_1) & = \Im\left[ 2 e^{-\textrm{j} \phi_i(\boldsymbol{t}_s)} f_{A_i}^{(j)}(\boldsymbol{A}, t_1) \right]. \end{cases}\end{split}\]

The modulation equations system on the polar coordinates involves only real variables, but functions \(\hat{f}_{a_i}^{(j)}\) and \(\hat{f}_{\phi_i}^{(j)}\) are still nonautonomous due to the explicit dependency on \(t_1\).

The polar coordinates are introduced in polar_coordinates(). The real modulation equations are only computed for the autonomous system, as described below.
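For the damped Duffing oscillator, the classical secular condition gives \(\textrm{D}_1 A = -\mu A + \frac{3\textrm{j}\alpha}{2\omega_0} A^2\bar{A}\). Substituting \(A = \frac{1}{2}a e^{\textrm{j}\phi}\) and multiplying by \(2e^{-\textrm{j}\phi}\) as described above yields the real modulation equations \(\textrm{D}_1 a = -\mu a\) and \(a\,\textrm{D}_1\phi = \frac{3\alpha}{8\omega_0}a^3\). A SymPy sketch (illustrative, not the polar_coordinates() interface; the symbols da and dphi stand for \(\textrm{D}_1 a\) and \(\textrm{D}_1\phi\)):

```python
import sympy as sp

w0, mu, alpha = sp.symbols('omega_0 mu alpha', positive=True)
a, phi, da, dphi = sp.symbols('a phi da dphi', real=True)

A = a/2 * sp.exp(sp.I*phi)
Ab = a/2 * sp.exp(-sp.I*phi)  # conjugate of A (a and phi are real)

# D_1 A by the product rule, with da = D_1 a and dphi = D_1 phi
D1A = da/2*sp.exp(sp.I*phi) + sp.I*a*dphi/2*sp.exp(sp.I*phi)

# Complex modulation function from the Duffing secular condition
fA = -mu*A + 3*sp.I*alpha/(2*w0) * A**2 * Ab

# Multiply (D_1 A - f_A) by 2 e^{-j phi}, then split real/imaginary parts
expr = sp.expand(2*sp.exp(-sp.I*phi)*(D1A - fA))
eq_a = sp.re(expr)    # da + mu*a = 0          ->  D_1 a = -mu a
eq_phi = sp.im(expr)  # a*dphi - 3*alpha*a^3/(8*w0) = 0
```

The pre-multiplication by \(2 e^{-\textrm{j}\phi}\) is what removes the \(e^{\textrm{j}\phi}\) factors and the \(1/2\) coefficients, leaving purely real equations.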

Autonomous phase coordinates

The presence of nonautonomous terms stems from the forcing terms, which involve \(\cos(\sigma t_1 - \phi_i)\) and \(\sin(\sigma t_1 - \phi_i)\) in the polar modulation functions of oscillator \(i\). A change of phase coordinates is required to make the system autonomous. Moreover, this change of phase coordinates is necessary even in the absence of forcing, to obtain a convenient representation of the leading order solution. Indeed, the solution for \(x^{\textrm{h}}_{i,0}(\boldsymbol{t})\) written in terms of the current polar coordinates is

\[x^{\textrm{h}}_{i,0}(\boldsymbol{t}) = a_i(\boldsymbol{t}_s) \cos(\omega_{i,0} t_0 + \phi_i(\boldsymbol{t}_s)).\]

However, one would eventually like to express the oscillations of oscillator \(i\) in terms of the frequency \(\omega\). To force its appearance, we recall that

\[\omega_{i,0} = r_i \omega_{\textrm{ref}}, \quad \omega_{\textrm{ref}} = \frac{1}{r_{\textrm{MMS}}} \omega_{\textrm{MMS}}, \quad \textrm{and} \quad \omega_{\textrm{MMS}} = \omega - \epsilon \sigma.\]

Introducing this in the leading order solution leads to

\[x^{\textrm{h}}_{i,0}(\boldsymbol{t}) = a_i(\boldsymbol{t}_s) \cos\left( \frac{r_i}{r_{\textrm{MMS}}} \omega t_0 - \frac{r_i}{r_{\textrm{MMS}}} \sigma t_1 + \phi_i(\boldsymbol{t}_s)\right).\]

It therefore appears convenient to introduce the new phase coordinate \(\beta_i(\boldsymbol{t}_s)\) as

\[\beta_i(\boldsymbol{t}_s) = \frac{r_i}{r_{\textrm{MMS}}} \sigma t_1 - \phi_i(\boldsymbol{t}_s),\]

which allows one to write the leading order solution as

\[x^{\textrm{h}}_{i,0}(\boldsymbol{t}) = a_i(\boldsymbol{t}_s) \cos\left( \frac{r_i}{r_{\textrm{MMS}}} \omega t_0 - \beta_i(\boldsymbol{t}_s)\right).\]

In addition, and as discussed previously, the introduction of these new phase coordinates removes the explicit dependency of the modulation functions on \(t_1\), which was due to the terms \(\cos(\sigma t_1 - \phi_i)\) and \(\sin(\sigma t_1 - \phi_i)\). Since the forcing phase is zero (it serves as the reference phase), the \(\beta_i(\boldsymbol{t}_s)\) can be seen as phases relative to the forcing.

Introducing the notation

\[\boldsymbol{\beta}(\boldsymbol{t}_s)^\intercal = [\beta_0(\boldsymbol{t}_s), \beta_1(\boldsymbol{t}_s), \dots, \beta_{N-1}(\boldsymbol{t}_s)],\]

the modulation equations can be rewritten as

\[\begin{split}\begin{aligned} & \epsilon^1 \rightarrow \; \begin{cases} \textrm{D}_1 a_0(\boldsymbol{t}_s) & = f_{a_0}^{(1)}(\boldsymbol{a}, \boldsymbol{\beta}), \\ a_0 \textrm{D}_1 \beta_0(\boldsymbol{t}_s) & = f_{\beta_0}^{(1)}(\boldsymbol{a}, \boldsymbol{\beta}), \\ & \vdots \\ \textrm{D}_1 a_{N-1}(\boldsymbol{t}_s) & = f_{a_{N-1}}^{(1)}(\boldsymbol{a}, \boldsymbol{\beta}), \\ a_{N-1} \textrm{D}_1 \beta_{N-1}(\boldsymbol{t}_s) & = f_{\beta_{N-1}}^{(1)}(\boldsymbol{a}, \boldsymbol{\beta}), \end{cases} \\[5pt] & \hspace{3cm} \vdots \\[5pt] & \epsilon^{N_e} \rightarrow \; \begin{cases} \textrm{D}_{N_e} a_0(\boldsymbol{t}_s) & = f_{a_0}^{(N_e)}(\boldsymbol{a}, \boldsymbol{\beta}), \\ a_0 \textrm{D}_{N_e} \beta_0(\boldsymbol{t}_s) & = f_{\beta_0}^{(N_e)}(\boldsymbol{a}, \boldsymbol{\beta}), \\ & \vdots \\ \textrm{D}_{N_e} a_{N-1}(\boldsymbol{t}_s) & = f_{a_{N-1}}^{(N_e)}(\boldsymbol{a}, \boldsymbol{\beta}), \\ a_{N-1} \textrm{D}_{N_e} \beta_{N-1}(\boldsymbol{t}_s) & = f_{\beta_{N-1}}^{(N_e)}(\boldsymbol{a}, \boldsymbol{\beta}). \end{cases} \end{aligned}\end{split}\]

The above system is the key result of the application of the MMS, as it governs the modulation of the leading order amplitudes and phases, on which all higher order solutions depend. It can be solved numerically, or analytically if analytical solutions exist, though this is generally not the case. An analytical resolution is shown in Example 6: Van der Pol oscillator. The modulation system can also be rewritten in a more compact form, as discussed in the following.

The autonomous phase coordinates are introduced in autonomous_phases() and the modulation equations are computed in modulation_equations().

All solutions previously computed using the complex amplitudes \(\boldsymbol{A}(\boldsymbol{t}_s)\) can be rewritten in terms of the polar coordinates \(\boldsymbol{a}(\boldsymbol{t}_s),\; \boldsymbol{\beta}(\boldsymbol{t}_s)\) using sol_x_polar().

Reconstituted equations

The reconstitution method consists in combining the modulation equations at each slow time scale in order to obtain modulation equations in the physical time. Recalling the chain rule

\[\dfrac{\textrm{d}(\bullet)}{\textrm{d}t} = \sum_{i=0}^{N_e} \epsilon^{i} \textrm{D}_i(\bullet) + \mathcal{O}(\epsilon^{N_e+1}),\]

and reintroducing the physical time, the systems at each order can be summed up to write

\[\begin{split}\begin{cases} \dfrac{\textrm{d}}{\textrm{d}t} a_0(t) & = f_{a_0}(\boldsymbol{a}, \boldsymbol{\beta}), \\ a_0 \dfrac{\textrm{d}}{\textrm{d}t} \beta_0(t) & = f_{\beta_0}(\boldsymbol{a}, \boldsymbol{\beta}), \\ & \vdots \\ \dfrac{\textrm{d}}{\textrm{d}t} a_{N-1}(t) & = f_{a_{N-1}}(\boldsymbol{a}, \boldsymbol{\beta}), \\ a_{N-1} \dfrac{\textrm{d}}{\textrm{d}t} \beta_{N-1}(t) & = f_{\beta_{N-1}}(\boldsymbol{a}, \boldsymbol{\beta}), \end{cases}\end{split}\]

where

\[\begin{split}\begin{cases} f_{a_i}(\boldsymbol{a}, \boldsymbol{\beta}) & = \sum_{j=1}^{N_e} \epsilon^{j} f_{a_i}^{(j)}(\boldsymbol{a}, \boldsymbol{\beta}), \\ f_{\beta_i}(\boldsymbol{a}, \boldsymbol{\beta}) & = \sum_{j=1}^{N_e} \epsilon^{j} f_{\beta_i}^{(j)}(\boldsymbol{a}, \boldsymbol{\beta}). \end{cases}\end{split}\]

The MMS system thus obtained is a nonlinear autonomous system of \(2N\) coupled \(1^{\textrm{st}}\) order ODEs, with the physical time \(t\) as the only independent variable. Like the time scale-dependent modulation equations, it can be solved numerically, or analytically if analytical solutions exist.

These physical time-dependent modulation equations are also computed in modulation_equations().
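As an illustration, the reconstituted amplitude equation of the free damped Duffing oscillator, \(\textrm{d}a/\textrm{d}t = -\epsilon\mu a\) (a classical MMS result, used here as an assumed example with illustrative parameter values, independent of the oscilate workflow), can be integrated numerically and compared with its exact exponential decay:

```python
import math

# Illustrative parameters (not tied to any specific system)
eps, mu = 0.1, 0.5

def f(a):
    """Reconstituted amplitude equation  da/dt = -eps*mu*a."""
    return -eps * mu * a

def rk4(a, dt, steps):
    """Classical 4th-order Runge-Kutta integration of da/dt = f(a)."""
    for _ in range(steps):
        k1 = f(a)
        k2 = f(a + dt*k1/2)
        k3 = f(a + dt*k2/2)
        k4 = f(a + dt*k3)
        a += dt*(k1 + 2*k2 + 2*k3 + k4)/6
    return a

a_num = rk4(1.0, 0.01, 1000)      # integrate from a(0) = 1 up to t = 10
a_exact = math.exp(-eps*mu*10.0)  # exact solution a(t) = a(0) e^{-eps*mu*t}
```

The phase equation can be integrated alongside in the same way; here it decouples, since \(\textrm{d}\beta/\textrm{d}t\) depends only on \(a\).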

Evaluation at steady state

Steady state solutions

Steady state conditions

At steady state, the solutions’ amplitudes and phases are time-independent. One therefore has, for each oscillator \(i=0,...,N-1\),

\[\begin{split}\begin{cases} \dfrac{\textrm{d}}{\textrm{d}t} a_i & = 0, \\ \dfrac{\textrm{d}}{\textrm{d}t} \beta_i & = 0, \end{cases}\end{split}\]

and the homogeneous steady state solutions take the form

\[x^{\textrm{h}}_{i,0}(t) = a_i \cos\left( \frac{r_i}{r_{\textrm{MMS}}} \omega t - \beta_i \right).\]

MMS modulation equations at steady state

The steady state amplitudes \(\boldsymbol{a}\) and phases \(\boldsymbol{\beta}\) are governed by the modulation equations evaluated at steady state, which take the form

\[\begin{split}\begin{cases} f_{a_0}(\boldsymbol{a}, \boldsymbol{\beta}) & = 0, \\ f_{\beta_0}(\boldsymbol{a}, \boldsymbol{\beta}) & = 0, \\ & \vdots \\ f_{a_{N-1}}(\boldsymbol{a}, \boldsymbol{\beta}) & = 0, \\ f_{\beta_{N-1}}(\boldsymbol{a}, \boldsymbol{\beta}) & = 0. \end{cases}\end{split}\]

This is now an algebraic system of equations.

MMS solutions at steady state

Solving the modulation equations at steady state yields the steady state solutions \(\boldsymbol{a}, \boldsymbol{\beta}\). The system can sometimes be solved directly for \(\boldsymbol{a}\) and \(\boldsymbol{\beta}\), yielding explicit analytical solutions, but this is often difficult as the system is nonlinear.

A possibility that is sometimes available to obtain analytical solutions is to rearrange the equations so as to isolate the phase terms \(\cos(f(\boldsymbol{\beta})),\; \sin(f(\boldsymbol{\beta}))\), where \(f(\boldsymbol{\beta})\) is a linear function of the \(\beta_i\). The equations can then be squared and summed to obtain an equation on \(a_i\) only and/or \(a_j\) as a function of \(a_i\). The \(a_j\) can be expressed as functions of \(a_i\), leading to a polynomial equation on \(a_i\). The resulting polynomial equation can rarely be solved directly, as the polynomials involved are often of high order. However, the polynomial is often quadratic in the detuning \(\sigma\) and the forcing amplitude \(F\). It can therefore be solved for \(\sigma\) and \(F\) with \(a_i\) seen as a parameter. This yields an implicit solution for \(a_i(\sigma) \Rightarrow a_i(\omega)\) and \(a_i(F)\), from which one can deduce the other amplitudes \(a_j\) and the phases \(\boldsymbol{\beta}\), thus reconstructing the oscillators’ solutions.
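For a single forced Duffing oscillator, this square-and-sum procedure yields the classical implicit frequency response \(a^2\big[\mu^2 + (\sigma - \frac{3\alpha}{8\omega_0}a^2)^2\big] = \frac{F^2}{4\omega_0^2}\), which is quadratic in \(\sigma\). A SymPy sketch (illustrative; the equation is the classical MMS result, not an oscilate output) solving it for the two detuning branches \(\sigma(a)\):

```python
import sympy as sp

mu, alpha, w0, F, a = sp.symbols('mu alpha omega_0 F a', positive=True)
sigma = sp.symbols('sigma', real=True)

# Implicit frequency response of the forced Duffing oscillator
# (square-and-sum of the steady state modulation equations)
response = sp.Eq(a**2*(mu**2 + (sigma - 3*alpha*a**2/(8*w0))**2),
                 F**2/(4*w0**2))

# Quadratic in sigma: solve with the amplitude a seen as a parameter
branches = sp.solve(response, sigma)
# branches = [3*alpha*a^2/(8*w0) -/+ sqrt(F^2/(4*w0^2*a^2) - mu^2)]
```

The two branches correspond to the two sides of the bent resonance peak; they merge where the square root vanishes, i.e. at the maximum amplitude \(a = F/(2\omega_0\mu)\).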

The process described above is not always feasible. The blocking points are typically the ability to:

  1. get rid of phase terms \(\cos(f(\boldsymbol{\beta})),\; \sin(f(\boldsymbol{\beta}))\) in the equations,

  2. express every amplitude \(a_j\) in terms of a single one, \(a_i\),

  3. end up with a polynomial of order 2 in \(\sigma\) and \(F\).

These difficulties become more pronounced when the system involves several oscillators, in which case the amplitudes and phases may only be computed numerically.

To facilitate the derivation of an analytical solution, it is possible to consider the backbone curve (bbc) of the forced solution rather than the forced solution itself. This bbc is computed in the absence of damping and forcing, therefore simplifying the system. Typically, this reduces the number of phase terms appearing. The solving procedure is then the same as that described previously.

Stability analysis

The stability analysis is described in detail in Stability analysis.

Stability analysis

Stability information

Consider a steady state solution \((\hat{\boldsymbol{a}} , \hat{\boldsymbol{\beta}})\) such that, for \(i=0,...,N-1\),

\[\begin{split}\begin{cases} f_{a_i}(\hat{\boldsymbol{a}}, \hat{\boldsymbol{\beta}}) & = 0, \\ f_{\beta_i}(\hat{\boldsymbol{a}}, \hat{\boldsymbol{\beta}}) & = 0. \end{cases}\end{split}\]

The aim is to determine the stability of that steady state solution, which corresponds to a fixed point in phase space.

Jacobian matrix

Let us first introduce the vectors of polar coordinates and polar modulation functions

\[\begin{split}\boldsymbol{x}^{(\textrm{p})\intercal} & = [a_0, \beta_0, \cdots, a_{N-1}, \beta_{N-1}], \\ \boldsymbol{f}^{(\textrm{p})\intercal} & = [f_{a_0}, f_{\beta_0}^*, \cdots, f_{a_{N-1}}, f_{\beta_{N-1}}^*].\end{split}\]

Note that \(f_{\beta_i}^* = f_{\beta_i} / a_i\) (such that \(\textrm{d}\beta_i/\textrm{d}t = f_{\beta_i}^*\)), whose appearance in \(\boldsymbol{f}^{(\textrm{p})}\) requires \(a_i \neq 0\). This strongly constrains the type of steady state solutions that can be considered in the approach described below. To relax this constraint, one can use a change of coordinates from polar to cartesian ones. This is discussed in following sections, after the description of the polar approach.

Using the vectors of polar coordinates and modulation functions, one can write the modulation equations system as

\[\dfrac{\textrm{d} \boldsymbol{x}^{(\textrm{p})}}{\textrm{d}t} = \boldsymbol{f}^{(\textrm{p})}\left(\boldsymbol{x}^{(\textrm{p})}\right).\]

The stability analysis relies on the Jacobian matrix of this system,

\[\begin{split}\textrm{J}^{(\textrm{p})} = \dfrac{\partial \boldsymbol{f}^{(\textrm{p})} }{ \partial \boldsymbol{x}^{(\textrm{p})} } = \begin{bmatrix} \frac{\partial f_{a_0}}{\partial a_0} & \frac{\partial f_{a_0}}{\partial \beta_0} & \cdots & \frac{\partial f_{a_0}}{\partial \beta_{N-1}} \\ \frac{\partial f_{\beta_0}^*}{\partial a_0} & \frac{\partial f_{\beta_0}^*}{\partial \beta_0} & \cdots & \frac{\partial f_{\beta_0}^*}{\partial \beta_{N-1}} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_{\beta_{N-1}}^*}{\partial a_0} & \frac{\partial f_{\beta_{N-1}}^*}{\partial \beta_0} & \cdots & \frac{\partial f_{\beta_{N-1}}^*}{\partial \beta_{N-1}} \end{bmatrix}.\end{split}\]

Perturbation of the steady state solution

Let us now consider a small perturbation \(\tilde{\boldsymbol{x}}^{(\textrm{p})}\) of the steady state solution such that

\[\boldsymbol{x}^{(\textrm{p})} = \hat{\boldsymbol{x}}^{(\textrm{p})} + \tilde{\boldsymbol{x}}^{(\textrm{p})} \quad \Leftrightarrow \quad \tilde{\boldsymbol{x}}^{(\textrm{p})} = \boldsymbol{x}^{(\textrm{p})} - \hat{\boldsymbol{x}}^{(\textrm{p})}.\]

Using a first order Taylor expansion for \(\textrm{d} \tilde{\boldsymbol{x}}^{(\textrm{p})} / \textrm{d} t\), one can write

\[\dfrac{\textrm{d} \tilde{\boldsymbol{x}}^{(\textrm{p})}}{\textrm{d}t} = \left.\textrm{J}^{(\textrm{p})}\right|_{\hat{\boldsymbol{x}}^{(\textrm{p})}} \tilde{\boldsymbol{x}}^{(\textrm{p})} + \mathcal{O}(||\tilde{\boldsymbol{x}}^{(\textrm{p})}||^2),\]

where \(\left.\textrm{J}^{(\textrm{p})}\right|_{\hat{\boldsymbol{x}}^{(\textrm{p})}}\) denotes the Jacobian matrix evaluated on the steady state solution. The perturbation solution takes the form

\[\tilde{\boldsymbol{x}}^{(\textrm{p})} = \sum_{i=1}^{2N} C_i \boldsymbol{\psi}_i e^{\lambda_i t},\]

where \((\lambda_i, \boldsymbol{\psi}_i),\; i=1, ..., 2N,\) are the eigensolutions of the Jacobian (evaluated on \(\hat{\boldsymbol{x}}^{(\textrm{p})}\)) and the constants \(C_i\) are set by the initial perturbation.

Stability condition

The steady state solution \(\hat{\boldsymbol{x}}^{(\textrm{p})}\) is considered stable if a small perturbation \(\tilde{\boldsymbol{x}}^{(\textrm{p})}\) vanishes in time, so that solutions close to \(\hat{\boldsymbol{x}}^{(\textrm{p})}\) converge towards it. This condition is fulfilled if

\[\Re[\lambda_i] < 0, \quad \forall i,\]

meaning that all eigenvalues of the Jacobian evaluated on the steady state solution must have negative real parts. If at least one eigenvalue has a positive real part, the solution is unstable; if the largest real part is exactly zero, the solution is only marginally stable and this linear analysis is inconclusive.
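In practice, this condition is checked numerically. For a single oscillator (\(N=1\)) the Jacobian is \(2 \times 2\) and its eigenvalues follow directly from its trace and determinant; a minimal sketch, with a hypothetical Jacobian assumed to be already evaluated on a steady state solution:

```python
import cmath

def eig2(J):
    """Eigenvalues of a 2x2 matrix from its trace and determinant."""
    (j11, j12), (j21, j22) = J
    tr = j11 + j22
    det = j11 * j22 - j12 * j21
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def is_stable(J):
    """Stable iff all eigenvalues have strictly negative real parts."""
    return all(lam.real < 0 for lam in eig2(J))

# Hypothetical 2x2 Jacobian (N = 1) evaluated on a steady state solution:
J_hat = [[-0.1,  0.5],
         [-0.5, -0.1]]
print(is_stable(J_hat))  # eigenvalues -0.1 +/- 0.5i, so True
```

For \(N > 1\), the same criterion applies to the \(2N \times 2N\) Jacobian, whose eigenvalues are typically computed with a numerical linear algebra routine.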

Bifurcations

A bifurcation occurs when the stability state of the system changes.

  1. Simple bifurcations occur when at least one eigenvalue crosses the imaginary axis through the origin.

    Such bifurcations include saddle-node and pitchfork bifurcations, which cause jumps in the response and the appearance of lower-symmetry solutions, respectively.

  2. Neimark-Sacker bifurcations occur when a pair of complex conjugate eigenvalues with nonzero imaginary parts crosses the imaginary axis.

    These bifurcations lead to quasi-periodic solutions.

Simple bifurcations can be detected by monitoring the sign of the determinant of the Jacobian, since

\[\det \left.\textrm{J}^{(\textrm{p})}\right|_{\hat{\boldsymbol{x}}^{(\textrm{p})}} = \prod_{i=1}^{2N} \lambda_i,\]

thereby making \(\det \left.\textrm{J}^{(\textrm{p})}\right|_{\hat{\boldsymbol{x}}^{(\textrm{p})}}\) an important stability indicator. Neimark-Sacker bifurcations are more difficult to detect: information from the trace of \(\left.\textrm{J}^{(\textrm{p})}\right|_{\hat{\boldsymbol{x}}^{(\textrm{p})}}\) can be considered, or the Routh-Hurwitz criterion can be used. This is not detailed here.
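As an illustration of the determinant-based criterion, simple bifurcations along a solution branch can be located by detecting sign changes of \(\det \textrm{J}\) between consecutive steady state points. A minimal sketch, assuming hypothetical \(2 \times 2\) Jacobians sampled along the branch (a real implementation would recompute the Jacobian from the modulation functions at each point):

```python
def det2(J):
    """Determinant of a 2x2 Jacobian."""
    (j11, j12), (j21, j22) = J
    return j11 * j22 - j12 * j21

def simple_bifurcations(jacobians):
    """Indices i where det J changes sign between points i-1 and i,
    indicating a simple bifurcation (an eigenvalue crossing 0) in between."""
    dets = [det2(J) for J in jacobians]
    return [i for i in range(1, len(dets)) if dets[i - 1] * dets[i] < 0]

# Hypothetical branch: J(s) = [[-1, s], [s, -1]], so det J = 1 - s^2
branch = [[[-1.0, s], [s, -1.0]] for s in (0.0, 0.5, 1.5, 2.0)]
print(simple_bifurcations(branch))  # [2]: sign change between s = 0.5 and s = 1.5
```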

Bifurcation curves

Bifurcation curves are constructed by evaluating the coordinates of bifurcation points as one or more parameters are varied. The stability of a solution changes when its response curve crosses a bifurcation curve.

Stability analysis in cartesian coordinates

Limitations of polar coordinates

As mentioned previously, the approach described above fails when one of the oscillators' leading order amplitudes is 0. Indeed, the Jacobian is constructed using the modulation functions

\[f_{\beta_i}^*(\boldsymbol{a}, \boldsymbol{\beta}) = \frac{f_{\beta_i}(\boldsymbol{a}, \boldsymbol{\beta})}{a_i},\]

which are defined only if \(a_i \neq 0\). This prevents evaluating the stability of

  • Trivial solutions, for which no oscillator responds,

  • Single-mode solutions, whose stability can be affected by a perturbation from another mode.

These limitations can be overcome using a change of coordinates.

Cartesian coordinates

The leading order homogeneous solution for oscillator \(i\) can be written as

\[\begin{split}\begin{split} x_{i,0}^{\textrm{h}}(t) & = a_i \cos\left(\frac{r_i}{r_{\textrm{MMS}}}\omega t - \beta_i \right), \\ & = a_i \cos(\beta_i) \cos\left(\frac{r_i}{r_{\textrm{MMS}}}\omega t\right) + a_i \sin(\beta_i) \sin\left(\frac{r_i}{r_{\textrm{MMS}}}\omega t\right). \end{split}\end{split}\]

It then appears natural to introduce the cartesian coordinates

\[\begin{split}\begin{cases} p_i = a_i \cos(\beta_i), \\ q_i = a_i \sin(\beta_i), \end{cases}\end{split}\]

in order to rewrite the solution as

\[x_{i,0}^{\textrm{h}}(t) = p_i \cos\left(\frac{r_i}{r_{\textrm{MMS}}}\omega t\right) + q_i \sin\left(\frac{r_i}{r_{\textrm{MMS}}}\omega t\right).\]

In the following it will be convenient to use the cartesian coordinates vectors

\[\begin{split}\begin{aligned} \boldsymbol{p}(t)^\intercal & = [p_0(t), p_1(t), \cdots, p_{N-1}(t)], \\ \boldsymbol{q}(t)^\intercal & = [q_0(t), q_1(t), \cdots, q_{N-1}(t)]. \end{aligned}\end{split}\]
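The change of coordinates and its inverse are straightforward to implement per oscillator; a minimal sketch, where the inverse map remains well defined even as \(a_i \to 0\):

```python
import math

def polar_to_cartesian(a, beta):
    """(a_i, beta_i) -> (p_i, q_i)."""
    return a * math.cos(beta), a * math.sin(beta)

def cartesian_to_polar(p, q):
    """(p_i, q_i) -> (a_i, beta_i); well defined for any (p_i, q_i)."""
    return math.hypot(p, q), math.atan2(q, p)

p, q = polar_to_cartesian(2.0, math.pi / 3)  # (1.0, sqrt(3))
a, beta = cartesian_to_polar(p, q)           # recovers (2.0, pi/3)
```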

Cartesian modulation equations

The cartesian modulation equations can be obtained from the polar ones. To do so, one can write

\[\begin{split}\begin{cases} \dfrac{\textrm{d} p_i}{\textrm{d}t} & = \dfrac{\textrm{d} a_i}{\textrm{d}t} \cos(\beta_i) - a_i \sin(\beta_i) \dfrac{\textrm{d} \beta_i}{\textrm{d}t}, \\ \dfrac{\textrm{d} q_i}{\textrm{d}t} & = \dfrac{\textrm{d} a_i}{\textrm{d}t} \sin(\beta_i) + a_i \cos(\beta_i) \dfrac{\textrm{d} \beta_i}{\textrm{d}t}. \end{cases}\end{split}\]

Then, by identification, one necessarily has

\[\begin{split}\begin{cases} f_{p_i}(\boldsymbol{p}, \boldsymbol{q}) & = f_{a_i}(\boldsymbol{a}, \boldsymbol{\beta}) \cos(\beta_i) - f_{\beta_i}(\boldsymbol{a}, \boldsymbol{\beta}) \sin(\beta_i), \\ f_{q_i}(\boldsymbol{p}, \boldsymbol{q}) & = f_{a_i}(\boldsymbol{a}, \boldsymbol{\beta}) \sin(\beta_i) + f_{\beta_i}(\boldsymbol{a}, \boldsymbol{\beta}) \cos(\beta_i), \end{cases}\end{split}\]

in order to write the modulation equations in cartesian coordinates

\[\begin{split}\begin{cases} \dfrac{\textrm{d}}{\textrm{d}t} p_0(t) & = f_{p_0}(\boldsymbol{p}, \boldsymbol{q}), \\ \dfrac{\textrm{d}}{\textrm{d}t} q_0(t) & = f_{q_0}(\boldsymbol{p}, \boldsymbol{q}), \\ & \vdots \\ \dfrac{\textrm{d}}{\textrm{d}t} p_{N-1}(t) & = f_{p_{N-1}}(\boldsymbol{p}, \boldsymbol{q}), \\ \dfrac{\textrm{d}}{\textrm{d}t} q_{N-1}(t) & = f_{q_{N-1}}(\boldsymbol{p}, \boldsymbol{q}). \end{cases}\end{split}\]
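The identification above can be implemented directly: given the polar modulation functions, the cartesian ones are obtained by evaluating \((a_i, \beta_i)\) from \((p_i, q_i)\) and applying the rotation. A minimal single-oscillator sketch, with hypothetical polar modulation functions \(f_{a} = -\mu a\) and \(f_{\beta} = \sigma a\), for which the identities give \(f_p = -\mu p - \sigma q\) and \(f_q = \sigma p - \mu q\):

```python
import math

MU, SIGMA = 0.1, 0.3  # hypothetical damping and detuning parameters

def f_a(a, beta):     # hypothetical polar modulation function for a_i
    return -MU * a

def f_beta(a, beta):  # hypothetical polar modulation function (a_i * dbeta_i/dt)
    return SIGMA * a

def cartesian_modulation(p, q):
    """(f_p, f_q) built from (f_a, f_beta) via the identification formulas."""
    a, beta = math.hypot(p, q), math.atan2(q, p)
    fa, fb = f_a(a, beta), f_beta(a, beta)
    return (fa * math.cos(beta) - fb * math.sin(beta),
            fa * math.sin(beta) + fb * math.cos(beta))

fp, fq = cartesian_modulation(1.0, 2.0)
# analytically: f_p = -MU*p - SIGMA*q = -0.7, f_q = SIGMA*p - MU*q = 0.1
```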

Jacobian matrix

As done previously for polar coordinates, let us introduce the vectors of cartesian coordinates and modulation functions as

\[\begin{split}\boldsymbol{x}^{(\textrm{c})\intercal} & = [p_0, q_0, \cdots, p_{N-1}, q_{N-1}], \\ \boldsymbol{f}^{(\textrm{c})\intercal} & = [f_{p_0}, f_{q_0}, \cdots, f_{p_{N-1}}, f_{q_{N-1}}].\end{split}\]

Then the system of modulation equations reads compactly

\[\dfrac{\textrm{d} \boldsymbol{x}^{(\textrm{c})}}{\textrm{d}t} = \boldsymbol{f}^{(\textrm{c})}(\boldsymbol{x}^{(\textrm{c})}),\]

whose Jacobian matrix is

\[\begin{split}\textrm{J}^{(\textrm{c})} = \dfrac{\partial \boldsymbol{f}^{(\textrm{c})} }{ \partial \boldsymbol{x}^{(\textrm{c})} } = \begin{bmatrix} \frac{\partial f_{p_0}}{\partial p_0} & \frac{\partial f_{p_0}}{\partial q_0} & \cdots & \frac{\partial f_{p_0}}{\partial q_{N-1}} \\ \frac{\partial f_{q_0}}{\partial p_0} & \frac{\partial f_{q_0}}{\partial q_0} & \cdots & \frac{\partial f_{q_0}}{\partial q_{N-1}} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_{q_{N-1}}}{\partial p_0} & \frac{\partial f_{q_{N-1}}}{\partial q_0} & \cdots & \frac{\partial f_{q_{N-1}}}{\partial q_{N-1}} \end{bmatrix}.\end{split}\]

Note that there is no constraint here related to an oscillator's amplitude being 0. The cartesian coordinates approach therefore allows one to investigate how the stability of a steady state solution is affected by a perturbation from an oscillator whose amplitude is 0 in that steady state solution.

The stability analysis with \(\textrm{J}^{(\textrm{c})}\) is carried out as described previously with \(\textrm{J}^{(\textrm{p})}\).
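When the cartesian modulation functions are only available numerically, \(\textrm{J}^{(\textrm{c})}\) can be approximated by finite differences before computing its eigenvalues. A minimal sketch using central differences, with hypothetical linear modulation functions for \(N = 1\) (for which the approximation is essentially exact):

```python
def jacobian_fd(f, x, h=1e-6):
    """Central finite-difference approximation of J = df/dx at point x,
    where f maps 2N cartesian coordinates to 2N modulation values."""
    n = len(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        fp, fm = f(xp), f(xm)
        for i in range(n):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

# Hypothetical modulation functions (f_p0, f_q0) for N = 1:
def f_c(x):
    p, q = x
    return [-0.1 * p + 0.5 * q, -0.5 * p - 0.1 * q]

J_c = jacobian_fd(f_c, [0.2, 0.3])
# J_c is approximately [[-0.1, 0.5], [-0.5, -0.1]]
```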