Infinity in finite time

Illustration of the method

We shall first illustrate, with a simple example, how the method explicitly constructs the solution, and how to use the package to obtain it.

We consider the differential equation given by

\[\dot{x} = x^2,\tag{1}\]

with the initial condition $x(0)=x_0$, whose exact solution reads

\[x(t) = \frac{x_0}{1-x_0t}.\tag{2}\]

We shall work through this construction explicitly, since it illustrates the way TaylorIntegration.jl is conceived.

The initial condition defines the 0-th order approximation, i.e., $x(t) = x_0 + \mathcal{O}(t^1)$. We now write the solution as $x(t) = x_0 + x_{[1]}t + \mathcal{O}(t^2)$, and we want to determine $x_{[1]}$. Substituting this expansion into the RHS of Eq. (1) yields

\[x^2 = x_0^2 + 2 x_0 x_{[1]} t + x_{[1]}^2 t^2 = x_0^2 + \mathcal{O}(t^1),\]

where in the last equality we have kept all terms up to order 0, since to determine $x_{[1]}$ the recursion formula only requires the 0-th order term of the Taylor expansion of the RHS of the equation of motion. Hence, we have $f_{[0]}=x_0^2$, which, using the recursion relation $x_{[k+1]} = f_{[k]}/(k+1)$, yields $x_{[1]} = x_0^2$.

Following the same procedure, we write $x(t) = x_0 + x_{[1]} t + x_{[2]} t^2 + \mathcal{O}(t^3)$, and

\[x^2 = x_0^2 + 2 x_0 x_{[1]} t + \mathcal{O}(t^2),\]

where we kept all terms up to order 1. We thus have $f_{[1]}=2 x_0 x_{[1]} = 2 x_0^3$, which then yields $x_{[2]} = x_0^3$. Repeating this calculation, we obtain

\[x(t) = x_0 + x_0^2 t + x_0^3 t^2 + \cdots + x_0^{k+1} t^k + \cdots.\tag{3}\]
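
This recursion is straightforward to carry out in code. The following is a minimal sketch (not part of TaylorIntegration.jl; the helper name picard_coeffs is ours, for illustration only) that builds the Taylor coefficients of the solution by hand:

# Build the Taylor coefficients x_[0], ..., x_[order] of the solution of
# dx/dt = x^2 using the recursion x_[k+1] = f_[k]/(k+1), where f_[k] is the
# k-th Taylor coefficient of the RHS, obtained here from the Cauchy product
# of the truncated series with itself.
function picard_coeffs(x0, order)
    x = zeros(typeof(x0), order + 1)   # x[k+1] stores the coefficient x_[k]
    x[1] = x0
    for k in 0:order-1
        fk = sum(x[i+1] * x[k-i+1] for i in 0:k)   # k-th coefficient of x^2
        x[k+2] = fk / (k + 1)
    end
    return x
end

picard_coeffs(3.0, 5)   # x_[k] = x_0^(k+1) for x_0 = 3: 3, 9, 27, 81, 243, 729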

The solution given by Eq. (3) is a geometric series, which sums to the exact solution, Eq. (2). Yet, it is not obvious from the series that the solution is only defined for $t<1/x_0$. To see this, we compute the step size, as described previously, for the series truncated at order $k$. The Taylor coefficient of order $k$ is $x_{[k]}=x_0^{k+1}$, so the time step is

\[h < \Big(\frac{\epsilon_\textrm{tol}}{|x_0^{k+1}|}\Big)^{1/k} = \frac{\epsilon_\textrm{tol}^{1/k}}{|x_0|^{1+1/k}}.\]

In the limit $k\to\infty$ we obtain $h < h_\textrm{max}=1/x_0$, which is the domain of existence of the exact solution.
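
For the values used below ($x_0 = 3$, $\epsilon_\textrm{tol} = 10^{-20}$), this limit can be checked numerically with a throwaway helper (the name hstep is ours, for illustration only):

# Step-size bound h(k) = ϵ_tol^(1/k) / |x0|^(1 + 1/k) for x0 = 3, ϵ_tol = 1e-20
hstep(k; x0 = 3.0, ϵtol = 1e-20) = ϵtol^(1/k) / abs(x0)^(1 + 1/k)

[hstep(k) for k in (5, 10, 25, 50, 100)]   # grows with k towards 1/x0 = 1/3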

Below, we shall fix a maximum order for the expansion. This entails a truncation error, which is controlled to some extent through the absolute tolerance $\epsilon_\textrm{tol}$. The key to a correct use of Taylor's method is to impose a sufficiently small value of $\epsilon_\textrm{tol}$ together with a large enough order of the expansion.

Implementation

We shall illustrate how to use TaylorIntegration.jl to integrate Eq. (1) for the initial condition $x(0)=3$. Notice that, according to the exact solution Eq. (2), the solution only exists for $t<t_\mathrm{max} =1/3$; in addition, we note that this number cannot be represented exactly as a floating-point number.
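
As a side illustration (not needed for the integration), one can inspect the binary value actually stored in the Float64 written as 1/3:

big(1/3)        # all the digits of the Float64 closest to 1/3
Rational(1/3)   # the exact rational represented by that Float64 (not 1//3)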

We first load the required packages and define a function which represents the equation of motion.

using TaylorIntegration, Plots
diffeq(x, p, t) = x^2;
diffeq (generic function with 1 method)
Note

In TaylorIntegration.jl we use the same convention as DifferentialEquations.jl when writing the function representing the equations of motion.
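
For example, a parameter-dependent version of the same equation, $\dot{x} = a x^2$, would be written with the parameter as the second argument (illustrative only; it is not used in this example):

# Same (x, p, t) argument order as DifferentialEquations.jl: state, parameters, time
diffeq_param(x, p, t) = p * x^2   # p plays the role of the parameter a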

We now integrate the equations of motion using taylorinteg; note that, despite the fact that the solution only exists for $t<t_\textrm{max}$, below we shall try to compute it up to $t_\textrm{end}=0.34$; as we shall see, Taylor's method takes care of this. For the integration presented below, we use a 25-th order series expansion, with $\epsilon_\textrm{tol} = 10^{-20}$, and compute up to 150 integration steps.

tT, xT = taylorinteg(diffeq, 3.0, 0.0, 0.34, 25, 1e-20, maxsteps=150);
([0.0, 0.04673748724788921, 0.08666964734179776, 0.12082375432218478, 0.15006657642316595, 0.17513029556227688, 0.19663410663971947, 0.21510223893149374, 0.23097901054539446, 0.24464141948580478  …  0.33333323413266813, 0.3333332415691814, 0.33333324842592277, 0.33333325474982284, 0.333333260583874, 0.3333332659674727, 0.3333332709367305, 0.3333332755247577, 0.3333332797619218, 0.33333328367608234], [3.0, 3.489234103211201, 4.054103043097782, 4.705670232152399, 5.456526960261398, 6.3209911395453116, 7.315330336442326, 8.45801187123614, 9.769983062384513, 11.274985019706682  …  1.0080577551241778e7, 1.0897501670640979e7, 1.1777534986335507e7, 1.2725315931854066e7, 1.3745806602465497e7, 1.4844314146377133e7, 1.6026513491543336e7, 1.7298471486640345e7, 1.8666672539102122e7, 2.0138045837674662e7])

We first note that the last point of the calculation does not exceed $t_\textrm{max}$.

tT[end]
0.33333328367608234

Increasing the maxsteps parameter pushes tT[end] closer to $t_\textrm{max}$, but it never actually reaches this value.
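
For instance, rerunning the integration with a larger maxsteps (500 is an arbitrary choice here) illustrates this:

# Allow more integration steps; the last computed time gets closer to
# t_max = 1/3 but still stays below it.
tT2, xT2 = taylorinteg(diffeq, 3.0, 0.0, 0.34, 25, 1e-20, maxsteps=500);
tT2[end]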

Figure 1 displays the computed solution as a function of time, in log scale.

plot(tT, log10.(xT), shape=:circle)
xlabel!("t")
ylabel!("log10(x(t))")
xlims!(0,0.34)
title!("Fig. 1")
[Fig. 1: log10(x(t)) as a function of t]

Clearly, the solution grows without bound as $t\to t_\textrm{max} = 1/3$, i.e., $x(t)$ approaches infinity in finite time.

Figure 2 shows the relative difference between the numerical and the analytical solution as a function of time.

exactsol(t, x0) = x0 / (1 - x0 * t)
δxT = abs.(xT .- exactsol.(tT, 3.0)) ./ exactsol.(tT, 3.0);
plot(tT[6:end], log10.(δxT[6:end]), shape=:circle)
xlabel!("t")
ylabel!("log10(dx(t))")
xlims!(0, 0.4)
title!("Fig. 2")
[Fig. 2: log10 of the relative difference between the numerical and exact solutions, as a function of t]

To put in perspective how good the constructed solution is, we impose (arbitrarily) a relative accuracy of $10^{-13}$; the time up to which this accuracy is maintained is obtained as follows:

indx = findfirst(δxT .> 1.0e-13);
esol = exactsol(tT[indx-1],3.0);
tT[indx-1], esol, eps(esol)
(0.3323590833381362, 1026.4305926915836, 2.2737367544323206e-13)

Note that the accuracy, imposed relative to the actual value of the exact solution, implies that the difference between the computed and the exact solution is essentially due to the eps of the computed value.
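
To look at this directly, one can compare the absolute difference at that point with the floating-point spacing around the exact value, reusing the quantities computed above:

# Absolute difference at the last point within the 1e-13 relative accuracy,
# next to the spacing between consecutive Float64 values near esol
abs(xT[indx-1] - esol), eps(esol)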