Ordinary differential equation

 ODE redirects here. For the real-time physics engine, see Open Dynamics Engine.
In mathematics, and in particular analysis, an ordinary differential equation (or ODE) is an equation that involves the derivatives of an unknown function of one variable. A simple example of an ordinary differential equation is
 <math>f' = f<math>,
where <math>f<math> is an unknown function, and <math>f'<math> is its derivative.
See differential calculus and integral calculus for basic calculus background.
Definition
Let y represent an unknown function of x, and let
 <math>y', y'',\ \dots,\ y^{(n)}<math>
denote the derivatives
 <math>\frac{dy}{dx},\ \frac{d^{2}y}{dx^2},\ \dots,\ \frac{d^{n}y}{dx^{n}}.<math>
An ordinary differential equation (ODE) is an equation involving
 <math>x,\ y,\ y',\ y'',\ \dots<math>.
The order of a differential equation is the order <math>n<math> of the highest derivative that appears.
A solution of an ODE is a function y(x) whose derivatives satisfy the equation. Such a function is not guaranteed to exist and, if it does exist, is usually not unique.
When a differential equation of order n has the form
 <math>F(x, y, y', y'',\ \dots,\ y^{(n)}) = 0<math>
it is called an implicit differential equation whereas the form
 <math>F(x, y, y', y'',\ \dots,\ y^{(n-1)}) = y^{(n)}<math>
is called an explicit differential equation.
A differential equation not depending on x is called autonomous, and one with no terms depending only on x is called homogeneous.
General application
An important special case is when the equations do not involve <math>x<math>. These differential equations may be represented as vector fields. This type of differential equation has the property that space can be divided into equivalence classes based on whether two points lie on the same solution curve. Since the laws of physics are believed not to change with time, the physical world is governed by such differential equations. (See also symplectic topology for abstract discussion.)
The problem of solving a differential equation is to find the function <math>y<math> whose derivatives satisfy the equation. For example, the differential equation
 <math>y'' + y = 0 <math>
has the general solution
 <math>y = A \cos{x} + B \sin{x} <math>,
where A, B are constants determined from boundary conditions. In the case where the equations are linear, this can be done by breaking the original equation down into smaller equations, solving those, and then adding the results back together. Unfortunately, many of the interesting differential equations are nonlinear, which means that they cannot be broken down in this way. There are also a number of techniques for solving differential equations using a computer (see numerical ordinary differential equations).
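The claim that <math>y = A \cos{x} + B \sin{x}<math> solves <math>y'' + y = 0<math> for any constants can be checked numerically; the sketch below (illustrative only, using a central finite difference to approximate the second derivative) evaluates the residual at a few points:

```python
import math

def check_solution(y, x, h=1e-4):
    """Central-difference approximation of the residual y'' + y at x."""
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    return ypp + y(x)

# General solution y = A cos x + B sin x for arbitrary constants A, B.
A, B = 3.0, -2.0
y = lambda t: A * math.cos(t) + B * math.sin(t)

# The residual should vanish up to finite-difference error.
for x in (0.0, 1.0, 2.5):
    assert abs(check_solution(y, x)) < 1e-5
```

Any other choice of A and B passes the same check, since the equation is linear and both cos x and sin x are solutions.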
Ordinary differential equations are to be distinguished from partial differential equations where <math>y<math> is a function of several variables, and the differential equation involves partial derivatives.
Types of differential equations with some history
The influence of geometry, physics, and astronomy, starting with Newton and Leibniz, and further manifested through the Bernoullis, Riccati, and Clairaut, but chiefly through d'Alembert and Euler, has been very marked, and especially on the theory of linear partial differential equations with constant coefficients.
Linear ODEs with constant coefficients
The first method of integrating linear ordinary differential equations with constant coefficients is due to Euler, who made the solution of the form
 <math>\frac {d^{n}y} {dx^{n}} + A_{1}\frac {d^{n-1}y} {dx^{n-1}} + \cdots + A_{n}y = 0<math>
depend on that of the algebraic equation of the nth degree,
 <math>F(z) = z^{n} + A_{1}z^{n-1} + \cdots + A_n = 0<math>
in which z^{k} takes the place of
 <math>\frac {d^{k}y} {dx^{k}}\quad\quad(k = 1, 2, \cdots, n).<math>
This equation, F(z) = 0, is the "characteristic" equation considered later by Monge and Cauchy. If z is a (possibly complex) zero of F(z) of multiplicity m and <math>k\in\{0,1,\dots,m-1\}<math> then <math>y=x^ke^{zx}<math> is a solution of the ODE.
If the A_{i} are real then real-valued solutions are preferable. Since the non-real z values come in conjugate pairs, so do their corresponding y values; replace each pair with the linear combinations <math>\Re y<math> and <math>\Im y<math>.
A case that involves complex roots can be solved with the aid of Euler's formula.
 Example: Suppose <math>P(D)y = 0<math> for <math>P(D) = D^2 - 4D + 5.<math>
(Note: operator notation is used here; the equation reads y'' - 4y' + 5y = 0.)
Complete the square to find the roots, writing the operator in the form
 <math>P(D) = \left( {D - a} \right)^2 + b^2 .<math>
The roots are then <math>r = a \pm bi.<math>
 <math>P(D) = \left[ {D^2 - 4D + 4} \right] + 1 = \left[ {D - 2} \right]^2 + 1^2.<math>
Here <math>r = 2 \pm i<math> are the characteristic roots. Hence a solution of the form <math> y = e^{rx} <math> can be expanded, using Euler's formula, as
 <math>e^{\left( {2 + i} \right)x} = e^{2x + ix} = e^{2x} e^{ix} = e^{2x} \left( {\cos x + i\sin x} \right) = e^{2x} \cos x + ie^{2x} \sin x<math>
The conjugate pair of roots <math>r = 2 \pm i<math> supplies, through the real and imaginary parts above, two linearly independent solutions:
 <math>\left\{ {\begin{matrix} {y_1 = e^{2x} \cos x} \\ {y_2 = e^{2x} \sin x} \\\end{matrix}} \right.<math>
Any other solution to the equation has the form <math>y_c = c_1 e^{2x} \cos x + c_2 e^{2x} \sin x<math>. Note that the arbitrariness of <math>c_1<math> and <math>c_2<math> absorbs the <math> \pm i<math>.
Also, for repeated complex roots, multiply <math>y_1<math> and <math>y_2 <math> repeatedly by x to generate a family of solutions, up to the multiplicity of the root.
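The two real solutions found in this example can be verified numerically; the sketch below (illustrative only, with finite-difference derivatives) checks that both <math>e^{2x}\cos x<math> and <math>e^{2x}\sin x<math> satisfy y'' - 4y' + 5y = 0:

```python
import math

def residual(y, x, h=1e-4):
    """Finite-difference residual of y'' - 4y' + 5y at x."""
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    yp = (y(x + h) - y(x - h)) / (2 * h)
    return ypp - 4 * yp + 5 * y(x)

# The two linearly independent real solutions from the example.
y1 = lambda t: math.exp(2 * t) * math.cos(t)
y2 = lambda t: math.exp(2 * t) * math.sin(t)

for x in (0.0, 0.5, 1.0):
    assert abs(residual(y1, x)) < 1e-3
    assert abs(residual(y2, x)) < 1e-3
```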
Nonhomogeneous linear ODEs
Physical oscillations, whether mechanical or in an electrical circuit, are typically damped by friction, a dashpot, or circuit resistance, and may also be driven by an external source.
Modelling this driving term as a forcing function <math>f(t)<math>, a linear ODE with this added nonhomogeneous term takes the form
 <math>A_n \frac{{d^n y}}{{dt^n }} + A_{n-1} \frac{{d^{n-1} y}}{{dt^{n-1} }} + \cdots + A_1 \frac{{dy}}{{dt}} + A_0 y = f\left( t \right),<math>
or simply (in standard form),
 <math>a_n y^{(n)} + a_{n-1} y^{(n-1)} + \cdots + a_1 y' + a_0 y = f\left( t \right).\,<math>
In the case of a nonhomogeneous linear ODE whose input function is a polynomial, sinusoid, exponential, or a product of these, we seek the solution to the equation above in the form <math>y_G = y_c + y_p <math>, where
 <math>y_G <math> denotes the general solution;
 <math>y_c <math> denotes the complementary solution, that is, the general solution of the associated homogeneous equation;
 <math>y_p <math> denotes a particular solution.
Method of undetermined coefficients
The method of undetermined coefficients is useful for finding <math>y_p <math>. Given <math>P(D)y = f(t)<math>, find an annihilator <math>A(D)<math> for <math>f(t)<math>, that is, an operator such that <math>A(D)f(t) = 0<math>; then apply <math>A(D)<math> to both sides of <math>P(D)y = f(t)<math> to obtain <math>A(D)P(D)y = 0<math>, a homogeneous linear ODE with constant coefficients, which can then be readily solved by the technique of the previous section. (By convention, writing f(t) indicates a time-dependent equation, while f(x) and the like denote time-independent ones.)
Suppose, for example, that f(x) = 1 − 2x. The root r = 0 generates the family
 <math>r = 0: e^{0x} = 1,\ x,\ x^2,\ x^3,\dots<math>
The presence of the term x means this root must be taken twice, so the annihilator is <math>A(D) = D^2<math>, with r = 0 of multiplicity 2.
Similarly, case of complex roots is based on sin or cos.
 Example: <math>f(x) = \sin x - x\cos 2x<math>
 The term sin x comes from complex roots with real part 0, since <math>e^{0x} = 1<math> multiplies the sine and cosine.
 A(D) therefore has the roots <math>0 \pm i<math> (simply <math>\pm i<math>) with multiplicity 1,
 together with <math>r=\pm 2i<math> of multiplicity 2, to account for the factor x in <math>x\cos 2x<math>.
 Example: <math>\left[ {D^2 - D} \right]y = 1 - 2x<math>
Here <math>r = 0: e^0 = 1,\ x,\ x^2,\ \dots,\ x^n ;\quad r = 1: e^x,\ xe^x,\ \dots,\ x^n e^x<math> Note that once a root's function has been used in <math>y_c<math>, it may not be used again in <math>y_p<math>, by linear independence.
 <math>y_c = c_1 y_1 + c_2 y_2 = c_1 \left( 1 \right) + c_2 \left( {e^x } \right)<math>. The annihilator <math>A(D) = D^2<math> contributes the root 0 with multiplicity 2, so the trial solution uses <math>x<math> and <math>x^2<math>:
<math>\left. {\begin{matrix}
{Y_p = Ax + Bx^2 } \\ {Y_p ^\prime = A + 2Bx} \\ {Y_p ^{\prime \prime } = 2B} \\
\end{matrix}} \right\}2B - \left[ {A + 2Bx} \right] = \left[ {2B - A} \right] - 2Bx = 1 - 2x <math>
Equating coefficients: the x terms give −2B = −2, so B = 1; the constant terms give 2B − A = 1, so A = 1. Therefore <math>y_p = Ax + Bx^2 = x + x^2<math>, and the solution becomes <math>y = y_c + y_p = C_1 + C_2 e^x + x + x^2<math>. Had the already-used root r = 0 been reused, the trial solution would contain a constant term, which is merely absorbed into the arbitrary <math>C_1<math>; this is why used roots must be excluded, by linear independence.
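The particular solution can be confirmed numerically; a short sketch (since y_p is a polynomial, the finite differences here are essentially exact):

```python
def yp(x):
    # Particular solution found above: y_p = x + x^2.
    return x + x**2

def residual(x, h=1e-4):
    """Finite-difference residual of y_p'' - y_p' minus the RHS 1 - 2x."""
    ypp = (yp(x + h) - 2 * yp(x) + yp(x - h)) / h**2
    ypr = (yp(x + h) - yp(x - h)) / (2 * h)
    return ypp - ypr - (1 - 2 * x)

for x in (0.0, 1.0, 3.0):
    assert abs(residual(x)) < 1e-4
```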
 Example: <math>\left[ {D^2 - D} \right]y = x - 2e^x<math> (same as <math>y'' - y' = x - 2e^x<math>)
In this case, we have the roots r = {0, 1}, which yield families of solutions such as
 <math> \begin{matrix}
r = 0:1,x,x^2,x^3,... \\ r = 1:e^x,xe^x,x^2 e^x,... \\ \end{matrix}<math>
Therefore, <math>y_1 = 1,\ y_2 = e^x<math> and <math>y_c = C_1 (1) + C_2 e^x<math>. Since A(D) has <math>\left. \begin{matrix}
r = 0\,\,{\rm{of\ multiplicity\ 2}} \\ r = 1\,\,{\rm{of\ multiplicity\ 1}} \\ \end{matrix} \right\}<math> giving the trial form
<math> \left. \begin{matrix}
Y_p = Ax + Bx^2 + Cxe^x \\ Y_p ^\prime = A + 2Bx + C(1 + x)e^x \\ Y_p ^{\prime \prime } = 2B + C(2 + x)e^x \\ \end{matrix} \right\}<math> which, substituted into the original equation, gives
<math>\left[ {2B - A} \right] - 2Bx + Ce^x = x - 2e^x<math>
Equating coefficients, <math>\begin{matrix}
2B - A = 0\,\,{\rm{so}}\,\,A = 2B \Rightarrow A = -1 \\ -2B = 1 \Rightarrow B = -\frac{1}{2};\quad C = -2 \\ \end{matrix}<math>
Thus <math>y_p = Ax + Bx^2 + Cxe^x = -x - \frac{1}{2}x^2 - 2xe^x<math>
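Again the result can be sanity-checked numerically (an illustrative finite-difference sketch):

```python
import math

def yp(x):
    # Particular solution found above: y_p = -x - x^2/2 - 2x e^x.
    return -x - 0.5 * x**2 - 2 * x * math.exp(x)

def residual(x, h=1e-4):
    """Finite-difference residual of y_p'' - y_p' minus the RHS x - 2e^x."""
    ypp = (yp(x + h) - 2 * yp(x) + yp(x - h)) / h**2
    ypr = (yp(x + h) - yp(x - h)) / (2 * h)
    return ypp - ypr - (x - 2 * math.exp(x))

for x in (0.0, 0.5, 1.0):
    assert abs(residual(x)) < 1e-4
```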
 Example: <math>\left[ {D^2 + 1} \right]y = f = \sec x<math>. What roots would give rise to a solution of the form <math>f\left( x \right) = \sec \left( x \right)<math>?
Solution: none. <math>\sec x<math> is not a sinusoid but the reciprocal of one, so no finite linear combination of exponentials can be tried as a trial solution. The method of undetermined coefficients does not apply, and second-order variation of parameters must be used for problems of this type.
Method of variation of parameters
As explained above, the general solution to a nonhomogeneous, linear differential equation <math>y''(x) + p(x) y'(x) + q(x) y(x) = g(x)<math> can be expressed as the sum of the general solution <math>y_h(x)<math> to the corresponding homogeneous, linear differential equation <math>y''(x) + p(x) y'(x) + q(x) y(x) = 0<math> and any one solution <math>y_p(x)<math> to <math>y''(x) + p(x) y'(x) + q(x) y(x) = g(x)<math>.
Like the method of undetermined coefficients, described above, the method of variation of parameters is a method for finding one solution to <math>y''(x) + p(x) y'(x) + q(x) y(x) = g(x)<math>, having already found the general solution to <math>y''(x) + p(x) y'(x) + q(x) y(x) = 0<math>. Unlike the method of undetermined coefficients, which fails except with certain specific forms of g(x), the method of variation of parameters will always work; however, it is significantly more difficult to use.
For a second-order equation, the method of variation of parameters makes use of the following fact:
Fact
Let p(x), q(x), and g(x) be functions, and let <math>y_1(x)<math> and <math>y_2(x)<math> be solutions to the homogeneous, linear differential equation <math>y''(x) + p(x) y'(x) + q(x) y(x) = 0<math>. Further, let u(x) and v(x) be functions such that <math>u'(x) y_1(x) + v'(x) y_2(x) = 0<math> and <math>u'(x) y_1'(x) + v'(x) y_2'(x) = g(x)<math> for all x, and define <math>y_p(x) = u(x) y_1(x) + v(x) y_2(x)<math>. Then <math>y_p(x)<math> is a solution to the nonhomogeneous, linear differential equation <math>y''(x) + p(x) y'(x) + q(x) y(x) = g(x)<math>.
Proof
<math>y_p(x) = u(x) y_1(x) + v(x) y_2(x)<math>
<math>y_p'(x) = u'(x) y_1(x) + u(x) y_1'(x) + v'(x) y_2(x) + v(x) y_2'(x) = 0 + u(x) y_1'(x) + v(x) y_2'(x)<math>
<math>y_p''(x) = u'(x) y_1'(x) + u(x) y_1''(x) + v'(x) y_2'(x) + v(x) y_2''(x) = g(x) + u(x) y_1''(x) + v(x) y_2''(x)<math>
<math>y_p''(x) + p(x) y'_p(x) + q(x) y_p(x) = g(x) + u(x) y_1''(x) + v(x) y_2''(x) + p(x) u(x) y_1'(x) + p(x) v(x) y_2'(x) + q(x) u(x) y_1(x) + q(x) v(x) y_2(x) <math>
<math> = g(x) + u(x) (y_1''(x) + p(x) y_1'(x) + q(x) y_1(x)) + v(x) (y_2''(x) + p(x) y_2'(x) + q(x) y_2(x)) = g(x) + 0 + 0 = g(x)<math>
Usage
To solve the second-order, nonhomogeneous, linear differential equation <math>y''(x) + p(x) y'(x) + q(x) y(x) = g(x)<math> using the method of variation of parameters, use the following steps:
 Find the general solution to the corresponding homogeneous equation <math>y''(x) + p(x) y'(x) + q(x) y(x) = 0<math>. Specifically, find two linearly independent solutions <math>y_1(x)<math> and <math>y_2(x)<math>.
 Since <math>y_1(x)<math> and <math>y_2(x)<math> are linearly independent solutions, their Wronskian <math>y_1(x) y_2'(x) - y_1'(x) y_2(x)<math> is nonzero, so we can compute <math>\frac{-g(x) y_2(x)}{y_1(x) y_2'(x) - y_1'(x) y_2(x)}<math> and <math>\frac{g(x) y_1(x)}{y_1(x) y_2'(x) - y_1'(x) y_2(x)}<math>. If the former is equal to u'(x) and the latter to v'(x), then u and v satisfy the two constraints given above: that <math>u'(x) y_1(x) + v'(x) y_2(x) = 0<math> and that <math>u'(x) y_1'(x) + v'(x) y_2'(x) = g(x)<math>.
 Integrate <math>\frac{-g(x) y_2(x)}{y_1(x) y_2'(x) - y_1'(x) y_2(x)}<math> and <math>\frac{g(x) y_1(x)}{y_1(x) y_2'(x) - y_1'(x) y_2(x)}<math> to obtain u(x) and v(x), respectively. (Note that we only need one choice of u and v, so there is no need for constants of integration.)
 Compute <math>y_p(x) = u(x) y_1(x) + v(x) y_2(x)<math>. The function <math>y_p<math> is one solution of <math>y''(x) + p(x) y'(x) + q(x) y(x) = g(x)<math>.
 The general solution is <math>c_1 y_1(x) + c_2 y_2(x) + y_p(x)<math>, where <math>c_1<math> and <math>c_2<math> are arbitrary constants.
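The steps above can also be carried out numerically when the integrals for u and v have no convenient closed form. The sketch below (a minimal illustration, not a library routine) applies them to y'' + y = x with y1 = cos x and y2 = sin x; integrating u' and v' from 0 by the trapezoid rule yields the particular solution y_p(x) = x − sin x (recall that particular solutions are unique only up to a homogeneous term):

```python
import math

def variation_of_parameters(y1, dy1, y2, dy2, g, x, n=2000):
    """Particular solution y_p(x) = u(x) y1(x) + v(x) y2(x), where
    u' = -g y2 / W and v' = g y1 / W (W = y1 y2' - y1' y2) are
    integrated from 0 to x with the trapezoid rule."""
    def up(t):
        w = y1(t) * dy2(t) - dy1(t) * y2(t)
        return -g(t) * y2(t) / w
    def vp(t):
        w = y1(t) * dy2(t) - dy1(t) * y2(t)
        return g(t) * y1(t) / w
    h = x / n
    u = v = 0.0
    for i in range(n):
        a, b = i * h, (i + 1) * h
        u += 0.5 * h * (up(a) + up(b))
        v += 0.5 * h * (vp(a) + vp(b))
    return u * y1(x) + v * y2(x)

# y'' + y = x, with y1 = cos x, y2 = sin x.  Integrating from 0 gives
# u = x cos x - sin x and v = x sin x + cos x - 1, so
# y_p = x - sin x (it differs from the familiar y_p = x only by the
# homogeneous term -sin x).
yp = variation_of_parameters(math.cos, lambda t: -math.sin(t),
                             math.sin, math.cos,
                             lambda t: t, 1.5)
assert abs(yp - (1.5 - math.sin(1.5))) < 1e-4
```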
Higher-order equations
The method of variation of parameters can also be used with higher-order equations. For example, if <math>y_1(x)<math>, <math>y_2(x)<math>, and <math>y_3(x)<math> are linearly independent solutions to <math>y'''(x) + p(x) y''(x) + q(x) y'(x) + r(x) y(x) = 0<math>, then there exist functions u(x), v(x), and w(x) such that <math>u'(x) y_1(x) + v'(x) y_2(x) + w'(x) y_3(x) = 0<math>, <math>u'(x) y_1'(x) + v'(x) y_2'(x) + w'(x) y_3'(x) = 0<math>, and <math>u'(x) y_1''(x) + v'(x) y_2''(x) + w'(x) y_3''(x) = g(x)<math>. Having found such functions (by solving algebraically for u'(x), v'(x), and w'(x), then integrating each), we have <math>y_p(x) = u(x) y_1(x) + v(x) y_2(x) + w(x) y_3(x)<math>, one solution to the equation <math>y'''(x) + p(x) y''(x) + q(x) y'(x) + r(x) y(x) = g(x)<math>.
Example
Solve the previous example, <math>y'' + y = \sec x<math>. Recall <math>\sec x = \frac{1}{{\cos x}} = g<math>. By the constant-coefficient technique above, the left-hand side has roots <math>r = \pm i<math>, which yield <math>y_c = C_1 \cos x + C_2 \sin x<math> (so <math>y_1 = \cos x<math>, <math>y_2 = \sin x<math>). The Wronskian
 <math>W\left( {y_1 ,y_2 } \right) = \left| {\begin{matrix}
{\cos x} & {\sin x} \\ { - \sin x} & {\cos x} \\
\end{matrix}} \right| = 1<math>
gives
 <math>\left\{ {\begin{matrix}
{u' = \frac{{ - y_2 g}}{W} = \frac{{ - \sin x}}{{\cos x}} = - \tan x} \\ {v' = \frac{{y_1 g}}{W} = \frac{{\cos x}}{{\cos x}} = 1} \\
\end{matrix}} \right.<math>
Upon integration (no constants of integration are needed),
 <math>\left\{ \begin{matrix}
u = - \int {\tan x\,dx} = - \ln \left| {\sec x} \right| = \ln \left| {\cos x} \right| \\ v = \int {1\,dx} = x \\ \end{matrix} \right.<math>
Computing <math>y_p<math> and <math>y_G<math>:
 <math>\begin{matrix}
y_p = uy_1 + vy_2 = \cos x\ln \left| {\cos x} \right| + x\sin x \\ y_G = y_c + y_p = C_1 \cos x + C_2 \sin x + x\sin x + \cos x\ln \left| {\cos x} \right| \\ \end{matrix}<math>
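The result admits a numerical sanity check; the sketch below (illustrative only, valid on (−π/2, π/2) where cos x > 0) verifies that the particular solution satisfies y'' + y = sec x:

```python
import math

def yp(x):
    # y_p = cos(x) ln|cos x| + x sin(x), valid where cos x > 0.
    return math.cos(x) * math.log(math.cos(x)) + x * math.sin(x)

def residual(x, h=1e-4):
    """Finite-difference residual of y_p'' + y_p minus sec x."""
    ypp = (yp(x + h) - 2 * yp(x) + yp(x - h)) / h**2
    return ypp + yp(x) - 1.0 / math.cos(x)

# Check at a few points of (-pi/2, pi/2).
for x in (0.3, 0.8, 1.2):
    assert abs(residual(x)) < 1e-3
```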
General solution method for first-order linear ODEs
For a first-order linear ODE, with coefficients that may or may not vary with t:
<math>x'(t) + p(t)\,x(t) = r(t)<math>
Then:
<math>x=e^{-C}\left(\int{r(t)\, e^{C}\,dt} + \kappa\right)<math>
Where <math>\kappa<math> is the constant of integration, and:
<math>C=\int{p(t)\,dt}<math>
Proof
This proof comes from Johann Bernoulli. Let
 <math>x^\prime + px = r<math>
Suppose for some unknown functions u(t) and v(t) that x = uv.
Then
 <math>x^\prime = u^\prime v + u v^\prime<math>
Substituting into the differential equation,
 <math>u^\prime v + u v^\prime + puv = r<math>
Now, the most important step: Since the differential equation is linear we can split this into two independent equations and write
 <math>u^\prime v + puv = 0<math>
 <math>u v^\prime = r<math>
Since v is not zero, the top equation becomes
 <math>u^\prime + pu = 0<math>
The solution of this is
 <math>u = e^{ - \int p\, dt } <math>
Substituting into the second equation
 <math>v = \int r e^{ \int p\, dt }\, dt + C <math>
Since x = uv, for arbitrary constant C
 <math>x = e^{ - \int p\, dt } \left( \int r e^{ \int p\, dt }\, dt + C \right)<math>
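For constant p and r the integrals in Bernoulli's construction are elementary, giving u = e^{-pt} and v = (r/p)e^{pt} + κ, hence x = r/p + κ e^{-pt}. This can be checked numerically (an illustrative sketch):

```python
import math

# For constant p and r, the formula gives x(t) = r/p + kappa e^{-p t}.
p, r, kappa = 2.0, 6.0, 1.5

def x_sol(t):
    return r / p + kappa * math.exp(-p * t)

def residual(t, h=1e-5):
    """Finite-difference check of x' + p x = r."""
    xprime = (x_sol(t + h) - x_sol(t - h)) / (2 * h)
    return xprime + p * x_sol(t) - r

for t in (0.0, 0.7, 2.0):
    assert abs(residual(t)) < 1e-6
```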
First order differential equation with constant coefficients
As an illustrative example, consider a first order differential equation with constant coefficients:
 <math>a\frac{dx}{dt} + bx = Af(t).<math>
This equation is particularly relevant to first order systems such as RC circuits and mass-damper systems.
After nondimensionalization, the equation becomes
 <math>\frac{d \chi}{d \tau} + \chi = F(\tau).<math>
In this case, p(τ) = 1 and r(τ) = F(τ).
Hence its solution, by inspection, is
 <math>\chi (\tau) = e^{-\tau} \left( \int F(\tau)e^{\tau} \, d\tau + C \right).<math>
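As an illustrative sketch, a unit-step input F(τ) = 1 with χ(0) = 0 gives, from the formula with C = −1, χ(τ) = 1 − e^{−τ}: the familiar exponential relaxation of a driven RC circuit toward the forcing level.

```python
import math

def chi(tau):
    # With F = 1 and chi(0) = 0, the formula gives
    # chi = e^{-tau}(e^{tau} - 1) = 1 - e^{-tau}.
    return 1.0 - math.exp(-tau)

def residual(tau, h=1e-5):
    """Finite-difference check of chi' + chi = F = 1."""
    dchi = (chi(tau + h) - chi(tau - h)) / (2 * h)
    return dchi + chi(tau) - 1.0

for tau in (0.0, 1.0, 5.0):
    assert abs(residual(tau)) < 1e-6

assert abs(chi(0.0)) < 1e-12          # initial condition
assert abs(chi(10.0) - 1.0) < 1e-4    # settles at the forcing level
```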
Linear PDEs
The theory of linear partial differential equations may be said to begin with Lagrange (1779 to 1785). Monge (1809) treated ordinary and partial differential equations of the first and second order, uniting the theory to geometry, and introducing the notion of the "characteristic", the curve represented by <math>F(z) = 0<math>, which was investigated by Darboux, Levy, and Lie.
Firstorder PDEs
Pfaff (1814, 1815) gave the first general method of integrating partial differential equations of the first order, of which Gauss (1815) gave an analysis. Cauchy (1819) gave a simpler method, attacking the subject from the analytical standpoint, but using the Monge characteristic. Cauchy also first stated the theorem (now called the Cauchy-Kovalevskaya theorem) that every analytic differential equation defines an analytic function, expressible by means of a convergent series.
Jacobi (1827) also gave an analysis of Pfaff's method, besides developing an original one (1836) which Clebsch published (1862). Clebsch's own method appeared in 1866, and others are due to Boole (1859), Korkine (1869), and A. Mayer (1872). Pfaff's problem (on total differential equations) was investigated by Natani (1859), Clebsch (1861, 1862), du Bois-Reymond (1869), Cayley, Baltzer, Frobenius, Morera, Darboux, and Lie.
The next great improvement in the theory of partial differential equations of the first order was made by Lie (1872), who placed the whole subject on a solid foundation. After about 1870, Darboux, Kovalevsky, Méray, Mansion, Graindorge, and Imschenetsky became prominent in this line. The theory of partial differential equations of the second and higher orders, beginning with Laplace and Monge, was notably advanced by Ampère (1840).
The integration of partial differential equations with three or more variables was the object of elaborate investigations by Lagrange, and his name became connected with certain subsidiary equations. It was he and Charpit who originated one of the methods for integrating the general equation with two variables; a method which now bears Charpit's name.
Singular solutions
The theory of singular solutions of ordinary and partial differential equations was a subject of research from the time of Leibniz, but only since the middle of the nineteenth century did it receive special attention. A valuable but little-known work on the subject is that of Houtain (1854). Darboux (starting in 1873) was a leader in the theory, and in the geometric interpretation of these solutions he opened a field which was worked by various writers, notably Casorati and Cayley. To the latter is due (1872) the theory of singular solutions of differential equations of the first order as accepted circa 1900.
Reduction to quadratures
The primitive attempt in dealing with differential equations had in view a reduction to quadratures. As it had been the hope of eighteenth-century algebraists to find a method for solving the general equation of the <math>n<math>th degree, so it was the hope of analysts to find a general method for integrating any differential equation. Gauss (1799) showed, however, that the differential equation meets its limitations very soon unless complex numbers are introduced. Hence analysts began to substitute the study of functions, thus opening a new and fertile field. Cauchy was the first to appreciate the importance of this view. Thereafter the real question was to be, not whether a solution is possible by means of known functions or their integrals, but whether a given differential equation suffices for the definition of a function of the independent variable or variables, and if so, what are the characteristic properties of this function.
The Fuchsian theory
Two memoirs by Fuchs (Crelle, 1866, 1868), inspired a novel approach, subsequently elaborated by Thomé and Frobenius. Collet was a prominent contributor beginning in 1869, although his method for integrating a nonlinear system was communicated to Bertrand in 1868. Clebsch (1873) attacked the theory along lines parallel to those followed in his theory of Abelian integrals. As the latter can be classified according to the properties of the fundamental curve which remains unchanged under a rational transformation, so Clebsch proposed to classify the transcendent functions defined by the differential equations according to the invariant properties of the corresponding surfaces f = 0 under rational one-to-one transformations.
Lie's theory
From 1870 Lie's work put the theory of differential equations on a more satisfactory foundation. He showed that the integration theories of the older mathematicians can, by the introduction of what are now called Lie groups, be referred to a common source; and that ordinary differential equations which admit the same infinitesimal transformations present comparable difficulties of integration. He also emphasized the subject of transformations of contact (Berührungstransformationen).
See also
 Examples of differential equations
 Differential equations of mathematical physics
 Differential equations from outside physics
 Difference equation
 Laplace transform applied to differential equations
 Boundary value problem
 List of dynamical systems and differential equations topics
External links
 EqWorld: The World of Mathematical Equations (http://eqworld.ipmnet.ru/index.htm), containing a list of ordinary differential equations with their solutions.
 Example ODEs (http://www.exampleproblems.com/wiki/index.php?title=Ordinary_Differential_Equations) from exampleproblems.com.
Bibliography
 A. D. Polyanin and V. F. Zaitsev, Handbook of Exact Solutions for Ordinary Differential Equations (2nd edition), Chapman & Hall/CRC Press, 2003.
 A. D. Polyanin, V. F. Zaitsev, and A. Moussiaux, Handbook of First Order Partial Differential Equations, Taylor & Francis, 2002.
 D. Zwillinger, Handbook of Differential Equations (3rd edition), Academic Press, Boston, 1997.