
Matrix differential equation

From Wikipedia, the free encyclopedia

A differential equation is a mathematical equation for an unknown function of one or several variables that relates the values of the function itself and its derivatives of various orders. A matrix differential equation contains more than one function stacked into vector form with a matrix relating the functions to their derivatives.

For example, a first-order matrix ordinary differential equation is

$$\dot{\mathbf{x}}(t) = \mathbf{A}(t)\,\mathbf{x}(t),$$

where $\mathbf{x}(t)$ is an $n \times 1$ vector of functions of an underlying variable $t$, $\dot{\mathbf{x}}(t)$ is the vector of first derivatives of these functions, and $\mathbf{A}(t)$ is an $n \times n$ matrix of coefficients.

In the case where $\mathbf{A}$ is constant and has $n$ linearly independent eigenvectors, this differential equation has the following general solution,

$$\mathbf{x}(t) = c_1 e^{\lambda_1 t} \mathbf{u}_1 + c_2 e^{\lambda_2 t} \mathbf{u}_2 + \cdots + c_n e^{\lambda_n t} \mathbf{u}_n,$$

where $\lambda_1, \lambda_2, \ldots, \lambda_n$ are the eigenvalues of $\mathbf{A}$; $\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_n$ are the respective eigenvectors of $\mathbf{A}$; and $c_1, c_2, \ldots, c_n$ are constants.
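
As a concrete illustration, here is a minimal NumPy sketch of this recipe (the matrix and initial condition are sample values chosen for demonstration, not prescribed by the article): the constants $c_i$ follow from solving $\mathbf{U}\mathbf{c} = \mathbf{x}(0)$, where the columns of $\mathbf{U}$ are the eigenvectors.

```python
import numpy as np

# Sample constant matrix with linearly independent eigenvectors
# (illustrative values only).
A = np.array([[3.0, -4.0],
              [4.0, -7.0]])
x0 = np.array([1.0, 1.0])      # initial condition x(0)

lam, U = np.linalg.eig(A)      # eigenvalues lam[i] with eigenvectors U[:, i]
c = np.linalg.solve(U, x0)     # constants c_i from x(0) = sum_i c_i u_i

def x(t):
    # x(t) = sum_i c_i * exp(lam_i * t) * u_i
    return U @ (c * np.exp(lam * t))

print(x(0.0))                  # reproduces x0 up to round-off
```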

More generally, if $\mathbf{A}(t)$ commutes with its integral $\int_a^t \mathbf{A}(s)\,ds$, then the Magnus expansion reduces to leading order, and the general solution to the differential equation is

$$\mathbf{x}(t) = e^{\int_a^t \mathbf{A}(s)\,ds}\,\mathbf{c},$$

where $\mathbf{c}$ is an $n \times 1$ constant vector.

By use of the Cayley–Hamilton theorem and Vandermonde-type matrices, this formal matrix exponential solution may be reduced to a simple form.[1] Below, this solution is displayed in terms of Putzer's algorithm.[2]


Stability and steady state of the matrix system

The matrix equation

$$\dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t) + \mathbf{b},$$

with $n \times 1$ parameter constant vector $\mathbf{b}$, is stable if and only if all eigenvalues of the constant matrix $\mathbf{A}$ have a negative real part.

The steady state $\mathbf{x}^*$ to which it converges if stable is found by setting

$$\dot{\mathbf{x}}(t) = \mathbf{0},$$

thus yielding

$$\mathbf{x}^* = -\mathbf{A}^{-1}\mathbf{b},$$

assuming $\mathbf{A}$ is invertible.

Thus, the original equation can be written in homogeneous form in terms of deviations from the steady state,

$$\dot{\mathbf{x}}(t) = \mathbf{A}\left[\mathbf{x}(t) - \mathbf{x}^*\right].$$

An equivalent way of expressing this is that $\mathbf{x}^*$ is a particular solution to the inhomogeneous equation, while all solutions are of the form

$$\mathbf{x}_h + \mathbf{x}^*,$$

with $\mathbf{x}_h$ a solution to the homogeneous equation ($\mathbf{b} = \mathbf{0}$).
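
A minimal numerical check of these statements, assuming NumPy (the particular $\mathbf{A}$ and $\mathbf{b}$ below are illustrative only):

```python
import numpy as np

A = np.array([[-2.0, 1.0],
              [ 0.5, -1.0]])   # illustrative constant matrix
b = np.array([1.0, 2.0])       # illustrative constant vector

# Stable iff every eigenvalue of A has a negative real part.
print(np.all(np.linalg.eigvals(A).real < 0))   # True for this A

# Steady state from 0 = A x* + b, i.e. x* = -A^{-1} b (A invertible here).
x_star = -np.linalg.solve(A, b)
print(x_star, A @ x_star + b)  # second output is the ~0 residual
```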

Stability of the two-state-variable case

In the n = 2 case (with two state variables), the stability conditions that the two eigenvalues of the transition matrix A each have a negative real part are equivalent to the conditions that the trace of A be negative and its determinant be positive.
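
To see why, recall that the eigenvalues of a 2×2 matrix are the roots of its characteristic polynomial, which depends only on the trace and determinant:

$$\lambda^2 - (\operatorname{tr} \mathbf{A})\,\lambda + \det \mathbf{A} = 0, \qquad \lambda_{1,2} = \frac{\operatorname{tr} \mathbf{A} \pm \sqrt{(\operatorname{tr} \mathbf{A})^2 - 4 \det \mathbf{A}}}{2}.$$

Since $\lambda_1 + \lambda_2 = \operatorname{tr} \mathbf{A}$ and $\lambda_1 \lambda_2 = \det \mathbf{A}$, a complex-conjugate pair has common real part $\tfrac{1}{2}\operatorname{tr} \mathbf{A}$, which is negative exactly when the trace is negative (and then the determinant, $|\lambda_1|^2$, is automatically positive), while two real roots are both negative exactly when their sum is negative and their product is positive.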

Solution in matrix form

The formal solution of $\dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t)$ has the matrix exponential form

$$\mathbf{x}(t) = e^{\mathbf{A}t}\mathbf{x}_0,$$

evaluated using any of a multitude of techniques.

Putzer algorithm for computing $e^{\mathbf{A}t}$

Given a matrix $\mathbf{A}$ with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$,

$$e^{\mathbf{A}t} = \sum_{j=0}^{n-1} r_{j+1}(t)\,\mathbf{P}_j,$$

where

$$\mathbf{P}_0 = \mathbf{I}, \qquad \mathbf{P}_j = \prod_{k=1}^{j}\left(\mathbf{A} - \lambda_k \mathbf{I}\right) = \mathbf{P}_{j-1}\left(\mathbf{A} - \lambda_j \mathbf{I}\right), \quad j = 1, 2, \ldots, n-1,$$

$$\dot{r}_1 = \lambda_1 r_1, \qquad r_1(0) = 1,$$

$$\dot{r}_j = \lambda_j r_j + r_{j-1}, \qquad r_j(0) = 0, \quad j = 2, 3, \ldots, n.$$

The equations for $r_j(t)$ are simple first-order inhomogeneous ODEs.
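
For instance, when the eigenvalues are distinct, the first two of these ODEs can be integrated in closed form by an integrating factor:

$$r_1(t) = e^{\lambda_1 t}, \qquad r_2(t) = \int_0^t e^{\lambda_2 (t - s)}\, r_1(s)\, ds = \frac{e^{\lambda_1 t} - e^{\lambda_2 t}}{\lambda_1 - \lambda_2} \quad (\lambda_1 \neq \lambda_2).$$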

Note the algorithm does not require that the matrix A be diagonalizable and bypasses complexities of the Jordan canonical forms normally utilized.
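
The following sketch implements the algorithm directly, assuming NumPy and SciPy (the helper name putzer_expAt is hypothetical); it integrates the $r_j$ ODEs numerically with solve_ivp rather than in closed form, so it is illustrative rather than optimal:

```python
import numpy as np
from scipy.integrate import solve_ivp

def putzer_expAt(A, t):
    """Compute e^{A t} via Putzer's algorithm (A need not be diagonalizable)."""
    n = A.shape[0]
    lam = np.linalg.eigvals(A).astype(complex)  # eigenvalues, any fixed order

    # P_0 = I,  P_j = P_{j-1} (A - lam_j I)  for j = 1, ..., n-1
    P = [np.eye(n)]
    for j in range(n - 1):
        P.append(P[-1] @ (A - lam[j] * np.eye(n)))

    # r_1' = lam_1 r_1, r_1(0) = 1;  r_j' = lam_j r_j + r_{j-1}, r_j(0) = 0
    def rhs(_, r):
        dr = lam * r
        dr[1:] += r[:-1]
        return dr

    r0 = np.zeros(n, dtype=complex)
    r0[0] = 1.0
    r = solve_ivp(rhs, (0.0, t), r0, rtol=1e-10, atol=1e-12).y[:, -1]

    # e^{At} = sum_j r_{j+1}(t) P_j; the result is real for real A
    return sum(r[j] * P[j] for j in range(n)).real

A = np.array([[3.0, -4.0], [4.0, -7.0]])
print(putzer_expAt(A, 1.0))   # agrees with scipy.linalg.expm(A)
```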

Deconstructed example of a matrix ordinary differential equation

A first-order homogeneous matrix ordinary differential equation in two functions $x(t)$ and $y(t)$, when taken out of matrix form, has the following form:

$$\frac{dx}{dt} = a_1 x + b_1 y, \qquad \frac{dy}{dt} = a_2 x + b_2 y,$$

where $a_1$, $a_2$, $b_1$, and $b_2$ may be any arbitrary scalars.

Higher-order matrix ODEs may possess a much more complicated form.

Solving deconstructed matrix ordinary differential equations

The process of solving the above equations and finding the required functions of this particular order and form consists of 3 main steps. Brief descriptions of each of these steps are listed below:

  1. Finding the eigenvalues
  2. Finding the eigenvectors
  3. Finding the needed functions

The final, third step in solving these sorts of ordinary differential equations is usually done by plugging the values calculated in the two previous steps into a specialized general form equation, mentioned later in this article.

Solved example of a matrix ODE

To solve a matrix ODE according to the three steps detailed above, using simple matrices in the process, let us find, say, a function $x$ and a function $y$, both in terms of the single independent variable $t$, in the following homogeneous linear differential equation of the first order,

$$\frac{dx}{dt} = 3x - 4y, \qquad \frac{dy}{dt} = 4x - 7y.$$

To solve this particular ordinary differential equation system, at some point in the solution process, we shall need a set of two initial values (corresponding to the two state variables at the starting point). In this case, let us pick x(0) = y(0) = 1.

First step

The first step, already mentioned above, is finding the eigenvalues of $\mathbf{A}$ in

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} 3 & -4 \\ 4 & -7 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}.$$

The derivative notation $x'$ etc. seen in one of the vectors above is known as Lagrange's notation, first introduced by Joseph Louis Lagrange. It is equivalent to the derivative notation $dx/dt$ used in the previous equation, known as Leibniz's notation, honoring the name of Gottfried Leibniz.

Once the coefficients of the two variables have been written in the matrix form $\mathbf{A}$ displayed above, one may evaluate the eigenvalues. To that end, one finds the determinant of the matrix that is formed when an identity matrix $\mathbf{I}_2$, multiplied by some constant $\lambda$, is subtracted from the above coefficient matrix, to yield the characteristic polynomial of it,

$$\det\left(\begin{bmatrix} 3 & -4 \\ 4 & -7 \end{bmatrix} - \lambda \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\right),$$

and solve for its zeroes.

Applying further simplification and basic rules of matrix addition yields

$$\det\begin{bmatrix} 3 - \lambda & -4 \\ 4 & -7 - \lambda \end{bmatrix}.$$

Applying the rules of finding the determinant of a single 2×2 matrix yields the following elementary quadratic equation,

$$(3 - \lambda)(-7 - \lambda) - (-4)(4) = 0,$$
$$-21 - 3\lambda + 7\lambda + \lambda^2 + 16 = 0,$$

which may be reduced further to get a simpler version of the above,

$$\lambda^2 + 4\lambda - 5 = 0.$$

Now finding the two roots, $\lambda_1$ and $\lambda_2$, of the given quadratic equation by applying the factorization method yields

$$(\lambda - 1)(\lambda + 5) = 0,$$
$$\lambda_1 = 1, \qquad \lambda_2 = -5.$$

The values $\lambda_1 = 1$ and $\lambda_2 = -5$, calculated above, are the required eigenvalues of $\mathbf{A}$. In some cases, say other matrix ODEs, the eigenvalues may be complex, in which case the following step of the solving process, as well as the final form and the solution, may dramatically change.

Second step

As mentioned above, this step involves finding the eigenvectors of A from the information originally provided.

For each of the eigenvalues calculated, we have an individual eigenvector. For the first eigenvalue, which is $\lambda_1 = 1$, we have

$$\begin{bmatrix} 3 & -4 \\ 4 & -7 \end{bmatrix} \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = 1 \cdot \begin{bmatrix} \alpha \\ \beta \end{bmatrix}.$$

Simplifying the above expression by applying basic matrix multiplication rules yields

$$3\alpha - 4\beta = \alpha, \qquad 4\alpha - 7\beta = \beta.$$

All of these calculations have been done only to obtain the last expression, which in our case is $\alpha = 2\beta$. Now choosing some convenient value for either $\alpha$ or $\beta$ (in most cases, it does not really matter which), we substitute it into $\alpha = 2\beta$. Doing so produces a simple vector, which is the required eigenvector for this particular eigenvalue. In our case, we pick $\alpha = 2$, which in turn determines that $\beta = 1$ and, using the standard vector notation, our vector looks like

$$\hat{\mathbf{v}}_1 = \begin{bmatrix} 2 \\ 1 \end{bmatrix}.$$

Performing the same operation using the second eigenvalue we calculated, which is $\lambda_2 = -5$, we obtain our second eigenvector. The process of working out this vector is not shown, but the final result is

$$\hat{\mathbf{v}}_2 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}.$$
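
These hand computations can be cross-checked numerically; a sketch assuming NumPy (np.linalg.eig returns unit-norm eigenvectors, so they agree with ours only up to scaling):

```python
import numpy as np

A = np.array([[3.0, -4.0],
              [4.0, -7.0]])

lam, V = np.linalg.eig(A)   # eigenvalues, with eigenvectors as columns of V
print(lam)                  # 1 and -5, in some order

for l, v in zip(lam, V.T):
    v = v / v[np.abs(v).argmax()]   # rescale so the largest entry is 1
    print(l, v)  # proportional to (2, 1) for l = 1 and to (1, 2) for l = -5
```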

Third step

This final step finds the required functions that are 'hidden' behind the derivatives given to us originally. There are two functions, because our differential equations deal with two variables.

The equation which involves all the pieces of information that we have previously found has the following form:

$$\begin{bmatrix} x \\ y \end{bmatrix} = A e^{\lambda_1 t} \hat{\mathbf{v}}_1 + B e^{\lambda_2 t} \hat{\mathbf{v}}_2.$$

Substituting the values of eigenvalues and eigenvectors yields

$$\begin{bmatrix} x \\ y \end{bmatrix} = A e^{t} \begin{bmatrix} 2 \\ 1 \end{bmatrix} + B e^{-5t} \begin{bmatrix} 1 \\ 2 \end{bmatrix}.$$

Applying further simplification,

$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} \begin{bmatrix} A e^{t} \\ B e^{-5t} \end{bmatrix}.$$

Simplifying further and writing the equations for functions $x$ and $y$ separately,

$$x = 2A e^{t} + B e^{-5t},$$
$$y = A e^{t} + 2B e^{-5t}.$$

The above equations are, in fact, the general functions sought, but they are in their general form (with unspecified values of $A$ and $B$), whilst we want to actually find their exact forms and solutions. So now we consider the problem’s given initial conditions (the problem including given initial conditions is the so-called initial value problem). Suppose we are given $x(0) = y(0) = 1$, which plays the role of starting point for our ordinary differential equation; application of these conditions specifies the constants, $A$ and $B$. As we see from the conditions, when $t = 0$, the left sides of the above equations equal 1. Thus we may construct the following system of linear equations,

$$1 = 2A + B,$$
$$1 = A + 2B.$$

Solving these equations, we find that both constants $A$ and $B$ equal 1/3. Therefore, substituting these values into the general form of these two functions specifies their exact forms,

$$x = \tfrac{2}{3} e^{t} + \tfrac{1}{3} e^{-5t},$$
$$y = \tfrac{1}{3} e^{t} + \tfrac{2}{3} e^{-5t},$$

the two functions sought.
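
As a quick sanity check, here is a NumPy sketch confirming that these functions satisfy both the initial conditions and the system itself (the derivative is approximated by a central difference):

```python
import numpy as np

A = np.array([[3.0, -4.0],
              [4.0, -7.0]])

def sol(t):
    x = (2/3) * np.exp(t) + (1/3) * np.exp(-5*t)
    y = (1/3) * np.exp(t) + (2/3) * np.exp(-5*t)
    return np.array([x, y])

print(sol(0.0))   # [1. 1.] -- the given initial conditions

# A central-difference derivative should match A @ sol(t) at sample points.
h = 1e-6
for t in (0.3, 1.0):
    dsol = (sol(t + h) - sol(t - h)) / (2 * h)
    print(np.allclose(dsol, A @ sol(t)))   # True, True
```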

Using matrix exponentiation

The above problem could have been solved with a direct application of the matrix exponential. That is, we can say that

$$\begin{bmatrix} x \\ y \end{bmatrix} = e^{\mathbf{A}t} \begin{bmatrix} x(0) \\ y(0) \end{bmatrix}.$$

Given that

$$e^{\mathbf{A}t} = \frac{1}{3}\begin{bmatrix} 4e^{t} - e^{-5t} & 2e^{-5t} - 2e^{t} \\ 2e^{t} - 2e^{-5t} & 4e^{-5t} - e^{t} \end{bmatrix}$$

(which can be computed using any suitable tool, such as MATLAB's expm tool, or by performing matrix diagonalisation and leveraging the property that the matrix exponential of a diagonal matrix is the same as element-wise exponentiation of its elements),

the final result is

$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \tfrac{2}{3}e^{t} + \tfrac{1}{3}e^{-5t} \\ \tfrac{1}{3}e^{t} + \tfrac{2}{3}e^{-5t} \end{bmatrix}.$$

This is the same as the eigenvector approach shown before.
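
The same computation can be reproduced with SciPy's expm; a short sketch, applying $e^{\mathbf{A}t}$ to the initial condition and comparing with the closed form:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[3.0, -4.0],
              [4.0, -7.0]])
x0 = np.array([1.0, 1.0])   # x(0) = y(0) = 1

t = 1.0
print(expm(A * t) @ x0)     # matrix-exponential solution at t = 1
print((2/3)*np.exp(t) + (1/3)*np.exp(-5*t),   # x(1) from the closed form
      (1/3)*np.exp(t) + (2/3)*np.exp(-5*t))   # y(1) from the closed form
```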

References

  1. ^ Moya-Cessa, H.; Soto-Eguibar, F. (2011). Differential Equations: An Operational Approach. New Jersey: Rinton Press. ISBN 978-1-58949-060-4.
  2. ^ Putzer, E. J. (1966). "Avoiding the Jordan Canonical Form in the Discussion of Linear Systems with Constant Coefficients". The American Mathematical Monthly. 73 (1): 2–7. doi:10.1080/00029890.1966.11970714. JSTOR 2313914.