UFR PhITEM
Master EEA
M2 Conception des Systèmes d’Énergie Électrique
Numerical Analysis
of Circuit Equations
Winter 2020-2021
[email protected]
Laboratoire de Génie Électrique de Grenoble
G2ELab - CNRS UMR 5269 - Université Grenoble Alpes
Bâtiment GreEn-ER - 21 avenue des Martyrs
CS 90624 - 38031 Grenoble Cedex 1 - France
Selplnig cerortiocn
Aordccing to a sduty by the Usitnivery of Cbriamdge, the oredr of letetrs in a
word deos not metatr, the olny ipormntat thnig is taht the fsrit and lsat are in the
rhigt palce. The rset can be in a tatol mses and you can sitll raed wotuhit any
perblom. This is baucese the hamun biran deos not raed each letetr ietslf, but
the wrod as a wlohe.
The proof...
However, carelessness, typographical errors, dubious puns and, who knows, errors of reasoning are bound to be part of this document.
I’m sorry.
Any correction, suggestion or criticism will be welcomed with humility, emotion
and gratitude.
Thank you, thank you.
Contents

Notations

Introduction

1 Resolution of linear systems
  1.1 Some reminders
    1.1.1 Immediate direct resolution
  1.2 Direct methods
    1.2.1 Principles
    1.2.2 LU decomposition
  1.3 Iterative methods
    1.3.1 Introduction
    1.3.2 Gauss-Seidel method
    1.3.3 Other methods

2 Reminders on electrical circuits
  2.1 The conventions
  2.2 Kirchhoff's laws
  2.3 Current-voltage relations
  2.4 Equivalence of sources in stationary sinusoidal mode

3 Formulation of an electrical circuit in the frequency domain
  3.1 Writing in matrix form
  3.2 Incidence matrices
    3.2.1 Elements of graph theory
    3.2.2 Concept of incidence matrix
    3.2.3 Node-branch incidence matrix
    3.2.4 Potential of nodes
    3.2.5 Mesh current
  3.3 Operational research
    3.3.1 Graph
    3.3.2 Search for independent nodes
    3.3.3 Search for independent meshes
  3.4 Use of the Kirchhoff current law
    3.4.1 The natural writing
    3.4.2 A minimum of unknowns
  3.5 Use of the Kirchhoff voltage law
    3.5.1 The natural writing
    3.5.2 A minimum of unknowns
  3.6 Some remarks

4 Solving differential equations
  4.1 Explicit Euler method
    4.1.1 Principle
    4.1.2 Error
  4.2 Implicit Euler method
    4.2.1 Principle
    4.2.2 Functional iteration
  4.3 Runge-Kutta methods
    4.3.1 Second order Runge-Kutta method
    4.3.2 Fourth order Runge-Kutta method
    4.3.3 Remarks

5 Formulation of an electrical circuit in the time domain
  5.1 Notations
  5.2 Setting up the equations

6 Resolution of non-linear systems
  6.1 Preliminary remarks
    6.1.1 Convergence criteria
    6.1.2 Convergence rate
    6.1.3 Sensitivity
  6.2 Interval methods
    6.2.1 Bisection method or dichotomy method
    6.2.2 False position method
  6.3 Fixed point methods
    6.3.1 Principle of fixed point methods
    6.3.2 Substitution method
    6.3.3 Newton-Raphson method
    6.3.4 Secant method
  6.4 Generalization to non-linear systems
    6.4.1 Newton-Raphson method
    6.4.2 Gauss-Seidel method
  6.5 Note on stopping criteria

Bibliography
Notations

a        Real scalar.
a_{ij}   Term of a real matrix, located on row i and column j.
A        Real vector or matrix.
a        Complex scalar.
A        Complex vector or matrix.
Introduction
This course is an introduction to the numerical analysis methods commonly used to solve the algebraic or differential equations that arise in the modelling of physical, biological, chemical, industrial or economic phenomena, as well as to methods for setting up electrical circuit equations. This particularly large field requires mathematical, physical and computer skills at the same time. This course is also intended to make you more critical of the computer tools used every day to solve numerical problems.
In most cases, the solution of an algebraic or differential problem cannot be obtained analytically. In practice, the only possibility is to search for an approximate solution using numerical methods. The basic idea then consists in searching for the values of the unknown function at a large number of points: this is discretization. Thus, instead of solving a continuous problem, we solve an algebraic system that defines the discretized problem. The solution obtained is then an approximate solution that differs from the exact analytical solution of the continuous problem.
Numerical computation is a field of research in itself. An abundant literature is dedicated to it: books, periodicals and program libraries. As in many fields, practice plays an essential role in mastering the subject. We cannot reasonably expect to assimilate the methods without applying them to concrete, well-understood examples.
Numerical calculation is also a difficult art. We can get lost in it and forget that it is only a tool. Many tend to want to reinvent everything and waste a lot of time rewriting standard methods with varying degrees of success. While this approach is legitimate, even necessary, for learning, it should not become a habit, at the risk of turning into a numerical specialist; that is a choice. For the engineer, it is essential to know how to estimate the real place of numerical calculation in modelling and to find the right balance. Thus, one should not hesitate to use existing tools developed by professionals. In order to properly manipulate the methods and routines that you can find here and there and will integrate into your programs, as well as the more specific ones that you will write yourself, it is essential to know a minimum of concepts. This is the objective of this course.
The content of this manuscript does not claim to be exhaustive or to cover all techniques related to numerical methods. On the contrary, it remains very succinct and gives only a few elements of a "first journey". Three families of problems are addressed:

• The resolution of linear systems.
• The resolution of systems of differential equations.
• The resolution of nonlinear systems.
Resolution by means of a computer has two important limitations, leading to errors or even non-convergence:

• The use of a digital processor with limited memory means that numerical problems are solved in a discretized, finite space, whereas the problems we are trying to solve are usually posed over the set of real numbers. There is therefore an inevitable first error linked to this discretization. For example, addition is associative on the reals, but on a discretized set with limited precision the order of the operations influences the result (see the short sketch after this list).

• The algorithms used are often based on assumptions and/or simplifications that produce an approximate result (even if a calculator with infinite accuracy is used). For example, the approximation of a derivative using a Taylor series expansion.
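A minimal sketch of this effect in Python (the numbers are our own illustration, not from the course):

```python
# Floating-point arithmetic has limited precision: mathematically identical
# expressions give different results depending on the order of operations.
big, small = 1.0e16, 1.0
print((big + small) - big)  # 0.0 : the 1.0 is absorbed by rounding
print((big - big) + small)  # 1.0 : the same expression, grouped differently
print(0.1 + (0.2 + 0.3) == (0.1 + 0.2) + 0.3)  # False: addition is not associative here
```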
This course also covers setting up the equations of electrical circuits. The objective here is to know how to write a problem and to see how it can be formulated as equations in a general (not necessarily optimal) way. The electrical circuits treated in this course consist of the following dipoles:

• R, L and C loads.
• Independent current and voltage sources.

Two main families of resolution are possible when dealing with electrical circuits of this type:

• Time-domain resolution, which leads to the resolution of differential equations whose initial state is known; step-by-step methods over time are therefore used.
• Frequency-domain resolution, which through a Laplace transform of the time-domain system leads to the resolution of a linear system (instead of a system of differential equations). In this case the sources are sinusoidal (or come from a Fourier series development).
This course introduces the techniques for setting up the equations of electrical circuits and then the main methods to solve them, in order to understand the difficulties but also the problems that can be encountered when using simulation tools. This course is not intended to make you a specialist in numerical resolution or circuit-equation processing. However, at the end of this course you will have an overview of the methods that can be used. The numerical resolution techniques presented here remain basic. However, the methods used in commercial or research tools rely on the same principles. Thus, the behaviours of the algorithms seen in this course are representative of what is found in more sophisticated tools (they have the same drawbacks), even if the latter are generally more stable.
Chapter 1
Resolution of linear systems
This chapter presents some methods for solving linear systems. The resolution of circuit equations requires the resolution of a linear system. Of course, the algorithms studied in this chapter are valid regardless of the physics being modeled.
In general, the resolution of a system of equations is carried out in matrix form. System of equations to solve:

\[
\begin{cases}
a_{11} x_1 + a_{12} x_2 + a_{13} x_3 = b_1 \\
a_{21} x_1 + a_{22} x_2 + a_{23} x_3 = b_2 \\
a_{31} x_1 + a_{32} x_2 + a_{33} x_3 = b_3
\end{cases}
\]

Matrix form:

\[
\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}
\cdot
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}.
\]

In compact form, this reduces to $A \cdot x = b$.
To solve this type of linear system, there are two main families of algorithms:

• Direct methods, for which the exact solution is found if the calculator has infinite accuracy. In this family, the number of operations to be performed is known before starting the resolution.

• Iterative methods, for which an approximate solution is obtained (regardless of the accuracy of the calculator). As this type of approach is iterative, the number of operations is unknown before the resolution and it is possible that no solution will be found.

In this chapter only the case of real matrices is treated. However, it is possible to rewrite these algorithms for complex matrices. This poses no particular problem, but the number of operations is higher.
1.1 Some reminders

1.1.1 Immediate direct resolution
Here are some cases where the solution is trivial.
1.1.1.1 Diagonal matrices
A matrix is said to be diagonal if all off-diagonal terms are zero. The resolution of a linear system of this type is obvious:

\[
\begin{pmatrix} a_{11} & 0 & 0 \\ 0 & a_{22} & 0 \\ 0 & 0 & a_{33} \end{pmatrix}
\cdot
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}.
\]
It immediately follows:

\[
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
=
\begin{pmatrix} b_1 / a_{11} \\ b_2 / a_{22} \\ b_3 / a_{33} \end{pmatrix}.
\]

1.1.1.2 Triangular matrices
A matrix is said to be triangular if all the terms below or above the diagonal are zero. A system with a triangular matrix (upper or lower) is easy to solve. Let us take the case of a lower triangular matrix, in which the terms above the diagonal are zero. We are therefore trying to solve the following system:

\[
\begin{pmatrix} a_{11} & 0 & 0 \\ a_{21} & a_{22} & 0 \\ a_{31} & a_{32} & a_{33} \end{pmatrix}
\cdot
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}.
\]
This system is easy to solve: the first row directly gives $x_1$, the second row gives $x_2$ since $x_1$ is now known, and so on. This leads to the following solution:

\[
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
=
\begin{pmatrix}
\dfrac{b_1}{a_{11}} \\[2mm]
\dfrac{b_2 - a_{21} \cdot x_1}{a_{22}} \\[2mm]
\dfrac{b_3 - a_{31} \cdot x_1 - a_{32} \cdot x_2}{a_{33}}
\end{pmatrix}.
\]
Similarly, if the matrix A is an upper triangular matrix, the solution is easily found by starting from the bottom row instead of the top one.
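As an illustration, here is a minimal Python sketch of this forward substitution (the function name and the numerical example are ours, not from the course):

```python
import numpy as np

def forward_substitution(L_mat: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Solve L_mat . x = b for a lower triangular L_mat, row by row from the top."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        # x_i = (b_i - sum_{j<i} a_ij * x_j) / a_ii
        x[i] = (b[i] - L_mat[i, :i] @ x[:i]) / L_mat[i, i]
    return x

L_mat = np.array([[2.0, 0.0, 0.0],
                  [1.0, 3.0, 0.0],
                  [4.0, 5.0, 6.0]])
b = np.array([2.0, 7.0, 32.0])
print(forward_substitution(L_mat, b))  # [1. 2. 3.]
```

For an upper triangular matrix, the same loop simply runs from the bottom row upwards (back substitution).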
1.1.1.3 Orthogonal matrices
A real square matrix A of order n is said to be orthogonal if it verifies one of the following equivalent properties:

• $A^t \cdot A = I_n$.
• $A \cdot A^t = I_n$.
• $A$ is invertible and $A^{-1} = A^t$.

The determinant of an orthogonal matrix has square 1, i.e. it is equal to +1 or −1 (the converse is trivially false: a determinant of ±1 does not imply orthogonality). An orthogonal matrix is said to be direct if its determinant is +1 and indirect if it is −1.
It follows that when the matrix A is orthogonal, the solution is obvious, since the inverse of A is its transpose. In this case the system $A \cdot x = b$ has the solution $x = A^t \cdot b$.
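A minimal sketch (our own example, using a 2-D rotation matrix, which is orthogonal):

```python
import numpy as np

theta = 0.3
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # rotation matrix => orthogonal
b = np.array([1.0, 2.0])
x = A.T @ b                              # x = A^t . b, no factorization needed
print(np.allclose(A @ x, b))             # True
print(np.isclose(np.linalg.det(A), 1.0)) # det = +1: a direct orthogonal matrix
```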
1.1.1.4 Row and column permutations
When solving linear systems, row or column permutations may be required to avoid divisions by zero or to improve the quality of the solution. It is therefore useful to recall the consequences of swapping rows and/or columns.
When you have a system of the form:

\[
\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}
\cdot
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix},
\]
it is possible to swap rows or columns in the following way without changing the result.
Swapping two rows (e.g. rows 2 and 3):
\[
\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{31} & a_{32} & a_{33} \\ a_{21} & a_{22} & a_{23} \end{pmatrix}
\cdot
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_3 \\ b_2 \end{pmatrix}.
\]
In this case, simply swap the rows of the matrix A and of the vector b.
Swapping two columns (e.g. columns 2 and 3):
\[
\begin{pmatrix} a_{11} & a_{13} & a_{12} \\ a_{21} & a_{23} & a_{22} \\ a_{31} & a_{33} & a_{32} \end{pmatrix}
\cdot
\begin{pmatrix} x_1 \\ x_3 \\ x_2 \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}.
\]
In this case, you must swap the two columns of the matrix A, but you must also swap the corresponding rows of the vector x. So if columns i and j are swapped in A, rows i and j of the vector x must also be swapped.
1.2 Direct methods
Direct methods are generally used when the linear system to be solved is small (typically up to about 1000 unknowns).
1.2.1 Principles
The principle of direct methods is to reduce any matrix A describing the linear system to one of the simple systems seen in paragraph §1.1.1. To do this, it is sufficient to carry out linear operations. For example, two particularly well-known methods can be mentioned:

• The Gauss-Jordan method, in which the matrix A is transformed by successive operations into a diagonal matrix.
• The Gauss method, in which the matrix A is transformed into an upper triangular matrix.

In fact, these methods, often seen in mathematics classes, are not used, for two main reasons:

• They are expensive in terms of calculation time (Gauss-Jordan in particular).
• The linear operations performed modify the right-hand side; therefore, when solving with several right-hand sides, all operations must be repeated.
1.2.2 LU decomposition
This resolution technique is the most common. It solves a linear system algebraically in a minimum of operations, for an arbitrary matrix and regardless of the number of right-hand sides.
1.2.2.1 Principle
This decomposition consists in trying to write A as $A = L \cdot U$ with:

\[
L = \begin{pmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{pmatrix},
\qquad
U = \begin{pmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{pmatrix}.
\]
When the matrices L and U are known, the resolution of the linear system is performed by solving two triangular systems:

\[
L \cdot z = b, \qquad U \cdot x = z.
\]
1.2.2.2 Decomposition of the matrix A
In this context, we have:

\[
A = L \cdot U =
\begin{pmatrix}
u_{11} & u_{12} & u_{13} \\
l_{21} u_{11} & l_{21} u_{12} + u_{22} & l_{21} u_{13} + u_{23} \\
l_{31} u_{11} & l_{31} u_{12} + l_{32} u_{22} & l_{31} u_{13} + l_{32} u_{23} + u_{33}
\end{pmatrix}
=
\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}.
\]
We can see that the matrices L and U can be built quite simply. Algorithm 1 builds these two matrices (n is the number of rows).
Algorithm 1: LU decomposition
input: Matrix A
output: Matrices L and U such that A = L · U

    for j ← 1 to n − 1 do
        for i ← 1 to j do
            u_{ij} ← a_{ij} − Σ_{k=1}^{i−1} l_{ik} · u_{kj}
        end
        for i ← j + 1 to n do
            l_{ij} ← (1 / u_{jj}) · ( a_{ij} − Σ_{k=1}^{j−1} l_{ik} · u_{kj} )
        end
    end
    (special treatment for the elements of column n)
    for i ← 1 to n do
        u_{in} ← a_{in} − Σ_{k=1}^{i−1} l_{ik} · u_{kn}
    end
It should be noted that this decomposition has a cost of about n³, since three loops are nested. This cost is almost identical to that of a Gauss elimination, but the LU decomposition does not involve the right-hand side. In general, the matrices L and U are physically stored in the same matrix, so as not to store the unnecessary 0s and 1s; the algorithm presented here does not store the 1s on the diagonal of L. In this algorithm we see that there is a division by a diagonal term. When the latter is zero (or very small), a permutation of rows and/or columns must be performed in order to bring a non-zero term into that position. In general, it is advisable to always search for the largest possible diagonal term. However, this can be very expensive in terms of calculation time; therefore, in practice, this operation is only performed when the pivot is very close to 0 (numerically zero).
Once the decomposition is performed, the product of the diagonal terms of the matrix U gives the determinant of the matrix A.
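As an illustration, here is a minimal Python sketch of Algorithm 1 (without pivoting; the function name is ours, and the last column is handled by the same loop instead of a special case):

```python
import numpy as np

def lu_decompose(A: np.ndarray):
    """LU decomposition without pivoting, following Algorithm 1 column by column."""
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for j in range(n):
        for i in range(j + 1):        # terms of U in column j
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for i in range(j + 1, n):     # terms of L: division by the pivot u_jj
            L[i, j] = (A[i, j] - L[i, :j] @ U[:j, j]) / U[j, j]
    return L, U

A = np.array([[4.0, 3.0, 2.0],
              [8.0, 7.0, 9.0],
              [4.0, 6.0, 5.0]])
L, U = lu_decompose(A)
print(np.allclose(L @ U, A))   # True
print(np.prod(np.diag(U)))     # -48.0, the determinant of A
```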
1.2.2.3 Resolution of triangular systems
Since the matrices L and U are triangular, the resolution of the matrix system $A \cdot x = b$ is particularly easy. Algorithm 2 presents this resolution.
Algorithm 2: Resolution of the triangular systems
input: Matrices L and U, vector b
output: Solution vector x

    z_1 ← b_1
    for i ← 2 to n do
        z_i ← b_i − Σ_{j=1}^{i−1} l_{ij} · z_j
    end
    x_n ← z_n / u_{nn}
    for i ← n − 1 downto 1 do
        x_i ← (1 / u_{ii}) · ( z_i − Σ_{j=i+1}^{n} u_{ij} · x_j )
    end

The cost of this operation is about n². So, when there are several right-hand sides, this method is particularly interesting. To invert a matrix, it is sufficient to solve n linear systems using the columns of the identity matrix as right-hand sides. This leads to an inversion cost of about n³, while the same approach with a Gauss method would lead to a cost of about n⁴.

1.2.2.4 Other methods

There are other decomposition methods available to solve linear systems. The best known are the following:
• Cholesky decomposition: this decomposition applies only to symmetric positive definite matrices. It is similar to the LU method, but given the properties of the matrix A, the matrix U is simply the transpose of the matrix L (with all the simplifications that follow, in number of operations and in memory space, etc.).

• QR decomposition: this decomposition also solves linear systems, but the number of operations is higher than for an LU-type method (about twice as many). However, this method is particularly interesting because it can be used to solve overdetermined systems (more equations than unknowns) and to determine the eigenvalues and eigenvectors of a matrix. It should be noted that there are methods other than the QR decomposition to solve overdetermined systems and to evaluate the eigenvalues and eigenvectors of a matrix.
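In the spirit of the introduction (use existing tools developed by professionals), these factorizations are available in standard libraries. A minimal sketch with SciPy, reusing one LU factorization for several right-hand sides:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0, 2.0],
              [8.0, 7.0, 9.0],
              [4.0, 6.0, 5.0]])
lu, piv = lu_factor(A)             # O(n^3), done once (with partial pivoting)
for b in (np.array([1.0, 0.0, 0.0]),
          np.array([0.0, 1.0, 0.0])):
    x = lu_solve((lu, piv), b)     # O(n^2) per right-hand side
    print(np.allclose(A @ x, b))   # True, True
```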
1.3 Iterative methods
These methods are reserved for large linear systems (many unknowns). They generally have:

• A stopping criterion providing a stop condition for the algorithm. Typically, this criterion consists of comparing the result of iteration i with that of iteration i − 1; if the difference is smaller than the criterion, the solution is considered found.
• A maximum number of iterations, to handle the case where the stopping criterion is never reached (or reached too slowly).
• A starting point.
1.3.1 Introduction
Before discussing an iterative method for solving a linear system, this paragraph presents (recalls) the principle of an iterative method on the simple resolution of a nonlinear equation with one unknown. First of all, let us change the form of the equation to bring $f(x) = 0$ to an equation of the form $g(x) = x$ (for example by adding x to the left and right of $f(x)$: $f(x) + x = x$). The solution of the equation is then reduced to finding the intersection between the line $y = x$ and the function $g(x)$. The intersection can be obtained as follows:

• Choice of an initial value $x_0$.
• Iteration 1: determination of the value $x_1$ by applying $x_1 = g(x_0)$.
• Iteration 2: determination of the value $x_2$ by applying $x_2 = g(x_1)$.
• ...
• This process continues until convergence.

When we talk about convergence, the first legitimate question to ask is whether the method converges towards the solution. To answer it, it is enough to draw a few scenarios in order to see the behaviour of this iterative method.
[Three cobweb plots of y = g(x) against the line y = x, showing the iterates x0, x1, x2, ..., xi: (a) Convergence, (b) Convergence (slope close to −1), (c) Divergence.]
Figure 1.1: Illustration of the evolution of the iterative method.
Figure 1.1 presents both behaviours. In figures 1.1(a) and 1.1(b), the iterative process approaches the intersection $x_i$. On the other hand, in the case of figure 1.1(c), the iterative process moves away from the solution.
In fact, for this method to converge, it is necessary to have $\left| \frac{dg(x)}{dx} \right|_{x = x_i} < 1$ (a necessary but not sufficient condition). Of course, since we do not know $x_i$, it is impossible to check whether this condition is verified. There is therefore no guarantee that a solution will be found. This observation can be generalized to all iterative methods.
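A minimal sketch of this fixed-point iteration (our own example: g(x) = cos(x), for which |g'(x)| < 1 near the solution):

```python
import math

x = 1.0                    # initial value x0
for k in range(100):
    x_new = math.cos(x)    # x_{k+1} = g(x_k)
    if abs(x_new - x) < 1e-10:
        break              # stopping criterion met
    x = x_new
print(x_new)               # ~0.739085, the solution of x = cos(x)
```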
1.3.2 Gauss-Seidel method

A sufficient condition for this algorithm to converge is that the matrix A describing the linear system be diagonally dominant. A matrix is said to be diagonally dominant if for every i we have:

\[
|a_{ii}| \geq \sum_{\substack{j=1 \\ j \neq i}}^{n} |a_{ij}|.
\]

1.3.2.1 Principle
To illustrate this method, let us take an example of 3 equations with 3 unknowns:

\[
\begin{cases}
a_{11} \cdot x_1 + a_{12} \cdot x_2 + a_{13} \cdot x_3 = b_1 \\
a_{21} \cdot x_1 + a_{22} \cdot x_2 + a_{23} \cdot x_3 = b_2 \\
a_{31} \cdot x_1 + a_{32} \cdot x_2 + a_{33} \cdot x_3 = b_3
\end{cases}
\]
The idea is to solve the first equation for $x_1$, the second for $x_2$, and so on, using for the current iteration either the results of the previous iteration or those of the current iteration ($x_1$, $x_2$, ... come from the previous iteration and $x_1'$, $x_2'$, ... from the current iteration):

\[
\begin{cases}
x_1' = \dfrac{b_1 - a_{12} \cdot x_2 - a_{13} \cdot x_3}{a_{11}} \\[2mm]
x_2' = \dfrac{b_2 - a_{21} \cdot x_1' - a_{23} \cdot x_3}{a_{22}} \\[2mm]
x_3' = \dfrac{b_3 - a_{31} \cdot x_1' - a_{32} \cdot x_2'}{a_{33}}
\end{cases}
\]
1.3.2.2 Algorithm
The algorithm of the Gauss-Seidel method is given in Algorithm 3.
Algorithm 3: Gauss-Seidel
input: Matrix A, vectors b and x0, stopping criterion ε and maximum iteration number k_max
output: Solution vector x and iteration number k

    x ← x0 ; k ← 0
    while ε_c > ε and k < k_max do
        for i ← 1 to n do
            x_i' ← (1 / a_{ii}) · ( b_i − Σ_{j=1}^{i−1} a_{ij} · x_j' − Σ_{j=i+1}^{n} a_{ij} · x_j )
        end
        update ε_c from x' and x (see the stopping criteria below) ; x ← x' ; k ← k + 1
    end
The choice of the stopping criterion is a little more complicated here than in the case of an equation with one unknown, since we deal with vectors. Here are two possibilities (there are others):
• $\sum_{i=1}^{n} |x_i' - x_i| \leq \epsilon$ (n being the number of rows and ε the stopping criterion). This technique is particularly fast, but the accuracy is not well controlled and, especially in the case of slow convergence, there is a risk of stopping at a wrong solution. The same principle can be applied to each row, verifying the convergence criterion row by row instead of summing.
• Use of the residual norm as a stopping criterion: $\| b - A \cdot x' \| < \epsilon$. This often-used method is particularly effective if the residual is known; otherwise its calculation has a cost, since a matrix-vector product must be carried out.
There is also a technique to accelerate convergence. It consists in applying a relaxation coefficient once the term $x_i'$ has been calculated: $x_i' \leftarrow x_i + \alpha \cdot (x_i' - x_i)$. If α > 1 (over-relaxation), convergence is faster, but if α is too large (usually α > 2) the system is likely to diverge strongly. If α < 1 (under-relaxation), the method may converge in some cases where it would otherwise diverge. This technique is also valid for all other iterative methods, but it is rarely used because of the particularly delicate tuning of the coefficient α.
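A minimal Python sketch of Algorithm 3, using the first stopping criterion above (the function name and the test system, chosen diagonally dominant, are ours):

```python
import numpy as np

def gauss_seidel(A, b, x0, eps=1e-10, k_max=1000):
    """Gauss-Seidel iteration, stopping when sum_i |x_i' - x_i| <= eps."""
    x = x0.astype(float).copy()
    n = len(b)
    for k in range(1, k_max + 1):
        x_old = x.copy()
        for i in range(n):
            # updated values x[:i] (current sweep), old values x_old[i+1:]
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.sum(np.abs(x - x_old)) <= eps:
            break
    return x, k

A = np.array([[4.0, 1.0, 1.0],   # diagonally dominant => convergence
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])
x, k = gauss_seidel(A, b, np.zeros(3))
print(x, k, np.allclose(A @ x, b))  # x ~ [1, 1, 1]
```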
1.3.3 Other methods
The iterative algorithm families most commonly used to solve linear systems are descent methods and methods based on the construction of a Krylov subspace. The most common are:

• The conjugate gradient method (a descent method for positive definite matrices).
• The biconjugate gradient method or GMRES (for arbitrary matrices).