Introduction
Solvers of linear systems are among the most important algorithms in scientific computing. TNL offers the following iterative methods:
- Stationary methods
- Jacobi method (TNL::Solvers::Linear::Jacobi)
- Successive-overrelaxation method, SOR (TNL::Solvers::Linear::SOR)
- Krylov subspace methods
- Conjugate gradient method, CG (TNL::Solvers::Linear::CG)
- Biconjugate gradient stabilized method, BICGStab (TNL::Solvers::Linear::BICGStab)
- Biconjugate gradient stabilized method, BICGStab(l) (TNL::Solvers::Linear::BICGStabL)
- Transpose-free quasi-minimal residual method, TFQMR (TNL::Solvers::Linear::TFQMR)
- Generalized minimal residual method, GMRES (TNL::Solvers::Linear::GMRES) with various methods of orthogonalization:
- Classical Gram-Schmidt, CGS
- Classical Gram-Schmidt with reorthogonalization, CGSR
- Modified Gram-Schmidt, MGS
- Modified Gram-Schmidt with reorthogonalization, MGSR
- Compact WY form of the Householder reflections, CWY
The Krylov subspace solvers (but not the stationary solvers TNL::Solvers::Linear::Jacobi and TNL::Solvers::Linear::SOR) can be combined with the following preconditioners:
- Diagonal or Jacobi (TNL::Solvers::Linear::Preconditioners::Diagonal)
- ILU (incomplete LU) preconditioners, currently available only on the CPU
- ILU(0) (TNL::Solvers::Linear::Preconditioners::ILU0)
- ILUT (ILU with thresholding) (TNL::Solvers::Linear::Preconditioners::ILUT)
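To illustrate what a stationary method actually computes, the following library-free sketch implements the Jacobi iteration \( x_{k+1} = D^{-1}(b - (A - D) x_k) \) for a dense matrix. This is plain C++ for illustration only, not the TNL API; in TNL the same method is provided by TNL::Solvers::Linear::Jacobi.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Minimal Jacobi iteration for a dense matrix A: each sweep solves the i-th
// equation for x[i] using the previous iterate for all other unknowns.
// Converges for strictly diagonally dominant matrices.
std::vector< double >
jacobi( const std::vector< std::vector< double > >& A,
        const std::vector< double >& b,
        double tolerance,
        int maxIterations )
{
   const std::size_t n = b.size();
   std::vector< double > x( n, 0.0 ), y( n, 0.0 );
   for( int k = 0; k < maxIterations; k++ ) {
      double diff = 0.0;
      for( std::size_t i = 0; i < n; i++ ) {
         double sum = b[ i ];
         for( std::size_t j = 0; j < n; j++ )
            if( j != i )
               sum -= A[ i ][ j ] * x[ j ];
         y[ i ] = sum / A[ i ][ i ];
         diff = std::max( diff, std::abs( y[ i ] - x[ i ] ) );
      }
      x = y;
      if( diff < tolerance )  // stop when successive iterates agree
         break;
   }
   return x;
}
```

Applied to the tridiagonal system used throughout this tutorial, the iteration converges to the vector of ones.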
Iterative solvers of linear systems
Basic setup
All iterative solvers for linear systems can be found in the namespace TNL::Solvers::Linear. The following example shows the use of the iterative solvers:
```cpp
#include <iostream>
#include <memory>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Host.h>
#include <TNL/Devices/Cuda.h>
#include <TNL/Solvers/Linear/TFQMR.h>

template< typename Device >
void
iterativeLinearSolverExample()
{
   // Set up a sparse 5x5 tridiagonal matrix.
   using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;
   using Vector = TNL::Containers::Vector< double, Device >;
   const int size( 5 );
   auto matrix_ptr = std::make_shared< MatrixType >();
   matrix_ptr->setDimensions( size, size );
   matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );

   auto f = [ = ] __cuda_callable__( typename MatrixType::RowView& row ) mutable
   {
      const int rowIdx = row.getRowIndex();
      if( rowIdx == 0 ) {
         row.setElement( 0, rowIdx, 2.5 );     // diagonal element
         row.setElement( 1, rowIdx + 1, -1 );  // element above the diagonal
      }
      else if( rowIdx == size - 1 ) {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
      }
      else {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
         row.setElement( 2, rowIdx + 1, -1.0 );  // element above the diagonal
      }
   };

   // Set the matrix elements.
   matrix_ptr->forAllRows( f );
   std::cout << *matrix_ptr << std::endl;

   // Set b = A * (1,...,1)^T and reset x to zero.
   Vector x( size, 1.0 );
   Vector b( size );
   matrix_ptr->vectorProduct( x, b );
   x = 0.0;
   std::cout << "Vector b = " << b << std::endl;

   // Solve the linear system using the TFQMR method.
   using LinearSolver = TNL::Solvers::Linear::TFQMR< MatrixType >;
   LinearSolver solver;
   solver.setMatrix( matrix_ptr );
   solver.setConvergenceResidue( 1.0e-6 );
   solver.solve( b, x );
   std::cout << "Vector x = " << x << std::endl;
}

int
main( int argc, char* argv[] )
{
   std::cout << "Solving linear system on host: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Sequential >();

#ifdef __CUDACC__
   std::cout << "Solving linear system on CUDA device: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Cuda >();
#endif
}
```
In this example we solve a linear system \( A \vec x = \vec b \) where
\[A = \left(
\begin{array}{ccccc}
2.5 & -1  &     &     &     \\
-1  & 2.5 & -1  &     &     \\
    & -1  & 2.5 & -1  &     \\
    &     & -1  & 2.5 & -1  \\
    &     &     & -1  & 2.5 \\
\end{array}
\right)
\]
The right-hand side vector \(\vec b \) is set to \(( 1.5, 0.5, 0.5, 0.5, 1.5 )^T \) so that the exact solution is \( \vec x = ( 1, 1, 1, 1, 1 )^T \). The elements of the matrix \( A \) are set using the method TNL::Matrices::SparseMatrix::forAllRows. In this example, we use a sparse matrix, but any other matrix type can be used as well (see the namespace TNL::Matrices). Next we set the solution vector \( \vec x = ( 1, 1, 1, 1, 1 )^T \) and multiply it with the matrix \( A \) to get the right-hand side vector \( \vec b \). Finally, we reset the vector \( \vec x \) to zero.
To solve the linear system in the example, we use the TFQMR solver. Other solvers can be used as well (see the namespace TNL::Solvers::Linear). The solver needs only one template parameter, which is the matrix type. Next we create an instance of the solver and set the matrix of the linear system. Note that the matrix is passed to the solver as a std::shared_ptr. Then we set the stopping criterion for the iterative method in terms of the relative residue, i.e. \( \lVert \vec b - A \vec x \rVert / \lVert \vec b \rVert \). The solver is executed by calling the TNL::Solvers::Linear::LinearSolver::solve method, which accepts the right-hand side vector \( \vec b \) and the solution vector \( \vec x \).
The result looks as follows:
Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]
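The stopping criterion described above can also be checked by hand. The following library-free sketch (plain C++, not the TNL API) computes the relative residue \( \lVert \vec b - A \vec x \rVert / \lVert \vec b \rVert \) that setConvergenceResidue tests:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Relative residue || b - A x || / || b || in the Euclidean norm for a dense
// matrix A; this is the quantity compared against the convergence residue.
double
relativeResidue( const std::vector< std::vector< double > >& A,
                 const std::vector< double >& x,
                 const std::vector< double >& b )
{
   double r2 = 0.0, b2 = 0.0;
   for( std::size_t i = 0; i < b.size(); i++ ) {
      double Ax_i = 0.0;
      for( std::size_t j = 0; j < x.size(); j++ )
         Ax_i += A[ i ][ j ] * x[ j ];
      r2 += ( b[ i ] - Ax_i ) * ( b[ i ] - Ax_i );  // accumulate || b - A x ||^2
      b2 += b[ i ] * b[ i ];                        // accumulate || b ||^2
   }
   return std::sqrt( r2 / b2 );
}
```

For the system above, the exact solution \( \vec x = ( 1, 1, 1, 1, 1 )^T \) gives a relative residue of zero, while the initial guess \( \vec x = 0 \) gives exactly one.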
Setup with a solver monitor
Solution of large linear systems may take a lot of time. In such situations, it is useful to be able to monitor the convergence of the solver or the solver status in general. For this purpose, TNL provides a solver monitor which can show various metrics in real time, such as current number of iterations, current residue of the approximate solution, etc. The solver monitor in TNL runs in a separate thread and it refreshes the status of the solver with a configurable refresh rate (once per 500 ms by default). The use of the solver monitor is demonstrated in the following example.
```cpp
#include <iostream>
#include <memory>
#include <chrono>
#include <thread>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Sequential.h>
#include <TNL/Devices/Cuda.h>
#include <TNL/Solvers/Linear/Jacobi.h>

template< typename Device >
void
iterativeLinearSolverExample()
{
   // Set up the same sparse 5x5 tridiagonal matrix as in the previous example.
   using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;
   using Vector = TNL::Containers::Vector< double, Device >;
   const int size( 5 );
   auto matrix_ptr = std::make_shared< MatrixType >();
   matrix_ptr->setDimensions( size, size );
   matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );

   auto f = [ = ] __cuda_callable__( typename MatrixType::RowView& row ) mutable
   {
      const int rowIdx = row.getRowIndex();
      if( rowIdx == 0 ) {
         row.setElement( 0, rowIdx, 2.5 );     // diagonal element
         row.setElement( 1, rowIdx + 1, -1 );  // element above the diagonal
      }
      else if( rowIdx == size - 1 ) {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
      }
      else {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
         row.setElement( 2, rowIdx + 1, -1.0 );  // element above the diagonal
      }
   };

   // Set the matrix elements.
   matrix_ptr->forAllRows( f );
   std::cout << *matrix_ptr << std::endl;

   // Set b = A * (1,...,1)^T and reset x to zero.
   Vector x( size, 1.0 );
   Vector b( size );
   matrix_ptr->vectorProduct( x, b );
   x = 0.0;
   std::cout << "Vector b = " << b << std::endl;

   // Solve the linear system using the Jacobi method.
   using LinearSolver = TNL::Solvers::Linear::Jacobi< MatrixType >;
   LinearSolver solver;
   solver.setMatrix( matrix_ptr );
   solver.setOmega( 0.0005 );

   // Set up the solver monitor and launch its main loop in a separate thread.
   using IterativeSolverMonitorType = TNL::Solvers::IterativeSolverMonitor< double, int >;
   IterativeSolverMonitorType monitor;
   TNL::Solvers::SolverMonitorThread monitorThread( monitor );
   monitor.setRefreshRate( 10 );
   monitor.setVerbose( 1 );
   monitor.setStage( "Jacobi stage:" );
   solver.setSolverMonitor( monitor );
   solver.setConvergenceResidue( 1.0e-6 );
   solver.solve( b, x );
   monitor.stopMainLoop();
   std::cout << "Vector x = " << x << std::endl;
}

int
main( int argc, char* argv[] )
{
   std::cout << "Solving linear system on host: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Sequential >();

#ifdef __CUDACC__
   std::cout << "Solving linear system on CUDA device: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Cuda >();
#endif
}
```
First, we set up the same linear system as in the previous example, we create an instance of the Jacobi solver and we pass the matrix of the linear system to the solver. Then, we set the relaxation parameter \( \omega \) of the Jacobi solver to 0.0005. The reason is to artificially slow down the convergence, because we want to see some iterations in this example. Next, we create an instance of the solver monitor and a special thread for the monitor (an instance of the TNL::Solvers::SolverMonitorThread class). We use the following methods to configure the solver monitor:
- TNL::Solvers::SolverMonitor::setRefreshRate to set how often the solver status is refreshed,
- TNL::Solvers::IterativeSolverMonitor::setVerbose to set the verbosity of the monitor,
- TNL::Solvers::IterativeSolverMonitor::setStage to set the name of the current stage of the computation.
Next, we call TNL::Solvers::IterativeSolver::setSolverMonitor to connect the solver with the monitor and we set the convergence criterion based on the relative residue. Finally, we start the solver using the TNL::Solvers::Linear::Jacobi::solve method and when the solver finishes, we stop the monitor using TNL::Solvers::SolverMonitor::stopMainLoop.
The result looks as follows:
Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Jacobi stage: ITER: 40899 RES: 0.0013506
Jacobi stage: ITER: 82863 RES: 2.1425e-06
Vector x = [ 0.999999, 0.999999, 0.999998, 0.999999, 0.999999 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Jacobi stage: ITER: 5675 RES: 0.30444
Jacobi stage: ITER: 10887 RES: 0.13573
Jacobi stage: ITER: 11675 RES: 0.12025
Jacobi stage: ITER: 12349 RES: 0.10838
Jacobi stage: ITER: 12987 RES: 0.098292
Jacobi stage: ITER: 13591 RES: 0.089582
Jacobi stage: ITER: 14131 RES: 0.082451
Jacobi stage: ITER: 14747 RES: 0.075006
Jacobi stage: ITER: 15355 RES: 0.068318
Jacobi stage: ITER: 15967 RES: 0.062189
Jacobi stage: ITER: 16583 RES: 0.056574
Jacobi stage: ITER: 17195 RES: 0.051498
Jacobi stage: ITER: 17811 RES: 0.046849
Jacobi stage: ITER: 18429 RES: 0.042593
Jacobi stage: ITER: 19051 RES: 0.038724
Jacobi stage: ITER: 19589 RES: 0.035642
Jacobi stage: ITER: 20225 RES: 0.032324
Jacobi stage: ITER: 20853 RES: 0.029352
Jacobi stage: ITER: 21445 RES: 0.026801
Jacobi stage: ITER: 22049 RES: 0.024426
Jacobi stage: ITER: 22663 RES: 0.022235
Jacobi stage: ITER: 23303 RES: 0.020153
Jacobi stage: ITER: 23939 RES: 0.018277
Jacobi stage: ITER: 24575 RES: 0.016576
Jacobi stage: ITER: 25211 RES: 0.015033
Jacobi stage: ITER: 25849 RES: 0.013626
Jacobi stage: ITER: 26487 RES: 0.012358
Jacobi stage: ITER: 27123 RES: 0.011208
Jacobi stage: ITER: 27759 RES: 0.010164
Jacobi stage: ITER: 28395 RES: 0.0092184
Jacobi stage: ITER: 29031 RES: 0.0083605
Jacobi stage: ITER: 29667 RES: 0.0075823
Jacobi stage: ITER: 30305 RES: 0.0068724
Jacobi stage: ITER: 30941 RES: 0.0062328
Jacobi stage: ITER: 31577 RES: 0.0056527
Jacobi stage: ITER: 32213 RES: 0.0051266
Jacobi stage: ITER: 32845 RES: 0.0046523
Jacobi stage: ITER: 33475 RES: 0.0042245
Jacobi stage: ITER: 34111 RES: 0.0038313
Jacobi stage: ITER: 34743 RES: 0.0034769
Jacobi stage: ITER: 35375 RES: 0.0031552
Jacobi stage: ITER: 36007 RES: 0.0028633
Jacobi stage: ITER: 36639 RES: 0.0025984
Jacobi stage: ITER: 37271 RES: 0.0023581
Jacobi stage: ITER: 37903 RES: 0.0021399
Jacobi stage: ITER: 38533 RES: 0.0019419
Jacobi stage: ITER: 39167 RES: 0.0017623
Jacobi stage: ITER: 39799 RES: 0.0015992
Jacobi stage: ITER: 40429 RES: 0.0014513
Jacobi stage: ITER: 41061 RES: 0.001317
Jacobi stage: ITER: 41695 RES: 0.0011952
Jacobi stage: ITER: 42327 RES: 0.0010846
Jacobi stage: ITER: 42959 RES: 0.00098427
Jacobi stage: ITER: 43591 RES: 0.00089322
Jacobi stage: ITER: 44223 RES: 0.00081058
Jacobi stage: ITER: 44855 RES: 0.00073559
Jacobi stage: ITER: 45487 RES: 0.00066754
Jacobi stage: ITER: 46119 RES: 0.00060578
Jacobi stage: ITER: 46751 RES: 0.00054974
Jacobi stage: ITER: 47383 RES: 0.00049888
Jacobi stage: ITER: 48015 RES: 0.00045273
Jacobi stage: ITER: 48649 RES: 0.00041059
Jacobi stage: ITER: 49281 RES: 0.00037261
Jacobi stage: ITER: 49913 RES: 0.00033814
Jacobi stage: ITER: 50547 RES: 0.00030686
Jacobi stage: ITER: 51179 RES: 0.00027847
Jacobi stage: ITER: 51811 RES: 0.00025271
Jacobi stage: ITER: 52443 RES: 0.00022933
Jacobi stage: ITER: 53075 RES: 0.00020811
Jacobi stage: ITER: 53705 RES: 0.00018886
Jacobi stage: ITER: 54337 RES: 0.00017139
Jacobi stage: ITER: 54977 RES: 0.00015534
Jacobi stage: ITER: 55611 RES: 0.00014097
Jacobi stage: ITER: 56243 RES: 0.00012793
Jacobi stage: ITER: 56879 RES: 0.00011602
Jacobi stage: ITER: 57517 RES: 0.00010516
Jacobi stage: ITER: 58153 RES: 9.5371e-05
Jacobi stage: ITER: 58789 RES: 8.6495e-05
Jacobi stage: ITER: 59427 RES: 7.8445e-05
Jacobi stage: ITER: 60063 RES: 7.1144e-05
Jacobi stage: ITER: 60699 RES: 6.4522e-05
Jacobi stage: ITER: 61337 RES: 5.8481e-05
Jacobi stage: ITER: 61975 RES: 5.3038e-05
Jacobi stage: ITER: 62611 RES: 4.8102e-05
Jacobi stage: ITER: 63247 RES: 4.3625e-05
Jacobi stage: ITER: 63883 RES: 3.9565e-05
Jacobi stage: ITER: 64521 RES: 3.5861e-05
Jacobi stage: ITER: 65157 RES: 3.2523e-05
Jacobi stage: ITER: 65795 RES: 2.9496e-05
Jacobi stage: ITER: 66429 RES: 2.6751e-05
Jacobi stage: ITER: 67059 RES: 2.4291e-05
Jacobi stage: ITER: 67691 RES: 2.2044e-05
Jacobi stage: ITER: 68323 RES: 2.0004e-05
Jacobi stage: ITER: 68957 RES: 1.8143e-05
Jacobi stage: ITER: 69589 RES: 1.6464e-05
Jacobi stage: ITER: 70221 RES: 1.4941e-05
Jacobi stage: ITER: 70853 RES: 1.3559e-05
Jacobi stage: ITER: 71483 RES: 1.2312e-05
Jacobi stage: ITER: 72119 RES: 1.1166e-05
Jacobi stage: ITER: 72755 RES: 1.0127e-05
Jacobi stage: ITER: 73391 RES: 9.1843e-06
Jacobi stage: ITER: 74029 RES: 8.3244e-06
Jacobi stage: ITER: 74665 RES: 7.5497e-06
Jacobi stage: ITER: 75303 RES: 6.847e-06
Jacobi stage: ITER: 75939 RES: 6.2098e-06
Jacobi stage: ITER: 76577 RES: 5.6284e-06
Jacobi stage: ITER: 77213 RES: 5.1045e-06
Jacobi stage: ITER: 77849 RES: 4.6294e-06
Jacobi stage: ITER: 78487 RES: 4.1986e-06
Jacobi stage: ITER: 79123 RES: 3.8078e-06
Jacobi stage: ITER: 79759 RES: 3.4534e-06
Jacobi stage: ITER: 80399 RES: 3.1301e-06
Jacobi stage: ITER: 81035 RES: 2.8388e-06
Jacobi stage: ITER: 81673 RES: 2.573e-06
Jacobi stage: ITER: 82309 RES: 2.3335e-06
Jacobi stage: ITER: 82943 RES: 2.1176e-06
Jacobi stage: ITER: 83575 RES: 1.9217e-06
Jacobi stage: ITER: 84207 RES: 1.7439e-06
Jacobi stage: ITER: 84839 RES: 1.5826e-06
Jacobi stage: ITER: 85471 RES: 1.4362e-06
Jacobi stage: ITER: 86103 RES: 1.3033e-06
Jacobi stage: ITER: 86735 RES: 1.1827e-06
Jacobi stage: ITER: 87371 RES: 1.0727e-06
Vector x = [ 0.999999, 0.999999, 0.999998, 0.999999, 0.999999 ]
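The separate-thread design of the monitor can be sketched without TNL. In the following library-free sketch (the names are illustrative, not the TNL API), a worker loop updates shared atomic state while a monitor thread periodically reads and reports it, mirroring how the solver and SolverMonitorThread cooperate:

```cpp
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

// Minimal sketch of the monitoring pattern: the "solver" updates an atomic
// iteration counter while a monitor thread periodically prints it. Returns
// the final iteration count.
int
monitoredComputation()
{
   std::atomic< int > iteration{ 0 };
   std::atomic< bool > done{ false };

   // the monitor's main loop, running in a separate thread
   std::thread monitor( [ & ] {
      while( ! done ) {
         std::cout << "ITER: " << iteration << '\n';
         std::this_thread::sleep_for( std::chrono::milliseconds( 10 ) );
      }
   } );

   // the "solver" loop: here it just counts iterations
   for( int i = 0; i < 100000; i++ )
      iteration = i + 1;

   done = true;     // analogous to stopMainLoop()
   monitor.join();  // wait for the monitor thread to finish
   return iteration;
}
```

Because the monitor only samples the counter, it reports a subset of the iterations, just as the TNL monitor above reports the residue at its refresh rate rather than at every iteration.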
The monitoring of the solver can be improved by showing the time elapsed since the beginning of the computation, as demonstrated in the following example:
```cpp
#include <iostream>
#include <memory>
#include <chrono>
#include <thread>
#include <TNL/Timer.h>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Sequential.h>
#include <TNL/Devices/Cuda.h>
#include <TNL/Solvers/Linear/Jacobi.h>

template< typename Device >
void
iterativeLinearSolverExample()
{
   // Set up the same sparse 5x5 tridiagonal matrix as in the previous example.
   using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;
   using Vector = TNL::Containers::Vector< double, Device >;
   const int size( 5 );
   auto matrix_ptr = std::make_shared< MatrixType >();
   matrix_ptr->setDimensions( size, size );
   matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );

   auto f = [ = ] __cuda_callable__( typename MatrixType::RowView& row ) mutable
   {
      const int rowIdx = row.getRowIndex();
      if( rowIdx == 0 ) {
         row.setElement( 0, rowIdx, 2.5 );     // diagonal element
         row.setElement( 1, rowIdx + 1, -1 );  // element above the diagonal
      }
      else if( rowIdx == size - 1 ) {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
      }
      else {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
         row.setElement( 2, rowIdx + 1, -1.0 );  // element above the diagonal
      }
   };

   // Set the matrix elements.
   matrix_ptr->forAllRows( f );
   std::cout << *matrix_ptr << std::endl;

   // Set b = A * (1,...,1)^T and reset x to zero.
   Vector x( size, 1.0 );
   Vector b( size );
   matrix_ptr->vectorProduct( x, b );
   x = 0.0;
   std::cout << "Vector b = " << b << std::endl;

   // Solve the linear system using the Jacobi method.
   using LinearSolver = TNL::Solvers::Linear::Jacobi< MatrixType >;
   LinearSolver solver;
   solver.setMatrix( matrix_ptr );
   solver.setOmega( 0.0005 );

   // Set up the solver monitor with a timer measuring the elapsed time.
   using IterativeSolverMonitorType = TNL::Solvers::IterativeSolverMonitor< double, int >;
   IterativeSolverMonitorType monitor;
   TNL::Solvers::SolverMonitorThread monitorThread( monitor );
   monitor.setRefreshRate( 10 );
   monitor.setVerbose( 1 );
   monitor.setStage( "Jacobi stage:" );
   TNL::Timer timer;
   monitor.setTimer( timer );
   timer.start();
   solver.setSolverMonitor( monitor );
   solver.setConvergenceResidue( 1.0e-6 );
   solver.solve( b, x );
   monitor.stopMainLoop();
   std::cout << "Vector x = " << x << std::endl;
}

int
main( int argc, char* argv[] )
{
   std::cout << "Solving linear system on host: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Sequential >();

#ifdef __CUDACC__
   std::cout << "Solving linear system on CUDA device: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Cuda >();
#endif
}
```
The only changes are around the lines where we create an instance of TNL::Timer, connect it with the monitor using TNL::Solvers::SolverMonitor::setTimer and start the timer with TNL::Timer::start.
The result looks as follows:
Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
ELA:1.451e-0
ELA: 0.10013 Jacobi stage: ITER: 41531 RES: 0.0012249
ELA: 0.20026 Jacobi stage: ITER: 82731 RES: 2.1877e-06
Vector x = [ 0.999999, 0.999999, 0.999998, 0.999999, 0.999999 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
ELA:1.5231e-
ELA: 0.10012 Jacobi stage: ITER: 691 RES: 0.80439
ELA: 0.20023 Jacobi stage: ITER: 1451 RES: 0.65919
ELA: 0.30036 Jacobi stage: ITER: 2039 RES: 0.57741
ELA: 0.40047 Jacobi stage: ITER: 2641 RES: 0.51089
ELA: 0.50058 Jacobi stage: ITER: 3175 RES: 0.46241
ELA: 0.60072 Jacobi stage: ITER: 3783 RES: 0.4153
ELA: 0.70084 Jacobi stage: ITER: 4391 RES: 0.37476
ELA: 0.80098 Jacobi stage: ITER: 5003 RES: 0.33903
ELA: 0.90114 Jacobi stage: ITER: 5611 RES: 0.30755
ELA: 1.0013 Jacobi stage: ITER: 6219 RES: 0.27938
ELA: 1.1014 Jacobi stage: ITER: 6829 RES: 0.25387
ELA: 1.2015 Jacobi stage: ITER: 7443 RES: 0.23082
ELA: 1.3016 Jacobi stage: ITER: 8055 RES: 0.20996
ELA: 1.4017 Jacobi stage: ITER: 8599 RES: 0.19304
ELA: 1.5018 Jacobi stage: ITER: 9193 RES: 0.17609
ELA: 1.6019 Jacobi stage: ITER: 9813 RES: 0.16006
ELA: 1.702 Jacobi stage: ITER: 10407 RES: 0.14613
ELA: 1.8021 Jacobi stage: ITER: 10997 RES: 0.13341
ELA: 1.9022 Jacobi stage: ITER: 11605 RES: 0.12151
ELA: 2.0023 Jacobi stage: ITER: 12239 RES: 0.11026
ELA: 2.1025 Jacobi stage: ITER: 12875 RES: 0.099998
ELA: 2.2026 Jacobi stage: ITER: 13511 RES: 0.09069
ELA: 2.3026 Jacobi stage: ITER: 14147 RES: 0.082248
ELA: 2.4027 Jacobi stage: ITER: 14785 RES: 0.074547
ELA: 2.5028 Jacobi stage: ITER: 15421 RES: 0.067608
ELA: 2.6029 Jacobi stage: ITER: 16059 RES: 0.061316
ELA: 2.703 Jacobi stage: ITER: 16695 RES: 0.055609
ELA: 2.8031 Jacobi stage: ITER: 17331 RES: 0.050433
ELA: 2.9032 Jacobi stage: ITER: 17967 RES: 0.04574
ELA: 3.0033 Jacobi stage: ITER: 18603 RES: 0.041483
ELA: 3.1034 Jacobi stage: ITER: 19239 RES: 0.037622
ELA: 3.2035 Jacobi stage: ITER: 19875 RES: 0.03412
ELA: 3.3036 Jacobi stage: ITER: 20513 RES: 0.030926
ELA: 3.4037 Jacobi stage: ITER: 21149 RES: 0.028047
ELA: 3.5038 Jacobi stage: ITER: 21781 RES: 0.025453
ELA: 3.6039 Jacobi stage: ITER: 22411 RES: 0.023112
ELA: 3.704 Jacobi stage: ITER: 23047 RES: 0.020961
ELA: 3.8041 Jacobi stage: ITER: 23679 RES: 0.019022
ELA: 3.9042 Jacobi stage: ITER: 24311 RES: 0.017262
ELA: 4.0043 Jacobi stage: ITER: 24941 RES: 0.015665
ELA: 4.1044 Jacobi stage: ITER: 25573 RES: 0.014216
ELA: 4.2045 Jacobi stage: ITER: 26205 RES: 0.012901
ELA: 4.3046 Jacobi stage: ITER: 26837 RES: 0.011707
ELA: 4.4048 Jacobi stage: ITER: 27467 RES: 0.010631
ELA: 4.5049 Jacobi stage: ITER: 28101 RES: 0.0096413
ELA: 4.605 Jacobi stage: ITER: 28733 RES: 0.0087493
ELA: 4.7051 Jacobi stage: ITER: 29365 RES: 0.0079399
ELA: 4.8052 Jacobi stage: ITER: 29995 RES: 0.0072098
ELA: 4.9053 Jacobi stage: ITER: 30627 RES: 0.0065428
ELA: 5.0054 Jacobi stage: ITER: 31259 RES: 0.0059375
ELA: 5.1055 Jacobi stage: ITER: 31891 RES: 0.0053882
ELA: 5.2056 Jacobi stage: ITER: 32523 RES: 0.0048897
ELA: 5.3057 Jacobi stage: ITER: 33155 RES: 0.0044374
ELA: 5.4058 Jacobi stage: ITER: 33787 RES: 0.0040268
ELA: 5.5059 Jacobi stage: ITER: 34419 RES: 0.0036543
ELA: 5.606 Jacobi stage: ITER: 35051 RES: 0.0033162
ELA: 5.7061 Jacobi stage: ITER: 35683 RES: 0.0030094
ELA: 5.8062 Jacobi stage: ITER: 36315 RES: 0.002731
ELA: 5.9063 Jacobi stage: ITER: 36947 RES: 0.0024784
ELA: 6.0064 Jacobi stage: ITER: 37579 RES: 0.0022491
ELA: 6.1065 Jacobi stage: ITER: 38211 RES: 0.002041
ELA: 6.2066 Jacobi stage: ITER: 38843 RES: 0.0018522
ELA: 6.3067 Jacobi stage: ITER: 39475 RES: 0.0016808
ELA: 6.4068 Jacobi stage: ITER: 40107 RES: 0.0015253
ELA: 6.5069 Jacobi stage: ITER: 40739 RES: 0.0013842
ELA: 6.6069 Jacobi stage: ITER: 41371 RES: 0.0012562
ELA: 6.707 Jacobi stage: ITER: 42003 RES: 0.00114
ELA: 6.8071 Jacobi stage: ITER: 42635 RES: 0.0010345
ELA: 6.9072 Jacobi stage: ITER: 43265 RES: 0.00093879
ELA: 7.0073 Jacobi stage: ITER: 43895 RES: 0.00085247
ELA: 7.1074 Jacobi stage: ITER: 44529 RES: 0.00077313
ELA: 7.2075 Jacobi stage: ITER: 45161 RES: 0.0007016
ELA: 7.3076 Jacobi stage: ITER: 45797 RES: 0.0006363
ELA: 7.4077 Jacobi stage: ITER: 46433 RES: 0.00057708
ELA: 7.5078 Jacobi stage: ITER: 47071 RES: 0.00052337
ELA: 7.6079 Jacobi stage: ITER: 47707 RES: 0.00047466
ELA: 7.708 Jacobi stage: ITER: 48343 RES: 0.00043049
ELA: 7.8081 Jacobi stage: ITER: 48979 RES: 0.00039042
ELA: 7.9082 Jacobi stage: ITER: 49617 RES: 0.00035387
ELA: 8.0083 Jacobi stage: ITER: 50253 RES: 0.00032093
ELA: 8.1084 Jacobi stage: ITER: 50891 RES: 0.00029106
ELA: 8.2085 Jacobi stage: ITER: 51527 RES: 0.00026397
ELA: 8.3086 Jacobi stage: ITER: 52163 RES: 0.0002394
ELA: 8.4087 Jacobi stage: ITER: 52799 RES: 0.00021712
ELA: 8.5087 Jacobi stage: ITER: 53437 RES: 0.00019679
ELA: 8.6088 Jacobi stage: ITER: 54073 RES: 0.00017848
ELA: 8.7089 Jacobi stage: ITER: 54709 RES: 0.00016187
ELA: 8.809 Jacobi stage: ITER: 55345 RES: 0.0001468
ELA: 8.9091 Jacobi stage: ITER: 55977 RES: 0.00013322
ELA: 9.0092 Jacobi stage: ITER: 56607 RES: 0.00012097
ELA: 9.1093 Jacobi stage: ITER: 57239 RES: 0.00010978
ELA: 9.2094 Jacobi stage: ITER: 57871 RES: 9.9623e-05
ELA: 9.3095 Jacobi stage: ITER: 58503 RES: 9.0407e-05
ELA: 9.4096 Jacobi stage: ITER: 59135 RES: 8.2043e-05
ELA: 9.5097 Jacobi stage: ITER: 59767 RES: 7.4453e-05
ELA: 9.6098 Jacobi stage: ITER: 60399 RES: 6.7565e-05
ELA: 9.7099 Jacobi stage: ITER: 61033 RES: 6.1277e-05
ELA: 9.81 Jacobi stage: ITER: 61669 RES: 5.5574e-05
ELA: 9.9101 Jacobi stage: ITER: 62305 RES: 5.0401e-05
ELA: 10.01 Jacobi stage: ITER: 62943 RES: 4.5711e-05
ELA: 10.11 Jacobi stage: ITER: 63579 RES: 4.1456e-05
ELA: 10.21 Jacobi stage: ITER: 64215 RES: 3.7598e-05
ELA: 10.31 Jacobi stage: ITER: 64851 RES: 3.4099e-05
ELA: 10.411 Jacobi stage: ITER: 65487 RES: 3.0925e-05
ELA: 10.511 Jacobi stage: ITER: 66127 RES: 2.803e-05
ELA: 10.611 Jacobi stage: ITER: 66763 RES: 2.5421e-05
ELA: 10.711 Jacobi stage: ITER: 67399 RES: 2.3055e-05
ELA: 10.811 Jacobi stage: ITER: 68035 RES: 2.0909e-05
ELA: 10.911 Jacobi stage: ITER: 68673 RES: 1.8952e-05
ELA: 11.011 Jacobi stage: ITER: 69307 RES: 1.7198e-05
ELA: 11.111 Jacobi stage: ITER: 69943 RES: 1.5598e-05
ELA: 11.211 Jacobi stage: ITER: 70581 RES: 1.4137e-05
ELA: 11.311 Jacobi stage: ITER: 71217 RES: 1.2821e-05
ELA: 11.411 Jacobi stage: ITER: 71851 RES: 1.1635e-05
ELA: 11.512 Jacobi stage: ITER: 72483 RES: 1.0559e-05
ELA: 11.612 Jacobi stage: ITER: 73115 RES: 9.5821e-06
ELA: 11.712 Jacobi stage: ITER: 73747 RES: 8.6956e-06
ELA: 11.812 Jacobi stage: ITER: 74379 RES: 7.8911e-06
ELA: 11.912 Jacobi stage: ITER: 75011 RES: 7.1611e-06
ELA: 12.012 Jacobi stage: ITER: 75643 RES: 6.4986e-06
ELA: 12.112 Jacobi stage: ITER: 76279 RES: 5.8938e-06
ELA: 12.212 Jacobi stage: ITER: 76935 RES: 5.3289e-06
ELA: 12.312 Jacobi stage: ITER: 77893 RES: 4.5983e-06
ELA: 12.412 Jacobi stage: ITER: 78855 RES: 3.9678e-06
ELA: 12.513 Jacobi stage: ITER: 79815 RES: 3.4238e-06
ELA: 12.613 Jacobi stage: ITER: 80723 RES: 2.9781e-06
ELA: 12.713 Jacobi stage: ITER: 81683 RES: 2.5698e-06
ELA: 12.813 Jacobi stage: ITER: 82643 RES: 2.2175e-06
ELA: 12.913 Jacobi stage: ITER: 83603 RES: 1.9135e-06
ELA: 13.013 Jacobi stage: ITER: 84563 RES: 1.6511e-06
ELA: 13.113 Jacobi stage: ITER: 85523 RES: 1.4248e-06
ELA: 13.213 Jacobi stage: ITER: 86485 RES: 1.2287e-06
ELA: 13.313 Jacobi stage: ITER: 87467 RES: 1.057e-06
Vector x = [ 0.999999, 0.999999, 0.999998, 0.999999, 0.999999 ]
Setup with preconditioner
Preconditioners can significantly improve the performance of iterative solvers. In the case of linear systems, they are used mainly with the Krylov subspace methods. Preconditioners cannot be used with the stationary methods (TNL::Solvers::Linear::Jacobi and TNL::Solvers::Linear::SOR). The following example shows how to set up an iterative solver of linear systems with preconditioning.
```cpp
#include <iostream>
#include <memory>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Host.h>
#include <TNL/Devices/Cuda.h>
#include <TNL/Solvers/Linear/TFQMR.h>
#include <TNL/Solvers/Linear/Preconditioners/Diagonal.h>

template< typename Device >
void
iterativeLinearSolverExample()
{
   // Set up the same sparse 5x5 tridiagonal matrix as in the previous examples.
   using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;
   using Vector = TNL::Containers::Vector< double, Device >;
   const int size( 5 );
   auto matrix_ptr = std::make_shared< MatrixType >();
   matrix_ptr->setDimensions( size, size );
   matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );

   auto f = [ = ] __cuda_callable__( typename MatrixType::RowView& row ) mutable
   {
      const int rowIdx = row.getRowIndex();
      if( rowIdx == 0 ) {
         row.setElement( 0, rowIdx, 2.5 );     // diagonal element
         row.setElement( 1, rowIdx + 1, -1 );  // element above the diagonal
      }
      else if( rowIdx == size - 1 ) {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
      }
      else {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
         row.setElement( 2, rowIdx + 1, -1.0 );  // element above the diagonal
      }
   };

   // Set the matrix elements.
   matrix_ptr->forAllRows( f );
   std::cout << *matrix_ptr << std::endl;

   // Set b = A * (1,...,1)^T and reset x to zero.
   Vector x( size, 1.0 );
   Vector b( size );
   matrix_ptr->vectorProduct( x, b );
   x = 0.0;
   std::cout << "Vector b = " << b << std::endl;

   // Solve the linear system using TFQMR with the diagonal preconditioner.
   using LinearSolver = TNL::Solvers::Linear::TFQMR< MatrixType >;
   using Preconditioner = TNL::Solvers::Linear::Preconditioners::Diagonal< MatrixType >;
   auto preconditioner_ptr = std::make_shared< Preconditioner >();
   preconditioner_ptr->update( matrix_ptr );
   LinearSolver solver;
   solver.setMatrix( matrix_ptr );
   solver.setPreconditioner( preconditioner_ptr );
   solver.setConvergenceResidue( 1.0e-6 );
   solver.solve( b, x );
   std::cout << "Vector x = " << x << std::endl;
}

int
main( int argc, char* argv[] )
{
   std::cout << "Solving linear system on host: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Sequential >();

#ifdef __CUDACC__
   std::cout << "Solving linear system on CUDA device: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Cuda >();
#endif
}
```
In this example, we solve the same problem as in all other examples in this section. The only differences concerning the preconditioner happen in the solver setup. Similarly to the matrix of the linear system, the preconditioner needs to be passed to the solver as a std::shared_ptr. When the preconditioner object is created, we have to initialize it using the update method, which has to be called every time the matrix of the linear system changes. This is important, for example, when solving time-dependent PDEs, but it does not happen in this example. Finally, we need to connect the solver with the preconditioner using the setPreconditioner method.
The result looks as follows:
Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]
Choosing the solver and preconditioner type at runtime
When developing a numerical solver, one often has to search for a combination of various methods and algorithms that fit given requirements the best. To make this easier, TNL provides the functions TNL::Solvers::getLinearSolver and TNL::Solvers::getPreconditioner for selecting the linear solver and preconditioner at runtime. The following example shows how to use these functions:
```cpp
#include <iostream>
#include <memory>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Host.h>
#include <TNL/Devices/Cuda.h>
#include <TNL/Solvers/LinearSolverTypeResolver.h>

template< typename Device >
void
iterativeLinearSolverExample()
{
   // Set up the same sparse 5x5 tridiagonal matrix as in the previous examples.
   using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;
   using Vector = TNL::Containers::Vector< double, Device >;
   const int size( 5 );
   auto matrix_ptr = std::make_shared< MatrixType >();
   matrix_ptr->setDimensions( size, size );
   matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );

   auto f = [ = ] __cuda_callable__( typename MatrixType::RowView& row ) mutable
   {
      const int rowIdx = row.getRowIndex();
      if( rowIdx == 0 ) {
         row.setElement( 0, rowIdx, 2.5 );     // diagonal element
         row.setElement( 1, rowIdx + 1, -1 );  // element above the diagonal
      }
      else if( rowIdx == size - 1 ) {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
      }
      else {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
         row.setElement( 2, rowIdx + 1, -1.0 );  // element above the diagonal
      }
   };

   // Set the matrix elements.
   matrix_ptr->forAllRows( f );
   std::cout << *matrix_ptr << std::endl;

   // Set b = A * (1,...,1)^T and reset x to zero.
   Vector x( size, 1.0 );
   Vector b( size );
   matrix_ptr->vectorProduct( x, b );
   x = 0.0;
   std::cout << "Vector b = " << b << std::endl;

   // Select the solver and preconditioner at runtime by their names.
   auto solver_ptr = TNL::Solvers::getLinearSolver< MatrixType >( "tfqmr" );
   auto preconditioner_ptr = TNL::Solvers::getPreconditioner< MatrixType >( "diagonal" );
   preconditioner_ptr->update( matrix_ptr );
   solver_ptr->setMatrix( matrix_ptr );
   solver_ptr->setPreconditioner( preconditioner_ptr );
   solver_ptr->setConvergenceResidue( 1.0e-6 );
   solver_ptr->solve( b, x );
   std::cout << "Vector x = " << x << std::endl;
}

int
main( int argc, char* argv[] )
{
   std::cout << "Solving linear system on host: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Sequential >();

#ifdef __CUDACC__
   std::cout << "Solving linear system on CUDA device: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Cuda >();
#endif
}
```
We still solve the same problem; the only changes are in the solver setup. We first use TNL::Solvers::getLinearSolver to get a shared pointer holding the solver, and then TNL::Solvers::getPreconditioner to get a shared pointer holding the preconditioner. The rest of the code is the same as in the previous examples, with the only difference that we work with the pointer solver_ptr instead of a direct instance solver of the solver type.
The result looks as follows:
Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]
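The idea behind getLinearSolver and getPreconditioner, mapping a runtime string to an object behind a common interface, can be sketched with a small string-keyed factory. The types and names below are illustrative only, not the TNL implementation:

```cpp
#include <functional>
#include <map>
#include <memory>
#include <stdexcept>
#include <string>

// Common interface that all solvers in the factory share.
struct Solver
{
   virtual ~Solver() = default;
   virtual std::string
   name() const = 0;
};

struct CG : Solver
{
   std::string
   name() const override
   {
      return "cg";
   }
};

struct TFQMR : Solver
{
   std::string
   name() const override
   {
      return "tfqmr";
   }
};

// Sketch of a name-based factory: look up the string in a table of
// constructors and return the freshly constructed solver.
std::shared_ptr< Solver >
getSolver( const std::string& name )
{
   static const std::map< std::string, std::function< std::shared_ptr< Solver >() > > factories{
      { "cg", [] { return std::make_shared< CG >(); } },
      { "tfqmr", [] { return std::make_shared< TFQMR >(); } }
   };
   const auto it = factories.find( name );
   if( it == factories.end() )
      throw std::invalid_argument( "unknown solver: " + name );
   return it->second();
}
```

Returning a std::shared_ptr to the common base class is what lets the calling code stay the same regardless of which solver name was requested, exactly the property the example above relies on.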