Introduction
Solvers of linear systems are among the most important algorithms in scientific computing. TNL offers the following iterative methods:
- Stationary methods
  - Jacobi method (TNL::Solvers::Linear::Jacobi)
  - Successive over-relaxation method, SOR (TNL::Solvers::Linear::SOR)
- Krylov subspace methods
  - Conjugate gradient method, CG (TNL::Solvers::Linear::CG)
  - Biconjugate gradient stabilized method, BICGStab (TNL::Solvers::Linear::BICGStab)
  - Biconjugate gradient stabilized method, BICGStab(l) (TNL::Solvers::Linear::BICGStabL)
  - Transpose-free quasi-minimal residual method, TFQMR (TNL::Solvers::Linear::TFQMR)
  - Generalized minimal residual method, GMRES (TNL::Solvers::Linear::GMRES) with various methods of orthogonalization:
    - Classical Gram-Schmidt, CGS
    - Classical Gram-Schmidt with reorthogonalization, CGSR
    - Modified Gram-Schmidt, MGS
    - Modified Gram-Schmidt with reorthogonalization, MGSR
    - Compact WY form of the Householder reflections, CWY
The iterative solvers (with the exception of the stationary methods TNL::Solvers::Linear::Jacobi and TNL::Solvers::Linear::SOR) can be combined with the following preconditioners:
- Diagonal or Jacobi (TNL::Solvers::Linear::Preconditioners::Diagonal)
- ILU (incomplete LU) preconditioners, currently available on CPU only:
  - ILU(0) (TNL::Solvers::Linear::Preconditioners::ILU0)
  - ILUT (ILU with thresholding) (TNL::Solvers::Linear::Preconditioners::ILUT)
Iterative solvers of linear systems
Basic setup
All iterative solvers for linear systems can be found in the namespace TNL::Solvers::Linear. The following example shows how to use them:
```cpp
#include <iostream>
#include <memory>
#include <TNL/Containers/Vector.h>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Host.h>
#include <TNL/Devices/Sequential.h>
#include <TNL/Devices/Cuda.h>
#include <TNL/Solvers/Linear/TFQMR.h>

template< typename Device >
void
iterativeLinearSolverExample()
{
   // Set up a sparse tridiagonal matrix with 2.5 on the diagonal and -1 on the off-diagonals.
   using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;
   using Vector = TNL::Containers::Vector< double, Device >;
   const int size = 5;
   auto matrix_ptr = std::make_shared< MatrixType >();
   matrix_ptr->setDimensions( size, size );
   matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );

   auto f = [ = ] __cuda_callable__( typename MatrixType::RowView& row ) mutable
   {
      const int rowIdx = row.getRowIndex();
      if( rowIdx == 0 ) {
         row.setElement( 0, rowIdx, 2.5 );      // diagonal element
         row.setElement( 1, rowIdx + 1, -1 );   // element above the diagonal
      }
      else if( rowIdx == size - 1 ) {
         row.setElement( 0, rowIdx - 1, -1.0 ); // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );      // diagonal element
      }
      else {
         row.setElement( 0, rowIdx - 1, -1.0 ); // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );      // diagonal element
         row.setElement( 2, rowIdx + 1, -1.0 ); // element above the diagonal
      }
   };
   matrix_ptr->forAllRows( f );
   std::cout << *matrix_ptr << std::endl;

   // Set the right-hand side vector b = A * (1, 1, 1, 1, 1)^T and reset x to zero.
   Vector x( size, 1.0 );
   Vector b( size );
   matrix_ptr->vectorProduct( x, b );
   x = 0.0;
   std::cout << "Vector b = " << b << std::endl;

   // Solve the linear system with the TFQMR method.
   TNL::Solvers::Linear::TFQMR< MatrixType > solver;
   solver.setMatrix( matrix_ptr );
   solver.setConvergenceResidue( 1.0e-6 );
   solver.solve( b, x );
   std::cout << "Vector x = " << x << std::endl;
}

int
main( int argc, char* argv[] )
{
   std::cout << "Solving linear system on host: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Sequential >();

#ifdef __CUDACC__
   std::cout << "Solving linear system on CUDA device: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Cuda >();
#endif
}
```
In this example we solve a linear system \( A \vec x = \vec b \) where
\[
A = \left(
\begin{array}{ccccc}
2.5 & -1  &     &     &     \\
-1  & 2.5 & -1  &     &     \\
    & -1  & 2.5 & -1  &     \\
    &     & -1  & 2.5 & -1  \\
    &     &     & -1  & 2.5 \\
\end{array}
\right)
\]
The right-hand side vector \(\vec b \) is set to \(( 1.5, 0.5, 0.5, 0.5, 1.5 )^T \) so that the exact solution is \( \vec x = ( 1, 1, 1, 1, 1 )^T \). The elements of the matrix \( A \) are set using the method TNL::Matrices::SparseMatrix::forAllRows. In this example we use a sparse matrix, but any other matrix type can be used as well (see the namespace TNL::Matrices). Next, we set the solution vector \( \vec x = ( 1, 1, 1, 1, 1 )^T \) and multiply it by the matrix \( A \) to obtain the right-hand side vector \( \vec b \). Finally, we reset the vector \( \vec x \) to zero.
To solve the linear system in the example, we use the TFQMR solver. Other solvers can be used as well (see the namespace TNL::Solvers::Linear). The solver needs only one template parameter, which is the matrix type. Next, we create an instance of the solver and set the matrix of the linear system. Note that the matrix is passed to the solver as a std::shared_ptr. Then we set the stopping criterion for the iterative method in terms of the relative residue, i.e. \( \lVert \vec b - A \vec x \rVert / \lVert \vec b \rVert \). The solver is executed by calling the TNL::Solvers::Linear::LinearSolver::solve method, which accepts the right-hand side vector \( \vec b \) and the solution vector \( \vec x \).
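In addition to the convergence residue, the common iterative-solver interface offers a few more controls that are often useful in practice. The following sketch assumes the solver, b, and x objects from the example above; the iteration limit of 1000 is an arbitrary illustrative value:

```cpp
// Sketch: additional stopping controls and checking the result of the solve.
solver.setMaxIterations( 1000 );         // give up after at most 1000 iterations
solver.setConvergenceResidue( 1.0e-6 );  // declare convergence below this relative residue
if( ! solver.solve( b, x ) )
   std::cerr << "The solver did not converge: iterations = " << solver.getIterations()
             << ", residue = " << solver.getResidue() << std::endl;
```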
The result looks as follows:
Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]
Setup with a solver monitor
Solution of large linear systems may take a lot of time. In such situations, it is useful to be able to monitor the convergence of the solver or the solver status in general. For this purpose, TNL provides a solver monitor which can show various metrics in real time, such as current number of iterations, current residue of the approximate solution, etc. The solver monitor in TNL runs in a separate thread and it refreshes the status of the solver with a configurable refresh rate (once per 500 ms by default). The use of the solver monitor is demonstrated in the following example.
```cpp
#include <iostream>
#include <memory>
#include <TNL/Containers/Vector.h>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Sequential.h>
#include <TNL/Devices/Cuda.h>
#include <TNL/Solvers/Linear/Jacobi.h>

template< typename Device >
void
iterativeLinearSolverExample()
{
   // The matrix and the right-hand side are set up exactly as in the previous example.
   using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;
   using Vector = TNL::Containers::Vector< double, Device >;
   const int size = 5;
   auto matrix_ptr = std::make_shared< MatrixType >();
   matrix_ptr->setDimensions( size, size );
   matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );
   auto f = [ = ] __cuda_callable__( typename MatrixType::RowView& row ) mutable
   {
      const int rowIdx = row.getRowIndex();
      if( rowIdx == 0 ) {
         row.setElement( 0, rowIdx, 2.5 );
         row.setElement( 1, rowIdx + 1, -1 );
      }
      else if( rowIdx == size - 1 ) {
         row.setElement( 0, rowIdx - 1, -1.0 );
         row.setElement( 1, rowIdx, 2.5 );
      }
      else {
         row.setElement( 0, rowIdx - 1, -1.0 );
         row.setElement( 1, rowIdx, 2.5 );
         row.setElement( 2, rowIdx + 1, -1.0 );
      }
   };
   matrix_ptr->forAllRows( f );
   std::cout << *matrix_ptr << std::endl;
   Vector x( size, 1.0 );
   Vector b( size );
   matrix_ptr->vectorProduct( x, b );
   x = 0.0;
   std::cout << "Vector b = " << b << std::endl;

   // Set up the Jacobi solver; the small relaxation parameter slows down the convergence artificially.
   TNL::Solvers::Linear::Jacobi< MatrixType > solver;
   solver.setMatrix( matrix_ptr );
   solver.setOmega( 0.0005 );

   // Set up the solver monitor running in a separate thread.
   using IterativeSolverMonitorType = TNL::Solvers::IterativeSolverMonitor< double, int >;
   IterativeSolverMonitorType monitor;
   TNL::Solvers::SolverMonitorThread monitorThread( monitor );
   monitor.setRefreshRate( 10 );  // refresh interval in milliseconds
   monitor.setVerbose( 1 );
   monitor.setStage( "Jacobi stage:" );
   solver.setSolverMonitor( monitor );
   solver.setConvergenceResidue( 1.0e-6 );
   solver.solve( b, x );
   monitor.stopMainLoop();
   std::cout << "Vector x = " << x << std::endl;
}

int
main( int argc, char* argv[] )
{
   std::cout << "Solving linear system on host: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Sequential >();

#ifdef __CUDACC__
   std::cout << "Solving linear system on CUDA device: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Cuda >();
#endif
}
```
First, we set up the same linear system as in the previous example, create an instance of the Jacobi solver, and pass the matrix of the linear system to the solver. Then we set the relaxation parameter \( \omega \) of the Jacobi solver to 0.0005. This artificially slows down the convergence, because we want to see some iterations in this example. Next, we create an instance of the solver monitor and a special thread for the monitor (an instance of the TNL::Solvers::SolverMonitorThread class). We use the following methods to configure the solver monitor:
- setRefreshRate sets how often the monitor refreshes the solver status (in milliseconds),
- setVerbose sets the verbosity level of the monitor,
- setStage sets the label of the solver stage printed with the status.
Next, we call TNL::Solvers::IterativeSolver::setSolverMonitor to connect the solver with the monitor and we set the convergence criterion based on the relative residue. Finally, we start the solver using the TNL::Solvers::Linear::Jacobi::solve method and when the solver finishes, we stop the monitor using TNL::Solvers::SolverMonitor::stopMainLoop.
The result looks as follows:
Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Jacobi stage: ITER: 8211 RES: 0.20495
Jacobi stage: ITER: 18919 RES: 0.039517
Jacobi stage: ITER: 27705 RES: 0.010246
Jacobi stage: ITER: 36297 RES: 0.0027377
Jacobi stage: ITER: 43951 RES: 0.00084517
Jacobi stage: ITER: 48759 RES: 0.00040384
Jacobi stage: ITER: 54191 RES: 0.00017533
Jacobi stage: ITER: 58935 RES: 8.4603e-05
Jacobi stage: ITER: 64253 RES: 3.7368e-05
Jacobi stage: ITER: 73553 RES: 8.9559e-06
Jacobi stage: ITER: 81933 RES: 2.4722e-06
Vector x = [ 0.999999, 0.999999, 0.999998, 0.999999, 0.999999 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Jacobi stage: ITER: 659 RES: 0.81186
Jacobi stage: ITER: 1175 RES: 0.70565
Jacobi stage: ITER: 2059 RES: 0.57496
Jacobi stage: ITER: 2715 RES: 0.50385
Jacobi stage: ITER: 3173 RES: 0.46241
Jacobi stage: ITER: 3673 RES: 0.42314
Jacobi stage: ITER: 4535 RES: 0.36594
Jacobi stage: ITER: 4821 RES: 0.34908
Jacobi stage: ITER: 5279 RES: 0.32429
Jacobi stage: ITER: 5559 RES: 0.31011
Jacobi stage: ITER: 5855 RES: 0.29588
Jacobi stage: ITER: 6167 RES: 0.28168
Jacobi stage: ITER: 6263 RES: 0.27746
Jacobi stage: ITER: 6563 RES: 0.26472
Jacobi stage: ITER: 7113 RES: 0.24289
Jacobi stage: ITER: 7439 RES: 0.23097
Jacobi stage: ITER: 7891 RES: 0.21535
Jacobi stage: ITER: 8127 RES: 0.20763
Jacobi stage: ITER: 8603 RES: 0.19292
Jacobi stage: ITER: 9193 RES: 0.17609
Jacobi stage: ITER: 9525 RES: 0.16731
Jacobi stage: ITER: 9619 RES: 0.16496
Jacobi stage: ITER: 9929 RES: 0.15723
Jacobi stage: ITER: 10085 RES: 0.1535
Jacobi stage: ITER: 10227 RES: 0.15023
Jacobi stage: ITER: 10723 RES: 0.1392
Jacobi stage: ITER: 10853 RES: 0.1364
Jacobi stage: ITER: 11241 RES: 0.1285
Jacobi stage: ITER: 11679 RES: 0.12017
Jacobi stage: ITER: 12099 RES: 0.11266
Jacobi stage: ITER: 12331 RES: 0.10872
Jacobi stage: ITER: 12819 RES: 0.10086
Jacobi stage: ITER: 13155 RES: 0.095788
Jacobi stage: ITER: 13593 RES: 0.089527
Jacobi stage: ITER: 14599 RES: 0.076731
Jacobi stage: ITER: 15231 RES: 0.069632
Jacobi stage: ITER: 15665 RES: 0.065121
Jacobi stage: ITER: 15947 RES: 0.06238
Jacobi stage: ITER: 16479 RES: 0.057485
Jacobi stage: ITER: 16791 RES: 0.054795
Jacobi stage: ITER: 16975 RES: 0.053268
Jacobi stage: ITER: 17311 RES: 0.050589
Jacobi stage: ITER: 17527 RES: 0.048938
Jacobi stage: ITER: 17871 RES: 0.046419
Jacobi stage: ITER: 18591 RES: 0.041559
Jacobi stage: ITER: 18729 RES: 0.040675
Jacobi stage: ITER: 19047 RES: 0.038748
Jacobi stage: ITER: 20115 RES: 0.032885
Jacobi stage: ITER: 21109 RES: 0.02822
Jacobi stage: ITER: 22753 RES: 0.021923
Jacobi stage: ITER: 23345 RES: 0.020017
Jacobi stage: ITER: 24339 RES: 0.017188
Jacobi stage: ITER: 24663 RES: 0.016354
Jacobi stage: ITER: 25603 RES: 0.014155
Jacobi stage: ITER: 26387 RES: 0.012549
Jacobi stage: ITER: 27373 RES: 0.010782
Jacobi stage: ITER: 28003 RES: 0.0097905
Jacobi stage: ITER: 29171 RES: 0.0081826
Jacobi stage: ITER: 30059 RES: 0.0071393
Jacobi stage: ITER: 30319 RES: 0.0068598
Jacobi stage: ITER: 30531 RES: 0.00664
Jacobi stage: ITER: 30931 RES: 0.0062443
Jacobi stage: ITER: 31167 RES: 0.006022
Jacobi stage: ITER: 31403 RES: 0.0058076
Jacobi stage: ITER: 32127 RES: 0.0051964
Jacobi stage: ITER: 33321 RES: 0.0043243
Jacobi stage: ITER: 33919 RES: 0.003946
Jacobi stage: ITER: 34657 RES: 0.003522
Jacobi stage: ITER: 35535 RES: 0.0030786
Jacobi stage: ITER: 36483 RES: 0.0026615
Jacobi stage: ITER: 37299 RES: 0.0023479
Jacobi stage: ITER: 38503 RES: 0.0019515
Jacobi stage: ITER: 38807 RES: 0.0018625
Jacobi stage: ITER: 39311 RES: 0.0017237
Jacobi stage: ITER: 41095 RES: 0.0013106
Jacobi stage: ITER: 43207 RES: 0.00094749
Jacobi stage: ITER: 45013 RES: 0.00071773
Jacobi stage: ITER: 46561 RES: 0.00056585
Jacobi stage: ITER: 48415 RES: 0.00042575
Jacobi stage: ITER: 50563 RES: 0.0003061
Jacobi stage: ITER: 51795 RES: 0.00025333
Jacobi stage: ITER: 53327 RES: 0.00020021
Jacobi stage: ITER: 54493 RES: 0.00016733
Jacobi stage: ITER: 55605 RES: 0.00014106
Jacobi stage: ITER: 56699 RES: 0.00011927
Jacobi stage: ITER: 57475 RES: 0.00010587
Jacobi stage: ITER: 59233 RES: 8.0793e-05
Jacobi stage: ITER: 60167 RES: 7.0016e-05
Jacobi stage: ITER: 61063 RES: 6.1014e-05
Jacobi stage: ITER: 61961 RES: 5.3136e-05
Jacobi stage: ITER: 62581 RES: 4.8309e-05
Jacobi stage: ITER: 63359 RES: 4.2881e-05
Jacobi stage: ITER: 63663 RES: 4.0925e-05
Jacobi stage: ITER: 64131 RES: 3.8086e-05
Jacobi stage: ITER: 64279 RES: 3.723e-05
Jacobi stage: ITER: 64767 RES: 3.4541e-05
Jacobi stage: ITER: 64987 RES: 3.3394e-05
Jacobi stage: ITER: 65731 RES: 2.9787e-05
Jacobi stage: ITER: 66337 RES: 2.7132e-05
Jacobi stage: ITER: 67403 RES: 2.3041e-05
Jacobi stage: ITER: 68339 RES: 1.9955e-05
Jacobi stage: ITER: 68619 RES: 1.9115e-05
Jacobi stage: ITER: 68941 RES: 1.8187e-05
Jacobi stage: ITER: 69691 RES: 1.6213e-05
Jacobi stage: ITER: 70617 RES: 1.4059e-05
Jacobi stage: ITER: 71695 RES: 1.1918e-05
Jacobi stage: ITER: 73719 RES: 8.7331e-06
Jacobi stage: ITER: 75413 RES: 6.7302e-06
Jacobi stage: ITER: 77335 RES: 5.0113e-06
Jacobi stage: ITER: 79021 RES: 3.8668e-06
Jacobi stage: ITER: 81085 RES: 2.8162e-06
Jacobi stage: ITER: 83141 RES: 2.0536e-06
Jacobi stage: ITER: 84923 RES: 1.5623e-06
Jacobi stage: ITER: 86593 RES: 1.2085e-06
Vector x = [ 0.999999, 0.999999, 0.999998, 0.999999, 0.999999 ]
The monitoring of the solver can be improved by showing the time elapsed since the beginning of the computation, as demonstrated in the following example:
```cpp
#include <iostream>
#include <memory>
#include <TNL/Timer.h>
#include <TNL/Containers/Vector.h>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Sequential.h>
#include <TNL/Devices/Cuda.h>
#include <TNL/Solvers/Linear/Jacobi.h>

template< typename Device >
void
iterativeLinearSolverExample()
{
   // The matrix and the right-hand side are set up exactly as in the previous example.
   using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;
   using Vector = TNL::Containers::Vector< double, Device >;
   const int size = 5;
   auto matrix_ptr = std::make_shared< MatrixType >();
   matrix_ptr->setDimensions( size, size );
   matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );
   auto f = [ = ] __cuda_callable__( typename MatrixType::RowView& row ) mutable
   {
      const int rowIdx = row.getRowIndex();
      if( rowIdx == 0 ) {
         row.setElement( 0, rowIdx, 2.5 );
         row.setElement( 1, rowIdx + 1, -1 );
      }
      else if( rowIdx == size - 1 ) {
         row.setElement( 0, rowIdx - 1, -1.0 );
         row.setElement( 1, rowIdx, 2.5 );
      }
      else {
         row.setElement( 0, rowIdx - 1, -1.0 );
         row.setElement( 1, rowIdx, 2.5 );
         row.setElement( 2, rowIdx + 1, -1.0 );
      }
   };
   matrix_ptr->forAllRows( f );
   std::cout << *matrix_ptr << std::endl;
   Vector x( size, 1.0 );
   Vector b( size );
   matrix_ptr->vectorProduct( x, b );
   x = 0.0;
   std::cout << "Vector b = " << b << std::endl;

   // Set up the Jacobi solver and the solver monitor as before.
   TNL::Solvers::Linear::Jacobi< MatrixType > solver;
   solver.setMatrix( matrix_ptr );
   solver.setOmega( 0.0005 );
   using IterativeSolverMonitorType = TNL::Solvers::IterativeSolverMonitor< double, int >;
   IterativeSolverMonitorType monitor;
   TNL::Solvers::SolverMonitorThread monitorThread( monitor );
   monitor.setRefreshRate( 10 );
   monitor.setVerbose( 1 );
   monitor.setStage( "Jacobi stage:" );

   // Attach a timer to the monitor so that the elapsed time is reported as well.
   TNL::Timer timer;
   monitor.setTimer( timer );
   timer.start();

   solver.setSolverMonitor( monitor );
   solver.setConvergenceResidue( 1.0e-6 );
   solver.solve( b, x );
   monitor.stopMainLoop();
   std::cout << "Vector x = " << x << std::endl;
}

int
main( int argc, char* argv[] )
{
   std::cout << "Solving linear system on host: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Sequential >();

#ifdef __CUDACC__
   std::cout << "Solving linear system on CUDA device: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Cuda >();
#endif
}
```
The only changes are around the lines where we create an instance of TNL::Timer, connect it with the monitor using TNL::Solvers::SolverMonitor::setTimer and start the timer with TNL::Timer::start.
The result looks as follows:
Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
ELA:0.003567
ELA: 0.10371 Jacobi stage: ITER: 5213 RES: 0.32764
ELA: 0.20384 Jacobi stage: ITER: 10419 RES: 0.14586
ELA: 0.30397 Jacobi stage: ITER: 15365 RES: 0.068193
ELA: 0.40409 Jacobi stage: ITER: 20471 RES: 0.031135
ELA: 0.5042 Jacobi stage: ITER: 26989 RES: 0.011437
ELA: 0.60432 Jacobi stage: ITER: 34557 RES: 0.0035766
ELA: 0.70445 Jacobi stage: ITER: 44731 RES: 0.00074974
ELA: 0.80458 Jacobi stage: ITER: 51973 RES: 0.00024642
ELA: 0.91353 Jacobi stage: ITER: 64535 RES: 3.5795e-05
ELA: 1.0137 Jacobi stage: ITER: 75935 RES: 6.2136e-06
ELA: 1.1169 Jacobi stage: ITER: 86443 RES: 1.237e-06
Vector x = [ 0.999999, 0.999999, 0.999998, 0.999999, 0.999999 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
ELA:0.004734
ELA: 0.10485 Jacobi stage: ITER: 1053 RES: 0.72788
ELA: 0.20806 Jacobi stage: ITER: 2115 RES: 0.56821
ELA: 0.30818 Jacobi stage: ITER: 3193 RES: 0.46074
ELA: 0.41139 Jacobi stage: ITER: 4063 RES: 0.39592
ELA: 0.51152 Jacobi stage: ITER: 4463 RES: 0.37032
ELA: 0.61165 Jacobi stage: ITER: 5795 RES: 0.2987
ELA: 0.71472 Jacobi stage: ITER: 6763 RES: 0.25658
ELA: 0.81485 Jacobi stage: ITER: 7787 RES: 0.21884
ELA: 0.91806 Jacobi stage: ITER: 8899 RES: 0.18431
ELA: 1.0182 Jacobi stage: ITER: 9863 RES: 0.15888
ELA: 1.1247 Jacobi stage: ITER: 10779 RES: 0.138
ELA: 1.2249 Jacobi stage: ITER: 12717 RES: 0.10242
ELA: 1.325 Jacobi stage: ITER: 14735 RES: 0.075145
ELA: 1.4381 Jacobi stage: ITER: 15927 RES: 0.062572
ELA: 1.5447 Jacobi stage: ITER: 17375 RES: 0.050094
ELA: 1.6481 Jacobi stage: ITER: 18367 RES: 0.043014
ELA: 1.7547 Jacobi stage: ITER: 19487 RES: 0.036216
ELA: 1.8549 Jacobi stage: ITER: 20543 RES: 0.030793
ELA: 1.955 Jacobi stage: ITER: 21493 RES: 0.026604
ELA: 2.0581 Jacobi stage: ITER: 22075 RES: 0.024336
ELA: 2.1582 Jacobi stage: ITER: 23683 RES: 0.01901
ELA: 2.2614 Jacobi stage: ITER: 25153 RES: 0.015163
ELA: 2.3781 Jacobi stage: ITER: 25855 RES: 0.013617
ELA: 2.4814 Jacobi stage: ITER: 26013 RES: 0.013287
ELA: 2.5981 Jacobi stage: ITER: 26375 RES: 0.012572
ELA: 2.7114 Jacobi stage: ITER: 27039 RES: 0.011353
ELA: 2.8181 Jacobi stage: ITER: 27939 RES: 0.0098872
ELA: 2.9182 Jacobi stage: ITER: 28419 RES: 0.0091845
ELA: 3.0281 Jacobi stage: ITER: 29625 RES: 0.0076291
ELA: 3.1282 Jacobi stage: ITER: 30463 RES: 0.0067097
ELA: 3.2347 Jacobi stage: ITER: 31679 RES: 0.0055666
ELA: 3.3481 Jacobi stage: ITER: 32959 RES: 0.004573
ELA: 3.4482 Jacobi stage: ITER: 33755 RES: 0.0040467
ELA: 3.5547 Jacobi stage: ITER: 34479 RES: 0.0036208
ELA: 3.6581 Jacobi stage: ITER: 35127 RES: 0.0032778
ELA: 3.7681 Jacobi stage: ITER: 35807 RES: 0.0029527
ELA: 3.8747 Jacobi stage: ITER: 37179 RES: 0.0023916
ELA: 3.9847 Jacobi stage: ITER: 37963 RES: 0.0021203
ELA: 4.0947 Jacobi stage: ITER: 39295 RES: 0.001728
ELA: 4.2047 Jacobi stage: ITER: 40907 RES: 0.001349
ELA: 4.3081 Jacobi stage: ITER: 41787 RES: 0.0011784
ELA: 4.4082 Jacobi stage: ITER: 43649 RES: 0.00088502
ELA: 4.5083 Jacobi stage: ITER: 44875 RES: 0.00073334
ELA: 4.6085 Jacobi stage: ITER: 45265 RES: 0.00069048
ELA: 4.7086 Jacobi stage: ITER: 47207 RES: 0.00051255
ELA: 4.8087 Jacobi stage: ITER: 49133 RES: 0.00038118
ELA: 4.9088 Jacobi stage: ITER: 50027 RES: 0.00033237
ELA: 5.0089 Jacobi stage: ITER: 50515 RES: 0.00030837
ELA: 5.1091 Jacobi stage: ITER: 50843 RES: 0.00029322
ELA: 5.2092 Jacobi stage: ITER: 51751 RES: 0.00025504
ELA: 5.3247 Jacobi stage: ITER: 53287 RES: 0.00020144
ELA: 5.4281 Jacobi stage: ITER: 54779 RES: 0.00016019
ELA: 5.5282 Jacobi stage: ITER: 55867 RES: 0.00013553
ELA: 5.6283 Jacobi stage: ITER: 56617 RES: 0.00012075
ELA: 5.7284 Jacobi stage: ITER: 57359 RES: 0.00010777
ELA: 5.8285 Jacobi stage: ITER: 57931 RES: 9.8709e-05
ELA: 5.9287 Jacobi stage: ITER: 58815 RES: 8.6176e-05
ELA: 6.0288 Jacobi stage: ITER: 59393 RES: 7.8831e-05
ELA: 6.1289 Jacobi stage: ITER: 60389 RES: 6.7648e-05
ELA: 6.2291 Jacobi stage: ITER: 61845 RES: 5.4092e-05
ELA: 6.3293 Jacobi stage: ITER: 63261 RES: 4.3518e-05
ELA: 6.4314 Jacobi stage: ITER: 64725 RES: 3.4754e-05
ELA: 6.5315 Jacobi stage: ITER: 66005 RES: 2.8551e-05
ELA: 6.6317 Jacobi stage: ITER: 66771 RES: 2.539e-05
ELA: 6.7347 Jacobi stage: ITER: 67291 RES: 2.3441e-05
ELA: 6.8349 Jacobi stage: ITER: 67665 RES: 2.2125e-05
ELA: 6.935 Jacobi stage: ITER: 68443 RES: 1.9639e-05
ELA: 7.0352 Jacobi stage: ITER: 69303 RES: 1.7209e-05
ELA: 7.1353 Jacobi stage: ITER: 69615 RES: 1.6404e-05
ELA: 7.2355 Jacobi stage: ITER: 70685 RES: 1.3913e-05
ELA: 7.3356 Jacobi stage: ITER: 71343 RES: 1.258e-05
ELA: 7.4358 Jacobi stage: ITER: 72823 RES: 1.0022e-05
ELA: 7.5359 Jacobi stage: ITER: 74247 RES: 8.0528e-06
ELA: 7.636 Jacobi stage: ITER: 74535 RES: 7.7043e-06
ELA: 7.7362 Jacobi stage: ITER: 74667 RES: 7.5497e-06
ELA: 7.8381 Jacobi stage: ITER: 74863 RES: 7.3258e-06
ELA: 7.9382 Jacobi stage: ITER: 75015 RES: 7.1567e-06
ELA: 8.0383 Jacobi stage: ITER: 75235 RES: 6.9189e-06
ELA: 8.1384 Jacobi stage: ITER: 75397 RES: 6.7468e-06
ELA: 8.2386 Jacobi stage: ITER: 75545 RES: 6.5952e-06
ELA: 8.3387 Jacobi stage: ITER: 75911 RES: 6.2365e-06
ELA: 8.4414 Jacobi stage: ITER: 76367 RES: 5.8147e-06
ELA: 8.5481 Jacobi stage: ITER: 77061 RES: 5.2251e-06
ELA: 8.6547 Jacobi stage: ITER: 77291 RES: 5.0453e-06
ELA: 8.7548 Jacobi stage: ITER: 77505 RES: 4.8806e-06
ELA: 8.8614 Jacobi stage: ITER: 77769 RES: 4.6867e-06
ELA: 8.9681 Jacobi stage: ITER: 78409 RES: 4.2479e-06
ELA: 9.0714 Jacobi stage: ITER: 79173 RES: 3.7775e-06
ELA: 9.1814 Jacobi stage: ITER: 79519 RES: 3.5831e-06
ELA: 9.2847 Jacobi stage: ITER: 80187 RES: 3.2337e-06
ELA: 9.3881 Jacobi stage: ITER: 80501 RES: 3.0805e-06
ELA: 9.4914 Jacobi stage: ITER: 81039 RES: 2.837e-06
ELA: 9.5947 Jacobi stage: ITER: 81873 RES: 2.4951e-06
ELA: 9.7014 Jacobi stage: ITER: 83263 RES: 2.0161e-06
ELA: 9.8047 Jacobi stage: ITER: 84233 RES: 1.7365e-06
ELA: 9.9147 Jacobi stage: ITER: 84887 RES: 1.571e-06
ELA: 10.021 Jacobi stage: ITER: 85267 RES: 1.4819e-06
ELA: 10.122 Jacobi stage: ITER: 85537 RES: 1.4213e-06
ELA: 10.228 Jacobi stage: ITER: 86091 RES: 1.3057e-06
ELA: 10.338 Jacobi stage: ITER: 86633 RES: 1.2011e-06
ELA: 10.438 Jacobi stage: ITER: 87147 RES: 1.1102e-06
ELA: 10.541 Jacobi stage: ITER: 87473 RES: 1.0557e-06
ELA: 10.642 Jacobi stage: ITER: 87707 RES: 1.0187e-06
Vector x = [ 0.999999, 0.999999, 0.999998, 0.999999, 0.999999 ]
Setup with a preconditioner
Preconditioners can significantly improve the performance of iterative solvers. In the case of linear systems, they are used mainly with the Krylov subspace methods; they cannot be used with the stationary methods (TNL::Solvers::Linear::Jacobi and TNL::Solvers::Linear::SOR). The following example shows how to set up an iterative solver of linear systems with preconditioning.
```cpp
#include <iostream>
#include <memory>
#include <TNL/Containers/Vector.h>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Sequential.h>
#include <TNL/Devices/Cuda.h>
#include <TNL/Solvers/Linear/TFQMR.h>
#include <TNL/Solvers/Linear/Preconditioners/Diagonal.h>

template< typename Device >
void iterativeLinearSolverExample()
{
   // The matrix and the right-hand side are set up exactly as in the previous examples.
   using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;
   using Vector = TNL::Containers::Vector< double, Device >;
   const int size = 5;
   auto matrix_ptr = std::make_shared< MatrixType >();
   matrix_ptr->setDimensions( size, size );
   matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );
   auto f = [ = ] __cuda_callable__( typename MatrixType::RowView& row ) mutable
   {
      const int rowIdx = row.getRowIndex();
      if( rowIdx == 0 ) {
         row.setElement( 0, rowIdx, 2.5 );
         row.setElement( 1, rowIdx + 1, -1 );
      }
      else if( rowIdx == size - 1 ) {
         row.setElement( 0, rowIdx - 1, -1.0 );
         row.setElement( 1, rowIdx, 2.5 );
      }
      else {
         row.setElement( 0, rowIdx - 1, -1.0 );
         row.setElement( 1, rowIdx, 2.5 );
         row.setElement( 2, rowIdx + 1, -1.0 );
      }
   };
   matrix_ptr->forAllRows( f );
   std::cout << *matrix_ptr << std::endl;
   Vector x( size, 1.0 );
   Vector b( size );
   matrix_ptr->vectorProduct( x, b );
   x = 0.0;
   std::cout << "Vector b = " << b << std::endl;

   // Set up the diagonal (Jacobi) preconditioner and the TFQMR solver.
   using Preconditioner = TNL::Solvers::Linear::Preconditioners::Diagonal< MatrixType >;
   auto preconditioner_ptr = std::make_shared< Preconditioner >();
   preconditioner_ptr->update( matrix_ptr );
   TNL::Solvers::Linear::TFQMR< MatrixType > solver;
   solver.setMatrix( matrix_ptr );
   solver.setPreconditioner( preconditioner_ptr );
   solver.setConvergenceResidue( 1.0e-6 );
   solver.solve( b, x );
   std::cout << "Vector x = " << x << std::endl;
}

int main( int argc, char* argv[] )
{
   std::cout << "Solving linear system on host: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Sequential >();

#ifdef __CUDACC__
   std::cout << "Solving linear system on CUDA device: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Cuda >();
#endif
}
```
In this example, we solve the same problem as in all the other examples in this section; the only differences concerning the preconditioner are in the solver setup. Similarly to the matrix of the linear system, the preconditioner needs to be passed to the solver as a std::shared_ptr. When the preconditioner object is created, we have to initialize it using the update method, which has to be called every time the matrix of the linear system changes. This is important, for example, when solving time-dependent PDEs, but it does not happen in this example. Finally, we connect the solver with the preconditioner using the setPreconditioner method. Other preconditioners follow the same pattern, as sketched below.
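For instance, the ILU(0) preconditioner (currently available on CPU only) could be plugged in the same way. The following is only a sketch, assuming the matrix_ptr and solver objects from the example above; only the preconditioner type and header change:

```cpp
// Sketch: swapping in the ILU(0) preconditioner (currently CPU only).
#include <TNL/Solvers/Linear/Preconditioners/ILU0.h>

using ILU0 = TNL::Solvers::Linear::Preconditioners::ILU0< MatrixType >;
auto ilu0_ptr = std::make_shared< ILU0 >();
ilu0_ptr->update( matrix_ptr );        // re-run whenever the matrix changes
solver.setPreconditioner( ilu0_ptr );
```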
The result looks as follows:
Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]
Choosing the solver and preconditioner type at runtime
When developing a numerical solver, one often has to search for the combination of methods and algorithms that best fits the given requirements. To make this easier, TNL provides the functions TNL::Solvers::getLinearSolver and TNL::Solvers::getPreconditioner for selecting the linear solver and the preconditioner at runtime. The following example shows how to use these functions:
```cpp
#include <iostream>
#include <memory>
#include <TNL/Containers/Vector.h>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Sequential.h>
#include <TNL/Devices/Cuda.h>
#include <TNL/Solvers/LinearSolverTypeResolver.h>

template< typename Device >
void iterativeLinearSolverExample()
{
   // The matrix and the right-hand side are set up exactly as in the previous examples.
   using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;
   using Vector = TNL::Containers::Vector< double, Device >;
   const int size = 5;
   auto matrix_ptr = std::make_shared< MatrixType >();
   matrix_ptr->setDimensions( size, size );
   matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );
   auto f = [ = ] __cuda_callable__( typename MatrixType::RowView& row ) mutable
   {
      const int rowIdx = row.getRowIndex();
      if( rowIdx == 0 ) {
         row.setElement( 0, rowIdx, 2.5 );
         row.setElement( 1, rowIdx + 1, -1 );
      }
      else if( rowIdx == size - 1 ) {
         row.setElement( 0, rowIdx - 1, -1.0 );
         row.setElement( 1, rowIdx, 2.5 );
      }
      else {
         row.setElement( 0, rowIdx - 1, -1.0 );
         row.setElement( 1, rowIdx, 2.5 );
         row.setElement( 2, rowIdx + 1, -1.0 );
      }
   };
   matrix_ptr->forAllRows( f );
   std::cout << *matrix_ptr << std::endl;
   Vector x( size, 1.0 );
   Vector b( size );
   matrix_ptr->vectorProduct( x, b );
   x = 0.0;
   std::cout << "Vector b = " << b << std::endl;

   // Select the solver and the preconditioner by name at runtime.
   auto solver_ptr = TNL::Solvers::getLinearSolver< MatrixType >( "tfqmr" );
   auto preconditioner_ptr = TNL::Solvers::getPreconditioner< MatrixType >( "diagonal" );
   preconditioner_ptr->update( matrix_ptr );
   solver_ptr->setMatrix( matrix_ptr );
   solver_ptr->setPreconditioner( preconditioner_ptr );
   solver_ptr->setConvergenceResidue( 1.0e-6 );
   solver_ptr->solve( b, x );
   std::cout << "Vector x = " << x << std::endl;
}

int main( int argc, char* argv[] )
{
   std::cout << "Solving linear system on host: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Sequential >();

#ifdef __CUDACC__
   std::cout << "Solving linear system on CUDA device: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Cuda >();
#endif
}
```
We stay with the same problem; the only changes are in the solver setup. We first use TNL::Solvers::getLinearSolver to obtain a shared pointer holding the solver and then TNL::Solvers::getPreconditioner to obtain a shared pointer holding the preconditioner. The rest of the code is the same as in the previous examples, except that we work with the pointer solver_ptr instead of a direct instance solver of the solver type.
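Because the solver and the preconditioner are identified by plain strings, the choice can easily be exposed to the user. The following sketch (not part of the example above) takes the names from the command line; it assumes the MatrixType alias and the argc/argv parameters from the example, and the defaults "tfqmr" and "diagonal" are illustrative:

```cpp
// Sketch: choosing the solver and preconditioner names at runtime from the command line.
const std::string solverName = ( argc > 1 ) ? argv[ 1 ] : "tfqmr";
const std::string preconditionerName = ( argc > 2 ) ? argv[ 2 ] : "diagonal";
auto solver_ptr = TNL::Solvers::getLinearSolver< MatrixType >( solverName );
auto preconditioner_ptr = TNL::Solvers::getPreconditioner< MatrixType >( preconditionerName );
```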
The result looks as follows:
Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]