Template Numerical Library version\ main:978fbd2
Linear solvers

# Introduction

Solvers of linear systems are among the most important algorithms in scientific computing. TNL offers the following iterative methods:

1. Stationary methods
2. Krylov subspace methods
    1. Conjugate gradient method, CG (TNL::Solvers::Linear::CG)
    2. Biconjugate gradient stabilized method, BICGStab (TNL::Solvers::Linear::BICGStab)
    3. Biconjugate gradient stabilized method, BICGStab(l) (TNL::Solvers::Linear::BICGStabL)
    4. Transpose-free quasi-minimal residual method, TFQMR (TNL::Solvers::Linear::TFQMR)
    5. Generalized minimal residual method, GMRES (TNL::Solvers::Linear::GMRES) with various methods of orthogonalization:
        1. Classical Gram-Schmidt, CGS
        2. Classical Gram-Schmidt with reorthogonalization, CGSR
        3. Modified Gram-Schmidt, MGS
        4. Modified Gram-Schmidt with reorthogonalization, MGSR
        5. Compact WY form of the Householder reflections, CWY

The Krylov subspace solvers (but not the stationary solvers TNL::Solvers::Linear::Jacobi and TNL::Solvers::Linear::SOR) can be combined with the following preconditioners:

1. Diagonal or Jacobi (TNL::Solvers::Linear::Preconditioners::Diagonal)
2. ILU (incomplete LU factorization), currently available on CPU only

# Iterative solvers of linear systems

## Basic setup

All iterative solvers for linear systems can be found in the namespace TNL::Solvers::Linear. The following example shows the use of the iterative solvers:

```cpp
#include <iostream>
#include <memory>
#include <TNL/Containers/Vector.h>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Sequential.h>
#include <TNL/Devices/Cuda.h>
#include <TNL/Solvers/Linear/TFQMR.h>

template< typename Device >
void
iterativeLinearSolverExample()
{
   /***
    * Set the following matrix (dots represent zero matrix elements):
    *
    *   /  2.5 -1    .    .    .  \
    *   | -1    2.5 -1    .    .  |
    *   |  .   -1    2.5 -1    .  |
    *   |  .    .   -1    2.5 -1  |
    *   \  .    .    .   -1    2.5 /
    */
   using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;
   using Vector = TNL::Containers::Vector< double, Device >;
   const int size( 5 );
   auto matrix_ptr = std::make_shared< MatrixType >();
   matrix_ptr->setDimensions( size, size );
   matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );

   auto f = [ = ] __cuda_callable__( typename MatrixType::RowView & row ) mutable
   {
      const int rowIdx = row.getRowIndex();
      if( rowIdx == 0 ) {
         row.setElement( 0, rowIdx, 2.5 );     // diagonal element
         row.setElement( 1, rowIdx + 1, -1 );  // element above the diagonal
      }
      else if( rowIdx == size - 1 ) {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
      }
      else {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
         row.setElement( 2, rowIdx + 1, -1.0 );  // element above the diagonal
      }
   };

   /***
    * Set the matrix elements.
    */
   matrix_ptr->forAllRows( f );
   std::cout << *matrix_ptr << std::endl;

   /***
    * Set the right-hand side vector.
    */
   Vector x( size, 1.0 );
   Vector b( size );
   matrix_ptr->vectorProduct( x, b );
   x = 0.0;
   std::cout << "Vector b = " << b << std::endl;

   /***
    * Solve the linear system.
    */
   using LinearSolver = TNL::Solvers::Linear::TFQMR< MatrixType >;
   LinearSolver solver;
   solver.setMatrix( matrix_ptr );
   solver.setConvergenceResidue( 1.0e-6 );
   solver.solve( b, x );
   std::cout << "Vector x = " << x << std::endl;
}

int
main( int argc, char* argv[] )
{
   std::cout << "Solving linear system on host: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Sequential >();

#ifdef __CUDACC__
   std::cout << "Solving linear system on CUDA device: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Cuda >();
#endif
}
```

In this example we solve a linear system $$A \vec x = \vec b$$ where

$$A = \left( \begin{array}{ccccc} 2.5 & -1 & & & \\ -1 & 2.5 & -1 & & \\ & -1 & 2.5 & -1 & \\ & & -1 & 2.5 & -1 \\ & & & -1 & 2.5 \end{array} \right)$$

The elements of the matrix $$A$$ are set using the method TNL::Matrices::SparseMatrix::forAllRows. In this example we use a sparse matrix, but any other matrix type can be used as well (see the namespace TNL::Matrices). Next, we set the vector $$\vec x = ( 1, 1, 1, 1, 1 )^T$$ and multiply it with the matrix $$A$$ to obtain the right-hand side vector $$\vec b = ( 1.5, 0.5, 0.5, 0.5, 1.5 )^T$$, so that the exact solution of the system is $$\vec x = ( 1, 1, 1, 1, 1 )^T$$. Finally, we reset the vector $$\vec x$$ to zero to provide the initial guess.

To solve the linear system in the example, we use the TFQMR solver. Other solvers can be used as well (see the namespace TNL::Solvers::Linear). The solver needs only one template parameter, which is the matrix type. We create an instance of the solver and set the matrix of the linear system. Note that the matrix is passed to the solver as a std::shared_ptr. Then we set the stopping criterion for the iterative method in terms of the relative residue, i.e. $$\lVert \vec b - A \vec x \rVert / \lVert \vec b \rVert$$. The solver is executed by calling the TNL::Solvers::Linear::LinearSolver::solve method, which accepts the right-hand side vector $$\vec b$$ and the solution vector $$\vec x$$.

The result looks as follows:

Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]

## Setup with a solver monitor

Solving large linear systems may take a lot of time. In such situations, it is useful to monitor the convergence or the status of the solver in general. For this purpose, TNL provides a solver monitor which can show various metrics in real time, such as the current number of iterations and the current residue of the approximate solution. The solver monitor runs in a separate thread and refreshes the solver status with a configurable refresh rate (once per 500 ms by default). The use of the solver monitor is demonstrated in the following example.

```cpp
#include <iostream>
#include <memory>
#include <chrono>
#include <TNL/Containers/Vector.h>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Sequential.h>
#include <TNL/Devices/Cuda.h>
#include <TNL/Solvers/Linear/Jacobi.h>
#include <TNL/Solvers/IterativeSolverMonitor.h>

template< typename Device >
void
iterativeLinearSolverExample()
{
   /***
    * Set the following matrix (dots represent zero matrix elements):
    *
    *   /  2.5 -1    .    .    .  \
    *   | -1    2.5 -1    .    .  |
    *   |  .   -1    2.5 -1    .  |
    *   |  .    .   -1    2.5 -1  |
    *   \  .    .    .   -1    2.5 /
    */
   using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;
   using Vector = TNL::Containers::Vector< double, Device >;
   const int size( 5 );
   auto matrix_ptr = std::make_shared< MatrixType >();
   matrix_ptr->setDimensions( size, size );
   matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );

   auto f = [ = ] __cuda_callable__( typename MatrixType::RowView & row ) mutable
   {
      const int rowIdx = row.getRowIndex();
      if( rowIdx == 0 ) {
         row.setElement( 0, rowIdx, 2.5 );     // diagonal element
         row.setElement( 1, rowIdx + 1, -1 );  // element above the diagonal
      }
      else if( rowIdx == size - 1 ) {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
      }
      else {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
         row.setElement( 2, rowIdx + 1, -1.0 );  // element above the diagonal
      }
   };

   /***
    * Set the matrix elements.
    */
   matrix_ptr->forAllRows( f );
   std::cout << *matrix_ptr << std::endl;

   /***
    * Set the right-hand side vector.
    */
   Vector x( size, 1.0 );
   Vector b( size );
   matrix_ptr->vectorProduct( x, b );
   x = 0.0;
   std::cout << "Vector b = " << b << std::endl;

   /***
    * Setup solver of the linear system.
    */
   using LinearSolver = TNL::Solvers::Linear::Jacobi< MatrixType >;
   LinearSolver solver;
   solver.setMatrix( matrix_ptr );
   solver.setOmega( 0.0005 );

   /***
    * Setup monitor of the iterative solver.
    */
   using IterativeSolverMonitorType = TNL::Solvers::IterativeSolverMonitor< double, int >;
   IterativeSolverMonitorType monitor;
   TNL::Solvers::SolverMonitorThread monitorThread( monitor );
   monitor.setRefreshRate( 10 );  // refresh rate in milliseconds
   monitor.setVerbose( 1 );
   monitor.setStage( "Jacobi stage:" );
   solver.setSolverMonitor( monitor );
   solver.setConvergenceResidue( 1.0e-6 );
   solver.solve( b, x );
   monitor.stopMainLoop();
   std::cout << "Vector x = " << x << std::endl;
}

int
main( int argc, char* argv[] )
{
   std::cout << "Solving linear system on host: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Sequential >();

#ifdef __CUDACC__
   std::cout << "Solving linear system on CUDA device: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Cuda >();
#endif
}
```

First, we set up the same linear system as in the previous example, create an instance of the Jacobi solver, and pass the matrix of the linear system to it. Then we set the relaxation parameter $$\omega$$ of the Jacobi solver to 0.0005 in order to artificially slow down the convergence, so that the monitor has some iterations to show. Next, we create an instance of the solver monitor and a special thread for the monitor (an instance of the TNL::Solvers::SolverMonitorThread class). The monitor is configured with the following methods: setRefreshRate sets the refresh rate in milliseconds, setVerbose sets the verbosity level, and setStage sets the label shown in the monitor output.

Next, we call TNL::Solvers::IterativeSolver::setSolverMonitor to connect the solver with the monitor and we set the convergence criterion based on the relative residue. Finally, we start the solver using the TNL::Solvers::Linear::Jacobi::solve method and when the solver finishes, we stop the monitor using TNL::Solvers::SolverMonitor::stopMainLoop.

The result looks as follows:

Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Jacobi stage: ITER: 8211 RES: 0.20495
Jacobi stage: ITER: 18919 RES: 0.039517
Jacobi stage: ITER: 27705 RES: 0.010246
Jacobi stage: ITER: 36297 RES: 0.0027377
Jacobi stage: ITER: 43951 RES: 0.00084517
Jacobi stage: ITER: 48759 RES: 0.00040384
Jacobi stage: ITER: 54191 RES: 0.00017533
Jacobi stage: ITER: 58935 RES: 8.4603e-05
Jacobi stage: ITER: 64253 RES: 3.7368e-05
Jacobi stage: ITER: 73553 RES: 8.9559e-06
Jacobi stage: ITER: 81933 RES: 2.4722e-06
Vector x = [ 0.999999, 0.999999, 0.999998, 0.999999, 0.999999 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Jacobi stage: ITER: 659 RES: 0.81186
Jacobi stage: ITER: 1175 RES: 0.70565
Jacobi stage: ITER: 2059 RES: 0.57496
Jacobi stage: ITER: 2715 RES: 0.50385
Jacobi stage: ITER: 3173 RES: 0.46241
Jacobi stage: ITER: 3673 RES: 0.42314
Jacobi stage: ITER: 4535 RES: 0.36594
Jacobi stage: ITER: 4821 RES: 0.34908
Jacobi stage: ITER: 5279 RES: 0.32429
Jacobi stage: ITER: 5559 RES: 0.31011
Jacobi stage: ITER: 5855 RES: 0.29588
Jacobi stage: ITER: 6167 RES: 0.28168
Jacobi stage: ITER: 6263 RES: 0.27746
Jacobi stage: ITER: 6563 RES: 0.26472
Jacobi stage: ITER: 7113 RES: 0.24289
Jacobi stage: ITER: 7439 RES: 0.23097
Jacobi stage: ITER: 7891 RES: 0.21535
Jacobi stage: ITER: 8127 RES: 0.20763
Jacobi stage: ITER: 8603 RES: 0.19292
Jacobi stage: ITER: 9193 RES: 0.17609
Jacobi stage: ITER: 9525 RES: 0.16731
Jacobi stage: ITER: 9619 RES: 0.16496
Jacobi stage: ITER: 9929 RES: 0.15723
Jacobi stage: ITER: 10085 RES: 0.1535
Jacobi stage: ITER: 10227 RES: 0.15023
Jacobi stage: ITER: 10723 RES: 0.1392
Jacobi stage: ITER: 10853 RES: 0.1364
Jacobi stage: ITER: 11241 RES: 0.1285
Jacobi stage: ITER: 11679 RES: 0.12017
Jacobi stage: ITER: 12099 RES: 0.11266
Jacobi stage: ITER: 12331 RES: 0.10872
Jacobi stage: ITER: 12819 RES: 0.10086
Jacobi stage: ITER: 13155 RES: 0.095788
Jacobi stage: ITER: 13593 RES: 0.089527
Jacobi stage: ITER: 14599 RES: 0.076731
Jacobi stage: ITER: 15231 RES: 0.069632
Jacobi stage: ITER: 15665 RES: 0.065121
Jacobi stage: ITER: 15947 RES: 0.06238
Jacobi stage: ITER: 16479 RES: 0.057485
Jacobi stage: ITER: 16791 RES: 0.054795
Jacobi stage: ITER: 16975 RES: 0.053268
Jacobi stage: ITER: 17311 RES: 0.050589
Jacobi stage: ITER: 17527 RES: 0.048938
Jacobi stage: ITER: 17871 RES: 0.046419
Jacobi stage: ITER: 18591 RES: 0.041559
Jacobi stage: ITER: 18729 RES: 0.040675
Jacobi stage: ITER: 19047 RES: 0.038748
Jacobi stage: ITER: 20115 RES: 0.032885
Jacobi stage: ITER: 21109 RES: 0.02822
Jacobi stage: ITER: 22753 RES: 0.021923
Jacobi stage: ITER: 23345 RES: 0.020017
Jacobi stage: ITER: 24339 RES: 0.017188
Jacobi stage: ITER: 24663 RES: 0.016354
Jacobi stage: ITER: 25603 RES: 0.014155
Jacobi stage: ITER: 26387 RES: 0.012549
Jacobi stage: ITER: 27373 RES: 0.010782
Jacobi stage: ITER: 28003 RES: 0.0097905
Jacobi stage: ITER: 29171 RES: 0.0081826
Jacobi stage: ITER: 30059 RES: 0.0071393
Jacobi stage: ITER: 30319 RES: 0.0068598
Jacobi stage: ITER: 30531 RES: 0.00664
Jacobi stage: ITER: 30931 RES: 0.0062443
Jacobi stage: ITER: 31167 RES: 0.006022
Jacobi stage: ITER: 31403 RES: 0.0058076
Jacobi stage: ITER: 32127 RES: 0.0051964
Jacobi stage: ITER: 33321 RES: 0.0043243
Jacobi stage: ITER: 33919 RES: 0.003946
Jacobi stage: ITER: 34657 RES: 0.003522
Jacobi stage: ITER: 35535 RES: 0.0030786
Jacobi stage: ITER: 36483 RES: 0.0026615
Jacobi stage: ITER: 37299 RES: 0.0023479
Jacobi stage: ITER: 38503 RES: 0.0019515
Jacobi stage: ITER: 38807 RES: 0.0018625
Jacobi stage: ITER: 39311 RES: 0.0017237
Jacobi stage: ITER: 41095 RES: 0.0013106
Jacobi stage: ITER: 43207 RES: 0.00094749
Jacobi stage: ITER: 45013 RES: 0.00071773
Jacobi stage: ITER: 46561 RES: 0.00056585
Jacobi stage: ITER: 48415 RES: 0.00042575
Jacobi stage: ITER: 50563 RES: 0.0003061
Jacobi stage: ITER: 51795 RES: 0.00025333
Jacobi stage: ITER: 53327 RES: 0.00020021
Jacobi stage: ITER: 54493 RES: 0.00016733
Jacobi stage: ITER: 55605 RES: 0.00014106
Jacobi stage: ITER: 56699 RES: 0.00011927
Jacobi stage: ITER: 57475 RES: 0.00010587
Jacobi stage: ITER: 59233 RES: 8.0793e-05
Jacobi stage: ITER: 60167 RES: 7.0016e-05
Jacobi stage: ITER: 61063 RES: 6.1014e-05
Jacobi stage: ITER: 61961 RES: 5.3136e-05
Jacobi stage: ITER: 62581 RES: 4.8309e-05
Jacobi stage: ITER: 63359 RES: 4.2881e-05
Jacobi stage: ITER: 63663 RES: 4.0925e-05
Jacobi stage: ITER: 64131 RES: 3.8086e-05
Jacobi stage: ITER: 64279 RES: 3.723e-05
Jacobi stage: ITER: 64767 RES: 3.4541e-05
Jacobi stage: ITER: 64987 RES: 3.3394e-05
Jacobi stage: ITER: 65731 RES: 2.9787e-05
Jacobi stage: ITER: 66337 RES: 2.7132e-05
Jacobi stage: ITER: 67403 RES: 2.3041e-05
Jacobi stage: ITER: 68339 RES: 1.9955e-05
Jacobi stage: ITER: 68619 RES: 1.9115e-05
Jacobi stage: ITER: 68941 RES: 1.8187e-05
Jacobi stage: ITER: 69691 RES: 1.6213e-05
Jacobi stage: ITER: 70617 RES: 1.4059e-05
Jacobi stage: ITER: 71695 RES: 1.1918e-05
Jacobi stage: ITER: 73719 RES: 8.7331e-06
Jacobi stage: ITER: 75413 RES: 6.7302e-06
Jacobi stage: ITER: 77335 RES: 5.0113e-06
Jacobi stage: ITER: 79021 RES: 3.8668e-06
Jacobi stage: ITER: 81085 RES: 2.8162e-06
Jacobi stage: ITER: 83141 RES: 2.0536e-06
Jacobi stage: ITER: 84923 RES: 1.5623e-06
Jacobi stage: ITER: 86593 RES: 1.2085e-06
Vector x = [ 0.999999, 0.999999, 0.999998, 0.999999, 0.999999 ]

The monitoring of the solver can be improved by showing the time elapsed since the beginning of the computation, as demonstrated in the following example:

```cpp
#include <iostream>
#include <memory>
#include <chrono>
#include <TNL/Timer.h>
#include <TNL/Containers/Vector.h>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Sequential.h>
#include <TNL/Devices/Cuda.h>
#include <TNL/Solvers/Linear/Jacobi.h>
#include <TNL/Solvers/IterativeSolverMonitor.h>

template< typename Device >
void
iterativeLinearSolverExample()
{
   /***
    * Set the following matrix (dots represent zero matrix elements):
    *
    *   /  2.5 -1    .    .    .  \
    *   | -1    2.5 -1    .    .  |
    *   |  .   -1    2.5 -1    .  |
    *   |  .    .   -1    2.5 -1  |
    *   \  .    .    .   -1    2.5 /
    */
   using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;
   using Vector = TNL::Containers::Vector< double, Device >;
   const int size( 5 );
   auto matrix_ptr = std::make_shared< MatrixType >();
   matrix_ptr->setDimensions( size, size );
   matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );

   auto f = [ = ] __cuda_callable__( typename MatrixType::RowView & row ) mutable
   {
      const int rowIdx = row.getRowIndex();
      if( rowIdx == 0 ) {
         row.setElement( 0, rowIdx, 2.5 );     // diagonal element
         row.setElement( 1, rowIdx + 1, -1 );  // element above the diagonal
      }
      else if( rowIdx == size - 1 ) {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
      }
      else {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
         row.setElement( 2, rowIdx + 1, -1.0 );  // element above the diagonal
      }
   };

   /***
    * Set the matrix elements.
    */
   matrix_ptr->forAllRows( f );
   std::cout << *matrix_ptr << std::endl;

   /***
    * Set the right-hand side vector.
    */
   Vector x( size, 1.0 );
   Vector b( size );
   matrix_ptr->vectorProduct( x, b );
   x = 0.0;
   std::cout << "Vector b = " << b << std::endl;

   /***
    * Setup solver of the linear system.
    */
   using LinearSolver = TNL::Solvers::Linear::Jacobi< MatrixType >;
   LinearSolver solver;
   solver.setMatrix( matrix_ptr );
   solver.setOmega( 0.0005 );

   /***
    * Setup monitor of the iterative solver.
    */
   using IterativeSolverMonitorType = TNL::Solvers::IterativeSolverMonitor< double, int >;
   IterativeSolverMonitorType monitor;
   TNL::Solvers::SolverMonitorThread monitorThread( monitor );
   monitor.setRefreshRate( 10 );  // refresh rate in milliseconds
   monitor.setVerbose( 1 );
   monitor.setStage( "Jacobi stage:" );
   TNL::Timer timer;
   monitor.setTimer( timer );
   timer.start();
   solver.setSolverMonitor( monitor );
   solver.setConvergenceResidue( 1.0e-6 );
   solver.solve( b, x );
   monitor.stopMainLoop();
   std::cout << "Vector x = " << x << std::endl;
}

int
main( int argc, char* argv[] )
{
   std::cout << "Solving linear system on host: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Sequential >();

#ifdef __CUDACC__
   std::cout << "Solving linear system on CUDA device: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Cuda >();
#endif
}
```

The only changes are around the lines where we create an instance of TNL::Timer, connect it with the monitor using TNL::Solvers::SolverMonitor::setTimer and start the timer with TNL::Timer::start.

The result looks as follows:

Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
ELA:0.003567
ELA: 0.10371 Jacobi stage: ITER: 5213 RES: 0.32764
ELA: 0.20384 Jacobi stage: ITER: 10419 RES: 0.14586
ELA: 0.30397 Jacobi stage: ITER: 15365 RES: 0.068193
ELA: 0.40409 Jacobi stage: ITER: 20471 RES: 0.031135
ELA: 0.5042 Jacobi stage: ITER: 26989 RES: 0.011437
ELA: 0.60432 Jacobi stage: ITER: 34557 RES: 0.0035766
ELA: 0.70445 Jacobi stage: ITER: 44731 RES: 0.00074974
ELA: 0.80458 Jacobi stage: ITER: 51973 RES: 0.00024642
ELA: 0.91353 Jacobi stage: ITER: 64535 RES: 3.5795e-05
ELA: 1.0137 Jacobi stage: ITER: 75935 RES: 6.2136e-06
ELA: 1.1169 Jacobi stage: ITER: 86443 RES: 1.237e-06
Vector x = [ 0.999999, 0.999999, 0.999998, 0.999999, 0.999999 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
ELA:0.004734
ELA: 0.10485 Jacobi stage: ITER: 1053 RES: 0.72788
ELA: 0.20806 Jacobi stage: ITER: 2115 RES: 0.56821
ELA: 0.30818 Jacobi stage: ITER: 3193 RES: 0.46074
ELA: 0.41139 Jacobi stage: ITER: 4063 RES: 0.39592
ELA: 0.51152 Jacobi stage: ITER: 4463 RES: 0.37032
ELA: 0.61165 Jacobi stage: ITER: 5795 RES: 0.2987
ELA: 0.71472 Jacobi stage: ITER: 6763 RES: 0.25658
ELA: 0.81485 Jacobi stage: ITER: 7787 RES: 0.21884
ELA: 0.91806 Jacobi stage: ITER: 8899 RES: 0.18431
ELA: 1.0182 Jacobi stage: ITER: 9863 RES: 0.15888
ELA: 1.1247 Jacobi stage: ITER: 10779 RES: 0.138
ELA: 1.2249 Jacobi stage: ITER: 12717 RES: 0.10242
ELA: 1.325 Jacobi stage: ITER: 14735 RES: 0.075145
ELA: 1.4381 Jacobi stage: ITER: 15927 RES: 0.062572
ELA: 1.5447 Jacobi stage: ITER: 17375 RES: 0.050094
ELA: 1.6481 Jacobi stage: ITER: 18367 RES: 0.043014
ELA: 1.7547 Jacobi stage: ITER: 19487 RES: 0.036216
ELA: 1.8549 Jacobi stage: ITER: 20543 RES: 0.030793
ELA: 1.955 Jacobi stage: ITER: 21493 RES: 0.026604
ELA: 2.0581 Jacobi stage: ITER: 22075 RES: 0.024336
ELA: 2.1582 Jacobi stage: ITER: 23683 RES: 0.01901
ELA: 2.2614 Jacobi stage: ITER: 25153 RES: 0.015163
ELA: 2.3781 Jacobi stage: ITER: 25855 RES: 0.013617
ELA: 2.4814 Jacobi stage: ITER: 26013 RES: 0.013287
ELA: 2.5981 Jacobi stage: ITER: 26375 RES: 0.012572
ELA: 2.7114 Jacobi stage: ITER: 27039 RES: 0.011353
ELA: 2.8181 Jacobi stage: ITER: 27939 RES: 0.0098872
ELA: 2.9182 Jacobi stage: ITER: 28419 RES: 0.0091845
ELA: 3.0281 Jacobi stage: ITER: 29625 RES: 0.0076291
ELA: 3.1282 Jacobi stage: ITER: 30463 RES: 0.0067097
ELA: 3.2347 Jacobi stage: ITER: 31679 RES: 0.0055666
ELA: 3.3481 Jacobi stage: ITER: 32959 RES: 0.004573
ELA: 3.4482 Jacobi stage: ITER: 33755 RES: 0.0040467
ELA: 3.5547 Jacobi stage: ITER: 34479 RES: 0.0036208
ELA: 3.6581 Jacobi stage: ITER: 35127 RES: 0.0032778
ELA: 3.7681 Jacobi stage: ITER: 35807 RES: 0.0029527
ELA: 3.8747 Jacobi stage: ITER: 37179 RES: 0.0023916
ELA: 3.9847 Jacobi stage: ITER: 37963 RES: 0.0021203
ELA: 4.0947 Jacobi stage: ITER: 39295 RES: 0.001728
ELA: 4.2047 Jacobi stage: ITER: 40907 RES: 0.001349
ELA: 4.3081 Jacobi stage: ITER: 41787 RES: 0.0011784
ELA: 4.4082 Jacobi stage: ITER: 43649 RES: 0.00088502
ELA: 4.5083 Jacobi stage: ITER: 44875 RES: 0.00073334
ELA: 4.6085 Jacobi stage: ITER: 45265 RES: 0.00069048
ELA: 4.7086 Jacobi stage: ITER: 47207 RES: 0.00051255
ELA: 4.8087 Jacobi stage: ITER: 49133 RES: 0.00038118
ELA: 4.9088 Jacobi stage: ITER: 50027 RES: 0.00033237
ELA: 5.0089 Jacobi stage: ITER: 50515 RES: 0.00030837
ELA: 5.1091 Jacobi stage: ITER: 50843 RES: 0.00029322
ELA: 5.2092 Jacobi stage: ITER: 51751 RES: 0.00025504
ELA: 5.3247 Jacobi stage: ITER: 53287 RES: 0.00020144
ELA: 5.4281 Jacobi stage: ITER: 54779 RES: 0.00016019
ELA: 5.5282 Jacobi stage: ITER: 55867 RES: 0.00013553
ELA: 5.6283 Jacobi stage: ITER: 56617 RES: 0.00012075
ELA: 5.7284 Jacobi stage: ITER: 57359 RES: 0.00010777
ELA: 5.8285 Jacobi stage: ITER: 57931 RES: 9.8709e-05
ELA: 5.9287 Jacobi stage: ITER: 58815 RES: 8.6176e-05
ELA: 6.0288 Jacobi stage: ITER: 59393 RES: 7.8831e-05
ELA: 6.1289 Jacobi stage: ITER: 60389 RES: 6.7648e-05
ELA: 6.2291 Jacobi stage: ITER: 61845 RES: 5.4092e-05
ELA: 6.3293 Jacobi stage: ITER: 63261 RES: 4.3518e-05
ELA: 6.4314 Jacobi stage: ITER: 64725 RES: 3.4754e-05
ELA: 6.5315 Jacobi stage: ITER: 66005 RES: 2.8551e-05
ELA: 6.6317 Jacobi stage: ITER: 66771 RES: 2.539e-05
ELA: 6.7347 Jacobi stage: ITER: 67291 RES: 2.3441e-05
ELA: 6.8349 Jacobi stage: ITER: 67665 RES: 2.2125e-05
ELA: 6.935 Jacobi stage: ITER: 68443 RES: 1.9639e-05
ELA: 7.0352 Jacobi stage: ITER: 69303 RES: 1.7209e-05
ELA: 7.1353 Jacobi stage: ITER: 69615 RES: 1.6404e-05
ELA: 7.2355 Jacobi stage: ITER: 70685 RES: 1.3913e-05
ELA: 7.3356 Jacobi stage: ITER: 71343 RES: 1.258e-05
ELA: 7.4358 Jacobi stage: ITER: 72823 RES: 1.0022e-05
ELA: 7.5359 Jacobi stage: ITER: 74247 RES: 8.0528e-06
ELA: 7.636 Jacobi stage: ITER: 74535 RES: 7.7043e-06
ELA: 7.7362 Jacobi stage: ITER: 74667 RES: 7.5497e-06
ELA: 7.8381 Jacobi stage: ITER: 74863 RES: 7.3258e-06
ELA: 7.9382 Jacobi stage: ITER: 75015 RES: 7.1567e-06
ELA: 8.0383 Jacobi stage: ITER: 75235 RES: 6.9189e-06
ELA: 8.1384 Jacobi stage: ITER: 75397 RES: 6.7468e-06
ELA: 8.2386 Jacobi stage: ITER: 75545 RES: 6.5952e-06
ELA: 8.3387 Jacobi stage: ITER: 75911 RES: 6.2365e-06
ELA: 8.4414 Jacobi stage: ITER: 76367 RES: 5.8147e-06
ELA: 8.5481 Jacobi stage: ITER: 77061 RES: 5.2251e-06
ELA: 8.6547 Jacobi stage: ITER: 77291 RES: 5.0453e-06
ELA: 8.7548 Jacobi stage: ITER: 77505 RES: 4.8806e-06
ELA: 8.8614 Jacobi stage: ITER: 77769 RES: 4.6867e-06
ELA: 8.9681 Jacobi stage: ITER: 78409 RES: 4.2479e-06
ELA: 9.0714 Jacobi stage: ITER: 79173 RES: 3.7775e-06
ELA: 9.1814 Jacobi stage: ITER: 79519 RES: 3.5831e-06
ELA: 9.2847 Jacobi stage: ITER: 80187 RES: 3.2337e-06
ELA: 9.3881 Jacobi stage: ITER: 80501 RES: 3.0805e-06
ELA: 9.4914 Jacobi stage: ITER: 81039 RES: 2.837e-06
ELA: 9.5947 Jacobi stage: ITER: 81873 RES: 2.4951e-06
ELA: 9.7014 Jacobi stage: ITER: 83263 RES: 2.0161e-06
ELA: 9.8047 Jacobi stage: ITER: 84233 RES: 1.7365e-06
ELA: 9.9147 Jacobi stage: ITER: 84887 RES: 1.571e-06
ELA: 10.021 Jacobi stage: ITER: 85267 RES: 1.4819e-06
ELA: 10.122 Jacobi stage: ITER: 85537 RES: 1.4213e-06
ELA: 10.228 Jacobi stage: ITER: 86091 RES: 1.3057e-06
ELA: 10.338 Jacobi stage: ITER: 86633 RES: 1.2011e-06
ELA: 10.438 Jacobi stage: ITER: 87147 RES: 1.1102e-06
ELA: 10.541 Jacobi stage: ITER: 87473 RES: 1.0557e-06
ELA: 10.642 Jacobi stage: ITER: 87707 RES: 1.0187e-06
Vector x = [ 0.999999, 0.999999, 0.999998, 0.999999, 0.999999 ]

## Setup with a preconditioner

Preconditioning can significantly improve the performance of iterative solvers. For linear systems, preconditioners are used mainly with the Krylov subspace methods; they cannot be used with the stationary methods (TNL::Solvers::Linear::Jacobi and TNL::Solvers::Linear::SOR). The following example shows how to set up an iterative solver of linear systems with preconditioning.

```cpp
#include <iostream>
#include <memory>
#include <TNL/Containers/Vector.h>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Sequential.h>
#include <TNL/Devices/Cuda.h>
#include <TNL/Solvers/Linear/TFQMR.h>
#include <TNL/Solvers/Linear/Preconditioners/Diagonal.h>

template< typename Device >
void
iterativeLinearSolverExample()
{
   /***
    * Set the following matrix (dots represent zero matrix elements):
    *
    *   /  2.5 -1    .    .    .  \
    *   | -1    2.5 -1    .    .  |
    *   |  .   -1    2.5 -1    .  |
    *   |  .    .   -1    2.5 -1  |
    *   \  .    .    .   -1    2.5 /
    */
   using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;
   using Vector = TNL::Containers::Vector< double, Device >;
   const int size( 5 );
   auto matrix_ptr = std::make_shared< MatrixType >();
   matrix_ptr->setDimensions( size, size );
   matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );

   auto f = [ = ] __cuda_callable__( typename MatrixType::RowView & row ) mutable
   {
      const int rowIdx = row.getRowIndex();
      if( rowIdx == 0 ) {
         row.setElement( 0, rowIdx, 2.5 );     // diagonal element
         row.setElement( 1, rowIdx + 1, -1 );  // element above the diagonal
      }
      else if( rowIdx == size - 1 ) {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
      }
      else {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
         row.setElement( 2, rowIdx + 1, -1.0 );  // element above the diagonal
      }
   };

   /***
    * Set the matrix elements.
    */
   matrix_ptr->forAllRows( f );
   std::cout << *matrix_ptr << std::endl;

   /***
    * Set the right-hand side vector.
    */
   Vector x( size, 1.0 );
   Vector b( size );
   matrix_ptr->vectorProduct( x, b );
   x = 0.0;
   std::cout << "Vector b = " << b << std::endl;

   /***
    * Solve the linear system using the diagonal (Jacobi) preconditioner.
    */
   using LinearSolver = TNL::Solvers::Linear::TFQMR< MatrixType >;
   using Preconditioner = TNL::Solvers::Linear::Preconditioners::Diagonal< MatrixType >;
   auto preconditioner_ptr = std::make_shared< Preconditioner >();
   preconditioner_ptr->update( matrix_ptr );
   LinearSolver solver;
   solver.setMatrix( matrix_ptr );
   solver.setPreconditioner( preconditioner_ptr );
   solver.setConvergenceResidue( 1.0e-6 );
   solver.solve( b, x );
   std::cout << "Vector x = " << x << std::endl;
}

int
main( int argc, char* argv[] )
{
   std::cout << "Solving linear system on host: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Sequential >();

#ifdef __CUDACC__
   std::cout << "Solving linear system on CUDA device: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Cuda >();
#endif
}
```

In this example, we solve the same problem as in all other examples in this section. The only differences concerning the preconditioner occur in the solver setup. Similarly to the matrix of the linear system, the preconditioner is passed to the solver as a std::shared_ptr. When the preconditioner object is created, we have to initialize it using the update method, which must be called every time the matrix of the linear system changes. This is important, for example, when solving time-dependent PDEs, although it does not happen in this example. Finally, we connect the solver with the preconditioner using the setPreconditioner method.

The result looks as follows:

Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]

## Choosing the solver and preconditioner type at runtime

When developing a numerical solver, one often has to search for a combination of various methods and algorithms that fit given requirements the best. To make this easier, TNL provides the functions TNL::Solvers::getLinearSolver and TNL::Solvers::getPreconditioner for selecting the linear solver and preconditioner at runtime. The following example shows how to use these functions:

```cpp
#include <iostream>
#include <memory>
#include <TNL/Containers/Vector.h>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Sequential.h>
#include <TNL/Devices/Cuda.h>
#include <TNL/Solvers/LinearSolverTypeResolver.h>

template< typename Device >
void
iterativeLinearSolverExample()
{
   /***
    * Set the following matrix (dots represent zero matrix elements):
    *
    *   /  2.5 -1    .    .    .  \
    *   | -1    2.5 -1    .    .  |
    *   |  .   -1    2.5 -1    .  |
    *   |  .    .   -1    2.5 -1  |
    *   \  .    .    .   -1    2.5 /
    */
   using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;
   using Vector = TNL::Containers::Vector< double, Device >;
   const int size( 5 );
   auto matrix_ptr = std::make_shared< MatrixType >();
   matrix_ptr->setDimensions( size, size );
   matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );

   auto f = [ = ] __cuda_callable__( typename MatrixType::RowView & row ) mutable
   {
      const int rowIdx = row.getRowIndex();
      if( rowIdx == 0 ) {
         row.setElement( 0, rowIdx, 2.5 );     // diagonal element
         row.setElement( 1, rowIdx + 1, -1 );  // element above the diagonal
      }
      else if( rowIdx == size - 1 ) {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
      }
      else {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
         row.setElement( 2, rowIdx + 1, -1.0 );  // element above the diagonal
      }
   };

   /***
    * Set the matrix elements.
    */
   matrix_ptr->forAllRows( f );
   std::cout << *matrix_ptr << std::endl;

   /***
    * Set the right-hand side vector.
    */
   Vector x( size, 1.0 );
   Vector b( size );
   matrix_ptr->vectorProduct( x, b );
   x = 0.0;
   std::cout << "Vector b = " << b << std::endl;

   /***
    * Solve the linear system using the diagonal (Jacobi) preconditioner.
    */
   auto solver_ptr = TNL::Solvers::getLinearSolver< MatrixType >( "tfqmr" );
   auto preconditioner_ptr = TNL::Solvers::getPreconditioner< MatrixType >( "diagonal" );
   preconditioner_ptr->update( matrix_ptr );
   solver_ptr->setMatrix( matrix_ptr );
   solver_ptr->setPreconditioner( preconditioner_ptr );
   solver_ptr->setConvergenceResidue( 1.0e-6 );
   solver_ptr->solve( b, x );
   std::cout << "Vector x = " << x << std::endl;
}

int
main( int argc, char* argv[] )
{
   std::cout << "Solving linear system on host: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Sequential >();

#ifdef __CUDACC__
   std::cout << "Solving linear system on CUDA device: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Cuda >();
#endif
}
```

We still solve the same problem; the only changes are in the solver setup. We first use TNL::Solvers::getLinearSolver to get a shared pointer holding the solver, and then TNL::Solvers::getPreconditioner to get a shared pointer holding the preconditioner. The rest of the code is the same as in the previous examples, except that we work with the pointer solver_ptr instead of a direct instance of the solver type.

The result looks as follows:

Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]