Linear solvers

Introduction

Solvers of linear systems are among the most important algorithms in scientific computing. TNL offers the following iterative methods:

  1. Stationary methods
    1. Jacobi method (TNL::Solvers::Linear::Jacobi)
    2. Successive over-relaxation method, SOR (TNL::Solvers::Linear::SOR)
  2. Krylov subspace methods
    1. Conjugate gradient method, CG (TNL::Solvers::Linear::CG)
    2. Biconjugate gradient stabilized method, BICGStab (TNL::Solvers::Linear::BICGStab)
    3. Biconjugate gradient stabilized method, BICGStab(l) (TNL::Solvers::Linear::BICGStabL)
    4. Transpose-free quasi-minimal residual method, TFQMR (TNL::Solvers::Linear::TFQMR)
    5. Generalized minimal residual method, GMRES (TNL::Solvers::Linear::GMRES) with various methods of orthogonalization:
      1. Classical Gram-Schmidt, CGS
      2. Classical Gram-Schmidt with reorthogonalization, CGSR
      3. Modified Gram-Schmidt, MGS
      4. Modified Gram-Schmidt with reorthogonalization, MGSR
      5. Compact WY form of the Householder reflections, CWY

The Krylov subspace methods (but not the stationary solvers TNL::Solvers::Linear::Jacobi and TNL::Solvers::Linear::SOR) can be combined with the following preconditioners:

  1. Diagonal or Jacobi (TNL::Solvers::Linear::Preconditioners::Diagonal)
  2. ILU (incomplete LU factorization) - currently available only on the CPU
    1. ILU(0) (TNL::Solvers::Linear::Preconditioners::ILU0)
    2. ILUT (ILU with thresholding) (TNL::Solvers::Linear::Preconditioners::ILUT)

Iterative solvers of linear systems

Basic setup

All iterative solvers for linear systems can be found in the namespace TNL::Solvers::Linear. The following example shows how to use them:

#include <iostream>
#include <memory>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Sequential.h>
#include <TNL/Devices/Cuda.h>
#include <TNL/Solvers/Linear/TFQMR.h>

template< typename Device >
void
iterativeLinearSolverExample()
{
   /***
    * Set the following matrix (dots represent zero matrix elements):
    *
    *   /  2.5  -1    .    .    .  \
    *   |  -1   2.5  -1    .    .  |
    *   |   .   -1   2.5  -1    .  |
    *   |   .    .   -1   2.5  -1  |
    *   \   .    .    .   -1   2.5 /
    */
   using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;
   using Vector = TNL::Containers::Vector< double, Device >;
   const int size( 5 );
   auto matrix_ptr = std::make_shared< MatrixType >();
   matrix_ptr->setDimensions( size, size );
   matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );

   auto f = [ = ] __cuda_callable__( typename MatrixType::RowView & row ) mutable
   {
      const int rowIdx = row.getRowIndex();
      if( rowIdx == 0 ) {
         row.setElement( 0, rowIdx, 2.5 );     // diagonal element
         row.setElement( 1, rowIdx + 1, -1 );  // element above the diagonal
      }
      else if( rowIdx == size - 1 ) {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
      }
      else {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
         row.setElement( 2, rowIdx + 1, -1.0 );  // element above the diagonal
      }
   };

   /***
    * Set the matrix elements.
    */
   matrix_ptr->forAllRows( f );
   std::cout << *matrix_ptr << std::endl;

   /***
    * Set the right-hand side vector.
    */
   Vector x( size, 1.0 );
   Vector b( size );
   matrix_ptr->vectorProduct( x, b );
   x = 0.0;
   std::cout << "Vector b = " << b << std::endl;

   /***
    * Solve the linear system.
    */
   using LinearSolver = TNL::Solvers::Linear::TFQMR< MatrixType >;
   LinearSolver solver;
   solver.setMatrix( matrix_ptr );
   solver.setConvergenceResidue( 1.0e-6 );
   solver.solve( b, x );
   std::cout << "Vector x = " << x << std::endl;
}

int
main( int argc, char* argv[] )
{
   std::cout << "Solving linear system on host: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Sequential >();

#ifdef __CUDACC__
   std::cout << "Solving linear system on CUDA device: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Cuda >();
#endif
}

In this example we solve a linear system \( A \vec x = \vec b \) where

\[ A = \left( \begin{array}{ccccc} 2.5 & -1 & & & \\ -1 & 2.5 & -1 & & \\ & -1 & 2.5 & -1 & \\ & & -1 & 2.5 & -1 \\ & & & -1 & 2.5 \\ \end{array} \right) \]

The right-hand side vector \( \vec b \) is set to \( ( 1.5, 0.5, 0.5, 0.5, 1.5 )^T \) so that the exact solution is \( \vec x = ( 1, 1, 1, 1, 1 )^T \). The elements of the matrix \( A \) are set using the method TNL::Matrices::SparseMatrix::forAllRows. In this example we use a sparse matrix, but any other matrix type can be used as well (see the namespace TNL::Matrices). Next, we set the solution vector \( \vec x = ( 1, 1, 1, 1, 1 )^T \) and multiply it by the matrix \( A \) to obtain the right-hand side vector \( \vec b \). Finally, we reset the vector \( \vec x \) to zero.
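For this particular matrix, multiplying \( A \) by the vector of ones reproduces the right-hand side used above:

\[ A \left( \begin{array}{c} 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{array} \right) = \left( \begin{array}{c} 2.5 - 1 \\ -1 + 2.5 - 1 \\ -1 + 2.5 - 1 \\ -1 + 2.5 - 1 \\ -1 + 2.5 \end{array} \right) = \left( \begin{array}{c} 1.5 \\ 0.5 \\ 0.5 \\ 0.5 \\ 1.5 \end{array} \right) \]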

To solve the linear system in the example, we use the TFQMR solver. Other solvers can be used as well (see the namespace TNL::Solvers::Linear). The solver needs only one template parameter, which is the matrix type. Next, we create an instance of the solver and set the matrix of the linear system. Note that the matrix is passed to the solver as a std::shared_ptr. Then we set the stopping criterion for the iterative method in terms of the relative residue, i.e. \( \lVert \vec b - A \vec x \rVert / \lVert \vec b \rVert \). Finally, the solver is executed by calling the TNL::Solvers::Linear::LinearSolver::solve method, which accepts the right-hand side vector \( \vec b \) and the solution vector \( \vec x \).
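Any other solver from TNL::Solvers::Linear follows the same pattern. The sketch below swaps TFQMR for GMRES, assuming the GMRES header lives at <TNL/Solvers/Linear/GMRES.h> (mirroring the TFQMR header used above); everything else from the example stays unchanged:

#include <TNL/Solvers/Linear/GMRES.h>  // assumed header path, mirroring TFQMR.h

using GMRESSolver = TNL::Solvers::Linear::GMRES< MatrixType >;
GMRESSolver solver;
solver.setMatrix( matrix_ptr );          // same shared_ptr holding the matrix
solver.setConvergenceResidue( 1.0e-6 );  // same stopping criterion
solver.solve( b, x );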

The result looks as follows:

Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]

Setup with a solver monitor

Solving large linear systems may take a lot of time. In such situations, it is useful to monitor the convergence or the status of the solver in general. For this purpose, TNL provides a solver monitor, which can show various metrics in real time, such as the current number of iterations, the current residue of the approximate solution, etc. The solver monitor runs in a separate thread and refreshes the solver status at a configurable rate (once per 500 ms by default). Its use is demonstrated in the following example.

#include <iostream>
#include <memory>
#include <chrono>
#include <thread>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Sequential.h>
#include <TNL/Devices/Cuda.h>
#include <TNL/Solvers/Linear/Jacobi.h>

template< typename Device >
void
iterativeLinearSolverExample()
{
   /***
    * Set the following matrix (dots represent zero matrix elements):
    *
    *   /  2.5  -1    .    .    .  \
    *   |  -1   2.5  -1    .    .  |
    *   |   .   -1   2.5  -1    .  |
    *   |   .    .   -1   2.5  -1  |
    *   \   .    .    .   -1   2.5 /
    */
   using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;
   using Vector = TNL::Containers::Vector< double, Device >;
   const int size( 5 );
   auto matrix_ptr = std::make_shared< MatrixType >();
   matrix_ptr->setDimensions( size, size );
   matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );

   auto f = [ = ] __cuda_callable__( typename MatrixType::RowView & row ) mutable
   {
      const int rowIdx = row.getRowIndex();
      if( rowIdx == 0 ) {
         row.setElement( 0, rowIdx, 2.5 );     // diagonal element
         row.setElement( 1, rowIdx + 1, -1 );  // element above the diagonal
      }
      else if( rowIdx == size - 1 ) {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
      }
      else {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
         row.setElement( 2, rowIdx + 1, -1.0 );  // element above the diagonal
      }
   };

   /***
    * Set the matrix elements.
    */
   matrix_ptr->forAllRows( f );
   std::cout << *matrix_ptr << std::endl;

   /***
    * Set the right-hand side vector.
    */
   Vector x( size, 1.0 );
   Vector b( size );
   matrix_ptr->vectorProduct( x, b );
   x = 0.0;
   std::cout << "Vector b = " << b << std::endl;

   /***
    * Set up the solver of the linear system.
    */
   using LinearSolver = TNL::Solvers::Linear::Jacobi< MatrixType >;
   LinearSolver solver;
   solver.setMatrix( matrix_ptr );
   solver.setOmega( 0.0005 );

   /***
    * Set up the monitor of the iterative solver.
    */
   using IterativeSolverMonitorType = TNL::Solvers::IterativeSolverMonitor< double, int >;
   IterativeSolverMonitorType monitor;
   TNL::Solvers::SolverMonitorThread monitorThread( monitor );
   monitor.setRefreshRate( 10 );  // refresh rate in milliseconds
   monitor.setVerbose( 1 );
   monitor.setStage( "Jacobi stage:" );
   solver.setSolverMonitor( monitor );
   solver.setConvergenceResidue( 1.0e-6 );
   solver.solve( b, x );
   monitor.stopMainLoop();
   std::cout << "Vector x = " << x << std::endl;
}

int
main( int argc, char* argv[] )
{
   std::cout << "Solving linear system on host: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Sequential >();

#ifdef __CUDACC__
   std::cout << "Solving linear system on CUDA device: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Cuda >();
#endif
}

First, we set up the same linear system as in the previous example, create an instance of the Jacobi solver, and pass the matrix of the linear system to the solver. Then we set the relaxation parameter \( \omega \) of the Jacobi solver to 0.0005. The reason is to artificially slow down the convergence, because we want to see some iterations in this example. Next, we create an instance of the solver monitor and a special thread for the monitor (an instance of the TNL::Solvers::SolverMonitorThread class). The monitor is configured with the methods setRefreshRate (the refresh period in milliseconds), setVerbose (the verbosity level), and setStage (the name of the solver stage shown in the log).

Next, we call TNL::Solvers::IterativeSolver::setSolverMonitor to connect the solver with the monitor and set the convergence criterion based on the relative residue. Finally, we start the solver using the TNL::Solvers::Linear::Jacobi::solve method, and when the solver finishes, we stop the monitor using TNL::Solvers::SolverMonitor::stopMainLoop.

The result looks as follows:

Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Jacobi stage: ITER: 8143 RES: 0.20699
Jacobi stage: ITER: 19035 RES: 0.038819
Jacobi stage: ITER: 30069 RES: 0.0071261
Jacobi stage: ITER: 39125 RES: 0.0017731
Jacobi stage: ITER: 49207 RES: 0.00037698
Jacobi stage: ITER: 57361 RES: 0.00010771
Jacobi stage: ITER: 66975 RES: 2.4591e-05
Jacobi stage: ITER: 78097 RES: 4.4564e-06
Vector x = [ 0.999999, 0.999999, 0.999998, 0.999999, 0.999999 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Jacobi stage: ITER: 403 RES: 0.87674
Jacobi stage: ITER: 1631 RES: 0.63193
Jacobi stage: ITER: 2925 RES: 0.48403
Jacobi stage: ITER: 3641 RES: 0.4255
Jacobi stage: ITER: 5023 RES: 0.33793
Jacobi stage: ITER: 6433 RES: 0.27007
Jacobi stage: ITER: 7823 RES: 0.21763
Jacobi stage: ITER: 9135 RES: 0.17773
Jacobi stage: ITER: 9979 RES: 0.15607
Jacobi stage: ITER: 10703 RES: 0.13962
Jacobi stage: ITER: 11479 RES: 0.12392
Jacobi stage: ITER: 12689 RES: 0.10287
Jacobi stage: ITER: 13301 RES: 0.093634
Jacobi stage: ITER: 14097 RES: 0.082857
Jacobi stage: ITER: 14751 RES: 0.07496
Jacobi stage: ITER: 15319 RES: 0.068697
Jacobi stage: ITER: 15653 RES: 0.065242
Jacobi stage: ITER: 16643 RES: 0.056055
Jacobi stage: ITER: 16909 RES: 0.053794
Jacobi stage: ITER: 17487 RES: 0.049239
Jacobi stage: ITER: 17897 RES: 0.04622
Jacobi stage: ITER: 18161 RES: 0.044383
Jacobi stage: ITER: 19731 RES: 0.034883
Jacobi stage: ITER: 20345 RES: 0.031734
Jacobi stage: ITER: 20557 RES: 0.030717
Jacobi stage: ITER: 20999 RES: 0.02871
Jacobi stage: ITER: 22383 RES: 0.023212
Jacobi stage: ITER: 23277 RES: 0.020227
Jacobi stage: ITER: 23855 RES: 0.018515
Jacobi stage: ITER: 24237 RES: 0.017454
Jacobi stage: ITER: 24665 RES: 0.016344
Jacobi stage: ITER: 25299 RES: 0.014832
Jacobi stage: ITER: 26001 RES: 0.013311
Jacobi stage: ITER: 26359 RES: 0.012603
Jacobi stage: ITER: 26615 RES: 0.012117
Jacobi stage: ITER: 27755 RES: 0.010171
Jacobi stage: ITER: 28139 RES: 0.0095881
Jacobi stage: ITER: 28621 RES: 0.0089012
Jacobi stage: ITER: 29043 RES: 0.0083451
Jacobi stage: ITER: 29495 RES: 0.0077853
Jacobi stage: ITER: 29901 RES: 0.0073124
Jacobi stage: ITER: 30291 RES: 0.0068893
Jacobi stage: ITER: 30623 RES: 0.0065468
Jacobi stage: ITER: 30973 RES: 0.0062022
Jacobi stage: ITER: 31401 RES: 0.0058076
Jacobi stage: ITER: 31835 RES: 0.0054348
Jacobi stage: ITER: 31987 RES: 0.0053093
Jacobi stage: ITER: 32093 RES: 0.005222
Jacobi stage: ITER: 32239 RES: 0.0051078
Jacobi stage: ITER: 32435 RES: 0.0049563
Jacobi stage: ITER: 32553 RES: 0.0048658
Jacobi stage: ITER: 32667 RES: 0.0047828
Jacobi stage: ITER: 32811 RES: 0.0046781
Jacobi stage: ITER: 32971 RES: 0.0045646
Jacobi stage: ITER: 33127 RES: 0.0044565
Jacobi stage: ITER: 33299 RES: 0.0043403
Jacobi stage: ITER: 33445 RES: 0.0042427
Jacobi stage: ITER: 33607 RES: 0.0041397
Jacobi stage: ITER: 33787 RES: 0.0040268
Jacobi stage: ITER: 33999 RES: 0.0038978
Jacobi stage: ITER: 34169 RES: 0.0037962
Jacobi stage: ITER: 34451 RES: 0.0036364
Jacobi stage: ITER: 34671 RES: 0.0035156
Jacobi stage: ITER: 34871 RES: 0.0034092
Jacobi stage: ITER: 35043 RES: 0.0033203
Jacobi stage: ITER: 35211 RES: 0.0032357
Jacobi stage: ITER: 35337 RES: 0.0031727
Jacobi stage: ITER: 35555 RES: 0.0030692
Jacobi stage: ITER: 35749 RES: 0.0029782
Jacobi stage: ITER: 36207 RES: 0.0027767
Jacobi stage: ITER: 36559 RES: 0.0026306
Jacobi stage: ITER: 37007 RES: 0.0024556
Jacobi stage: ITER: 37271 RES: 0.0023581
Jacobi stage: ITER: 37855 RES: 0.0021557
Jacobi stage: ITER: 38155 RES: 0.0020587
Jacobi stage: ITER: 38335 RES: 0.0020025
Jacobi stage: ITER: 38519 RES: 0.0019467
Jacobi stage: ITER: 38739 RES: 0.001882
Jacobi stage: ITER: 38927 RES: 0.0018285
Jacobi stage: ITER: 39321 RES: 0.0017206
Jacobi stage: ITER: 39627 RES: 0.0016421
Jacobi stage: ITER: 39839 RES: 0.0015894
Jacobi stage: ITER: 40031 RES: 0.0015433
Jacobi stage: ITER: 40219 RES: 0.0014993
Jacobi stage: ITER: 40411 RES: 0.0014558
Jacobi stage: ITER: 40597 RES: 0.0014143
Jacobi stage: ITER: 40869 RES: 0.0013564
Jacobi stage: ITER: 41299 RES: 0.0012701
Jacobi stage: ITER: 41499 RES: 0.0012317
Jacobi stage: ITER: 42191 RES: 0.0011075
Jacobi stage: ITER: 42383 RES: 0.0010753
Jacobi stage: ITER: 42577 RES: 0.0010434
Jacobi stage: ITER: 42819 RES: 0.0010057
Jacobi stage: ITER: 43205 RES: 0.00094749
Jacobi stage: ITER: 43503 RES: 0.00090537
Jacobi stage: ITER: 43739 RES: 0.00087314
Jacobi stage: ITER: 44237 RES: 0.00080859
Jacobi stage: ITER: 44895 RES: 0.00073109
Jacobi stage: ITER: 45337 RES: 0.00068289
Jacobi stage: ITER: 45655 RES: 0.00065053
Jacobi stage: ITER: 45873 RES: 0.00062892
Jacobi stage: ITER: 46091 RES: 0.0006084
Jacobi stage: ITER: 46299 RES: 0.00058926
Jacobi stage: ITER: 46467 RES: 0.00057425
Jacobi stage: ITER: 46695 RES: 0.00055449
Jacobi stage: ITER: 46941 RES: 0.00053377
Jacobi stage: ITER: 47127 RES: 0.00051889
Jacobi stage: ITER: 47315 RES: 0.00050412
Jacobi stage: ITER: 47473 RES: 0.00049188
Jacobi stage: ITER: 47815 RES: 0.00046685
Jacobi stage: ITER: 48279 RES: 0.00043474
Jacobi stage: ITER: 48489 RES: 0.00042081
Jacobi stage: ITER: 48711 RES: 0.00040683
Jacobi stage: ITER: 48989 RES: 0.0003897
Jacobi stage: ITER: 49271 RES: 0.0003733
Jacobi stage: ITER: 50231 RES: 0.00032212
Jacobi stage: ITER: 50429 RES: 0.00031237
Jacobi stage: ITER: 50775 RES: 0.00029629
Jacobi stage: ITER: 51951 RES: 0.00024733
Jacobi stage: ITER: 52303 RES: 0.00023431
Jacobi stage: ITER: 52687 RES: 0.00022089
Jacobi stage: ITER: 53017 RES: 0.00020991
Jacobi stage: ITER: 53367 RES: 0.00019898
Jacobi stage: ITER: 53751 RES: 0.00018759
Jacobi stage: ITER: 54087 RES: 0.00017815
Jacobi stage: ITER: 54473 RES: 0.00016784
Jacobi stage: ITER: 54869 RES: 0.00015794
Jacobi stage: ITER: 55181 RES: 0.00015055
Jacobi stage: ITER: 55801 RES: 0.00013687
Jacobi stage: ITER: 56155 RES: 0.00012967
Jacobi stage: ITER: 56571 RES: 0.00012164
Jacobi stage: ITER: 56927 RES: 0.00011517
Jacobi stage: ITER: 57255 RES: 0.00010951
Jacobi stage: ITER: 57775 RES: 0.0001011
Jacobi stage: ITER: 58111 RES: 9.6018e-05
Jacobi stage: ITER: 58477 RES: 9.0741e-05
Jacobi stage: ITER: 58843 RES: 8.5807e-05
Jacobi stage: ITER: 59163 RES: 8.1691e-05
Jacobi stage: ITER: 59643 RES: 7.5885e-05
Jacobi stage: ITER: 60523 RES: 6.629e-05
Jacobi stage: ITER: 60901 RES: 6.2532e-05
Jacobi stage: ITER: 61225 RES: 5.9496e-05
Jacobi stage: ITER: 61451 RES: 5.7484e-05
Jacobi stage: ITER: 61859 RES: 5.3992e-05
Jacobi stage: ITER: 62495 RES: 4.8967e-05
Jacobi stage: ITER: 63509 RES: 4.1892e-05
Jacobi stage: ITER: 64455 RES: 3.6237e-05
Jacobi stage: ITER: 65219 RES: 3.2225e-05
Jacobi stage: ITER: 65415 RES: 3.1269e-05
Jacobi stage: ITER: 66135 RES: 2.7995e-05
Jacobi stage: ITER: 66363 RES: 2.7032e-05
Jacobi stage: ITER: 66599 RES: 2.6069e-05
Jacobi stage: ITER: 66809 RES: 2.5234e-05
Jacobi stage: ITER: 66983 RES: 2.4576e-05
Jacobi stage: ITER: 68151 RES: 2.054e-05
Jacobi stage: ITER: 68837 RES: 1.848e-05
Jacobi stage: ITER: 69283 RES: 1.7262e-05
Jacobi stage: ITER: 69775 RES: 1.6005e-05
Jacobi stage: ITER: 70279 RES: 1.4813e-05
Jacobi stage: ITER: 70919 RES: 1.3426e-05
Jacobi stage: ITER: 71655 RES: 1.1991e-05
Jacobi stage: ITER: 72533 RES: 1.0475e-05
Jacobi stage: ITER: 73395 RES: 9.1787e-06
Jacobi stage: ITER: 74523 RES: 7.7185e-06
Jacobi stage: ITER: 75293 RES: 6.8554e-06
Jacobi stage: ITER: 75801 RES: 6.3408e-06
Jacobi stage: ITER: 76541 RES: 5.6596e-06
Jacobi stage: ITER: 77207 RES: 5.1108e-06
Jacobi stage: ITER: 77701 RES: 4.7359e-06
Jacobi stage: ITER: 78195 RES: 4.3912e-06
Jacobi stage: ITER: 78687 RES: 4.0716e-06
Jacobi stage: ITER: 79181 RES: 3.7729e-06
Jacobi stage: ITER: 79675 RES: 3.4983e-06
Jacobi stage: ITER: 80167 RES: 3.2436e-06
Jacobi stage: ITER: 80661 RES: 3.0057e-06
Jacobi stage: ITER: 82039 RES: 2.4331e-06
Jacobi stage: ITER: 83895 RES: 1.8295e-06
Jacobi stage: ITER: 84707 RES: 1.615e-06
Jacobi stage: ITER: 85023 RES: 1.5385e-06
Jacobi stage: ITER: 87791 RES: 1.0057e-06
Vector x = [ 0.999999, 0.999999, 0.999998, 0.999999, 0.999999 ]

The solver monitor output can be extended with the time elapsed since the beginning of the computation, as demonstrated in the following example:

#include <iostream>
#include <memory>
#include <chrono>
#include <thread>
#include <TNL/Timer.h>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Sequential.h>
#include <TNL/Devices/Cuda.h>
#include <TNL/Solvers/Linear/Jacobi.h>

template< typename Device >
void
iterativeLinearSolverExample()
{
   /***
    * Set the following matrix (dots represent zero matrix elements):
    *
    *   /  2.5  -1    .    .    .  \
    *   |  -1   2.5  -1    .    .  |
    *   |   .   -1   2.5  -1    .  |
    *   |   .    .   -1   2.5  -1  |
    *   \   .    .    .   -1   2.5 /
    */
   using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;
   using Vector = TNL::Containers::Vector< double, Device >;
   const int size( 5 );
   auto matrix_ptr = std::make_shared< MatrixType >();
   matrix_ptr->setDimensions( size, size );
   matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );

   auto f = [ = ] __cuda_callable__( typename MatrixType::RowView & row ) mutable
   {
      const int rowIdx = row.getRowIndex();
      if( rowIdx == 0 ) {
         row.setElement( 0, rowIdx, 2.5 );     // diagonal element
         row.setElement( 1, rowIdx + 1, -1 );  // element above the diagonal
      }
      else if( rowIdx == size - 1 ) {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
      }
      else {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
         row.setElement( 2, rowIdx + 1, -1.0 );  // element above the diagonal
      }
   };

   /***
    * Set the matrix elements.
    */
   matrix_ptr->forAllRows( f );
   std::cout << *matrix_ptr << std::endl;

   /***
    * Set the right-hand side vector.
    */
   Vector x( size, 1.0 );
   Vector b( size );
   matrix_ptr->vectorProduct( x, b );
   x = 0.0;
   std::cout << "Vector b = " << b << std::endl;

   /***
    * Set up the solver of the linear system.
    */
   using LinearSolver = TNL::Solvers::Linear::Jacobi< MatrixType >;
   LinearSolver solver;
   solver.setMatrix( matrix_ptr );
   solver.setOmega( 0.0005 );

   /***
    * Set up the monitor of the iterative solver.
    */
   using IterativeSolverMonitorType = TNL::Solvers::IterativeSolverMonitor< double, int >;
   IterativeSolverMonitorType monitor;
   TNL::Solvers::SolverMonitorThread monitorThread( monitor );
   monitor.setRefreshRate( 10 );  // refresh rate in milliseconds
   monitor.setVerbose( 1 );
   monitor.setStage( "Jacobi stage:" );
   TNL::Timer timer;
   monitor.setTimer( timer );
   timer.start();
   solver.setSolverMonitor( monitor );
   solver.setConvergenceResidue( 1.0e-6 );
   solver.solve( b, x );
   monitor.stopMainLoop();
   std::cout << "Vector x = " << x << std::endl;
}

int
main( int argc, char* argv[] )
{
   std::cout << "Solving linear system on host: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Sequential >();

#ifdef __CUDACC__
   std::cout << "Solving linear system on CUDA device: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Cuda >();
#endif
}

The only changes compared to the previous example are the lines where we create an instance of TNL::Timer, connect it to the monitor using TNL::Solvers::SolverMonitor::setTimer, and start the timer with TNL::Timer::start.
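For reference, the timer-related lines from the example above are just these three:

TNL::Timer timer;           // measures the real time elapsed since timer.start()
monitor.setTimer( timer );  // the monitor now reports the elapsed time (the ELA column below)
timer.start();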

The result looks as follows:

Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
ELA:1.5431e-
ELA: 0.10013 Jacobi stage: ITER: 6733 RES: 0.2577
ELA: 0.20362 Jacobi stage: ITER: 16407 RES: 0.058124
ELA: 0.30375 Jacobi stage: ITER: 27095 RES: 0.011256
ELA: 0.41028 Jacobi stage: ITER: 38555 RES: 0.001936
ELA: 0.51041 Jacobi stage: ITER: 49619 RES: 0.00035387
ELA: 0.61054 Jacobi stage: ITER: 60463 RES: 6.6904e-05
ELA: 0.71065 Jacobi stage: ITER: 68817 RES: 1.8537e-05
ELA: 0.81076 Jacobi stage: ITER: 79273 RES: 3.7199e-06
Vector x = [ 0.999999, 0.999999, 0.999998, 0.999999, 0.999999 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
ELA:0.002401
ELA: 0.10251 Jacobi stage: ITER: 1151 RES: 0.70999
ELA: 0.20262 Jacobi stage: ITER: 2399 RES: 0.53605
ELA: 0.30905 Jacobi stage: ITER: 4083 RES: 0.39459
ELA: 0.41239 Jacobi stage: ITER: 5613 RES: 0.30736
ELA: 0.51572 Jacobi stage: ITER: 6687 RES: 0.25964
ELA: 0.61905 Jacobi stage: ITER: 8039 RES: 0.21048
ELA: 0.71917 Jacobi stage: ITER: 8821 RES: 0.18648
ELA: 0.81929 Jacobi stage: ITER: 9069 RES: 0.17949
ELA: 0.91941 Jacobi stage: ITER: 10715 RES: 0.13937
ELA: 1.0195 Jacobi stage: ITER: 12219 RES: 0.1106
ELA: 1.1197 Jacobi stage: ITER: 14063 RES: 0.083316
ELA: 1.2291 Jacobi stage: ITER: 15667 RES: 0.065121
ELA: 1.3324 Jacobi stage: ITER: 16687 RES: 0.055677
ELA: 1.4391 Jacobi stage: ITER: 17663 RES: 0.047926
ELA: 1.5424 Jacobi stage: ITER: 18107 RES: 0.044739
ELA: 1.6457 Jacobi stage: ITER: 18373 RES: 0.042961
ELA: 1.7459 Jacobi stage: ITER: 19723 RES: 0.034926
ELA: 1.846 Jacobi stage: ITER: 20663 RES: 0.030231
ELA: 1.9557 Jacobi stage: ITER: 21375 RES: 0.027099
ELA: 2.0591 Jacobi stage: ITER: 21867 RES: 0.025126
ELA: 2.1624 Jacobi stage: ITER: 22309 RES: 0.02347
ELA: 2.2691 Jacobi stage: ITER: 22565 RES: 0.022565
ELA: 2.3724 Jacobi stage: ITER: 23235 RES: 0.020364
ELA: 2.4757 Jacobi stage: ITER: 24179 RES: 0.017616
ELA: 2.5791 Jacobi stage: ITER: 25481 RES: 0.014418
ELA: 2.6792 Jacobi stage: ITER: 26809 RES: 0.011758
ELA: 2.7793 Jacobi stage: ITER: 27941 RES: 0.0098812
ELA: 2.8794 Jacobi stage: ITER: 28423 RES: 0.0091789
ELA: 2.9824 Jacobi stage: ITER: 29075 RES: 0.0083041
ELA: 3.0825 Jacobi stage: ITER: 29937 RES: 0.0072721
ELA: 3.1826 Jacobi stage: ITER: 30395 RES: 0.0067802
ELA: 3.2827 Jacobi stage: ITER: 30815 RES: 0.0063527
ELA: 3.3828 Jacobi stage: ITER: 31227 RES: 0.0059668
ELA: 3.4829 Jacobi stage: ITER: 31745 RES: 0.0055087
ELA: 3.583 Jacobi stage: ITER: 32387 RES: 0.0049929
ELA: 3.6891 Jacobi stage: ITER: 33101 RES: 0.0044729
ELA: 3.7892 Jacobi stage: ITER: 33647 RES: 0.0041144
ELA: 3.8893 Jacobi stage: ITER: 34655 RES: 0.0035242
ELA: 3.9894 Jacobi stage: ITER: 36261 RES: 0.0027529
ELA: 4.0895 Jacobi stage: ITER: 38029 RES: 0.0020982
ELA: 4.1896 Jacobi stage: ITER: 39235 RES: 0.001744
ELA: 4.2897 Jacobi stage: ITER: 40697 RES: 0.0013928
ELA: 4.3898 Jacobi stage: ITER: 42263 RES: 0.0010953
ELA: 4.4899 Jacobi stage: ITER: 43623 RES: 0.00088884
ELA: 4.59 Jacobi stage: ITER: 44947 RES: 0.00072527
ELA: 4.6901 Jacobi stage: ITER: 46519 RES: 0.00056968
ELA: 4.7902 Jacobi stage: ITER: 47079 RES: 0.00052273
ELA: 4.8904 Jacobi stage: ITER: 47287 RES: 0.00050629
ELA: 4.9905 Jacobi stage: ITER: 47515 RES: 0.00048887
ELA: 5.0906 Jacobi stage: ITER: 47727 RES: 0.00047321
ELA: 5.1907 Jacobi stage: ITER: 47947 RES: 0.00045748
ELA: 5.2908 Jacobi stage: ITER: 48747 RES: 0.00040458
ELA: 5.3909 Jacobi stage: ITER: 49115 RES: 0.00038235
ELA: 5.491 Jacobi stage: ITER: 49809 RES: 0.00034358
ELA: 5.5912 Jacobi stage: ITER: 50421 RES: 0.00031276
ELA: 5.6913 Jacobi stage: ITER: 50727 RES: 0.00029849
ELA: 5.7914 Jacobi stage: ITER: 50919 RES: 0.00028981
ELA: 5.8915 Jacobi stage: ITER: 51277 RES: 0.00027422
ELA: 5.9916 Jacobi stage: ITER: 51511 RES: 0.00026462
ELA: 6.0917 Jacobi stage: ITER: 51889 RES: 0.00024962
ELA: 6.1918 Jacobi stage: ITER: 52123 RES: 0.00024088
ELA: 6.2919 Jacobi stage: ITER: 52973 RES: 0.00021133
ELA: 6.392 Jacobi stage: ITER: 53823 RES: 0.00018552
ELA: 6.4921 Jacobi stage: ITER: 54397 RES: 0.00016981
ELA: 6.5923 Jacobi stage: ITER: 55155 RES: 0.0001512
ELA: 6.6957 Jacobi stage: ITER: 55355 RES: 0.00014662
ELA: 6.799 Jacobi stage: ITER: 55557 RES: 0.0001421
ELA: 6.9024 Jacobi stage: ITER: 55759 RES: 0.0001378
ELA: 7.0025 Jacobi stage: ITER: 56081 RES: 0.00013111
ELA: 7.1091 Jacobi stage: ITER: 56243 RES: 0.00012793
ELA: 7.2092 Jacobi stage: ITER: 56391 RES: 0.00012505
ELA: 7.3093 Jacobi stage: ITER: 56551 RES: 0.00012202
ELA: 7.4094 Jacobi stage: ITER: 56717 RES: 0.00011891
ELA: 7.5095 Jacobi stage: ITER: 56919 RES: 0.00011531
ELA: 7.6096 Jacobi stage: ITER: 57139 RES: 0.00011148
ELA: 7.7098 Jacobi stage: ITER: 57431 RES: 0.00010659
ELA: 7.8124 Jacobi stage: ITER: 57657 RES: 0.00010292
ELA: 7.9191 Jacobi stage: ITER: 57855 RES: 9.9869e-05
ELA: 8.0224 Jacobi stage: ITER: 58155 RES: 9.5371e-05
ELA: 8.1257 Jacobi stage: ITER: 58793 RES: 8.6442e-05
ELA: 8.2291 Jacobi stage: ITER: 59551 RES: 7.6965e-05
ELA: 8.3292 Jacobi stage: ITER: 60353 RES: 6.8023e-05
ELA: 8.4324 Jacobi stage: ITER: 60577 RES: 6.5723e-05
ELA: 8.5391 Jacobi stage: ITER: 60775 RES: 6.3774e-05
ELA: 8.6418 Jacobi stage: ITER: 61571 RES: 5.6434e-05
ELA: 8.7419 Jacobi stage: ITER: 61977 RES: 5.3006e-05
ELA: 8.842 Jacobi stage: ITER: 62401 RES: 4.9664e-05
ELA: 8.9421 Jacobi stage: ITER: 63591 RES: 4.138e-05
ELA: 9.0423 Jacobi stage: ITER: 63883 RES: 3.9565e-05
ELA: 9.1452 Jacobi stage: ITER: 64503 RES: 3.5971e-05
ELA: 9.2453 Jacobi stage: ITER: 65133 RES: 3.2643e-05
ELA: 9.3454 Jacobi stage: ITER: 65479 RES: 3.0963e-05
ELA: 9.4456 Jacobi stage: ITER: 65661 RES: 3.01e-05
ELA: 9.5457 Jacobi stage: ITER: 65897 RES: 2.9029e-05
ELA: 9.6458 Jacobi stage: ITER: 66107 RES: 2.8116e-05
ELA: 9.7459 Jacobi stage: ITER: 66321 RES: 2.7198e-05
ELA: 9.8461 Jacobi stage: ITER: 66541 RES: 2.6295e-05
ELA: 9.9462 Jacobi stage: ITER: 66791 RES: 2.5312e-05
ELA: 10.056 Jacobi stage: ITER: 67027 RES: 2.4411e-05
ELA: 10.156 Jacobi stage: ITER: 67229 RES: 2.3658e-05
ELA: 10.256 Jacobi stage: ITER: 67413 RES: 2.2998e-05
ELA: 10.356 Jacobi stage: ITER: 68365 RES: 1.987e-05
ELA: 10.456 Jacobi stage: ITER: 68999 RES: 1.8031e-05
ELA: 10.556 Jacobi stage: ITER: 69351 RES: 1.7082e-05
ELA: 10.656 Jacobi stage: ITER: 69559 RES: 1.6545e-05
ELA: 10.762 Jacobi stage: ITER: 69893 RES: 1.5713e-05
ELA: 10.863 Jacobi stage: ITER: 70153 RES: 1.5098e-05
ELA: 10.966 Jacobi stage: ITER: 70607 RES: 1.4085e-05
ELA: 11.066 Jacobi stage: ITER: 71335 RES: 1.2595e-05
ELA: 11.169 Jacobi stage: ITER: 71627 RES: 1.2043e-05
ELA: 11.272 Jacobi stage: ITER: 71833 RES: 1.1664e-05
ELA: 11.372 Jacobi stage: ITER: 72083 RES: 1.1228e-05
ELA: 11.473 Jacobi stage: ITER: 72441 RES: 1.0624e-05
ELA: 11.573 Jacobi stage: ITER: 72771 RES: 1.0102e-05
ELA: 11.673 Jacobi stage: ITER: 73123 RES: 9.5703e-06
ELA: 11.773 Jacobi stage: ITER: 73471 RES: 9.0722e-06
ELA: 11.873 Jacobi stage: ITER: 73825 RES: 8.5894e-06
ELA: 11.973 Jacobi stage: ITER: 74195 RES: 8.1174e-06
ELA: 12.073 Jacobi stage: ITER: 74597 RES: 7.6289e-06
ELA: 12.173 Jacobi stage: ITER: 75107 RES: 7.0563e-06
ELA: 12.274 Jacobi stage: ITER: 75911 RES: 6.2365e-06
ELA: 12.374 Jacobi stage: ITER: 76279 RES: 5.8938e-06
ELA: 12.474 Jacobi stage: ITER: 76607 RES: 5.6042e-06
ELA: 12.574 Jacobi stage: ITER: 76965 RES: 5.3027e-06
ELA: 12.674 Jacobi stage: ITER: 77275 RES: 5.0577e-06
ELA: 12.774 Jacobi stage: ITER: 77601 RES: 4.8092e-06
ELA: 12.874 Jacobi stage: ITER: 77935 RES: 4.5701e-06
ELA: 12.979 Jacobi stage: ITER: 78271 RES: 4.3402e-06
ELA: 13.082 Jacobi stage: ITER: 78659 RES: 4.0891e-06
ELA: 13.186 Jacobi stage: ITER: 78981 RES: 3.8906e-06
ELA: 13.286 Jacobi stage: ITER: 79321 RES: 3.6926e-06
ELA: 13.386 Jacobi stage: ITER: 79765 RES: 3.4492e-06
ELA: 13.486 Jacobi stage: ITER: 80149 RES: 3.2516e-06
ELA: 13.588 Jacobi stage: ITER: 80499 RES: 3.0824e-06
ELA: 13.688 Jacobi stage: ITER: 80909 RES: 2.8934e-06
ELA: 13.79 Jacobi stage: ITER: 81653 RES: 2.5809e-06
ELA: 13.892 Jacobi stage: ITER: 81967 RES: 2.4601e-06
ELA: 13.999 Jacobi stage: ITER: 83203 RES: 2.0347e-06
ELA: 14.099 Jacobi stage: ITER: 83853 RES: 1.8408e-06
ELA: 14.199 Jacobi stage: ITER: 84071 RES: 1.7807e-06
ELA: 14.299 Jacobi stage: ITER: 84993 RES: 1.5451e-06
ELA: 14.4 Jacobi stage: ITER: 85659 RES: 1.3953e-06
ELA: 14.5 Jacobi stage: ITER: 86191 RES: 1.2858e-06
ELA: 14.6 Jacobi stage: ITER: 86431 RES: 1.2393e-06
ELA: 14.7 Jacobi stage: ITER: 87251 RES: 1.0926e-06
ELA: 14.8 Jacobi stage: ITER: 87657 RES: 1.0263e-06
Vector x = [ 0.999999, 0.999999, 0.999998, 0.999999, 0.999999 ]

Setup with a preconditioner

Preconditioning can significantly improve the performance of an iterative solver. For linear systems, preconditioners are used mainly with the Krylov subspace methods; they cannot be used with the stationary methods (TNL::Solvers::Linear::Jacobi and TNL::Solvers::Linear::SOR). The following example shows how to set up an iterative solver of linear systems with preconditioning.

#include <iostream>
#include <memory>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Sequential.h>
#include <TNL/Devices/Cuda.h>
#include <TNL/Solvers/Linear/TFQMR.h>
#include <TNL/Solvers/Linear/Preconditioners/Diagonal.h>

template< typename Device >
void
iterativeLinearSolverExample()
{
   /***
    * Set the following matrix (dots represent zero matrix elements):
    *
    *   /  2.5  -1    .    .    .  \
    *   |  -1   2.5  -1    .    .  |
    *   |   .   -1   2.5  -1    .  |
    *   |   .    .   -1   2.5  -1  |
    *   \   .    .    .   -1   2.5 /
    */
   using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;
   using Vector = TNL::Containers::Vector< double, Device >;
   const int size( 5 );
   auto matrix_ptr = std::make_shared< MatrixType >();
   matrix_ptr->setDimensions( size, size );
   matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );

   auto f = [ = ] __cuda_callable__( typename MatrixType::RowView & row ) mutable
   {
      const int rowIdx = row.getRowIndex();
      if( rowIdx == 0 ) {
         row.setElement( 0, rowIdx, 2.5 );     // diagonal element
         row.setElement( 1, rowIdx + 1, -1 );  // element above the diagonal
      }
      else if( rowIdx == size - 1 ) {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
      }
      else {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
         row.setElement( 2, rowIdx + 1, -1.0 );  // element above the diagonal
      }
   };

   /***
    * Set the matrix elements.
    */
   matrix_ptr->forAllRows( f );
   std::cout << *matrix_ptr << std::endl;

   /***
    * Set the right-hand side vector.
    */
   Vector x( size, 1.0 );
   Vector b( size );
   matrix_ptr->vectorProduct( x, b );
   x = 0.0;
   std::cout << "Vector b = " << b << std::endl;

   /***
    * Solve the linear system using the diagonal (Jacobi) preconditioner.
    */
   using LinearSolver = TNL::Solvers::Linear::TFQMR< MatrixType >;
   using Preconditioner = TNL::Solvers::Linear::Preconditioners::Diagonal< MatrixType >;
   auto preconditioner_ptr = std::make_shared< Preconditioner >();
   preconditioner_ptr->update( matrix_ptr );
   LinearSolver solver;
   solver.setMatrix( matrix_ptr );
   solver.setPreconditioner( preconditioner_ptr );
   solver.setConvergenceResidue( 1.0e-6 );
   solver.solve( b, x );
   std::cout << "Vector x = " << x << std::endl;
}

int
main( int argc, char* argv[] )
{
   std::cout << "Solving linear system on host: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Sequential >();

#ifdef __CUDACC__
   std::cout << "Solving linear system on CUDA device: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Cuda >();
#endif
}

In this example, we solve the same problem as in all the other examples in this section; the only changes related to the preconditioner are in the solver setup. Similarly to the matrix of the linear system, the preconditioner is passed to the solver as a std::shared_ptr. After the preconditioner object is created, we have to initialize it using the update method, which has to be called every time the matrix of the linear system changes. This is important, for example, when solving time-dependent PDEs, but it does not happen in this example. Finally, we connect the solver with the preconditioner using the setPreconditioner method.
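A minimal sketch of that pattern is shown below; updateMatrixForNextTimeStep is a hypothetical user-supplied function standing in for whatever modifies the matrix, while the remaining names come from the example above:

// Whenever the matrix of the linear system changes (e.g. in each time step of a
// time-dependent PDE), the preconditioner must be rebuilt before the next solve.
updateMatrixForNextTimeStep( *matrix_ptr );  // hypothetical function that modifies the matrix
preconditioner_ptr->update( matrix_ptr );    // rebuild the preconditioner for the new matrix
solver.solve( b, x );                        // solve with the refreshed preconditioner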

The result looks as follows:

Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]

Choosing the solver and preconditioner type at runtime

When developing a numerical solver, one often has to search for the combination of methods and algorithms that best fits the given requirements. To make this easier, TNL provides the functions TNL::Solvers::getLinearSolver and TNL::Solvers::getPreconditioner for selecting the linear solver and the preconditioner at runtime. The following example shows how to use these functions:

#include <iostream>
#include <memory>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Sequential.h>
#include <TNL/Devices/Cuda.h>
#include <TNL/Solvers/LinearSolverTypeResolver.h>

template< typename Device >
void
iterativeLinearSolverExample()
{
   /***
    * Set the following matrix (dots represent zero matrix elements):
    *
    *   /  2.5  -1    .    .    .  \
    *   |  -1   2.5  -1    .    .  |
    *   |   .   -1   2.5  -1    .  |
    *   |   .    .   -1   2.5  -1  |
    *   \   .    .    .   -1   2.5 /
    */
   using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;
   using Vector = TNL::Containers::Vector< double, Device >;
   const int size( 5 );
   auto matrix_ptr = std::make_shared< MatrixType >();
   matrix_ptr->setDimensions( size, size );
   matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );

   auto f = [ = ] __cuda_callable__( typename MatrixType::RowView & row ) mutable
   {
      const int rowIdx = row.getRowIndex();
      if( rowIdx == 0 ) {
         row.setElement( 0, rowIdx, 2.5 );     // diagonal element
         row.setElement( 1, rowIdx + 1, -1 );  // element above the diagonal
      }
      else if( rowIdx == size - 1 ) {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
      }
      else {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
         row.setElement( 2, rowIdx + 1, -1.0 );  // element above the diagonal
      }
   };

   /***
    * Set the matrix elements.
    */
   matrix_ptr->forAllRows( f );
   std::cout << *matrix_ptr << std::endl;

   /***
    * Set the right-hand side vector.
    */
   Vector x( size, 1.0 );
   Vector b( size );
   matrix_ptr->vectorProduct( x, b );
   x = 0.0;
   std::cout << "Vector b = " << b << std::endl;

   /***
    * Solve the linear system using the diagonal (Jacobi) preconditioner.
    */
   auto solver_ptr = TNL::Solvers::getLinearSolver< MatrixType >( "tfqmr" );
   auto preconditioner_ptr = TNL::Solvers::getPreconditioner< MatrixType >( "diagonal" );
   preconditioner_ptr->update( matrix_ptr );
   solver_ptr->setMatrix( matrix_ptr );
   solver_ptr->setPreconditioner( preconditioner_ptr );
   solver_ptr->setConvergenceResidue( 1.0e-6 );
   solver_ptr->solve( b, x );
   std::cout << "Vector x = " << x << std::endl;
}

int
main( int argc, char* argv[] )
{
   std::cout << "Solving linear system on host: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Sequential >();

#ifdef __CUDACC__
   std::cout << "Solving linear system on CUDA device: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Cuda >();
#endif
}

We still solve the same problem; the only changes are in the solver setup. We first use TNL::Solvers::getLinearSolver to get a shared pointer holding the solver and then TNL::Solvers::getPreconditioner to get a shared pointer holding the preconditioner. The rest of the code is the same as in the previous examples, except that we work with the pointer solver_ptr instead of a direct instance of the solver type.
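Since both functions take the solver and preconditioner names as plain strings, the choice can easily come from user input. The sketch below reads the names from the command line; the argument handling and the example invocation (e.g. "gmres ilu0") are illustrative additions, not part of the example above:

// Sketch: pick the solver and preconditioner by name at runtime,
// e.g. ./example gmres ilu0 (falls back to the values used in the example).
std::string solverName = ( argc > 1 ) ? argv[ 1 ] : "tfqmr";
std::string preconditionerName = ( argc > 2 ) ? argv[ 2 ] : "diagonal";
auto solver_ptr = TNL::Solvers::getLinearSolver< MatrixType >( solverName );
auto preconditioner_ptr = TNL::Solvers::getPreconditioner< MatrixType >( preconditionerName );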

The result looks as follows:

Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]