Template Numerical Library version main:1655e92
Linear solvers tutorial

Introduction

Solvers of linear systems are among the most important algorithms in scientific computing. TNL offers the following iterative methods:

  1. Stationary methods
    1. Jacobi method (TNL::Solvers::Linear::Jacobi)
    2. Successive-overrelaxation method, SOR (TNL::Solvers::Linear::SOR)
  2. Krylov subspace methods
    1. Conjugate gradient method, CG (TNL::Solvers::Linear::CG)
    2. Biconjugate gradient stabilized method, BICGStab (TNL::Solvers::Linear::BICGStab)
    3. Biconjugate gradient stabilized method, BICGStab(l) (TNL::Solvers::Linear::BICGStabL)
    4. Transpose-free quasi-minimal residual method, TFQMR (TNL::Solvers::Linear::TFQMR)
    5. Generalized minimal residual method, GMRES (TNL::Solvers::Linear::GMRES) with various methods of orthogonalization
      1. Classical Gram-Schmidt, CGS
      2. Classical Gram-Schmidt with reorthogonalization, CGSR
      3. Modified Gram-Schmidt, MGS
      4. Modified Gram-Schmidt with reorthogonalization, MGSR
      5. Compact WY form of the Householder reflections, CWY

The Krylov subspace methods (unlike the stationary solvers TNL::Solvers::Linear::Jacobi and TNL::Solvers::Linear::SOR) can be combined with the following preconditioners:

  1. Diagonal or Jacobi - TNL::Solvers::Linear::Preconditioners::Diagonal
  2. ILU (incomplete LU factorization) - currently available only on the CPU
    1. ILU(0) TNL::Solvers::Linear::Preconditioners::ILU0
    2. ILUT (ILU with thresholding) TNL::Solvers::Linear::Preconditioners::ILUT

Iterative solvers of linear systems

Basic setup

All iterative solvers for linear systems can be found in the namespace TNL::Solvers::Linear. The following example shows how to use them:

1 #include <iostream>
2 #include <memory>
3 #include <TNL/Algorithms/ParallelFor.h>
4 #include <TNL/Matrices/SparseMatrix.h>
5 #include <TNL/Devices/Host.h>
6 #include <TNL/Devices/Cuda.h>
7 #include <TNL/Solvers/Linear/TFQMR.h>
8
9 template< typename Device >
10 void iterativeLinearSolverExample()
11 {
12    /***
13     * Set the following matrix (dots represent zero matrix elements):
14     *
15     *    /  2.5 -1    .    .    .  \
16     *    | -1    2.5 -1    .    .  |
17     *    |  .   -1    2.5 -1    .  |
18     *    |  .    .   -1    2.5 -1  |
19     *    \  .    .    .   -1    2.5 /
20     */
21    using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;   // assumed type alias (double precision)
22    using Vector = TNL::Containers::Vector< double, Device >;           // assumed type alias (double precision)
23    const int size( 5 );
24    auto matrix_ptr = std::make_shared< MatrixType >();
25    matrix_ptr->setDimensions( size, size );
26    matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );
27
28    auto f = [=] __cuda_callable__ ( typename MatrixType::RowView& row ) mutable {
29       const int rowIdx = row.getRowIndex();
30       if( rowIdx == 0 )
31       {
32          row.setElement( 0, rowIdx, 2.5 );    // diagonal element
33          row.setElement( 1, rowIdx+1, -1 );   // element above the diagonal
34       }
35       else if( rowIdx == size - 1 )
36       {
37          row.setElement( 0, rowIdx-1, -1.0 ); // element below the diagonal
38          row.setElement( 1, rowIdx, 2.5 );    // diagonal element
39       }
40       else
41       {
42          row.setElement( 0, rowIdx-1, -1.0 ); // element below the diagonal
43          row.setElement( 1, rowIdx, 2.5 );    // diagonal element
44          row.setElement( 2, rowIdx+1, -1.0 ); // element above the diagonal
45       }
46    };
47
48    /***
49     * Set the matrix elements.
50     */
51    matrix_ptr->forAllRows( f );
52    std::cout << *matrix_ptr << std::endl;
53
54    /***
55     * Set the right-hand side vector.
56     */
57    Vector x( size, 1.0 );
58    Vector b( size );
59    matrix_ptr->vectorProduct( x, b );
60    x = 0.0;
61    std::cout << "Vector b = " << b << std::endl;
62
63    /***
64     * Solve the linear system.
65     */
66    using LinearSolver = TNL::Solvers::Linear::TFQMR< MatrixType >;     // assumed type alias (the TFQMR solver)
67    LinearSolver solver;
68    solver.setMatrix( matrix_ptr );
69    solver.setConvergenceResidue( 1.0e-6 );
70    solver.solve( b, x );
71    std::cout << "Vector x = " << x << std::endl;
72 }
73
74 int main( int argc, char* argv[] )
75 {
76    std::cout << "Solving linear system on host: " << std::endl;
77    iterativeLinearSolverExample< TNL::Devices::Sequential >();
78
79 #ifdef HAVE_CUDA
80    std::cout << "Solving linear system on CUDA device: " << std::endl;
81    iterativeLinearSolverExample< TNL::Devices::Cuda >();
82 #endif
83 }

In this example, we solve a linear system \( A \vec x = \vec b \), where

\[ A = \left( \begin{array}{ccccc} 2.5 & -1 & & & \\ -1 & 2.5 & -1 & & \\ & -1 & 2.5 & -1 & \\ & & -1 & 2.5 & -1 \\ & & & -1 & 2.5 \\ \end{array} \right) \]

The right-hand side vector \( \vec b \) is set to \( ( 1.5, 0.5, 0.5, 0.5, 1.5 )^T \) so that the exact solution is \( \vec x = ( 1, 1, 1, 1, 1 )^T \). The elements of \( A \) are set on lines 28-51 by means of the method TNL::Matrices::SparseMatrix::forAllRows. In this example, we use a sparse matrix, but any other matrix type can be used as well (see the namespace TNL::Matrices). Next, we set the solution vector \( \vec x = ( 1, 1, 1, 1, 1 )^T \) (line 57) and multiply it with the matrix \( A \) to get the right-hand side vector \( \vec b \) (lines 58-59). Finally, we reset the vector \( \vec x \) to the zero vector (line 60).

To solve the linear system, we use the TFQMR method (line 66) as an example. Other solvers can be used as well (see the namespace TNL::Solvers::Linear). The solver needs only one template parameter, which is the matrix type. Next, we create an instance of the solver (line 67) and set the matrix of the linear system (line 68). Note that the matrix is passed to the solver as a shared smart pointer (std::shared_ptr). This is why we created a shared pointer to the matrix on line 24 instead of the matrix itself. On line 69, we set the residue threshold below which the solver is considered converged and stopped; it serves as the convergence criterion. The solver is executed on line 70 by calling the method TNL::Solvers::Linear::LinearSolver::solve. The method accepts the right-hand side vector \( \vec b \) and the solution vector \( \vec x \).
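
Since all the linear solvers share the same interface, switching to another Krylov method only means changing the solver type on line 66. The following sketch shows the same setup with GMRES instead of TFQMR; the header path and the setMaxIterations method (inherited from the common iterative solver base) are assumptions here, not part of the listing above.

#include <TNL/Solvers/Linear/GMRES.h>   // assumed header path, analogous to TFQMR.h

// ... matrix_ptr, b and x are prepared exactly as on lines 21-61 above ...
using LinearSolver = TNL::Solvers::Linear::GMRES< MatrixType >;
LinearSolver solver;
solver.setMatrix( matrix_ptr );           // the matrix is again passed as std::shared_ptr
solver.setConvergenceResidue( 1.0e-6 );   // stop when the residue drops below 1e-6
solver.setMaxIterations( 1000 );          // assumed: iteration limit from the iterative solver base class
solver.solve( b, x );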

The result looks as follows:

Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]
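
The computed solution can also be checked programmatically by evaluating the residual of the system. The following fragment continues the example above; it is an illustrative sketch which assumes that TNL vectors support compound assignment and that the vector expression helper TNL::l2Norm is available.

// Check the residual r = A*x - b of the computed solution.
Vector r( size );
matrix_ptr->vectorProduct( x, r );   // r = A * x
r -= b;                              // r = A * x - b
std::cout << "Residual norm: " << TNL::l2Norm( r ) << std::endl;   // assumed norm helper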

Setup with a solver monitor

Solution of large linear systems may take a lot of time. In such situations, it is useful to be able to monitor the convergence of the solver, or its status in general. For this purpose, TNL offers solver monitors. A solver monitor prints (or otherwise visualizes) the number of iterations, the residue of the current solution approximation, or other metrics. A common approach is to print such information after each iteration or after every ten iterations. The problem with this approach is that one solver iteration may take only a few milliseconds, but it may also take several minutes. In the former case, the monitor produces an overwhelming amount of output and may even slow down the solver. In the latter case, the user waits a long time for any update of the solver status. The monitor in TNL therefore runs in a separate thread and refreshes the solver status at a preset time interval. The use of the iterative solver monitor is demonstrated in the following example.

1 #include <iostream>
2 #include <memory>
3 #include <chrono>
4 #include <thread>
5 #include <TNL/Algorithms/ParallelFor.h>
6 #include <TNL/Matrices/SparseMatrix.h>
7 #include <TNL/Devices/Sequential.h>
8 #include <TNL/Devices/Cuda.h>
9 #include <TNL/Solvers/Linear/Jacobi.h>
10
11 template< typename Device >
12 void iterativeLinearSolverExample()
13 {
14    /***
15     * Set the following matrix (dots represent zero matrix elements):
16     *
17     *    /  2.5 -1    .    .    .  \
18     *    | -1    2.5 -1    .    .  |
19     *    |  .   -1    2.5 -1    .  |
20     *    |  .    .   -1    2.5 -1  |
21     *    \  .    .    .   -1    2.5 /
22     */
23    using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;   // assumed type alias (double precision)
24    using Vector = TNL::Containers::Vector< double, Device >;           // assumed type alias (double precision)
25    const int size( 5 );
26    auto matrix_ptr = std::make_shared< MatrixType >();
27    matrix_ptr->setDimensions( size, size );
28    matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );
29
30    auto f = [=] __cuda_callable__ ( typename MatrixType::RowView& row ) mutable {
31       const int rowIdx = row.getRowIndex();
32       if( rowIdx == 0 )
33       {
34          row.setElement( 0, rowIdx, 2.5 );    // diagonal element
35          row.setElement( 1, rowIdx+1, -1 );   // element above the diagonal
36       }
37       else if( rowIdx == size - 1 )
38       {
39          row.setElement( 0, rowIdx-1, -1.0 ); // element below the diagonal
40          row.setElement( 1, rowIdx, 2.5 );    // diagonal element
41       }
42       else
43       {
44          row.setElement( 0, rowIdx-1, -1.0 ); // element below the diagonal
45          row.setElement( 1, rowIdx, 2.5 );    // diagonal element
46          row.setElement( 2, rowIdx+1, -1.0 ); // element above the diagonal
47       }
48    };
49
50    /***
51     * Set the matrix elements.
52     */
53    matrix_ptr->forAllRows( f );
54    std::cout << *matrix_ptr << std::endl;
55
56    /***
57     * Set the right-hand side vector.
58     */
59    Vector x( size, 1.0 );
60    Vector b( size );
61    matrix_ptr->vectorProduct( x, b );
62    x = 0.0;
63    std::cout << "Vector b = " << b << std::endl;
64
65    /***
66     * Setup solver of the linear system.
67     */
68    using LinearSolver = TNL::Solvers::Linear::Jacobi< MatrixType >;    // assumed type alias (the Jacobi solver)
69    LinearSolver solver;
70    solver.setMatrix( matrix_ptr );
71    solver.setOmega( 0.0005 );
72
73    /***
74     * Setup monitor of the iterative solver.
75     */
76    using IterativeSolverMonitorType = TNL::Solvers::IterativeSolverMonitor< double, int >;
77    IterativeSolverMonitorType monitor;
78    TNL::Solvers::SolverMonitorThread monitorThread( monitor );
79    monitor.setRefreshRate( 10 );  // refresh rate in milliseconds
80    monitor.setVerbose( 1 );
81    monitor.setStage( "Jacobi stage:" );
82    solver.setSolverMonitor( monitor );
83    solver.setConvergenceResidue( 1.0e-6 );
84    solver.solve( b, x );
85    monitor.stopMainLoop();
86    std::cout << "Vector x = " << x << std::endl;
87 }
88
89 int main( int argc, char* argv[] )
90 {
91    std::cout << "Solving linear system on host: " << std::endl;
92    iterativeLinearSolverExample< TNL::Devices::Host >();
93
94 #ifdef HAVE_CUDA
95    std::cout << "Solving linear system on CUDA device: " << std::endl;
96    iterativeLinearSolverExample< TNL::Devices::Cuda >();
97 #endif
98 }

On lines 1-70, we set up the same linear system as in the previous example, create an instance of the Jacobi solver, and pass the matrix of the linear system to the solver. On line 71, we set the relaxation parameter \( \omega \) of the Jacobi solver to 0.0005 (TNL::Solvers::Linear::Jacobi). The reason is to slow down the convergence, because we want to see some iterations in this example. Next, we create an instance of the solver monitor (lines 76 and 77) and a separate thread for the monitor (line 78, TNL::Solvers::SolverMonitorThread). We set the refresh rate of the monitor to 10 milliseconds (line 79, TNL::Solvers::SolverMonitor::setRefreshRate). We set the verbosity of the monitor to 1 (line 80, TNL::Solvers::IterativeSolverMonitor::setVerbose). Next, we set the name of the solver stage (line 81, TNL::Solvers::IterativeSolverMonitor::setStage). The stages serve to distinguish between different phases of more complex solvers (for example, when the linear system solver is embedded in a time-dependent PDE solver). Next, we connect the solver with the monitor (line 82, TNL::Solvers::IterativeSolver::setSolverMonitor) and set the convergence criterion based on the value of the residue (line 83). Finally, we start the solver (line 84, TNL::Solvers::Linear::Jacobi::solve) and, when the solver finishes, we stop the monitor (line 85, TNL::Solvers::SolverMonitor::stopMainLoop).
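
The same monitor wiring can be reused with any TNL linear solver. The following helper is an illustrative sketch only; it reuses the classes from the listing above and assumes that solve() reports convergence through its return value.

// Attach a monitor to a solver, run it, and stop the monitor afterwards.
template< typename Solver, typename VectorType >
bool solveWithMonitor( Solver& solver, const VectorType& b, VectorType& x, const char* stage )
{
   TNL::Solvers::IterativeSolverMonitor< double, int > monitor;
   TNL::Solvers::SolverMonitorThread monitorThread( monitor );   // runs the monitor loop in a separate thread
   monitor.setRefreshRate( 10 );   // refresh every 10 milliseconds
   monitor.setVerbose( 1 );
   monitor.setStage( stage );
   solver.setSolverMonitor( monitor );
   const bool converged = solver.solve( b, x );
   monitor.stopMainLoop();         // stop the monitor once the solver has finished
   return converged;
}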

The result looks as follows:

Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Jacobi stage: ITER: 6885 RES: 0.25166
Jacobi stage: ITER: 13901 RES: 0.08539
Jacobi stage: ITER: 20941 RES: 0.028958
Jacobi stage: ITER: 27813 RES: 0.010077
Jacobi stage: ITER: 34863 RES: 0.0034134
Jacobi stage: ITER: 41997 RES: 0.0011407
Jacobi stage: ITER: 49157 RES: 0.00037977
Jacobi stage: ITER: 56283 RES: 0.00012714
Jacobi stage: ITER: 63033 RES: 4.5069e-05
Jacobi stage: ITER: 69753 RES: 1.6055e-05
Jacobi stage: ITER: 76487 RES: 5.7085e-06
Jacobi stage: ITER: 83229 RES: 2.026e-06
Vector x = [ 0.999999, 0.999999, 0.999998, 0.999999, 0.999999 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Jacobi stage: ITER: 1415 RES: 0.66491
Jacobi stage: ITER: 2079 RES: 0.57253
Jacobi stage: ITER: 2745 RES: 0.50077
Jacobi stage: ITER: 3411 RES: 0.44322
Jacobi stage: ITER: 4075 RES: 0.39512
Jacobi stage: ITER: 4739 RES: 0.35389
Jacobi stage: ITER: 5363 RES: 0.31996
Jacobi stage: ITER: 6027 RES: 0.28795
Jacobi stage: ITER: 6683 RES: 0.2598
Jacobi stage: ITER: 7315 RES: 0.23545
Jacobi stage: ITER: 7981 RES: 0.21231
Jacobi stage: ITER: 8645 RES: 0.19161
Jacobi stage: ITER: 10039 RES: 0.15464
Jacobi stage: ITER: 10701 RES: 0.13962
Jacobi stage: ITER: 11247 RES: 0.12842
Jacobi stage: ITER: 11699 RES: 0.1198
Jacobi stage: ITER: 12153 RES: 0.1117
Jacobi stage: ITER: 12607 RES: 0.1042
Jacobi stage: ITER: 13059 RES: 0.097211
Jacobi stage: ITER: 13513 RES: 0.090634
Jacobi stage: ITER: 13967 RES: 0.084554
Jacobi stage: ITER: 14419 RES: 0.078882
Jacobi stage: ITER: 14873 RES: 0.073546
Jacobi stage: ITER: 15327 RES: 0.068613
Jacobi stage: ITER: 15781 RES: 0.063971
Jacobi stage: ITER: 16235 RES: 0.05968
Jacobi stage: ITER: 16687 RES: 0.055677
Jacobi stage: ITER: 17143 RES: 0.051911
Jacobi stage: ITER: 17595 RES: 0.048429
Jacobi stage: ITER: 18103 RES: 0.044794
Jacobi stage: ITER: 18557 RES: 0.041764
Jacobi stage: ITER: 19011 RES: 0.038963
Jacobi stage: ITER: 19465 RES: 0.036327
Jacobi stage: ITER: 19919 RES: 0.03389
Jacobi stage: ITER: 20371 RES: 0.031617
Jacobi stage: ITER: 20825 RES: 0.029478
Jacobi stage: ITER: 21279 RES: 0.027501
Jacobi stage: ITER: 21731 RES: 0.025657
Jacobi stage: ITER: 22187 RES: 0.023921
Jacobi stage: ITER: 22639 RES: 0.022317
Jacobi stage: ITER: 23093 RES: 0.020807
Jacobi stage: ITER: 23547 RES: 0.019411
Jacobi stage: ITER: 24001 RES: 0.018098
Jacobi stage: ITER: 24455 RES: 0.016884
Jacobi stage: ITER: 24907 RES: 0.015752
Jacobi stage: ITER: 25361 RES: 0.014686
Jacobi stage: ITER: 25815 RES: 0.013701
Jacobi stage: ITER: 26269 RES: 0.012775
Jacobi stage: ITER: 26723 RES: 0.011918
Jacobi stage: ITER: 27177 RES: 0.011112
Jacobi stage: ITER: 27683 RES: 0.010284
Jacobi stage: ITER: 28139 RES: 0.0095881
Jacobi stage: ITER: 28591 RES: 0.008945
Jacobi stage: ITER: 29047 RES: 0.0083399
Jacobi stage: ITER: 29499 RES: 0.0077806
Jacobi stage: ITER: 29953 RES: 0.0072542
Jacobi stage: ITER: 30407 RES: 0.0067677
Jacobi stage: ITER: 30861 RES: 0.0063099
Jacobi stage: ITER: 31315 RES: 0.0058866
Jacobi stage: ITER: 31769 RES: 0.0054884
Jacobi stage: ITER: 32223 RES: 0.0051203
Jacobi stage: ITER: 32675 RES: 0.0047769
Jacobi stage: ITER: 33129 RES: 0.0044537
Jacobi stage: ITER: 33583 RES: 0.004155
Jacobi stage: ITER: 34037 RES: 0.003874
Jacobi stage: ITER: 34491 RES: 0.0036141
Jacobi stage: ITER: 34943 RES: 0.0033717
Jacobi stage: ITER: 35459 RES: 0.0031148
Jacobi stage: ITER: 35911 RES: 0.0029059
Jacobi stage: ITER: 36367 RES: 0.0027093
Jacobi stage: ITER: 36819 RES: 0.0025276
Jacobi stage: ITER: 37273 RES: 0.0023566
Jacobi stage: ITER: 37727 RES: 0.0021985
Jacobi stage: ITER: 38183 RES: 0.0020498
Jacobi stage: ITER: 38635 RES: 0.0019123
Jacobi stage: ITER: 39089 RES: 0.001783
Jacobi stage: ITER: 39543 RES: 0.0016634
Jacobi stage: ITER: 39997 RES: 0.0015509
Jacobi stage: ITER: 40451 RES: 0.0014468
Jacobi stage: ITER: 40905 RES: 0.001349
Jacobi stage: ITER: 41359 RES: 0.0012585
Jacobi stage: ITER: 41815 RES: 0.0011734
Jacobi stage: ITER: 42267 RES: 0.0010947
Jacobi stage: ITER: 42723 RES: 0.0010206
Jacobi stage: ITER: 43273 RES: 0.00093764
Jacobi stage: ITER: 43727 RES: 0.00087475
Jacobi stage: ITER: 44181 RES: 0.00081558
Jacobi stage: ITER: 44635 RES: 0.00076087
Jacobi stage: ITER: 45089 RES: 0.0007094
Jacobi stage: ITER: 45543 RES: 0.00066182
Jacobi stage: ITER: 45997 RES: 0.00061705
Jacobi stage: ITER: 46451 RES: 0.00057567
Jacobi stage: ITER: 46905 RES: 0.00053672
Jacobi stage: ITER: 47359 RES: 0.00050073
Jacobi stage: ITER: 47813 RES: 0.00046685
Jacobi stage: ITER: 48265 RES: 0.00043554
Jacobi stage: ITER: 48719 RES: 0.00040633
Jacobi stage: ITER: 49173 RES: 0.00037884
Jacobi stage: ITER: 49627 RES: 0.00035343
Jacobi stage: ITER: 50079 RES: 0.00032973
Jacobi stage: ITER: 50535 RES: 0.00030742
Jacobi stage: ITER: 50987 RES: 0.0002868
Jacobi stage: ITER: 51441 RES: 0.0002674
Jacobi stage: ITER: 51955 RES: 0.00024718
Jacobi stage: ITER: 52407 RES: 0.0002306
Jacobi stage: ITER: 52861 RES: 0.000215
Jacobi stage: ITER: 53315 RES: 0.00020058
Jacobi stage: ITER: 53767 RES: 0.00018713
Jacobi stage: ITER: 54221 RES: 0.00017447
Jacobi stage: ITER: 54675 RES: 0.00016277
Jacobi stage: ITER: 55129 RES: 0.00015175
Jacobi stage: ITER: 55583 RES: 0.00014158
Jacobi stage: ITER: 56035 RES: 0.00013208
Jacobi stage: ITER: 56489 RES: 0.00012315
Jacobi stage: ITER: 56943 RES: 0.00011489
Jacobi stage: ITER: 57397 RES: 0.00010711
Jacobi stage: ITER: 57851 RES: 9.993e-05
Jacobi stage: ITER: 58303 RES: 9.3227e-05
Jacobi stage: ITER: 58757 RES: 8.6921e-05
Jacobi stage: ITER: 59211 RES: 8.1091e-05
Jacobi stage: ITER: 59663 RES: 7.5652e-05
Jacobi stage: ITER: 60171 RES: 6.9973e-05
Jacobi stage: ITER: 60627 RES: 6.524e-05
Jacobi stage: ITER: 61079 RES: 6.0864e-05
Jacobi stage: ITER: 61533 RES: 5.6747e-05
Jacobi stage: ITER: 61987 RES: 5.2941e-05
Jacobi stage: ITER: 62433 RES: 4.942e-05
Jacobi stage: ITER: 62879 RES: 4.6162e-05
Jacobi stage: ITER: 63331 RES: 4.3066e-05
Jacobi stage: ITER: 63779 RES: 4.0202e-05
Jacobi stage: ITER: 64223 RES: 3.7552e-05
Jacobi stage: ITER: 64727 RES: 3.4754e-05
Jacobi stage: ITER: 65169 RES: 3.2463e-05
Jacobi stage: ITER: 65619 RES: 3.0304e-05
Jacobi stage: ITER: 66073 RES: 2.8254e-05
Jacobi stage: ITER: 66511 RES: 2.6424e-05
Jacobi stage: ITER: 66963 RES: 2.4652e-05
Jacobi stage: ITER: 67411 RES: 2.3013e-05
Jacobi stage: ITER: 67903 RES: 2.1337e-05
Jacobi stage: ITER: 68355 RES: 1.9906e-05
Jacobi stage: ITER: 68793 RES: 1.8605e-05
Jacobi stage: ITER: 69247 RES: 1.7358e-05
Jacobi stage: ITER: 69695 RES: 1.6203e-05
Jacobi stage: ITER: 70139 RES: 1.5135e-05
Jacobi stage: ITER: 70595 RES: 1.4111e-05
Jacobi stage: ITER: 71031 RES: 1.3197e-05
Jacobi stage: ITER: 71485 RES: 1.2304e-05
Jacobi stage: ITER: 71931 RES: 1.1493e-05
Jacobi stage: ITER: 72375 RES: 1.0736e-05
Jacobi stage: ITER: 72829 RES: 1.0009e-05
Jacobi stage: ITER: 73267 RES: 9.3609e-06
Jacobi stage: ITER: 73719 RES: 8.7331e-06
Jacobi stage: ITER: 74171 RES: 8.1473e-06
Jacobi stage: ITER: 74611 RES: 7.6149e-06
Jacobi stage: ITER: 75065 RES: 7.0998e-06
Jacobi stage: ITER: 75557 RES: 6.583e-06
Jacobi stage: ITER: 76007 RES: 6.1452e-06
Jacobi stage: ITER: 76459 RES: 5.7331e-06
Jacobi stage: ITER: 76899 RES: 5.3584e-06
Jacobi stage: ITER: 77351 RES: 4.999e-06
Jacobi stage: ITER: 77793 RES: 4.6694e-06
Jacobi stage: ITER: 78243 RES: 4.3589e-06
Jacobi stage: ITER: 78697 RES: 4.0641e-06
Jacobi stage: ITER: 79135 RES: 3.8008e-06
Jacobi stage: ITER: 79587 RES: 3.5459e-06
Jacobi stage: ITER: 80031 RES: 3.3121e-06
Jacobi stage: ITER: 80479 RES: 3.0919e-06
Jacobi stage: ITER: 80931 RES: 2.8845e-06
Jacobi stage: ITER: 81371 RES: 2.696e-06
Jacobi stage: ITER: 81827 RES: 2.5136e-06
Jacobi stage: ITER: 82279 RES: 2.345e-06
Jacobi stage: ITER: 82735 RES: 2.1864e-06
Jacobi stage: ITER: 83195 RES: 2.0372e-06
Jacobi stage: ITER: 83695 RES: 1.8866e-06
Jacobi stage: ITER: 84151 RES: 1.759e-06
Jacobi stage: ITER: 84603 RES: 1.641e-06
Jacobi stage: ITER: 85059 RES: 1.53e-06
Jacobi stage: ITER: 85513 RES: 1.4265e-06
Jacobi stage: ITER: 85967 RES: 1.3308e-06
Jacobi stage: ITER: 86421 RES: 1.2408e-06
Jacobi stage: ITER: 86875 RES: 1.1576e-06
Jacobi stage: ITER: 87329 RES: 1.0793e-06
Jacobi stage: ITER: 87783 RES: 1.0069e-06
Vector x = [ 0.999999, 0.999999, 0.999998, 0.999999, 0.999999 ]

Monitoring of the solver can be improved by showing the time elapsed since the beginning of the computation, as demonstrated in the following example:

1 #include <iostream>
2 #include <memory>
3 #include <chrono>
4 #include <thread>
5 #include <TNL/Timer.h>
6 #include <TNL/Algorithms/ParallelFor.h>
7 #include <TNL/Matrices/SparseMatrix.h>
8 #include <TNL/Devices/Sequential.h>
9 #include <TNL/Devices/Cuda.h>
10 #include <TNL/Solvers/Linear/Jacobi.h>
11
12 template< typename Device >
13 void iterativeLinearSolverExample()
14 {
15    /***
16     * Set the following matrix (dots represent zero matrix elements):
17     *
18     *    /  2.5 -1    .    .    .  \
19     *    | -1    2.5 -1    .    .  |
20     *    |  .   -1    2.5 -1    .  |
21     *    |  .    .   -1    2.5 -1  |
22     *    \  .    .    .   -1    2.5 /
23     */
24    using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;   // assumed type alias (double precision)
25    using Vector = TNL::Containers::Vector< double, Device >;           // assumed type alias (double precision)
26    const int size( 5 );
27    auto matrix_ptr = std::make_shared< MatrixType >();
28    matrix_ptr->setDimensions( size, size );
29    matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );
30
31    auto f = [=] __cuda_callable__ ( typename MatrixType::RowView& row ) mutable {
32       const int rowIdx = row.getRowIndex();
33       if( rowIdx == 0 )
34       {
35          row.setElement( 0, rowIdx, 2.5 );    // diagonal element
36          row.setElement( 1, rowIdx+1, -1 );   // element above the diagonal
37       }
38       else if( rowIdx == size - 1 )
39       {
40          row.setElement( 0, rowIdx-1, -1.0 ); // element below the diagonal
41          row.setElement( 1, rowIdx, 2.5 );    // diagonal element
42       }
43       else
44       {
45          row.setElement( 0, rowIdx-1, -1.0 ); // element below the diagonal
46          row.setElement( 1, rowIdx, 2.5 );    // diagonal element
47          row.setElement( 2, rowIdx+1, -1.0 ); // element above the diagonal
48       }
49    };
50
51    /***
52     * Set the matrix elements.
53     */
54    matrix_ptr->forAllRows( f );
55    std::cout << *matrix_ptr << std::endl;
56
57    /***
58     * Set the right-hand side vector.
59     */
60    Vector x( size, 1.0 );
61    Vector b( size );
62    matrix_ptr->vectorProduct( x, b );
63    x = 0.0;
64    std::cout << "Vector b = " << b << std::endl;
65
66    /***
67     * Setup solver of the linear system.
68     */
69    using LinearSolver = TNL::Solvers::Linear::Jacobi< MatrixType >;    // assumed type alias (the Jacobi solver)
70    LinearSolver solver;
71    solver.setMatrix( matrix_ptr );
72    solver.setOmega( 0.0005 );
73
74    /***
75     * Setup monitor of the iterative solver.
76     */
77    using IterativeSolverMonitorType = TNL::Solvers::IterativeSolverMonitor< double, int >;
78    IterativeSolverMonitorType monitor;
79    TNL::Solvers::SolverMonitorThread monitorThread( monitor );
80    monitor.setRefreshRate( 10 );  // refresh rate in milliseconds
81    monitor.setVerbose( 1 );
82    monitor.setStage( "Jacobi stage:" );
83    TNL::Timer timer;
84    monitor.setTimer( timer );
85    timer.start();
86    solver.setSolverMonitor( monitor );
87    solver.setConvergenceResidue( 1.0e-6 );
88    solver.solve( b, x );
89    monitor.stopMainLoop();
90    std::cout << "Vector x = " << x << std::endl;
91 }
92
93 int main( int argc, char* argv[] )
94 {
95    std::cout << "Solving linear system on host: " << std::endl;
96    iterativeLinearSolverExample< TNL::Devices::Sequential >();
97
98 #ifdef HAVE_CUDA
99    std::cout << "Solving linear system on CUDA device: " << std::endl;
100    iterativeLinearSolverExample< TNL::Devices::Cuda >();
101 #endif
102 }

The only changes happen on lines 83-85, where we create an instance of the TNL timer (line 83, TNL::Timer), connect it with the monitor (line 84, TNL::Solvers::SolverMonitor::setTimer) and start the timer (line 85, TNL::Timer::start).
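
The same timer can also be queried after the solver finishes to report the total solve time. This is a small sketch; the methods stop and getRealTime are assumed to be available on TNL::Timer, which measures real time, CPU time and CPU cycles.

TNL::Timer timer;
monitor.setTimer( timer );   // monitor lines will be prefixed with the elapsed time (ELA)
timer.start();
solver.solve( b, x );
monitor.stopMainLoop();
timer.stop();                // assumed Timer method
std::cout << "Solver finished in " << timer.getRealTime() << " seconds." << std::endl;   // assumed Timer method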

The result looks as follows:

Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
ELA:2.4859e-
ELA: 0.10014 Jacobi stage: ITER: 18021 RES: 0.045348
ELA: 0.20028 Jacobi stage: ITER: 36015 RES: 0.0028598
ELA: 0.30041 Jacobi stage: ITER: 54085 RES: 0.00017815
ELA: 0.40053 Jacobi stage: ITER: 72023 RES: 1.1332e-05
Vector x = [ 0.999999, 0.999999, 0.999998, 0.999999, 0.999999 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
ELA:2.2646e-
ELA: 0.10016 Jacobi stage: ITER: 451 RES: 0.86383
ELA: 0.2003 Jacobi stage: ITER: 903 RES: 0.75805
ELA: 0.30067 Jacobi stage: ITER: 1359 RES: 0.67401
ELA: 0.40081 Jacobi stage: ITER: 1811 RES: 0.60674
ELA: 0.50095 Jacobi stage: ITER: 2267 RES: 0.55058
ELA: 0.60119 Jacobi stage: ITER: 2719 RES: 0.50346
ELA: 0.70134 Jacobi stage: ITER: 3173 RES: 0.46241
ELA: 0.80148 Jacobi stage: ITER: 3627 RES: 0.42669
ELA: 0.90161 Jacobi stage: ITER: 4081 RES: 0.39459
ELA: 1.0018 Jacobi stage: ITER: 4535 RES: 0.36594
ELA: 1.1019 Jacobi stage: ITER: 4987 RES: 0.33991
ELA: 1.202 Jacobi stage: ITER: 5441 RES: 0.31589
ELA: 1.303 Jacobi stage: ITER: 5899 RES: 0.29383
ELA: 1.4031 Jacobi stage: ITER: 6351 RES: 0.27365
ELA: 1.5032 Jacobi stage: ITER: 6805 RES: 0.25482
ELA: 1.6034 Jacobi stage: ITER: 7313 RES: 0.23545
ELA: 1.7035 Jacobi stage: ITER: 7767 RES: 0.21952
ELA: 1.8037 Jacobi stage: ITER: 8221 RES: 0.20457
ELA: 1.9038 Jacobi stage: ITER: 8675 RES: 0.19079
ELA: 2.0039 Jacobi stage: ITER: 9129 RES: 0.17784
ELA: 2.1041 Jacobi stage: ITER: 9583 RES: 0.16588
ELA: 2.2042 Jacobi stage: ITER: 10037 RES: 0.15464
ELA: 2.3043 Jacobi stage: ITER: 10491 RES: 0.14425
ELA: 2.4045 Jacobi stage: ITER: 10945 RES: 0.13448
ELA: 2.5046 Jacobi stage: ITER: 11399 RES: 0.12546
ELA: 2.6047 Jacobi stage: ITER: 11851 RES: 0.11704
ELA: 2.7048 Jacobi stage: ITER: 12307 RES: 0.10912
ELA: 2.805 Jacobi stage: ITER: 12759 RES: 0.1018
ELA: 2.9051 Jacobi stage: ITER: 13213 RES: 0.094909
ELA: 3.0053 Jacobi stage: ITER: 13667 RES: 0.088542
ELA: 3.1054 Jacobi stage: ITER: 14121 RES: 0.082552
ELA: 3.2055 Jacobi stage: ITER: 14575 RES: 0.077015
ELA: 3.3057 Jacobi stage: ITER: 15029 RES: 0.071805
ELA: 3.4058 Jacobi stage: ITER: 15483 RES: 0.066988
ELA: 3.506 Jacobi stage: ITER: 15935 RES: 0.062495
ELA: 3.6061 Jacobi stage: ITER: 16443 RES: 0.057804
ELA: 3.7062 Jacobi stage: ITER: 16899 RES: 0.053894
ELA: 3.8064 Jacobi stage: ITER: 17351 RES: 0.050279
ELA: 3.9065 Jacobi stage: ITER: 17807 RES: 0.046878
ELA: 4.0067 Jacobi stage: ITER: 18259 RES: 0.043733
ELA: 4.1068 Jacobi stage: ITER: 18713 RES: 0.040775
ELA: 4.207 Jacobi stage: ITER: 19167 RES: 0.03804
ELA: 4.3071 Jacobi stage: ITER: 19623 RES: 0.035467
ELA: 4.4073 Jacobi stage: ITER: 20075 RES: 0.033088
ELA: 4.5074 Jacobi stage: ITER: 20529 RES: 0.03085
ELA: 4.6076 Jacobi stage: ITER: 20983 RES: 0.028781
ELA: 4.7077 Jacobi stage: ITER: 21439 RES: 0.026834
ELA: 4.8079 Jacobi stage: ITER: 21891 RES: 0.025034
ELA: 4.908 Jacobi stage: ITER: 22345 RES: 0.02334
ELA: 5.0081 Jacobi stage: ITER: 22799 RES: 0.021775
ELA: 5.1082 Jacobi stage: ITER: 23253 RES: 0.020302
ELA: 5.2084 Jacobi stage: ITER: 23707 RES: 0.01894
ELA: 5.3085 Jacobi stage: ITER: 24167 RES: 0.017648
ELA: 5.4087 Jacobi stage: ITER: 24667 RES: 0.016344
ELA: 5.5088 Jacobi stage: ITER: 25121 RES: 0.015238
ELA: 5.6089 Jacobi stage: ITER: 25575 RES: 0.014216
ELA: 5.7091 Jacobi stage: ITER: 26029 RES: 0.013254
ELA: 5.8092 Jacobi stage: ITER: 26483 RES: 0.012365
ELA: 5.9093 Jacobi stage: ITER: 26937 RES: 0.011529
ELA: 6.0095 Jacobi stage: ITER: 27391 RES: 0.010756
ELA: 6.1096 Jacobi stage: ITER: 27845 RES: 0.010028
ELA: 6.2097 Jacobi stage: ITER: 28299 RES: 0.0093554
ELA: 6.3098 Jacobi stage: ITER: 28753 RES: 0.0087225
ELA: 6.41 Jacobi stage: ITER: 29207 RES: 0.0081375
ELA: 6.5101 Jacobi stage: ITER: 29661 RES: 0.007587
ELA: 6.6103 Jacobi stage: ITER: 30115 RES: 0.0070781
ELA: 6.7104 Jacobi stage: ITER: 30571 RES: 0.0065993
ELA: 6.8106 Jacobi stage: ITER: 31023 RES: 0.0061567
ELA: 6.9107 Jacobi stage: ITER: 31479 RES: 0.0057402
ELA: 7.0108 Jacobi stage: ITER: 32031 RES: 0.0052736
ELA: 7.111 Jacobi stage: ITER: 32483 RES: 0.0049199
ELA: 7.2111 Jacobi stage: ITER: 32939 RES: 0.0045871
ELA: 7.3112 Jacobi stage: ITER: 33391 RES: 0.0042794
ELA: 7.4113 Jacobi stage: ITER: 33847 RES: 0.0039899
ELA: 7.5114 Jacobi stage: ITER: 34299 RES: 0.0037223
ELA: 7.6116 Jacobi stage: ITER: 34755 RES: 0.0034705
ELA: 7.7117 Jacobi stage: ITER: 35207 RES: 0.0032377
ELA: 7.8118 Jacobi stage: ITER: 35661 RES: 0.0030187
ELA: 7.912 Jacobi stage: ITER: 36115 RES: 0.0028162
ELA: 8.0121 Jacobi stage: ITER: 36571 RES: 0.0026257
ELA: 8.1123 Jacobi stage: ITER: 37023 RES: 0.0024496
ELA: 8.2124 Jacobi stage: ITER: 37479 RES: 0.0022839
ELA: 8.3125 Jacobi stage: ITER: 37931 RES: 0.0021307
ELA: 8.4127 Jacobi stage: ITER: 38385 RES: 0.0019866
ELA: 8.5128 Jacobi stage: ITER: 38839 RES: 0.0018533
ELA: 8.613 Jacobi stage: ITER: 39293 RES: 0.001728
ELA: 8.7131 Jacobi stage: ITER: 39747 RES: 0.0016121
ELA: 8.8133 Jacobi stage: ITER: 40199 RES: 0.0015039
ELA: 8.9134 Jacobi stage: ITER: 40713 RES: 0.0013893
ELA: 9.0135 Jacobi stage: ITER: 41167 RES: 0.0012962
ELA: 9.1137 Jacobi stage: ITER: 41621 RES: 0.0012085
ELA: 9.2138 Jacobi stage: ITER: 42075 RES: 0.0011274
ELA: 9.314 Jacobi stage: ITER: 42527 RES: 0.0010518
ELA: 9.4141 Jacobi stage: ITER: 42983 RES: 0.00098065
ELA: 9.5143 Jacobi stage: ITER: 43435 RES: 0.00091488
ELA: 9.6144 Jacobi stage: ITER: 43889 RES: 0.00085299
ELA: 9.7146 Jacobi stage: ITER: 44343 RES: 0.00079578
ELA: 9.8147 Jacobi stage: ITER: 44795 RES: 0.0007424
ELA: 9.9148 Jacobi stage: ITER: 45251 RES: 0.00069218
ELA: 10.015 Jacobi stage: ITER: 45703 RES: 0.00064576
ELA: 10.115 Jacobi stage: ITER: 46157 RES: 0.00060207
ELA: 10.215 Jacobi stage: ITER: 46611 RES: 0.00056169
ELA: 10.315 Jacobi stage: ITER: 47063 RES: 0.00052369
ELA: 10.416 Jacobi stage: ITER: 47519 RES: 0.00048857
ELA: 10.516 Jacobi stage: ITER: 47971 RES: 0.0004558
ELA: 10.616 Jacobi stage: ITER: 48425 RES: 0.00042497
ELA: 10.716 Jacobi stage: ITER: 48879 RES: 0.00039646
ELA: 10.816 Jacobi stage: ITER: 49387 RES: 0.0003667
ELA: 10.916 Jacobi stage: ITER: 49839 RES: 0.00034211
ELA: 11.016 Jacobi stage: ITER: 50295 RES: 0.00031897
ELA: 11.117 Jacobi stage: ITER: 50747 RES: 0.00029757
ELA: 11.217 Jacobi stage: ITER: 51203 RES: 0.00027744
ELA: 11.317 Jacobi stage: ITER: 51639 RES: 0.00025947
ELA: 11.417 Jacobi stage: ITER: 52091 RES: 0.00024207
ELA: 11.517 Jacobi stage: ITER: 52547 RES: 0.00022569
ELA: 11.617 Jacobi stage: ITER: 52983 RES: 0.00021107
ELA: 11.718 Jacobi stage: ITER: 53435 RES: 0.00019692
ELA: 11.842 Jacobi stage: ITER: 53883 RES: 0.00018382
ELA: 11.942 Jacobi stage: ITER: 54327 RES: 0.0001717
ELA: 12.042 Jacobi stage: ITER: 54781 RES: 0.00016009
ELA: 12.143 Jacobi stage: ITER: 55219 RES: 0.00014972
ELA: 12.243 Jacobi stage: ITER: 55671 RES: 0.00013968
ELA: 12.343 Jacobi stage: ITER: 56123 RES: 0.00013031
ELA: 12.443 Jacobi stage: ITER: 56609 RES: 0.0001209
ELA: 12.543 Jacobi stage: ITER: 57063 RES: 0.00011279
ELA: 12.643 Jacobi stage: ITER: 57501 RES: 0.00010542
ELA: 12.743 Jacobi stage: ITER: 57955 RES: 9.8346e-05
ELA: 12.844 Jacobi stage: ITER: 58409 RES: 9.1694e-05
ELA: 12.944 Jacobi stage: ITER: 58847 RES: 8.5754e-05
ELA: 13.044 Jacobi stage: ITER: 59301 RES: 7.9953e-05
ELA: 13.144 Jacobi stage: ITER: 59743 RES: 7.4728e-05
ELA: 13.244 Jacobi stage: ITER: 60191 RES: 6.9759e-05
ELA: 13.344 Jacobi stage: ITER: 60643 RES: 6.508e-05
ELA: 13.445 Jacobi stage: ITER: 61083 RES: 6.0827e-05
ELA: 13.545 Jacobi stage: ITER: 61535 RES: 5.6747e-05
ELA: 13.645 Jacobi stage: ITER: 61971 RES: 5.3071e-05
ELA: 13.745 Jacobi stage: ITER: 62425 RES: 4.9481e-05
ELA: 13.845 Jacobi stage: ITER: 62879 RES: 4.6162e-05
ELA: 13.945 Jacobi stage: ITER: 63315 RES: 4.3172e-05
ELA: 14.045 Jacobi stage: ITER: 63767 RES: 4.0276e-05
ELA: 14.146 Jacobi stage: ITER: 64251 RES: 3.7391e-05
ELA: 14.246 Jacobi stage: ITER: 64707 RES: 3.4861e-05
ELA: 14.346 Jacobi stage: ITER: 65163 RES: 3.2503e-05
ELA: 14.446 Jacobi stage: ITER: 65599 RES: 3.0398e-05
ELA: 14.546 Jacobi stage: ITER: 66053 RES: 2.8341e-05
ELA: 14.646 Jacobi stage: ITER: 66501 RES: 2.6457e-05
ELA: 14.747 Jacobi stage: ITER: 66945 RES: 2.4712e-05
ELA: 14.847 Jacobi stage: ITER: 67399 RES: 2.3055e-05
ELA: 14.947 Jacobi stage: ITER: 67835 RES: 2.1562e-05
ELA: 15.047 Jacobi stage: ITER: 68291 RES: 2.0103e-05
ELA: 15.147 Jacobi stage: ITER: 68737 RES: 1.8766e-05
ELA: 15.247 Jacobi stage: ITER: 69183 RES: 1.7529e-05
ELA: 15.347 Jacobi stage: ITER: 69635 RES: 1.6353e-05
ELA: 15.448 Jacobi stage: ITER: 70075 RES: 1.5285e-05
ELA: 15.548 Jacobi stage: ITER: 70527 RES: 1.4251e-05
ELA: 15.648 Jacobi stage: ITER: 70983 RES: 1.3295e-05
ELA: 15.748 Jacobi stage: ITER: 71435 RES: 1.2403e-05
ELA: 15.848 Jacobi stage: ITER: 71891 RES: 1.1564e-05
ELA: 15.948 Jacobi stage: ITER: 72399 RES: 1.0696e-05
ELA: 16.048 Jacobi stage: ITER: 72853 RES: 9.9725e-06
ELA: 16.149 Jacobi stage: ITER: 73307 RES: 9.3036e-06
ELA: 16.249 Jacobi stage: ITER: 73763 RES: 8.6743e-06
ELA: 16.349 Jacobi stage: ITER: 74215 RES: 8.0925e-06
ELA: 16.449 Jacobi stage: ITER: 74671 RES: 7.545e-06
ELA: 16.549 Jacobi stage: ITER: 75123 RES: 7.039e-06
ELA: 16.649 Jacobi stage: ITER: 75579 RES: 6.5628e-06
ELA: 16.75 Jacobi stage: ITER: 76033 RES: 6.1189e-06
ELA: 16.85 Jacobi stage: ITER: 76487 RES: 5.7085e-06
ELA: 16.95 Jacobi stage: ITER: 77089 RES: 5.2027e-06
ELA: 17.05 Jacobi stage: ITER: 77747 RES: 4.704e-06
ELA: 17.15 Jacobi stage: ITER: 78411 RES: 4.2479e-06
ELA: 17.25 Jacobi stage: ITER: 79077 RES: 3.8336e-06
ELA: 17.35 Jacobi stage: ITER: 79741 RES: 3.4619e-06
ELA: 17.451 Jacobi stage: ITER: 80403 RES: 3.1282e-06
ELA: 17.551 Jacobi stage: ITER: 82331 RES: 2.3264e-06
ELA: 17.651 Jacobi stage: ITER: 82995 RES: 2.1008e-06
ELA: 17.751 Jacobi stage: ITER: 83659 RES: 1.8971e-06
ELA: 17.851 Jacobi stage: ITER: 84325 RES: 1.7121e-06
ELA: 17.951 Jacobi stage: ITER: 84989 RES: 1.5461e-06
ELA: 18.051 Jacobi stage: ITER: 85653 RES: 1.3962e-06
ELA: 18.152 Jacobi stage: ITER: 86317 RES: 1.2608e-06
ELA: 18.252 Jacobi stage: ITER: 86979 RES: 1.1392e-06
ELA: 18.352 Jacobi stage: ITER: 87643 RES: 1.0288e-06
Vector x = [ 0.999999, 0.999999, 0.999998, 0.999999, 0.999999 ]

Setup with preconditioner

Preconditioning can significantly improve the performance of an iterative solver. For linear systems, preconditioners are used mainly with the Krylov subspace methods; they cannot be used with the stationary methods (TNL::Solvers::Linear::Jacobi and TNL::Solvers::Linear::SOR). The following example shows how to set up an iterative solver of linear systems with preconditioning.

1 #include <iostream>
2 #include <memory>
3 #include <TNL/Algorithms/ParallelFor.h>
4 #include <TNL/Matrices/SparseMatrix.h>
5 #include <TNL/Devices/Host.h>
6 #include <TNL/Devices/Cuda.h>
7 #include <TNL/Solvers/Linear/TFQMR.h>
8 #include <TNL/Solvers/Linear/Preconditioners/Diagonal.h>
9
10 template< typename Device >
11 void iterativeLinearSolverExample()
12 {
13    /***
14     * Set the following matrix (dots represent zero matrix elements):
15     *
16     *    /  2.5 -1    .    .    .  \
17     *    | -1    2.5 -1    .    .  |
18     *    |  .   -1    2.5 -1    .  |
19     *    |  .    .   -1    2.5 -1  |
20     *    \  .    .    .   -1    2.5 /
21     */
22    using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;   // assumed type alias (double precision)
23    using Vector = TNL::Containers::Vector< double, Device >;           // assumed type alias (double precision)
24    const int size( 5 );
25    auto matrix_ptr = std::make_shared< MatrixType >();
26    matrix_ptr->setDimensions( size, size );
27    matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );
28
29    auto f = [=] __cuda_callable__ ( typename MatrixType::RowView& row ) mutable {
30       const int rowIdx = row.getRowIndex();
31       if( rowIdx == 0 )
32       {
33          row.setElement( 0, rowIdx, 2.5 );    // diagonal element
34          row.setElement( 1, rowIdx+1, -1 );   // element above the diagonal
35       }
36       else if( rowIdx == size - 1 )
37       {
38          row.setElement( 0, rowIdx-1, -1.0 ); // element below the diagonal
39          row.setElement( 1, rowIdx, 2.5 );    // diagonal element
40       }
41       else
42       {
43          row.setElement( 0, rowIdx-1, -1.0 ); // element below the diagonal
44          row.setElement( 1, rowIdx, 2.5 );    // diagonal element
45          row.setElement( 2, rowIdx+1, -1.0 ); // element above the diagonal
46       }
47    };
48
49    /***
50     * Set the matrix elements.
51     */
52    matrix_ptr->forAllRows( f );
53    std::cout << *matrix_ptr << std::endl;
54
55    /***
56     * Set the right-hand side vector.
57     */
58    Vector x( size, 1.0 );
59    Vector b( size );
60    matrix_ptr->vectorProduct( x, b );
61    x = 0.0;
62    std::cout << "Vector b = " << b << std::endl;
63
64    /***
65     * Solve the linear system using diagonal (Jacobi) preconditioner.
66     */
67    using LinearSolver = TNL::Solvers::Linear::TFQMR< MatrixType >;                        // assumed type alias (the TFQMR solver)
68    using Preconditioner = TNL::Solvers::Linear::Preconditioners::Diagonal< MatrixType >;  // assumed type alias (the Diagonal preconditioner)
69    auto preconditioner_ptr = std::make_shared< Preconditioner >();
70    preconditioner_ptr->update( matrix_ptr );
71    LinearSolver solver;
72    solver.setMatrix( matrix_ptr );
73    solver.setPreconditioner( preconditioner_ptr );
74    solver.setConvergenceResidue( 1.0e-6 );
75    solver.solve( b, x );
76    std::cout << "Vector x = " << x << std::endl;
77 }
78
79 int main( int argc, char* argv[] )
80 {
81    std::cout << "Solving linear system on host: " << std::endl;
82    iterativeLinearSolverExample< TNL::Devices::Sequential >();
83
84 #ifdef HAVE_CUDA
85    std::cout << "Solving linear system on CUDA device: " << std::endl;
86    iterativeLinearSolverExample< TNL::Devices::Cuda >();
87 #endif
88 }

In this example, we solve the same problem as in all the other examples in this section. The only differences concerning the preconditioner appear on lines 68-70 and 73. Like the matrix of the linear system, the preconditioner is passed to the solver as a shared smart pointer (std::shared_ptr). Its type is defined on line 68 and the instance is created on line 69. Next, we have to initialize the preconditioner (line 70, TNL::Solvers::Linear::Preconditioners::Preconditioner::update). The method update has to be called every time the matrix of the linear system changes. This is important, for example, when solving time-dependent PDEs, but it does not happen in this example. Finally, we need to connect the solver with the preconditioner (line 73, TNL::Solvers::Linear::LinearSolver::setPreconditioner).
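
Other preconditioners are set up in the same way; only the preconditioner type changes. The following sketch shows the same solve with the ILU(0) preconditioner mentioned in the introduction. It is an illustration only: the header path is an assumption, and ILU preconditioners are currently available on the CPU only, so it applies to the host version of the example.

#include <TNL/Solvers/Linear/Preconditioners/ILU0.h>   // assumed header path, analogous to Diagonal.h

// ... matrix_ptr, b and x are prepared as in the listing above (host device only) ...
using Preconditioner = TNL::Solvers::Linear::Preconditioners::ILU0< MatrixType >;
auto preconditioner_ptr = std::make_shared< Preconditioner >();
preconditioner_ptr->update( matrix_ptr );   // must be repeated whenever the system matrix changes
LinearSolver solver;
solver.setMatrix( matrix_ptr );
solver.setPreconditioner( preconditioner_ptr );
solver.setConvergenceResidue( 1.0e-6 );
solver.solve( b, x );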

The result looks as follows:

Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]

Choosing the solver and preconditioner type at runtime

When developing a numerical solver, one often has to search for the combination of methods and algorithms that best fits the given requirements. To make this easier, TNL allows choosing the type of both the linear solver and the preconditioner at runtime, by means of the functions TNL::Solvers::getLinearSolver and TNL::Solvers::getPreconditioner. The following example shows how to use these functions:

1 #include <iostream>
2 #include <memory>
3 #include <TNL/Algorithms/ParallelFor.h>
4 #include <TNL/Matrices/SparseMatrix.h>
5 #include <TNL/Devices/Host.h>
6 #include <TNL/Devices/Cuda.h>
7 #include <TNL/Solvers/LinearSolverTypeResolver.h>
8
9 template< typename Device >
10 void iterativeLinearSolverExample()
11 {
12    /***
13     * Set the following matrix (dots represent zero matrix elements):
14     *
15     *    /  2.5 -1    .    .    .  \
16     *    | -1    2.5 -1    .    .  |
17     *    |  .   -1    2.5 -1    .  |
18     *    |  .    .   -1    2.5 -1  |
19     *    \  .    .    .   -1    2.5 /
20     */
21    using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;   // assumed type alias (double precision)
22    using Vector = TNL::Containers::Vector< double, Device >;           // assumed type alias (double precision)
23    const int size( 5 );
24    auto matrix_ptr = std::make_shared< MatrixType >();
25    matrix_ptr->setDimensions( size, size );
26    matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );
27
28    auto f = [=] __cuda_callable__ ( typename MatrixType::RowView& row ) mutable {
29       const int rowIdx = row.getRowIndex();
30       if( rowIdx == 0 )
31       {
32          row.setElement( 0, rowIdx, 2.5 );    // diagonal element
33          row.setElement( 1, rowIdx+1, -1 );   // element above the diagonal
34       }
35       else if( rowIdx == size - 1 )
36       {
37          row.setElement( 0, rowIdx-1, -1.0 ); // element below the diagonal
38          row.setElement( 1, rowIdx, 2.5 );    // diagonal element
39       }
40       else
41       {
42          row.setElement( 0, rowIdx-1, -1.0 ); // element below the diagonal
43          row.setElement( 1, rowIdx, 2.5 );    // diagonal element
44          row.setElement( 2, rowIdx+1, -1.0 ); // element above the diagonal
45       }
46    };
47
48    /***
49     * Set the matrix elements.
50     */
51    matrix_ptr->forAllRows( f );
52    std::cout << *matrix_ptr << std::endl;
53
54    /***
55     * Set the right-hand side vector.
56     */
57    Vector x( size, 1.0 );
58    Vector b( size );
59    matrix_ptr->vectorProduct( x, b );
60    x = 0.0;
61    std::cout << "Vector b = " << b << std::endl;
62
63    /***
64     * Solve the linear system using diagonal (Jacobi) preconditioner.
65     */
66    auto solver_ptr = TNL::Solvers::getLinearSolver< MatrixType >( "tfqmr" );
67    auto preconditioner_ptr = TNL::Solvers::getPreconditioner< MatrixType >( "diagonal" );
68    preconditioner_ptr->update( matrix_ptr );
69    solver_ptr->setMatrix( matrix_ptr );
70    solver_ptr->setPreconditioner( preconditioner_ptr );
71    solver_ptr->setConvergenceResidue( 1.0e-6 );
72    solver_ptr->solve( b, x );
73    std::cout << "Vector x = " << x << std::endl;
74 }
75
76 int main( int argc, char* argv[] )
77 {
78    std::cout << "Solving linear system on host: " << std::endl;
79    iterativeLinearSolverExample< TNL::Devices::Sequential >();
80
81 #ifdef HAVE_CUDA
82    std::cout << "Solving linear system on CUDA device: " << std::endl;
83    iterativeLinearSolverExample< TNL::Devices::Cuda >();
84 #endif
85 }

We still solve the same problem, and the only changes can be seen on lines 66-72. We first create a shared pointer holding the solver (line 66, TNL::Solvers::getLinearSolver) and another one holding the preconditioner (line 67, TNL::Solvers::getPreconditioner). The rest of the code is the same as in the previous examples, with the only difference that we work with the pointer solver_ptr instead of a direct instance of the solver type.
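
Because the solver and preconditioner are selected by name, the choice can easily be exposed to the user, for example on the command line. The following sketch passes the names from argv and falls back to the names used above; MatrixType is assumed to be defined as in the listing.

#include <string>

// Pick the solver and preconditioner names at runtime (illustrative sketch).
int main( int argc, char* argv[] )
{
   const std::string solverName = ( argc > 1 ) ? argv[ 1 ] : "tfqmr";
   const std::string preconditionerName = ( argc > 2 ) ? argv[ 2 ] : "diagonal";

   auto solver_ptr = TNL::Solvers::getLinearSolver< MatrixType >( solverName );
   auto preconditioner_ptr = TNL::Solvers::getPreconditioner< MatrixType >( preconditionerName );
   // ... the rest of the setup is identical to lines 68-73 of the listing above ...
}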

The result looks as follows:

Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]