Linear solvers

Introduction

Solvers of linear systems are among the most important algorithms in scientific computing. TNL offers the following iterative methods:

  1. Stationary methods
    1. Jacobi method (TNL::Solvers::Linear::Jacobi)
    2. Successive over-relaxation method, SOR (TNL::Solvers::Linear::SOR)
  2. Krylov subspace methods
    1. Conjugate gradient method, CG (TNL::Solvers::Linear::CG)
    2. Biconjugate gradient stabilized method, BICGStab (TNL::Solvers::Linear::BICGStab)
    3. Biconjugate gradient stabilized method, BICGStab(l) (TNL::Solvers::Linear::BICGStabL)
    4. Transpose-free quasi-minimal residual method, TFQMR (TNL::Solvers::Linear::TFQMR)
    5. Generalized minimal residual method, GMRES (TNL::Solvers::Linear::GMRES) with various methods of orthogonalization:
      1. Classical Gram-Schmidt, CGS
      2. Classical Gram-Schmidt with reorthogonalization, CGSR
      3. Modified Gram-Schmidt, MGS
      4. Modified Gram-Schmidt with reorthogonalization, MGSR
      5. Compact WY form of the Householder reflections, CWY

The iterative solvers (except the stationary ones, i.e. TNL::Solvers::Linear::Jacobi and TNL::Solvers::Linear::SOR) can be combined with the following preconditioners (a short sketch of the corresponding type aliases follows the list):

  1. Diagonal or Jacobi (TNL::Solvers::Linear::Preconditioners::Diagonal)
  2. ILU (incomplete LU factorization), currently available on CPU only
    1. ILU(0) (TNL::Solvers::Linear::Preconditioners::ILU0)
    2. ILUT (ILU with thresholding) (TNL::Solvers::Linear::Preconditioners::ILUT)
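
All of the solvers and preconditioners listed above are class templates taking the matrix type as their template parameter. The following sketch only illustrates this pattern; the concrete aliases and the TFQMR/Diagonal pairing are an illustration using the types that appear in the examples later in this section.

#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Host.h>
#include <TNL/Solvers/Linear/TFQMR.h>
#include <TNL/Solvers/Linear/Preconditioners/Diagonal.h>

// Illustrative aliases: each solver and preconditioner is parametrized by the matrix type.
using MatrixType = TNL::Matrices::SparseMatrix< double, TNL::Devices::Host >;
using SolverType = TNL::Solvers::Linear::TFQMR< MatrixType >;
using PreconditionerType = TNL::Solvers::Linear::Preconditioners::Diagonal< MatrixType >;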

Iterative solvers of linear systems

Basic setup

All iterative solvers for linear systems can be found in the namespace TNL::Solvers::Linear. The following example shows how to use them:

#include <iostream>
#include <memory>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Host.h>
#include <TNL/Devices/Cuda.h>
#include <TNL/Solvers/Linear/TFQMR.h>

template< typename Device >
void
iterativeLinearSolverExample()
{
   /***
    * Set the following matrix (dots represent zero matrix elements):
    *
    *    /  2.5  -1    .    .    .  \
    *    | -1     2.5 -1    .    .  |
    *    |  .    -1    2.5 -1    .  |
    *    |  .     .   -1    2.5 -1  |
    *    \  .     .    .   -1    2.5 /
    */
   // Assumed type aliases for the matrix and vector types used in this example.
   using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;
   using Vector = TNL::Containers::Vector< double, Device >;
   const int size( 5 );
   auto matrix_ptr = std::make_shared< MatrixType >();
   matrix_ptr->setDimensions( size, size );
   matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );

   auto f = [ = ] __cuda_callable__( typename MatrixType::RowView & row ) mutable
   {
      const int rowIdx = row.getRowIndex();
      if( rowIdx == 0 ) {
         row.setElement( 0, rowIdx, 2.5 );     // diagonal element
         row.setElement( 1, rowIdx + 1, -1 );  // element above the diagonal
      }
      else if( rowIdx == size - 1 ) {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
      }
      else {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
         row.setElement( 2, rowIdx + 1, -1.0 );  // element above the diagonal
      }
   };

   /***
    * Set the matrix elements.
    */
   matrix_ptr->forAllRows( f );
   std::cout << *matrix_ptr << std::endl;

   /***
    * Set the right-hand side vector.
    */
   Vector x( size, 1.0 );
   Vector b( size );
   matrix_ptr->vectorProduct( x, b );
   x = 0.0;
   std::cout << "Vector b = " << b << std::endl;

   /***
    * Solve the linear system.
    */
   // Assumed alias for the solver type.
   using LinearSolver = TNL::Solvers::Linear::TFQMR< MatrixType >;
   LinearSolver solver;
   solver.setMatrix( matrix_ptr );
   solver.setConvergenceResidue( 1.0e-6 );
   solver.solve( b, x );
   std::cout << "Vector x = " << x << std::endl;
}

int
main( int argc, char* argv[] )
{
   std::cout << "Solving linear system on host: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Sequential >();

#ifdef __CUDACC__
   std::cout << "Solving linear system on CUDA device: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Cuda >();
#endif
}

In this example we solve a linear system \( A \vec x = \vec b \) where

\[ A = \left( \begin{array}{ccccc} 2.5 & -1 & & & \\ -1 & 2.5 & -1 & & \\ & -1 & 2.5 & -1 & \\ & & -1 & 2.5 & -1 \\ & & & -1 & 2.5 \end{array} \right) \]

The elements of the matrix \( A \) are set using the method TNL::Matrices::SparseMatrix::forAllRows. In this example we use a sparse matrix, but any other matrix type can be used as well (see the namespace TNL::Matrices). The right-hand side vector \( \vec b \) is obtained by setting the solution vector \( \vec x = ( 1, 1, 1, 1, 1 )^T \) and multiplying it by the matrix \( A \), which gives \( \vec b = ( 1.5, 0.5, 0.5, 0.5, 1.5 )^T \); the exact solution of the system is therefore \( \vec x = ( 1, 1, 1, 1, 1 )^T \). Finally, we reset the vector \( \vec x \) to zero.

To solve the linear system in the example, we use the TFQMR solver. Other solvers can be used as well (see the namespace TNL::Solvers::Linear). The solver takes only one template parameter, the matrix type. Next we create an instance of the solver and set the matrix of the linear system. Note that the matrix is passed to the solver as a std::shared_ptr. Then we set the stopping criterion for the iterative method in terms of the relative residue, i.e. \( \lVert \vec b - A \vec x \rVert / \lVert \vec b \rVert \). The solver is executed by calling the TNL::Solvers::Linear::LinearSolver::solve method, which accepts the right-hand side vector \( \vec b \) and the solution vector \( \vec x \).
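
Stripped of the matrix assembly, the solver part of the example reduces to a few calls. The following condensed sketch assumes the MatrixType alias and the matrix_ptr, b and x objects from the listing above.

// Condensed solver setup, assuming MatrixType, matrix_ptr, b and x from the example above.
using LinearSolver = TNL::Solvers::Linear::TFQMR< MatrixType >;
LinearSolver solver;
solver.setMatrix( matrix_ptr );           // the matrix is passed as a std::shared_ptr
solver.setConvergenceResidue( 1.0e-6 );   // stop once the relative residue drops below 1e-6
solver.solve( b, x );                     // b is the right-hand side, x receives the solution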

The result looks as follows:

Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]

Setup with a solver monitor

Solving large linear systems may take a lot of time. In such situations, it is useful to monitor the convergence or the status of the solver in general. For this purpose, TNL provides a solver monitor which can show various metrics in real time, such as the current number of iterations, the current residue of the approximate solution, etc. The solver monitor runs in a separate thread and refreshes the solver status at a configurable rate (once per 500 ms by default). Its use is demonstrated in the following example.

#include <iostream>
#include <memory>
#include <chrono>
#include <thread>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Sequential.h>
#include <TNL/Devices/Cuda.h>
#include <TNL/Solvers/Linear/Jacobi.h>

template< typename Device >
void
iterativeLinearSolverExample()
{
   /***
    * Set the following matrix (dots represent zero matrix elements):
    *
    *    /  2.5  -1    .    .    .  \
    *    | -1     2.5 -1    .    .  |
    *    |  .    -1    2.5 -1    .  |
    *    |  .     .   -1    2.5 -1  |
    *    \  .     .    .   -1    2.5 /
    */
   // Assumed type aliases for the matrix and vector types used in this example.
   using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;
   using Vector = TNL::Containers::Vector< double, Device >;
   const int size( 5 );
   auto matrix_ptr = std::make_shared< MatrixType >();
   matrix_ptr->setDimensions( size, size );
   matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );

   auto f = [ = ] __cuda_callable__( typename MatrixType::RowView & row ) mutable
   {
      const int rowIdx = row.getRowIndex();
      if( rowIdx == 0 ) {
         row.setElement( 0, rowIdx, 2.5 );     // diagonal element
         row.setElement( 1, rowIdx + 1, -1 );  // element above the diagonal
      }
      else if( rowIdx == size - 1 ) {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
      }
      else {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
         row.setElement( 2, rowIdx + 1, -1.0 );  // element above the diagonal
      }
   };

   /***
    * Set the matrix elements.
    */
   matrix_ptr->forAllRows( f );
   std::cout << *matrix_ptr << std::endl;

   /***
    * Set the right-hand side vector.
    */
   Vector x( size, 1.0 );
   Vector b( size );
   matrix_ptr->vectorProduct( x, b );
   x = 0.0;
   std::cout << "Vector b = " << b << std::endl;

   /***
    * Set up the solver of the linear system.
    */
   // Assumed alias for the solver type.
   using LinearSolver = TNL::Solvers::Linear::Jacobi< MatrixType >;
   LinearSolver solver;
   solver.setMatrix( matrix_ptr );
   solver.setOmega( 0.0005 );

   /***
    * Set up the monitor of the iterative solver.
    */
   using IterativeSolverMonitorType = TNL::Solvers::IterativeSolverMonitor< double, int >;
   IterativeSolverMonitorType monitor;
   TNL::Solvers::SolverMonitorThread monitorThread( monitor );
   monitor.setRefreshRate( 10 );  // refresh rate in milliseconds
   monitor.setVerbose( 1 );
   monitor.setStage( "Jacobi stage:" );
   solver.setSolverMonitor( monitor );
   solver.setConvergenceResidue( 1.0e-6 );
   solver.solve( b, x );
   monitor.stopMainLoop();
   std::cout << "Vector x = " << x << std::endl;
}

int
main( int argc, char* argv[] )
{
   std::cout << "Solving linear system on host: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Sequential >();

#ifdef __CUDACC__
   std::cout << "Solving linear system on CUDA device: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Cuda >();
#endif
}

First, we set up the same linear system as in the previous example: we create an instance of the Jacobi solver and pass it the matrix of the linear system. Then we set the relaxation parameter \( \omega \) of the Jacobi solver to 0.0005. This artificially slows down the convergence, because we want to see some iterations in this example. Next, we create an instance of the solver monitor and a special thread for the monitor (an instance of the TNL::Solvers::SolverMonitorThread class). We use the following methods to configure the solver monitor:

  1. setRefreshRate sets the refresh rate of the monitor in milliseconds.
  2. setVerbose sets the verbosity level of the monitor.
  3. setStage sets the label of the solver stage that is printed together with the solver status.

Next, we call TNL::Solvers::IterativeSolver::setSolverMonitor to connect the solver with the monitor and we set the convergence criterion based on the relative residue. Finally, we start the solver using the TNL::Solvers::Linear::Jacobi::solve method and when the solver finishes, we stop the monitor using TNL::Solvers::SolverMonitor::stopMainLoop.
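Put together, the monitor-related part of the listing above boils down to the following sketch, which assumes the solver, b and x objects from the example.

// Condensed monitor setup, assuming solver, b and x from the example above.
using IterativeSolverMonitorType = TNL::Solvers::IterativeSolverMonitor< double, int >;
IterativeSolverMonitorType monitor;
TNL::Solvers::SolverMonitorThread monitorThread( monitor );  // runs the monitor in a separate thread
monitor.setRefreshRate( 10 );         // refresh rate in milliseconds
monitor.setVerbose( 1 );
monitor.setStage( "Jacobi stage:" );
solver.setSolverMonitor( monitor );
solver.setConvergenceResidue( 1.0e-6 );
solver.solve( b, x );
monitor.stopMainLoop();               // stop the monitor once the solver has finished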

The result looks as follows:

Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Jacobi stage: ITER: 34195 RES: 0.0037799
Jacobi stage: ITER: 68367 RES: 1.987e-05
Vector x = [ 0.999999, 0.999999, 0.999998, 0.999999, 0.999999 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Jacobi stage: ITER: 5315 RES: 0.32242
Jacobi stage: ITER: 6107 RES: 0.28435
Jacobi stage: ITER: 6783 RES: 0.25578
Jacobi stage: ITER: 7387 RES: 0.23284
Jacobi stage: ITER: 8007 RES: 0.21152
Jacobi stage: ITER: 8639 RES: 0.19185
Jacobi stage: ITER: 9271 RES: 0.17404
Jacobi stage: ITER: 9903 RES: 0.15791
Jacobi stage: ITER: 10535 RES: 0.14328
Jacobi stage: ITER: 11165 RES: 0.13001
Jacobi stage: ITER: 11797 RES: 0.11798
Jacobi stage: ITER: 12429 RES: 0.10706
Jacobi stage: ITER: 13061 RES: 0.097151
Jacobi stage: ITER: 13693 RES: 0.088162
Jacobi stage: ITER: 14325 RES: 0.080005
Jacobi stage: ITER: 14957 RES: 0.072603
Jacobi stage: ITER: 15587 RES: 0.065927
Jacobi stage: ITER: 16219 RES: 0.059827
Jacobi stage: ITER: 16851 RES: 0.054292
Jacobi stage: ITER: 17485 RES: 0.049239
Jacobi stage: ITER: 18091 RES: 0.044877
Jacobi stage: ITER: 18707 RES: 0.040825
Jacobi stage: ITER: 19315 RES: 0.037185
Jacobi stage: ITER: 19919 RES: 0.03389
Jacobi stage: ITER: 20537 RES: 0.030812
Jacobi stage: ITER: 21141 RES: 0.028082
Jacobi stage: ITER: 21755 RES: 0.025562
Jacobi stage: ITER: 22361 RES: 0.023283
Jacobi stage: ITER: 22967 RES: 0.02122
Jacobi stage: ITER: 23585 RES: 0.019293
Jacobi stage: ITER: 24189 RES: 0.017583
Jacobi stage: ITER: 24807 RES: 0.015996
Jacobi stage: ITER: 25405 RES: 0.014588
Jacobi stage: ITER: 25997 RES: 0.01332
Jacobi stage: ITER: 26595 RES: 0.012154
Jacobi stage: ITER: 27203 RES: 0.011071
Jacobi stage: ITER: 27779 RES: 0.010133
Jacobi stage: ITER: 28375 RES: 0.0092468
Jacobi stage: ITER: 29009 RES: 0.0083862
Jacobi stage: ITER: 29645 RES: 0.0076057
Jacobi stage: ITER: 30279 RES: 0.006902
Jacobi stage: ITER: 30915 RES: 0.0062597
Jacobi stage: ITER: 31551 RES: 0.0056771
Jacobi stage: ITER: 32187 RES: 0.0051487
Jacobi stage: ITER: 32821 RES: 0.0046695
Jacobi stage: ITER: 33457 RES: 0.0042349
Jacobi stage: ITER: 34091 RES: 0.0038431
Jacobi stage: ITER: 34727 RES: 0.0034855
Jacobi stage: ITER: 35363 RES: 0.0031611
Jacobi stage: ITER: 35999 RES: 0.0028669
Jacobi stage: ITER: 36633 RES: 0.0026
Jacobi stage: ITER: 37269 RES: 0.0023581
Jacobi stage: ITER: 37903 RES: 0.0021399
Jacobi stage: ITER: 38539 RES: 0.0019407
Jacobi stage: ITER: 39175 RES: 0.0017601
Jacobi stage: ITER: 39811 RES: 0.0015963
Jacobi stage: ITER: 40447 RES: 0.0014477
Jacobi stage: ITER: 41081 RES: 0.001313
Jacobi stage: ITER: 41715 RES: 0.0011915
Jacobi stage: ITER: 42351 RES: 0.0010806
Jacobi stage: ITER: 42985 RES: 0.00098005
Jacobi stage: ITER: 43619 RES: 0.00088938
Jacobi stage: ITER: 44253 RES: 0.00080661
Jacobi stage: ITER: 44887 RES: 0.00073199
Jacobi stage: ITER: 45523 RES: 0.00066386
Jacobi stage: ITER: 46155 RES: 0.00060244
Jacobi stage: ITER: 46791 RES: 0.00054637
Jacobi stage: ITER: 47423 RES: 0.00049583
Jacobi stage: ITER: 48057 RES: 0.00044968
Jacobi stage: ITER: 48691 RES: 0.00040808
Jacobi stage: ITER: 49321 RES: 0.00037033
Jacobi stage: ITER: 49951 RES: 0.00033627
Jacobi stage: ITER: 50581 RES: 0.00030516
Jacobi stage: ITER: 51211 RES: 0.0002771
Jacobi stage: ITER: 51841 RES: 0.00025147
Jacobi stage: ITER: 52471 RES: 0.00022834
Jacobi stage: ITER: 53099 RES: 0.00020735
Jacobi stage: ITER: 53729 RES: 0.00018816
Jacobi stage: ITER: 54359 RES: 0.00017086
Jacobi stage: ITER: 54987 RES: 0.00015515
Jacobi stage: ITER: 55617 RES: 0.0001408
Jacobi stage: ITER: 56247 RES: 0.00012785
Jacobi stage: ITER: 56877 RES: 0.00011602
Jacobi stage: ITER: 57505 RES: 0.00010535
Jacobi stage: ITER: 58135 RES: 9.5664e-05
Jacobi stage: ITER: 58763 RES: 8.6868e-05
Jacobi stage: ITER: 59393 RES: 7.8831e-05
Jacobi stage: ITER: 60023 RES: 7.1582e-05
Jacobi stage: ITER: 60651 RES: 6.5e-05
Jacobi stage: ITER: 61283 RES: 5.8986e-05
Jacobi stage: ITER: 61911 RES: 5.3562e-05
Jacobi stage: ITER: 62543 RES: 4.8607e-05
Jacobi stage: ITER: 63171 RES: 4.4137e-05
Jacobi stage: ITER: 63801 RES: 4.0054e-05
Jacobi stage: ITER: 64431 RES: 3.6371e-05
Jacobi stage: ITER: 65061 RES: 3.3006e-05
Jacobi stage: ITER: 65689 RES: 2.9971e-05
Jacobi stage: ITER: 66319 RES: 2.7215e-05
Jacobi stage: ITER: 66947 RES: 2.4712e-05
Jacobi stage: ITER: 67575 RES: 2.244e-05
Jacobi stage: ITER: 68207 RES: 2.0364e-05
Jacobi stage: ITER: 68837 RES: 1.848e-05
Jacobi stage: ITER: 69467 RES: 1.6781e-05
Jacobi stage: ITER: 70095 RES: 1.5238e-05
Jacobi stage: ITER: 70725 RES: 1.3828e-05
Jacobi stage: ITER: 71355 RES: 1.2556e-05
Jacobi stage: ITER: 71983 RES: 1.1402e-05
Jacobi stage: ITER: 72611 RES: 1.0353e-05
Jacobi stage: ITER: 73239 RES: 9.4013e-06
Jacobi stage: ITER: 73869 RES: 8.5315e-06
Jacobi stage: ITER: 74499 RES: 7.747e-06
Jacobi stage: ITER: 75127 RES: 7.0346e-06
Jacobi stage: ITER: 75755 RES: 6.3878e-06
Jacobi stage: ITER: 76383 RES: 5.8004e-06
Jacobi stage: ITER: 77011 RES: 5.267e-06
Jacobi stage: ITER: 77639 RES: 4.7827e-06
Jacobi stage: ITER: 78269 RES: 4.3402e-06
Jacobi stage: ITER: 78899 RES: 3.9411e-06
Jacobi stage: ITER: 79527 RES: 3.5787e-06
Jacobi stage: ITER: 80155 RES: 3.2496e-06
Jacobi stage: ITER: 80783 RES: 2.9508e-06
Jacobi stage: ITER: 81415 RES: 2.6778e-06
Jacobi stage: ITER: 82043 RES: 2.4316e-06
Jacobi stage: ITER: 82675 RES: 2.2066e-06
Jacobi stage: ITER: 83305 RES: 2.0025e-06
Jacobi stage: ITER: 83935 RES: 1.8183e-06
Jacobi stage: ITER: 84563 RES: 1.6511e-06
Jacobi stage: ITER: 85191 RES: 1.4993e-06
Jacobi stage: ITER: 85819 RES: 1.3614e-06
Jacobi stage: ITER: 86449 RES: 1.2355e-06
Jacobi stage: ITER: 87079 RES: 1.1219e-06
Jacobi stage: ITER: 87707 RES: 1.0187e-06
Vector x = [ 0.999999, 0.999999, 0.999998, 0.999999, 0.999999 ]

The monitoring of the solver can be further improved by showing the time elapsed since the beginning of the computation, as demonstrated in the following example:

#include <iostream>
#include <memory>
#include <chrono>
#include <thread>
#include <TNL/Timer.h>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Sequential.h>
#include <TNL/Devices/Cuda.h>
#include <TNL/Solvers/Linear/Jacobi.h>

template< typename Device >
void
iterativeLinearSolverExample()
{
   /***
    * Set the following matrix (dots represent zero matrix elements):
    *
    *    /  2.5  -1    .    .    .  \
    *    | -1     2.5 -1    .    .  |
    *    |  .    -1    2.5 -1    .  |
    *    |  .     .   -1    2.5 -1  |
    *    \  .     .    .   -1    2.5 /
    */
   // Assumed type aliases for the matrix and vector types used in this example.
   using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;
   using Vector = TNL::Containers::Vector< double, Device >;
   const int size( 5 );
   auto matrix_ptr = std::make_shared< MatrixType >();
   matrix_ptr->setDimensions( size, size );
   matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );

   auto f = [ = ] __cuda_callable__( typename MatrixType::RowView & row ) mutable
   {
      const int rowIdx = row.getRowIndex();
      if( rowIdx == 0 ) {
         row.setElement( 0, rowIdx, 2.5 );     // diagonal element
         row.setElement( 1, rowIdx + 1, -1 );  // element above the diagonal
      }
      else if( rowIdx == size - 1 ) {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
      }
      else {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
         row.setElement( 2, rowIdx + 1, -1.0 );  // element above the diagonal
      }
   };

   /***
    * Set the matrix elements.
    */
   matrix_ptr->forAllRows( f );
   std::cout << *matrix_ptr << std::endl;

   /***
    * Set the right-hand side vector.
    */
   Vector x( size, 1.0 );
   Vector b( size );
   matrix_ptr->vectorProduct( x, b );
   x = 0.0;
   std::cout << "Vector b = " << b << std::endl;

   /***
    * Set up the solver of the linear system.
    */
   // Assumed alias for the solver type.
   using LinearSolver = TNL::Solvers::Linear::Jacobi< MatrixType >;
   LinearSolver solver;
   solver.setMatrix( matrix_ptr );
   solver.setOmega( 0.0005 );

   /***
    * Set up the monitor of the iterative solver.
    */
   using IterativeSolverMonitorType = TNL::Solvers::IterativeSolverMonitor< double, int >;
   IterativeSolverMonitorType monitor;
   TNL::Solvers::SolverMonitorThread monitorThread( monitor );
   monitor.setRefreshRate( 10 );  // refresh rate in milliseconds
   monitor.setVerbose( 1 );
   monitor.setStage( "Jacobi stage:" );
   TNL::Timer timer;
   monitor.setTimer( timer );
   timer.start();
   solver.setSolverMonitor( monitor );
   solver.setConvergenceResidue( 1.0e-6 );
   solver.solve( b, x );
   monitor.stopMainLoop();
   std::cout << "Vector x = " << x << std::endl;
}

int
main( int argc, char* argv[] )
{
   std::cout << "Solving linear system on host: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Sequential >();

#ifdef __CUDACC__
   std::cout << "Solving linear system on CUDA device: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Cuda >();
#endif
}

The only changes are on the lines where we create an instance of TNL::Timer, connect it with the monitor using TNL::Solvers::SolverMonitor::setTimer, and start the timer with TNL::Timer::start.
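
In code, the addition amounts to just these lines (a sketch assuming the monitor object from the previous example):

// Attaching a timer to the monitor, assuming the monitor object from the previous example.
TNL::Timer timer;
monitor.setTimer( timer );  // the monitor reports the elapsed time measured by this timer
timer.start();              // start the measurement before launching the solver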

The result looks as follows:

Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
ELA:4.101e-0
ELA: 0.10011 Jacobi stage: ITER: 34329 RES: 0.003704
ELA: 0.2002 Jacobi stage: ITER: 64983 RES: 3.3414e-05
Vector x = [ 0.999999, 0.999999, 0.999998, 0.999999, 0.999999 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
ELA:1.0819e-
ELA: 0.1001 Jacobi stage: ITER: 775 RES: 0.78539
ELA: 0.2002 Jacobi stage: ITER: 1499 RES: 0.6517
ELA: 0.30029 Jacobi stage: ITER: 2091 RES: 0.57108
ELA: 0.40039 Jacobi stage: ITER: 2715 RES: 0.50385
ELA: 0.50048 Jacobi stage: ITER: 3343 RES: 0.44862
ELA: 0.60057 Jacobi stage: ITER: 3975 RES: 0.40188
ELA: 0.70066 Jacobi stage: ITER: 4607 RES: 0.36163
ELA: 0.80075 Jacobi stage: ITER: 5239 RES: 0.32638
ELA: 0.90084 Jacobi stage: ITER: 5871 RES: 0.29513
ELA: 1.0009 Jacobi stage: ITER: 6503 RES: 0.26721
ELA: 1.101 Jacobi stage: ITER: 7135 RES: 0.24213
ELA: 1.2011 Jacobi stage: ITER: 7765 RES: 0.21952
ELA: 1.3012 Jacobi stage: ITER: 8397 RES: 0.19909
ELA: 1.4013 Jacobi stage: ITER: 9029 RES: 0.1806
ELA: 1.5014 Jacobi stage: ITER: 9661 RES: 0.16385
ELA: 1.6015 Jacobi stage: ITER: 10293 RES: 0.14867
ELA: 1.7016 Jacobi stage: ITER: 10925 RES: 0.1349
ELA: 1.8017 Jacobi stage: ITER: 11557 RES: 0.12241
ELA: 1.9018 Jacobi stage: ITER: 12191 RES: 0.11108
ELA: 2.0019 Jacobi stage: ITER: 12803 RES: 0.10111
ELA: 2.102 Jacobi stage: ITER: 13415 RES: 0.092037
ELA: 2.2021 Jacobi stage: ITER: 14017 RES: 0.083881
ELA: 2.3022 Jacobi stage: ITER: 14633 RES: 0.076308
ELA: 2.4023 Jacobi stage: ITER: 15239 RES: 0.069547
ELA: 2.5024 Jacobi stage: ITER: 15847 RES: 0.063345
ELA: 2.6025 Jacobi stage: ITER: 16461 RES: 0.057627
ELA: 2.7026 Jacobi stage: ITER: 17063 RES: 0.052553
ELA: 2.8027 Jacobi stage: ITER: 17679 RES: 0.047808
ELA: 2.9029 Jacobi stage: ITER: 18287 RES: 0.043546
ELA: 3.003 Jacobi stage: ITER: 18895 RES: 0.039663
ELA: 3.103 Jacobi stage: ITER: 19509 RES: 0.036082
ELA: 3.2032 Jacobi stage: ITER: 20111 RES: 0.032906
ELA: 3.3032 Jacobi stage: ITER: 20707 RES: 0.030027
ELA: 3.4033 Jacobi stage: ITER: 21307 RES: 0.027383
ELA: 3.5035 Jacobi stage: ITER: 21917 RES: 0.024926
ELA: 3.6036 Jacobi stage: ITER: 22483 RES: 0.022858
ELA: 3.7036 Jacobi stage: ITER: 23079 RES: 0.020858
ELA: 3.8037 Jacobi stage: ITER: 23713 RES: 0.018917
ELA: 3.9038 Jacobi stage: ITER: 24347 RES: 0.017167
ELA: 4.0039 Jacobi stage: ITER: 24983 RES: 0.015569
ELA: 4.104 Jacobi stage: ITER: 25619 RES: 0.01412
ELA: 4.2041 Jacobi stage: ITER: 26255 RES: 0.012806
ELA: 4.3042 Jacobi stage: ITER: 26891 RES: 0.011614
ELA: 4.4042 Jacobi stage: ITER: 27527 RES: 0.010533
ELA: 4.5043 Jacobi stage: ITER: 28161 RES: 0.0095528
ELA: 4.6044 Jacobi stage: ITER: 28797 RES: 0.0086638
ELA: 4.7045 Jacobi stage: ITER: 29431 RES: 0.0078622
ELA: 4.8046 Jacobi stage: ITER: 30067 RES: 0.0071305
ELA: 4.9047 Jacobi stage: ITER: 30703 RES: 0.0064669
ELA: 5.0048 Jacobi stage: ITER: 31339 RES: 0.005865
ELA: 5.1048 Jacobi stage: ITER: 31975 RES: 0.0053191
ELA: 5.2049 Jacobi stage: ITER: 32609 RES: 0.0048241
ELA: 5.305 Jacobi stage: ITER: 33243 RES: 0.0043778
ELA: 5.4051 Jacobi stage: ITER: 33879 RES: 0.0039703
ELA: 5.5052 Jacobi stage: ITER: 34515 RES: 0.0036008
ELA: 5.6052 Jacobi stage: ITER: 35151 RES: 0.0032657
ELA: 5.7053 Jacobi stage: ITER: 35787 RES: 0.0029618
ELA: 5.8054 Jacobi stage: ITER: 36421 RES: 0.0026861
ELA: 5.9055 Jacobi stage: ITER: 37055 RES: 0.0024376
ELA: 6.0056 Jacobi stage: ITER: 37689 RES: 0.0022107
ELA: 6.1057 Jacobi stage: ITER: 38323 RES: 0.0020062
ELA: 6.2057 Jacobi stage: ITER: 38959 RES: 0.0018195
ELA: 6.3058 Jacobi stage: ITER: 39593 RES: 0.0016501
ELA: 6.4059 Jacobi stage: ITER: 40227 RES: 0.0014975
ELA: 6.506 Jacobi stage: ITER: 40861 RES: 0.0013581
ELA: 6.606 Jacobi stage: ITER: 41495 RES: 0.0012325
ELA: 6.7061 Jacobi stage: ITER: 42127 RES: 0.0011185
ELA: 6.8062 Jacobi stage: ITER: 42763 RES: 0.0010144
ELA: 6.9063 Jacobi stage: ITER: 43395 RES: 0.00092052
ELA: 7.0064 Jacobi stage: ITER: 44027 RES: 0.00083536
ELA: 7.1065 Jacobi stage: ITER: 44659 RES: 0.00075807
ELA: 7.2066 Jacobi stage: ITER: 45287 RES: 0.00068837
ELA: 7.3067 Jacobi stage: ITER: 45919 RES: 0.00062468
ELA: 7.4068 Jacobi stage: ITER: 46549 RES: 0.00056689
ELA: 7.5069 Jacobi stage: ITER: 47179 RES: 0.00051476
ELA: 7.607 Jacobi stage: ITER: 47807 RES: 0.00046743
ELA: 7.7071 Jacobi stage: ITER: 48437 RES: 0.00042418
ELA: 7.8071 Jacobi stage: ITER: 49067 RES: 0.00038518
ELA: 7.9072 Jacobi stage: ITER: 49695 RES: 0.00034976
ELA: 8.0073 Jacobi stage: ITER: 50325 RES: 0.0003174
ELA: 8.1074 Jacobi stage: ITER: 50955 RES: 0.00028821
ELA: 8.2075 Jacobi stage: ITER: 51585 RES: 0.00026155
ELA: 8.3076 Jacobi stage: ITER: 52215 RES: 0.0002375
ELA: 8.4077 Jacobi stage: ITER: 52843 RES: 0.00021566
ELA: 8.5078 Jacobi stage: ITER: 53471 RES: 0.00019583
ELA: 8.6079 Jacobi stage: ITER: 54101 RES: 0.00017771
ELA: 8.708 Jacobi stage: ITER: 54731 RES: 0.00016137
ELA: 8.8081 Jacobi stage: ITER: 55359 RES: 0.00014653
ELA: 8.9082 Jacobi stage: ITER: 55991 RES: 0.00013298
ELA: 9.0082 Jacobi stage: ITER: 56621 RES: 0.00012067
ELA: 9.1083 Jacobi stage: ITER: 57251 RES: 0.00010958
ELA: 9.2084 Jacobi stage: ITER: 57879 RES: 9.9501e-05
ELA: 9.3085 Jacobi stage: ITER: 58509 RES: 9.0296e-05
ELA: 9.4086 Jacobi stage: ITER: 59139 RES: 8.1993e-05
ELA: 9.5087 Jacobi stage: ITER: 59769 RES: 7.4407e-05
ELA: 9.6088 Jacobi stage: ITER: 60399 RES: 6.7565e-05
ELA: 9.7089 Jacobi stage: ITER: 61027 RES: 6.1352e-05
ELA: 9.809 Jacobi stage: ITER: 61655 RES: 5.571e-05
ELA: 9.9091 Jacobi stage: ITER: 62283 RES: 5.0588e-05
ELA: 10.009 Jacobi stage: ITER: 62913 RES: 4.5908e-05
ELA: 10.109 Jacobi stage: ITER: 63543 RES: 4.1686e-05
ELA: 10.209 Jacobi stage: ITER: 64175 RES: 3.783e-05
ELA: 10.31 Jacobi stage: ITER: 64803 RES: 3.4351e-05
ELA: 10.41 Jacobi stage: ITER: 65433 RES: 3.1173e-05
ELA: 10.51 Jacobi stage: ITER: 66061 RES: 2.8307e-05
ELA: 10.61 Jacobi stage: ITER: 66691 RES: 2.5704e-05
ELA: 10.71 Jacobi stage: ITER: 67319 RES: 2.334e-05
ELA: 10.81 Jacobi stage: ITER: 67947 RES: 2.1194e-05
ELA: 10.91 Jacobi stage: ITER: 68575 RES: 1.9245e-05
ELA: 11.01 Jacobi stage: ITER: 69205 RES: 1.7464e-05
ELA: 11.11 Jacobi stage: ITER: 69835 RES: 1.5859e-05
ELA: 11.21 Jacobi stage: ITER: 70463 RES: 1.44e-05
ELA: 11.31 Jacobi stage: ITER: 71091 RES: 1.3076e-05
ELA: 11.41 Jacobi stage: ITER: 71719 RES: 1.1874e-05
ELA: 11.511 Jacobi stage: ITER: 72347 RES: 1.0782e-05
ELA: 11.611 Jacobi stage: ITER: 72975 RES: 9.7904e-06
ELA: 11.711 Jacobi stage: ITER: 73605 RES: 8.8846e-06
ELA: 11.811 Jacobi stage: ITER: 74233 RES: 8.0676e-06
ELA: 11.911 Jacobi stage: ITER: 74861 RES: 7.3258e-06
ELA: 12.011 Jacobi stage: ITER: 75491 RES: 6.6521e-06
ELA: 12.111 Jacobi stage: ITER: 76121 RES: 6.0367e-06
ELA: 12.211 Jacobi stage: ITER: 76751 RES: 5.4816e-06
ELA: 12.311 Jacobi stage: ITER: 77379 RES: 4.9775e-06
ELA: 12.411 Jacobi stage: ITER: 78011 RES: 4.5171e-06
ELA: 12.511 Jacobi stage: ITER: 78641 RES: 4.0992e-06
ELA: 12.612 Jacobi stage: ITER: 79267 RES: 3.7245e-06
ELA: 12.712 Jacobi stage: ITER: 79897 RES: 3.3799e-06
ELA: 12.812 Jacobi stage: ITER: 80525 RES: 3.0691e-06
ELA: 12.912 Jacobi stage: ITER: 81155 RES: 2.7869e-06
ELA: 13.012 Jacobi stage: ITER: 81785 RES: 2.5291e-06
ELA: 13.112 Jacobi stage: ITER: 82413 RES: 2.2965e-06
ELA: 13.212 Jacobi stage: ITER: 83253 RES: 2.0185e-06
ELA: 13.312 Jacobi stage: ITER: 84207 RES: 1.7439e-06
ELA: 13.412 Jacobi stage: ITER: 85163 RES: 1.5058e-06
ELA: 13.512 Jacobi stage: ITER: 86123 RES: 1.2993e-06
ELA: 13.612 Jacobi stage: ITER: 87099 RES: 1.1184e-06
Vector x = [ 0.999999, 0.999999, 0.999998, 0.999999, 0.999999 ]

Setup with preconditioner

Preconditioners of iterative solvers can significantly improve the performance of the solver. In the case of linear systems, they are used mainly with the Krylov subspace methods. Preconditioners cannot be used with the stationary methods (TNL::Solvers::Linear::Jacobi and TNL::Solvers::Linear::SOR). The following example shows how to set up an iterative solver of linear systems with preconditioning.

#include <iostream>
#include <memory>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Host.h>
#include <TNL/Devices/Cuda.h>
#include <TNL/Solvers/Linear/TFQMR.h>
#include <TNL/Solvers/Linear/Preconditioners/Diagonal.h>

template< typename Device >
void
iterativeLinearSolverExample()
{
   /***
    * Set the following matrix (dots represent zero matrix elements):
    *
    *    /  2.5  -1    .    .    .  \
    *    | -1     2.5 -1    .    .  |
    *    |  .    -1    2.5 -1    .  |
    *    |  .     .   -1    2.5 -1  |
    *    \  .     .    .   -1    2.5 /
    */
   // Assumed type aliases for the matrix and vector types used in this example.
   using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;
   using Vector = TNL::Containers::Vector< double, Device >;
   const int size( 5 );
   auto matrix_ptr = std::make_shared< MatrixType >();
   matrix_ptr->setDimensions( size, size );
   matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );

   auto f = [ = ] __cuda_callable__( typename MatrixType::RowView & row ) mutable
   {
      const int rowIdx = row.getRowIndex();
      if( rowIdx == 0 ) {
         row.setElement( 0, rowIdx, 2.5 );     // diagonal element
         row.setElement( 1, rowIdx + 1, -1 );  // element above the diagonal
      }
      else if( rowIdx == size - 1 ) {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
      }
      else {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
         row.setElement( 2, rowIdx + 1, -1.0 );  // element above the diagonal
      }
   };

   /***
    * Set the matrix elements.
    */
   matrix_ptr->forAllRows( f );
   std::cout << *matrix_ptr << std::endl;

   /***
    * Set the right-hand side vector.
    */
   Vector x( size, 1.0 );
   Vector b( size );
   matrix_ptr->vectorProduct( x, b );
   x = 0.0;
   std::cout << "Vector b = " << b << std::endl;

   /***
    * Solve the linear system using the diagonal (Jacobi) preconditioner.
    */
   // Assumed aliases for the solver and preconditioner types.
   using LinearSolver = TNL::Solvers::Linear::TFQMR< MatrixType >;
   using Preconditioner = TNL::Solvers::Linear::Preconditioners::Diagonal< MatrixType >;
   auto preconditioner_ptr = std::make_shared< Preconditioner >();
   preconditioner_ptr->update( matrix_ptr );
   LinearSolver solver;
   solver.setMatrix( matrix_ptr );
   solver.setPreconditioner( preconditioner_ptr );
   solver.setConvergenceResidue( 1.0e-6 );
   solver.solve( b, x );
   std::cout << "Vector x = " << x << std::endl;
}

int
main( int argc, char* argv[] )
{
   std::cout << "Solving linear system on host: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Sequential >();

#ifdef __CUDACC__
   std::cout << "Solving linear system on CUDA device: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Cuda >();
#endif
}

In this example, we solve the same problem as in all the other examples in this section; the only differences related to the preconditioner are in the solver setup. Similarly to the matrix of the linear system, the preconditioner is passed to the solver as a std::shared_ptr. After the preconditioner object is created, we have to initialize it by calling the update method, which must be called every time the matrix of the linear system changes. This is important, for example, when solving time-dependent PDEs, although it does not happen in this example. Finally, we connect the solver with the preconditioner using the setPreconditioner method.
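
Condensed, the preconditioned solver setup from the listing above looks as follows. This is only a sketch: it assumes the MatrixType alias and the matrix_ptr, b and x objects from the example, and the two aliases below follow the pattern of a single template parameter, the matrix type.

// Condensed preconditioned solver setup, assuming MatrixType, matrix_ptr, b and x
// from the example above (the two aliases are assumptions following that pattern).
using LinearSolver = TNL::Solvers::Linear::TFQMR< MatrixType >;
using Preconditioner = TNL::Solvers::Linear::Preconditioners::Diagonal< MatrixType >;

auto preconditioner_ptr = std::make_shared< Preconditioner >();
preconditioner_ptr->update( matrix_ptr );        // repeat whenever the matrix changes
LinearSolver solver;
solver.setMatrix( matrix_ptr );
solver.setPreconditioner( preconditioner_ptr );
solver.setConvergenceResidue( 1.0e-6 );
solver.solve( b, x );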

The result looks as follows:

Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]

Choosing the solver and preconditioner type at runtime

When developing a numerical solver, one often needs to experiment with various combinations of methods and algorithms to find the one that best fits the given requirements. To make this easier, TNL provides the functions TNL::Solvers::getLinearSolver and TNL::Solvers::getPreconditioner for selecting the linear solver and the preconditioner at runtime. The following example shows how to use these functions:

#include <iostream>
#include <memory>
#include <TNL/Matrices/SparseMatrix.h>
#include <TNL/Devices/Host.h>
#include <TNL/Devices/Cuda.h>
#include <TNL/Solvers/LinearSolverTypeResolver.h>

template< typename Device >
void
iterativeLinearSolverExample()
{
   /***
    * Set the following matrix (dots represent zero matrix elements):
    *
    *    /  2.5  -1    .    .    .  \
    *    | -1     2.5 -1    .    .  |
    *    |  .    -1    2.5 -1    .  |
    *    |  .     .   -1    2.5 -1  |
    *    \  .     .    .   -1    2.5 /
    */
   // Assumed type aliases for the matrix and vector types used in this example.
   using MatrixType = TNL::Matrices::SparseMatrix< double, Device >;
   using Vector = TNL::Containers::Vector< double, Device >;
   const int size( 5 );
   auto matrix_ptr = std::make_shared< MatrixType >();
   matrix_ptr->setDimensions( size, size );
   matrix_ptr->setRowCapacities( Vector( { 2, 3, 3, 3, 2 } ) );

   auto f = [ = ] __cuda_callable__( typename MatrixType::RowView & row ) mutable
   {
      const int rowIdx = row.getRowIndex();
      if( rowIdx == 0 ) {
         row.setElement( 0, rowIdx, 2.5 );     // diagonal element
         row.setElement( 1, rowIdx + 1, -1 );  // element above the diagonal
      }
      else if( rowIdx == size - 1 ) {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
      }
      else {
         row.setElement( 0, rowIdx - 1, -1.0 );  // element below the diagonal
         row.setElement( 1, rowIdx, 2.5 );       // diagonal element
         row.setElement( 2, rowIdx + 1, -1.0 );  // element above the diagonal
      }
   };

   /***
    * Set the matrix elements.
    */
   matrix_ptr->forAllRows( f );
   std::cout << *matrix_ptr << std::endl;

   /***
    * Set the right-hand side vector.
    */
   Vector x( size, 1.0 );
   Vector b( size );
   matrix_ptr->vectorProduct( x, b );
   x = 0.0;
   std::cout << "Vector b = " << b << std::endl;

   /***
    * Solve the linear system using the diagonal (Jacobi) preconditioner.
    */
   auto solver_ptr = TNL::Solvers::getLinearSolver< MatrixType >( "tfqmr" );
   auto preconditioner_ptr = TNL::Solvers::getPreconditioner< MatrixType >( "diagonal" );
   preconditioner_ptr->update( matrix_ptr );
   solver_ptr->setMatrix( matrix_ptr );
   solver_ptr->setPreconditioner( preconditioner_ptr );
   solver_ptr->setConvergenceResidue( 1.0e-6 );
   solver_ptr->solve( b, x );
   std::cout << "Vector x = " << x << std::endl;
}

int
main( int argc, char* argv[] )
{
   std::cout << "Solving linear system on host: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Sequential >();

#ifdef __CUDACC__
   std::cout << "Solving linear system on CUDA device: " << std::endl;
   iterativeLinearSolverExample< TNL::Devices::Cuda >();
#endif
}

We stay with the same problem; the only changes are in the solver setup. We first use TNL::Solvers::getLinearSolver to get a shared pointer holding the solver, and then TNL::Solvers::getPreconditioner to get a shared pointer holding the preconditioner. The rest of the code is the same as in the previous examples, except that we work with the pointer solver_ptr instead of a direct instance of the solver type.
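
Since the solver and the preconditioner are selected by plain strings, the choice can easily be delegated to the user, for example via command-line arguments. The following is only a minimal sketch: the names "tfqmr" and "diagonal" are the ones used in the example above, and any other name must be one recognized by TNL::Solvers::getLinearSolver and TNL::Solvers::getPreconditioner; the MatrixType alias and the matrix_ptr, b and x objects are assumed from the example.

// Sketch: choosing the solver and preconditioner names at runtime (e.g. from argv).
// Assumes MatrixType, matrix_ptr, b and x from the example above.
std::string solverName = ( argc > 1 ) ? argv[ 1 ] : "tfqmr";
std::string preconditionerName = ( argc > 2 ) ? argv[ 2 ] : "diagonal";

auto solver_ptr = TNL::Solvers::getLinearSolver< MatrixType >( solverName );
auto preconditioner_ptr = TNL::Solvers::getPreconditioner< MatrixType >( preconditionerName );
preconditioner_ptr->update( matrix_ptr );
solver_ptr->setMatrix( matrix_ptr );
solver_ptr->setPreconditioner( preconditioner_ptr );
solver_ptr->setConvergenceResidue( 1.0e-6 );
solver_ptr->solve( b, x );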

The result looks as follows:

Solving linear system on host:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]
Solving linear system on CUDA device:
Row: 0 -> 0:2.5 1:-1
Row: 1 -> 0:-1 1:2.5 2:-1
Row: 2 -> 1:-1 2:2.5 3:-1
Row: 3 -> 2:-1 3:2.5 4:-1
Row: 4 -> 3:-1 4:2.5
Vector b = [ 1.5, 0.5, 0.5, 0.5, 1.5 ]
Vector x = [ 1, 1, 1, 1, 1 ]