Template Numerical Library version main:40802e2e
TNL::Algorithms::SequentialFor< Device > Struct Template Reference

Wrapper for ParallelFor that makes it run sequentially. More...

#include <TNL/Algorithms/SequentialFor.h>

Static Public Member Functions

template<typename Index , typename Function >
static void exec (Index start, Index end, Function f)
 Static method for execution of the loop.
 

Detailed Description

template<typename Device = Devices::Sequential>
struct TNL::Algorithms::SequentialFor< Device >

Wrapper for ParallelFor that makes it run sequentially.

It is helpful for debugging, or for running plain sequential for-loops on GPUs.

Member Function Documentation

◆ exec()

template<typename Device = Devices::Sequential>
template<typename Index , typename Function >
static void TNL::Algorithms::SequentialFor< Device >::exec ( Index start, Index end, Function f )    [inline], [static]

Static method for execution of the loop.

Template Parameters
    Index: defines the type of indexes over which the loop iterates.
    Function: is the type of function to be called in each iteration.
Parameters
    start: the for-loop iterates over the index interval [start, end).
    end: the for-loop iterates over the index interval [start, end).
    f: is the function to be called in each iteration.
Example
#include <iostream>
#include <cstdlib>
#include <TNL/Containers/Vector.h>
#include <TNL/Algorithms/parallelFor.h>
#include <TNL/Algorithms/SequentialFor.h>
using namespace TNL;
using namespace TNL::Containers;
template< typename Device >
void
printVector()
{
   const int size( 60 );
   TNL::Containers::Vector< float, Device > v( size, 1.0 );
   auto view = v.getView();
   auto print = [ = ] __cuda_callable__( int i ) mutable
   {
      if( i % 5 == 0 )
         printf( "v[ %d ] = %f \n", i, view[ i ] ); // we use printf because of compatibility with GPU kernels
   };
   std::cout << "Printing vector using parallel for: " << std::endl;
   Algorithms::parallelFor< Device >( 0, v.getSize(), print );
   std::cout << "Printing vector using sequential for: " << std::endl;
   Algorithms::SequentialFor< Device >::exec( 0, v.getSize(), print );
}
int
main( int argc, char* argv[] )
{
   std::cout << "Example on the host:" << std::endl;
   printVector< TNL::Devices::Host >();
#ifdef __CUDACC__
   std::cout << "Example on CUDA GPU:" << std::endl;
   printVector< TNL::Devices::Cuda >();
#endif
   return EXIT_SUCCESS;
}
Output
Example on the host:
Printing vector using parallel for:
v[ 0 ] = 1.000000
v[ 5 ] = 1.000000
v[ 10 ] = 1.000000
v[ 15 ] = 1.000000
v[ 20 ] = 1.000000
v[ 25 ] = 1.000000
v[ 30 ] = 1.000000
v[ 35 ] = 1.000000
v[ 40 ] = 1.000000
v[ 45 ] = 1.000000
v[ 50 ] = 1.000000
v[ 55 ] = 1.000000
Printing vector using sequential for:
v[ 0 ] = 1.000000
v[ 5 ] = 1.000000
v[ 10 ] = 1.000000
v[ 15 ] = 1.000000
v[ 20 ] = 1.000000
v[ 25 ] = 1.000000
v[ 30 ] = 1.000000
v[ 35 ] = 1.000000
v[ 40 ] = 1.000000
v[ 45 ] = 1.000000
v[ 50 ] = 1.000000
v[ 55 ] = 1.000000
Example on CUDA GPU:
Printing vector using parallel for:
v[ 35 ] = 1.000000
v[ 40 ] = 1.000000
v[ 45 ] = 1.000000
v[ 50 ] = 1.000000
v[ 55 ] = 1.000000
v[ 0 ] = 1.000000
v[ 5 ] = 1.000000
v[ 10 ] = 1.000000
v[ 15 ] = 1.000000
v[ 20 ] = 1.000000
v[ 25 ] = 1.000000
v[ 30 ] = 1.000000
Printing vector using sequential for:
v[ 0 ] = 1.000000
v[ 5 ] = 1.000000
v[ 10 ] = 1.000000
v[ 15 ] = 1.000000
v[ 20 ] = 1.000000
v[ 25 ] = 1.000000
v[ 30 ] = 1.000000
v[ 35 ] = 1.000000
v[ 40 ] = 1.000000
v[ 45 ] = 1.000000
v[ 50 ] = 1.000000
v[ 55 ] = 1.000000

The documentation for this struct was generated from the following file:
TNL/Algorithms/SequentialFor.h