Template Numerical Library version main:94209208
TNL::Pointers::SharedPointer< Object, Device > Class Template Reference

Cross-device shared smart pointer. More...

#include <TNL/Pointers/SharedPointer.h>

Detailed Description

template<typename Object, typename Device = typename Object::DeviceType>
class TNL::Pointers::SharedPointer< Object, Device >

Cross-device shared smart pointer.

This smart pointer is inspired by std::shared_ptr from STL library. It means that the object owned by the smart pointer can be shared with other smart pointers. One can make a copy of this smart pointer. In addition, the smart pointer is able to work across different devices which means that the object owned by the smart pointer is mirrored on both host and device.

**NOTE: When using smart pointers to pass objects on GPU, one must call Pointers::synchronizeSmartPointersOnDevice< Devices::Cuda >() before calling a CUDA kernel working with smart pointers.**

Template Parameters
Object	is a type of object to be owned by the pointer.
Device	is the device where the object is to be allocated. The object is always allocated on the host system as well, for easier object manipulation.

See also UniquePointer and DevicePointer.

See also SharedPointer< Object, Devices::Host > and SharedPointer< Object, Devices::Cuda >.

Example
#include <iostream>
#include <cstdlib>
#include <TNL/Containers/Array.h>
#include <TNL/Pointers/SharedPointer.h>
using namespace TNL;
struct Tuple
{
   Tuple( const int size ) : a1( size ), a2( size ) {}

   void
   setSize( const int size )
   {
      a1->setSize( size );
      a2->setSize( size );
   }

   // Arrays mirrored on both host and device via cross-device shared pointers.
   Pointers::SharedPointer< Containers::Array< int, Devices::Cuda > > a1, a2;
};

#ifdef __CUDACC__
__global__
void
printTuple( const Tuple t )
{
   printf( "Tuple size is: %d\n", t.a1->getSize() );
   for( int i = 0; i < t.a1->getSize(); i++ ) {
      printf( "a1[ %d ] = %d \n", i, ( *t.a1 )[ i ] );
      printf( "a2[ %d ] = %d \n", i, ( *t.a2 )[ i ] );
   }
}
#endif

int
main( int argc, char* argv[] )
{
   /***
    * Create a tuple of arrays and print them in a CUDA kernel.
    */
#ifdef __CUDACC__
   Tuple t( 3 );
   *t.a1 = 1;
   *t.a2 = 2;
   Pointers::synchronizeSmartPointersOnDevice< Devices::Cuda >();
   printTuple<<< 1, 1 >>>( t );

   /***
    * Resize the arrays.
    */
   t.setSize( 5 );
   *t.a1 = 3;
   *t.a2 = 4;
   Pointers::synchronizeSmartPointersOnDevice< Devices::Cuda >();
   printTuple<<< 1, 1 >>>( t );
#endif
   return EXIT_SUCCESS;
}
Output
Tuple size is: 3
a1[ 0 ] = 1
a2[ 0 ] = 2
a1[ 1 ] = 1
a2[ 1 ] = 2
a1[ 2 ] = 1
a2[ 2 ] = 2
Tuple size is: 5
a1[ 0 ] = 3
a2[ 0 ] = 4
a1[ 1 ] = 3
a2[ 1 ] = 4
a1[ 2 ] = 3
a2[ 2 ] = 4
a1[ 3 ] = 3
a2[ 3 ] = 4
a1[ 4 ] = 3
a2[ 4 ] = 4

The documentation for this class was generated from the following file: TNL/Pointers/SharedPointer.h