Template Numerical Library version main:11b8437
TNL::Pointers::UniquePointer< Object, Device > Class Template Reference

Cross-device unique smart pointer. More...

#include <TNL/Pointers/UniquePointer.h>

Detailed Description

template<typename Object, typename Device = typename Object::DeviceType>
class TNL::Pointers::UniquePointer< Object, Device >

Cross-device unique smart pointer.

This smart pointer is inspired by std::unique_ptr from the C++ standard library: the object it owns is accessible only through the smart pointer, and the smart pointer cannot be copied. In addition, the smart pointer works across different devices, which means that the owned object is mirrored on both the host and the device.

**NOTE: When using smart pointers to pass objects on GPU, one must call Pointers::synchronizeSmartPointersOnDevice< Devices::Cuda >() before calling a CUDA kernel working with smart pointers.**

Template Parameters
    Object	is the type of the object owned by the pointer.
    Device	is the device where the object is allocated. The object is always allocated on the host system as well, for easier object manipulation.

See also SharedPointer and DevicePointer.

See also UniquePointer< Object, Devices::Host > and UniquePointer< Object, Devices::Cuda >.

Example
#include <iostream>
#include <cstdlib>
#include <TNL/Containers/Array.h>
#include <TNL/Pointers/UniquePointer.h>

using namespace TNL;

using ArrayCuda = Containers::Array< int, Devices::Cuda >;

#ifdef __CUDACC__
__global__
void
printArray( const ArrayCuda* ptr )
{
   printf( "Array size is: %d\n", ptr->getSize() );
   for( int i = 0; i < ptr->getSize(); i++ )
      printf( "a[ %d ] = %d \n", i, ( *ptr )[ i ] );
}
#endif

int
main( int argc, char* argv[] )
{
   /***
    * Create an array and print its elements in CUDA kernel
    */
#ifdef __CUDACC__
   Pointers::UniquePointer< ArrayCuda > array_ptr( 10 );
   array_ptr.modifyData< Devices::Host >() = 1;
   Pointers::synchronizeSmartPointersOnDevice< Devices::Cuda >();
   printArray<<< 1, 1 >>>( &array_ptr.getData< Devices::Cuda >() );

   /***
    * Resize the array and print it again
    */
   array_ptr.modifyData< Devices::Host >().setSize( 5 );
   array_ptr.modifyData< Devices::Host >() = 2;
   Pointers::synchronizeSmartPointersOnDevice< Devices::Cuda >();
   printArray<<< 1, 1 >>>( &array_ptr.getData< Devices::Cuda >() );
#endif
   return EXIT_SUCCESS;
}
Output
Array size is: 10
a[ 0 ] = 1
a[ 1 ] = 1
a[ 2 ] = 1
a[ 3 ] = 1
a[ 4 ] = 1
a[ 5 ] = 1
a[ 6 ] = 1
a[ 7 ] = 1
a[ 8 ] = 1
a[ 9 ] = 1
Array size is: 5
a[ 0 ] = 2
a[ 1 ] = 2
a[ 2 ] = 2
a[ 3 ] = 2
a[ 4 ] = 2
