Class CUDAMessageList

Class Documentation

class CUDAMessageList

This is the internal device memory handler for CUDAMessage

Public Functions

explicit CUDAMessageList(CUDAMessage &cuda_message, detail::CUDAScatter &scatter, cudaStream_t stream, unsigned int streamId)

Constructs a CUDAMessageList, populating the CUDA message map

Initially allocates message lists based on cuda_message.getMaximumListSize()

virtual ~CUDAMessageList()

Destroys the CUDAMessageList object, freeing all message list memory

void cleanupAllocatedData()

Releases all variable array memory in each list

void *getReadMessageListVariablePointer(std::string variable_name)

Returns a pointer to the memory for the named variable in d_list

Parameters:

variable_name – Name of the variable to get pointer to

Returns:

void pointer to variable array in device memory

void *getWriteMessageListVariablePointer(std::string variable_name)

Returns a pointer to the memory for the named variable in d_swap_list

Parameters:

variable_name – Name of the variable to get pointer to

Returns:

void pointer to variable array in device memory

void resize(CUDAScatter &scatter, cudaStream_t stream, unsigned int streamId = 0, unsigned int keep_len = 0)

Resizes the internal message list buffers to the length of the parent CUDAMessage. Retains keep_len items from d_list during the resize (d_swap_list data is lost).

Note

This class has no way of knowing whether keep_len exceeds the old buffer length

Parameters:
  • scatter – Scatter instance and scan arrays to be used (CUDASimulation::singletons->scatter)

  • stream – The CUDAStream to use for CUDA operations

  • streamId – The stream index to use for accessing stream specific resources such as scan compaction arrays and buffers

  • keep_len – If specified, number of items to retain through the resize

Throws:

If keep_len exceeds the new buffer length

void zeroMessageData(cudaStream_t stream)

Memset all variable arrays in each list to 0

virtual void swap()

Swap d_list and d_swap_list

virtual unsigned int scatter(unsigned int newCount, detail::CUDAScatter &scatter, cudaStream_t stream, unsigned int streamId, bool append)

Perform a compaction using d_message_scan_flag and d_message_position

Parameters:
  • newCount – Number of new messages to be scattered

  • scatter – Scatter instance and scan arrays to be used (CUDASimulation::singletons->scatter)

  • stream – The CUDAStream to use for CUDA operations

  • streamId – The stream index to use for accessing stream specific resources such as scan compaction arrays and buffers

  • append – If true scattered messages will append to the existing message list, otherwise truncate

Returns:

Total number of messages now in list (includes old + new counts if appending)

virtual unsigned int scatterAll(unsigned int newCount, detail::CUDAScatter &scatter, cudaStream_t stream, unsigned int streamId)

Copies all message data from d_swap_list to d_list. This ALWAYS performs an append to the existing message list count. Used by swap() when appending message lists.

Parameters:
  • newCount – Number of new messages to be scattered

  • scatter – Scatter instance and scan arrays to be used (CUDASimulation::singletons->scatter)

  • stream – The CUDAStream to use for CUDA operations

  • streamId – The stream index to use for accessing stream specific resources such as scan compaction arrays and buffers

Returns:

Total number of messages now in list (includes old + new counts)

inline const CUDAMessageMap &getReadList()

Returns:

Returns the map<variable_name, device_ptr> for reading message data

inline const CUDAMessageMap &getWriteList()

Returns:

Returns the map<variable_name, device_ptr> for writing message data (aka swap buffers)

Protected Functions

void allocateDeviceMessageList(CUDAMessageMap &memory_map)

Allocates device memory for the provided message list

Parameters:

memory_map – Message list to perform operation on

void releaseDeviceMessageList(CUDAMessageMap &memory_map)

Frees device memory for the provided message list

Parameters:

memory_map – Message list to perform operation on

void zeroDeviceMessageList_async(CUDAMessageMap &memory_map, cudaStream_t stream, unsigned int skip_offset = 0)

Zeros device memory for the provided message list

Parameters:
  • memory_map – Message list to perform operation on

  • stream – The CUDAStream to use for CUDA operations

  • skip_offset – Number of items at the start of the list to not zero