BALL  1.4.2
BALL::MPISupport Class Reference

#include <BALL/SYSTEM/MPISupport.h>

Public Types

enum  TAGS { TAG_SYSTEM, TAG_OPTIONS }
 

Public Member Functions

 MPISupport (MPI_Comm default_communicator=MPI_COMM_WORLD)
 
 MPISupport (int argc, char **argv, MPI_Comm default_communicator=MPI_COMM_WORLD, bool accept_communicator=true)
 
 ~MPISupport ()
 
Index getRank ()
 Return the rank of this process. More...
 
Index getSize ()
 Return the number of processes in MPI_COMM_WORLD. More...
 
MPI_Comm getDefaultCommunicator ()
 Return the default communicator used for MPI calls. More...
 
void setDefaultCommunicator (MPI_Comm default_communicator)
 Set the default communicator used for MPI calls. More...
 
bool getFinalizeOnDestruct ()
 
void setFinalizeOnDestruct (bool new_value)
 Decides whether MPI_Finalize will be called in the destructor. More...
 
void setMpiInfo (const MPI_Info &mpi_info)
 
bool isMaster ()
 Returns true if this process is the master, false otherwise. More...
 
void init (int argc, char **argv, bool accept_communicator=true)
 
void setBarrier ()
 
void sendSystem (const System &system, bool broadcast=true, int receiver=0)
 
System * receiveSystem (bool broadcast=true, int source=MPI_ANY_SOURCE)
 
void sendOptions (const Options &options, bool broadcast=true, int receiver=0)
 
Options * receiveOptions (bool broadcast=true, int source=MPI_ANY_SOURCE)
 
template<typename valuetype >
void distributeDatapoints (const std::vector< valuetype > &input, std::vector< valuetype > &our_share)
 
template<typename valuetype >
void acceptDatapoints (std::vector< valuetype > &our_share)
 
template<typename valuetype >
void combineDatapoints (const std::vector< valuetype > &our_share) throw (Exception::OutOfMemory)
 
template<typename valuetype >
void acceptCombinedDatapoints (std::vector< valuetype > &combined_set, std::vector< valuetype > &our_share) throw (Exception::OutOfMemory)
 
void * distributeDatapoints (const void *input, int size, Size &numpoints, MPI_Datatype datatype)
 
void * acceptDatapoints (Size &numpoints, MPI_Datatype datatype)
 
void combineDatapoints (const void *input, int size, MPI_Datatype datatype)
 
void * acceptCombinedDatapoints (const void *input, int size, Size &numpoints, MPI_Datatype datatype)
 
template<typename valuetype >
valuetype getSum (valuetype &local_value)
 
template<typename valuetype >
valuetype getProduct (valuetype &local_value)
 
template<typename valuetype >
valuetype getMaximum (valuetype &local_value)
 
template<typename valuetype >
valuetype getMinimum (valuetype &local_value)
 

Protected Member Functions

void registerTypes_ ()
 
void sendPersistenceStream_ (const std::ostringstream &stream, int tag=MPI_ANY_TAG, bool broadcast=true, int receiver=0)
 
void receivePersistenceStream_ (std::istringstream &in, int tag=MPI_ANY_TAG, bool broadcast=true, int source=0)
 

Protected Attributes

Index rank_
 
Index comm_size_
 
bool finalize_on_destruct_
 
MPI_Comm default_communicator_
 
MPI_Info mpi_info_object_
 
MPI_Datatype mpi_Vector3_float_type_
 
MPI_Datatype mpi_Vector3_double_type_
 

Detailed Description

This class provides some of the most important MPI functions to BALL classes.

Definition at line 44 of file MPISupport.h.
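
A minimal usage sketch (assuming a BALL build with MPI support and a launch via mpirun; the output is illustrative only): constructing an MPISupport instance initializes MPI, and each process can then query its rank, the communicator size, and whether it is the master. Later sketches on this page assume an MPISupport instance named mpi set up this way.

    #include <BALL/SYSTEM/MPISupport.h>
    #include <iostream>

    using namespace BALL;

    int main(int argc, char** argv)
    {
        // Calls MPI_Init if MPI has not been initialized yet; uses MPI_COMM_WORLD by default.
        MPISupport mpi(argc, argv);

        std::cout << "process " << mpi.getRank() << " of " << mpi.getSize();
        if (mpi.isMaster())
        {
            std::cout << " (master)";
        }
        std::cout << std::endl;

        return 0;  // the destructor calls MPI_Finalize by default
    }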

Member Enumeration Documentation

TAGS used for send and receive operations

Enumerator
TAG_SYSTEM 
TAG_OPTIONS 

Definition at line 50 of file MPISupport.h.

Constructor & Destructor Documentation

BALL::MPISupport::MPISupport ( MPI_Comm  default_communicator = MPI_COMM_WORLD)

Default constructor. Does not call MPI_Init. default_communicator can be used to change the broadcasting behaviour of the MPI calls. The default is MPI_COMM_WORLD.

BALL::MPISupport::MPISupport ( int  argc,
char **  argv,
MPI_Comm  default_communicator = MPI_COMM_WORLD,
bool  accept_communicator = true 
)

Detailed constructor. If MPI has not been initialized yet, argc and argv are passed to MPI_Init. If MPI has already been initialized, we cannot re-initialize it and a Precondition exception is thrown. default_communicator can be used to change the broadcasting behaviour of the MPI calls. The default is MPI_COMM_WORLD.

If we find a parent of this process, we have been spawned somehow. In this case, we will by default use a communicator containing the old communicator of our parent and all spawned processes, and none of the spawned processes will become a new master process. This behaviour can be avoided by setting accept_communicator to false.

BALL::MPISupport::~MPISupport ( )

Destructor. By default, we call MPI_Finalize here. This can be avoided by setting the finalize_on_destruct_ flag of the class to false.

Member Function Documentation

template<typename valuetype >
void BALL::MPISupport::acceptCombinedDatapoints ( std::vector< valuetype > &  combined_set,
std::vector< valuetype > &  our_share 
) throw (Exception::OutOfMemory)

Accept datapoints that are combined from all processes of the communicator.

void* BALL::MPISupport::acceptCombinedDatapoints ( const void *  input,
int  size,
Size &  numpoints,
MPI_Datatype  datatype 
)

Combine distributed data from all processes in the communicator and return the result. The array input contains our own share of the data we need to combine. The number of points gathered is stored in numpoints. Note that all other processes in the communicator have to call combineDatapoints. The caller has to ensure that the returned array is free()'d. If memory allocation fails, 0 is returned.

template<typename valuetype >
void BALL::MPISupport::acceptDatapoints ( std::vector< valuetype > &  our_share)

Accept datapoints that are distributed by some source in the communicator. Stores the result in our_share.

void* BALL::MPISupport::acceptDatapoints ( Size &  numpoints,
MPI_Datatype  datatype 
)

Accept datapoints that are distributed by some source in the communicator. The caller has to ensure that the array is free()'d. If memory allocation fails, 0 is returned. The number of points we need to process is stored in numpoints.

template<typename valuetype >
void BALL::MPISupport::combineDatapoints ( const std::vector< valuetype > &  our_share) throw (Exception::OutOfMemory)

Combine datapoints from all processes of the communicator. Exactly one process has to accept the data by calling acceptCombinedDatapoints instead of combineDatapoints.

void BALL::MPISupport::combineDatapoints ( const void *  input,
int  size,
MPI_Datatype  datatype 
)

Combine distributed data from all processes in the communicator. Note that all but one of the processes in the communicator have to call combineDatapoints, while exactly one has to call acceptCombinedDatapoints.
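
A sketch of the raw interface, using the mpi instance from the sketch above and assuming each process holds its share in a std::vector<float>; the size argument is assumed here to count datapoints rather than bytes. Exactly one process gathers the combined set, all others contribute theirs.

    std::vector<float> local_share(100, 0.f);  // placeholder: this process' own datapoints

    if (mpi.isMaster())
    {
        Size total = 0;
        float* combined = static_cast<float*>(
            mpi.acceptCombinedDatapoints(&local_share[0], (int)local_share.size(), total, MPI_FLOAT));
        if (combined != 0)
        {
            // ... work on all 'total' combined values ...
            free(combined);  // the caller owns the returned buffer
        }
    }
    else
    {
        mpi.combineDatapoints(&local_share[0], (int)local_share.size(), MPI_FLOAT);
    }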

template<typename valuetype >
void BALL::MPISupport::distributeDatapoints ( const std::vector< valuetype > &  input,
std::vector< valuetype > &  our_share 
)

Distribute input vector as evenly as possible over all processes in the communicator. Stores the part of the datapoints the sender itself has to process in our_share. Note that all processes in the communicator have to call acceptDatapoints.
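
A sketch of the template interface with the mpi instance from the sketch above (the input values are placeholders): the distributing process keeps its own part in our_share, while every other process obtains its part via acceptDatapoints.

    std::vector<float> our_share;

    if (mpi.isMaster())
    {
        std::vector<float> input(10000, 1.0f);  // placeholder data to distribute
        mpi.distributeDatapoints(input, our_share);
    }
    else
    {
        mpi.acceptDatapoints(our_share);
    }
    // every process now processes the datapoints in our_share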

void* BALL::MPISupport::distributeDatapoints ( const void *  input,
int  size,
Size &  numpoints,
MPI_Datatype  datatype 
)

Distribute input of type datatype as evenly as possible over the processes in the communicator. Returns the datapoints the sender itself has to process. The number of points to process is stored in numpoints. Note that all processes in the communicator have to call acceptDatapoints. The caller has to ensure that the returned array is free()'d. If memory allocation fails, 0 is returned.

MPI_Comm BALL::MPISupport::getDefaultCommunicator ( )

Return the default communicator used for MPI calls.

bool BALL::MPISupport::getFinalizeOnDestruct ( )

Returns true if MPI_Finalize will be called in the destructor, false otherwise.

template<typename valuetype >
valuetype BALL::MPISupport::getMaximum ( valuetype &  local_value)

Determine the maximum of the local_values of all processes. If this process is the master, this function will return the result. For all non-master processes, the result is undefined. This is implemented as a template function to encapsulate the MPI_Datatype handling.

template<typename valuetype >
valuetype BALL::MPISupport::getMinimum ( valuetype &  local_value)

Determine the minimum of the local_values of all processes. If this process is the master, this function will return the result. For all non-master processes, the result is undefined. This is implemented as a template function to encapsulate the MPI_Datatype handling.
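
A reduction sketch using the mpi instance from the sketch above (local_value is a placeholder per-process result): only the master obtains a defined result.

    float local_value = 0.5f * mpi.getRank();     // placeholder per-process value
    float max_value   = mpi.getMaximum(local_value);
    float sum_value   = mpi.getSum(local_value);

    if (mpi.isMaster())
    {
        std::cout << "max: " << max_value << ", sum: " << sum_value << std::endl;
    }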

template<typename valuetype >
valuetype BALL::MPISupport::getProduct ( valuetype &  local_value)

Multiply the local_value of all processes. If this process is the master, this function will return the result. For all non-master processes, the result is undefined. This is implemented as a template function to encapsulate the MPI_Datatype handling.

Index BALL::MPISupport::getRank ( )

Return the rank of this process.


Index BALL::MPISupport::getSize ( )

Return the number of processes in MPI_COMM_WORLD.

template<typename valuetype >
valuetype BALL::MPISupport::getSum ( valuetype &  local_value)

Sum up the local_value of all processes. If this process is the master, this function will return the result. For all non-master processes, the result is undefined. This is implemented as a template function to encapsulate the MPI_Datatype handling.

BALL::MPISupport::spawn ( command, argv, wanted_number_of_processes )

Spawn new processes. This function allows the current instance to spawn new processes that can communicate via MPI. The spawned processes are assigned a new MPI_COMM_WORLD communicator by MPI, excluding this process. We thus overwrite their default_communicator_, and the default_communicator_ of this process, with a communicator connecting all spawned processes with all processes in the default_communicator_ of this one. More complex communicator handling (like keeping the existing default_communicator_ of this process) is currently not supported and can only be achieved by directly calling the corresponding MPI routines.

Note:

  • this function relies on the MPI-2 standard. Thus, if only MPI-1 is supported, this function will not be compiled
  • the executables that will be spawned do not need to be identical with the executable of the spawning process
  • if the number of processes to spawn is set to zero, we will try to estimate the maximum sensible number using MPI_UNIVERSE_SIZE
  • currently, the use of MPI_Info objects to transmit information like search paths or hostnames to the spawned processes is not really supported. We merely pass an internal MPI_Info object, set to MPI_INFO_NULL by default, to MPI_Spawn. This object can be set from the outside using setMpiInfo(), but the external caller has to take care of the memory handling for this object.

Parameters
  command  The path to the executable to spawn.
  argv  The command line arguments for the executable.
  wanted_number_of_processes  The maximum number of processes to spawn.

Returns
  The number of processes that were successfully spawned.

void BALL::MPISupport::init ( int  argc,
char **  argv,
bool  accept_communicator = true 
)

Initialize MPI using argc and argv. If MPI_Init has already been called, a BALL_PRECONDITION_EXCEPTION is thrown. If we find a parent of this process, we have been spawned somehow. In this case, we will by default use a communicator containing the old communicator of our parent and all spawned processes, and none of the spawned processes will become a new master process. This behaviour can be avoided by setting accept_communicator to false.
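
A sketch of deferred initialization (assuming MPI has not been initialized elsewhere): the default constructor leaves MPI untouched, and init performs the actual MPI_Init later.

    MPISupport mpi;           // default constructor: does not call MPI_Init
    // ... non-MPI setup work ...
    mpi.init(argc, argv);     // initializes MPI; throws if MPI_Init has already been called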

bool BALL::MPISupport::isMaster ( )

Returns true if this process is the master, false otherwise.

Options* BALL::MPISupport::receiveOptions ( bool  broadcast = true,
int  source = MPI_ANY_SOURCE 
)

Receive a BALL Options object from the communicator. If broadcast is true, we expect a broadcast, otherwise a send. In case of a directed send, the source can be given as well. Note that the caller has to ensure that the returned Options object is deleted.

void BALL::MPISupport::receivePersistenceStream_ ( std::istringstream &  in,
int  tag = MPI_ANY_TAG,
bool  broadcast = true,
int  source = 0 
)
protected

Helper function for receiving BALL objects: receives a string containing a persistence stream from the communicator and stores it in the istringstream.

System* BALL::MPISupport::receiveSystem ( bool  broadcast = true,
int  source = MPI_ANY_SOURCE 
)

Receive a System from the communicator. If broadcast is true, we expect a broadcast, otherwise a send. In case of a directed send, the source can be given as well. Note that the caller has to ensure that the returned System object is deleted.

void BALL::MPISupport::registerTypes_ ( )
protected

Register MPI_Datatypes used for sending BALL objects around.

void BALL::MPISupport::sendOptions ( const Options &  options,
bool  broadcast = true,
int  receiver = 0 
)

Send a BALL option class across the communicator. This function relies on BALL's XDRPersistenceManagement. If broadcast is true, all processes in default_communicator_ will receive the options. Note that all of these processes must call receiveOptions! If broadcast is set to false, the message is sent to receiver only.
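
A broadcast sketch for Options with the mpi instance from the sketch above: the sender broadcasts, every other process must call receiveOptions and delete the returned object.

    if (mpi.isMaster())
    {
        Options options;
        // ... fill the options as needed ...
        mpi.sendOptions(options);             // broadcast to all processes in default_communicator_
    }
    else
    {
        Options* options = mpi.receiveOptions();
        // ... read settings from *options ...
        delete options;                       // the receiver owns the returned object
    }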

void BALL::MPISupport::sendPersistenceStream_ ( const std::ostringstream &  stream,
int  tag = MPI_ANY_TAG,
bool  broadcast = true,
int  receiver = 0 
)
protected

Helper function for sending BALL objects: sends a string containing a persistence stream over the communicator.

void BALL::MPISupport::sendSystem ( const System &  system,
bool  broadcast = true,
int  receiver = 0 
)

Send a system across the communicator. This function relies on BALL's XDRPersistenceManagement. If broadcast is true, all processes in default_communicator_ will receive the system. Note that all of these processes must call receiveSystem! If broadcast is set to false, the message is sent to receiver only.
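
The corresponding sketch for a System (the system contents are placeholders; mpi as in the sketch above): all non-sending processes must call receiveSystem and are responsible for deleting the result.

    if (mpi.isMaster())
    {
        System system;
        // ... build or read the system, e.g. from a structure file ...
        mpi.sendSystem(system);               // broadcast to all processes in default_communicator_
    }
    else
    {
        System* system = mpi.receiveSystem();
        // ... compute on *system ...
        delete system;                        // the receiver owns the returned object
    }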

void BALL::MPISupport::setBarrier ( )

Set an MPI_Barrier, i.e., this function will only return after all processes in the default_communicator_ have called this function (or have directly called MPI_Barrier). This can be used to synchronize the workflow.
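
A synchronization sketch with the mpi instance from the sketch above: no process continues past the barrier until all processes in the default communicator have reached it.

    // ... each process finishes its local work ...
    mpi.setBarrier();   // returns only after all processes in default_communicator_ arrive here
    // ... continue with the next, globally consistent phase ...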

void BALL::MPISupport::setDefaultCommunicator ( MPI_Comm  default_communicator)

Set the default communicator used for MPI calls.

void BALL::MPISupport::setFinalizeOnDestruct ( bool  new_value)

Decides whether MPI_Finalize will be called in the destructor.

void BALL::MPISupport::setMpiInfo ( const MPI_Info &  mpi_info)

Set the internal MPI_Info object. This is currently only a workaround, and the memory handling (MPI_Info_free) has to be performed by the calling process.

Member Data Documentation

Index BALL::MPISupport::comm_size_
protected

Definition at line 330 of file MPISupport.h.

MPI_Comm BALL::MPISupport::default_communicator_
protected

Definition at line 332 of file MPISupport.h.

bool BALL::MPISupport::finalize_on_destruct_
protected

Definition at line 331 of file MPISupport.h.

MPI_Info BALL::MPISupport::mpi_info_object_
protected

Definition at line 333 of file MPISupport.h.

MPI_Datatype BALL::MPISupport::mpi_Vector3_double_type_
protected

Definition at line 335 of file MPISupport.h.

MPI_Datatype BALL::MPISupport::mpi_Vector3_float_type_
protected

Definition at line 334 of file MPISupport.h.

Index BALL::MPISupport::rank_
protected

Definition at line 329 of file MPISupport.h.