The sources listed here include a simple test routine to demonstrate the sending and receiving of a message defined by the user. The newest and best version of this will be on my GitHub page.
TestMpiIocFor.f90
MsgTest.f90
MpiIocForAppBase.f90
MpiIocForAppCallback.f90
MpiIocForLayer.f90
MpiIocForBinaryTree.f90
MpiAssist.f90
MsgBase.f90
MsgTerminate.f90
MsgCapabilities.f90
CCapabilities.c (uses the C struct sysinfo for getting installed memory &c.)
This Fortran version of the idea is significantly more complicated than the Python version, and requires the user to know a fair bit about object-oriented Fortran. For instance: the creator of an application that uses this code must extend the MpiIocForAppBase.f90 user-defined type for their application and supply a pointer to their application object (the instance of their user-defined type) to the initMpiLayer method of the base class. See the TestMpiIocFor.f90 example PROGRAM for details.
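To give a feel for what that extension looks like, here is a minimal sketch written in the same style as the sources. The module name MpiIocForAppBase, the base type name AppBase and the binding names are assumptions inferred from the file names and the description above, not copied from the real code.

MODULE MyTestAppMod
    USE MpiIocForAppBase                                 ! assumed module name, taken from the file name
    IMPLICIT NONE

    ! Hypothetical extension of the (assumed) base type AppBase.
    TYPE, EXTENDS(AppBase) :: MyTestApp
    CONTAINS
        PROCEDURE :: loadUserTypes => myLoadUserTypes    ! register the user-defined message types
        PROCEDURE :: startApp      => myStartApp         ! invoked on rank 0 only
        PROCEDURE :: receiveMsg    => myReceiveMsg       ! invoked when a message arrives
    END TYPE MyTestApp

CONTAINS
    ! The three method implementations are sketched further below.
END MODULE MyTestAppMod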
A note about style: I've chosen to keep all "reserved words" in majuscule, as that's what I learned (in 1972), and it makes it easier for me to scan the code (which my IDE, Eclipse, colours nicely; VS Code does too). I also allow lines to extend as far as the 132-character limit, and only indent by four spaces at each level. I also occasionally use BLOCKs so that the memory allocations are cleaned up before the program exits; this allows valgrind to properly report the memory situation when running the MPI application. Without this, the program would end with the application pointer still existing. There are other solutions to this problem, but I prefer this one.
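To make the BLOCK point concrete, here is a tiny stand-alone fragment of my own (not from the sources): an ALLOCATABLE declared inside a BLOCK is deallocated automatically at END BLOCK, i.e. before the program exits.

PROGRAM BlockDemo
    IMPLICIT NONE
    BLOCK
        REAL, ALLOCATABLE :: work(:)
        ALLOCATE(work(1000))
        work = 0.0
        ! ... do the real work here ...
        PRINT *, 'sum =', SUM(work)
    END BLOCK        ! work is deallocated here, before END PROGRAM,
                     ! so valgrind sees no outstanding allocations at exit
END PROGRAM BlockDemo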
The TestMpiIocFor main PROGRAM begins by creating an object of the test application and making a pointer to this object. The pointer is required by the application base so it can call back to the implemented virtual methods provided. Yes, I'm aware that I could combine these by allocating a pointer object; I just chose to do it this way, as it seemed clearer. The pointer is then supplied to the initMpiLayer method in the base class. This starts MPI, initializes the rank and size information for each copy of the running program, loads the internal message types, and calls back to load the user-defined message types via the loadUserTypes virtual method.
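A minimal sketch of that start-up sequence follows; the exact argument list of initMpiLayer is an assumption on my part, so check TestMpiIocFor.f90 for the real calling convention.

PROGRAM TestSketch
    USE MyTestAppMod                                 ! the hypothetical module sketched earlier
    IMPLICIT NONE
    BLOCK
        TYPE(MyTestApp), TARGET  :: app              ! the test application object
        CLASS(AppBase), POINTER  :: pApp             ! polymorphic pointer used for the callbacks
        pApp => app
        ! Assumed call: hand the base class the pointer so it can call back into
        ! loadUserTypes / startApp / receiveMsg.
        CALL app%initMpiLayer(pApp)
    END BLOCK                                        ! locals cleaned up before the program exits
END PROGRAM TestSketch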
One of the internal message types, MsgCapabilities, is backed by CCapabilities.c, which uses the C struct sysinfo structure and sysinfo function for getting the installed and available memory on the current blade / node / CPU. I also chose to get the number of cores from the C function sysconf. If you know a better solution and have the time, please send it to me and I'll modify the code.
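I have not reproduced CCapabilities.c here; the sketch below only illustrates how such C helpers are typically exposed to Fortran through ISO_C_BINDING. The wrapper names cap_total_ram and cap_num_cores are invented for illustration and are not the names used in the actual sources.

MODULE CapabilitiesIface
    USE ISO_C_BINDING, ONLY: C_LONG_LONG, C_INT
    IMPLICIT NONE
    INTERFACE
        ! Hypothetical C wrapper around struct sysinfo: total RAM in bytes.
        FUNCTION cap_total_ram() BIND(C, NAME='cap_total_ram') RESULT(bytes)
            IMPORT :: C_LONG_LONG
            INTEGER(C_LONG_LONG) :: bytes
        END FUNCTION cap_total_ram

        ! Hypothetical C wrapper around sysconf(_SC_NPROCESSORS_ONLN): core count.
        FUNCTION cap_num_cores() BIND(C, NAME='cap_num_cores') RESULT(n)
            IMPORT :: C_INT
            INTEGER(C_INT) :: n
        END FUNCTION cap_num_cores
    END INTERFACE
END MODULE CapabilitiesIface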
Once the MPI layer is initialized, the startApp virtual method of the user application is invoked. This allows the application to do any initial work analysis to determine how the application will split the work over all the nodes. This is also where the application sends its first actual message. Nothing else will be done unless the startApp method sends a message, as all nodes will simply be waiting for instructions. This startApp method is only invoked on rank 0.
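A hypothetical startApp override might look like the following. The MsgTest type name is taken from the file list, but the sendToAll routine and its tag argument are my assumptions, not the real interface.

SUBROUTINE myStartApp(this)
    CLASS(MyTestApp), INTENT(INOUT) :: this
    TYPE(MsgTest) :: msg
    ! Only rank 0 ever reaches this point.  Send the first real message so the
    ! other ranks, which are waiting in receiveMsg, have something to react to.
    CALL this%sendToAll(msg, tag = 1)                ! assumed name and signature
END SUBROUTINE myStartApp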
When a message arrives at a node, the receiveMsg virtual method of the application base will be invoked and given the information. This information includes the message itself, the number of the node that sent the message, and the tag of the message. Tags allow messages of the same type to convey different meanings.

The application's receiveMsg implementation uses a SELECT TYPE statement to determine the type of the received message. The C version allows the user to register a callback function and callback object that obviate the need for the application to determine the type of the received message. Perhaps I'll simplify this further in another version.
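The corresponding receiveMsg override would then dispatch on the dynamic type of the message, roughly as below. The dummy-argument names, the assumption that the type names match the file names (MsgBase, MsgTest, MsgTerminate), and the commented-out reply call are all illustrative guesses.

SUBROUTINE myReceiveMsg(this, msg, fromRank, tag)
    CLASS(MyTestApp), INTENT(INOUT) :: this
    CLASS(MsgBase),   INTENT(IN)    :: msg           ! polymorphic message handed over by the layer
    INTEGER,          INTENT(IN)    :: fromRank      ! rank that sent the message
    INTEGER,          INTENT(IN)    :: tag           ! same type, different meaning

    SELECT TYPE (msg)
    TYPE IS (MsgTest)
        ! React to the user-defined test message, e.g. reply to the sender:
        ! CALL this%sendTo(fromRank, MsgTest(), tag = 2)    ! assumed interface
    TYPE IS (MsgTerminate)
        ! Normally handled by the layer itself; shown only to illustrate the dispatch.
    CLASS DEFAULT
        PRINT *, 'unexpected message type from rank', fromRank
    END SELECT
END SUBROUTINE myReceiveMsg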
When the test application's startApp method is invoked, it simply broadcasts a test message to all nodes. Once the test is done, it calls the stop method of the application base class, which sends the MsgTerminate message to all nodes.
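A correspondingly sketchy way for rank 0 to trigger that shutdown, once it decides the test is complete, is a small helper like the one below. The repliesSeen and numNodes components are invented for the illustration; only the stop method itself comes from the description above.

SUBROUTINE finishIfDone(this)
    CLASS(MyTestApp), INTENT(INOUT) :: this
    ! Hypothetical helper: once rank 0 has heard back from every other node,
    ! ask the base class to shut everything down.
    IF (this%repliesSeen == this%numNodes - 1) THEN      ! assumed component names
        CALL this%stop()                                 ! sends MsgTerminate to all nodes
    END IF
END SUBROUTINE finishIfDone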
This Fortran version of the MPI + IoC idea requires significant features only available in the more recent Fortran standards. I developed this using version 7.1 of the gfortran compiler (together with OpenMPI version 2.1.2rc2 and mpif90) running on Linux Mint. I'm not sure which earlier versions of the compiler would be sufficient. YMMV. I've also used cmake, and tried the build early on with the Intel ifort version 17 compiler and Intel MPI, which seemed to work fine.