Fortran version of the MPI + IoC idea
The sources listed herein include a simple test routine to demonstrate the sending and receiving of a message defined by the user.
- Demonstration test application sources:
    - Fully functional example / demonstration application
    - Example test message used by the demonstration application
- Abstraction Layer sources (not intended to be modified by the user):
    - Application base which the user extends - requires the user to implement three virtual functions
    - Application callback - boilerplate that allows the application base to call back to the user's application
    - Application base MPI layer - home of the actual MPI functions
    - Special purpose binary tree - stores a copy of all registered objects instead of storing just their constructors
    - MPI routines for making messages - see the example test message for usage
    - Message virtual base class - requires the extending type to implement its virtual functions and allows for polymorphic treatment of message objects
    - Internal message - used to terminate the main MPI loop and return control to the user
    - Capabilities of the node - contains information that the user application may use to divide the problem
    - C access to the `struct sysinfo` for getting installed memory &c.
This Fortran version of the idea is significantly more complicated than the Python version, and requires
the user to know a fair bit about Object Oriented Fortran.
For instance: the creator of an application that uses this code must extend the user-defined type in
`MpiIocForAppBase.f90` for their application and supply a pointer to their application object (the
instance of their user-defined type) to the `initMpiLayer` method of the base class. See the example
PROGRAM for details.
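
A minimal sketch of that shape follows. Only `initMpiLayer` (and the `startApp` / `receiveMsg` methods discussed below) are named in this text; the module, type, and other method names here are assumptions, not the actual API:

```fortran
! Sketch only - module / type / method names other than initMpiLayer,
! startApp and receiveMsg are assumptions, not the actual API.
MODULE myTestApp_mod
    USE mpiIocForAppBase_mod                     ! hypothetical module name
    IMPLICIT NONE

    TYPE, EXTENDS(mpiIocForAppBase) :: myTestApp
    CONTAINS
        PROCEDURE :: loadMsgs   => myLoadMsgs    ! register user message types
        PROCEDURE :: startApp   => myStartApp    ! first work, rank 0 only
        PROCEDURE :: receiveMsg => myReceiveMsg  ! handle incoming messages
    END TYPE myTestApp
    ! (implementations of the three methods omitted for brevity)
END MODULE myTestApp_mod

PROGRAM demo
    USE myTestApp_mod
    IMPLICIT NONE
    BLOCK                                        ! see the note on BLOCKs below
        TYPE(myTestApp), TARGET   :: app
        CLASS(myTestApp), POINTER :: appPtr
        appPtr => app                            ! base calls back through this pointer
        CALL app%initMpiLayer(appPtr)            ! starts MPI and the IoC main loop
    END BLOCK
END PROGRAM demo
```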
A note about style: I've chosen to keep all "reserved words" in majuscule, as that's what I learned (in
1972), and it makes it easier for me to scan the code (which my IDE "Eclipse" colours nicely). I also
allow lines to extend as far as the 132 character limit, and only indent by four spaces at each level.
I also occasionally use `BLOCK` constructs so that memory allocations are cleaned up before
the program exits - this allows `valgrind` to properly report the memory
situation when running the MPI application.
If this were not used, the program would end with the application pointer still existing.
There are other solutions to this problem, but I prefer this one.
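
As a self-contained illustration of the idiom (nothing here is specific to this project):

```fortran
PROGRAM blockDemo
    IMPLICIT NONE
    BLOCK
        ! locals declared inside a BLOCK go out of scope at END BLOCK, so
        ! this ALLOCATABLE is deallocated before the PROGRAM exits and
        ! valgrind sees no live allocations at program end
        REAL, ALLOCATABLE :: work(:)
        ALLOCATE(work(1000000))
        work = 0.0
        PRINT *, 'sum =', SUM(work)
    END BLOCK                                    ! work deallocated here
END PROGRAM blockDemo
```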
- The PROGRAM begins by creating an object of the test application and making a pointer to this object. The pointer is required by the application base so that it can call back to the implemented virtual methods. Yes, I'm aware that I could combine these by allocating a pointer object; I just chose to do it this way, as it seemed clearer.
- The application then initializes the MPI layer by invoking the `initMpiLayer` method in the base class. This starts MPI, initializes the rank and size information for each copy of the running program, loads the internal message types, and calls back to the user's application to load the user-defined message types.
- The application may do any necessary license checking at this point - as the size of the network is now available.
- The application then gets the capabilities of the current node and passes these
back to "rank 0" (MPI-speak for what coarray-speak would call image 1, i.e. `this_image() == 1`).
This uses some C code, as I was unable to find a Fortran version of the somewhat non-standard
`struct sysinfo` structure and `sysinfo` function for getting the installed and available memory on the current blade / node / CPU. I also chose to get the number of cores from the C function `sysconf`. If you know a better solution and have the time, please send it to me and I'll modify the code. (A sketch of the Fortran side of such a binding appears after this list.)
- Rank 0 receives the Capabilities messages and stores them for later analysis by the user's application.
When all the nodes have reported in, the `startApp` virtual method of the user application is invoked. This allows the application to do any initial work analysis to determine how the application will split the work over all the nodes. This is also where the application will send its first actual message. Nothing else will be done unless the `startApp` method sends a message, as all nodes will simply be waiting for instructions. This `startApp` method is only invoked on rank 0.
- When a message arrives at a node, the `receiveMsg` virtual method of the base application will be invoked and given the information. This information includes the message itself, the number of the node that sent the message, and the `tag` of the message. Tags allow messages of the same type to convey different meanings.
- This is slightly different from the C version because the Fortran version is purely Object
Oriented, and must use the `SELECT TYPE` statement to determine the type of a received message. The C version allows the user to register a callback function and callback object that obviate the need for the application to determine the type of a received message. Perhaps I'll simplify this further in another version. (See the `SELECT TYPE` sketch after this list.)
- When the example application's `startApp` method is invoked, it simply broadcasts a test message to all nodes (roughly as in the round-trip sketch after this list).
- When any node gets this test message (all except rank 0), it sends the message back to the sending node (rank 0) and then waits for the next message.
- When the "control node" (rank 0) receives one of these test messages, it calls the
stopmethod of the application base class, which sends the
MsgTerminatemessage to all nodes.
- When this message to terminate is received, the IoC main loop of the application base exits and control is returned to the application for any necessary cleanup.
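
For the node-capabilities step above, here is a sketch of what the Fortran side of the C binding can look like via `ISO_C_BINDING`. The wrapper names (`cTotalRam`, `cNumCores`) and their C bodies are assumptions; only `sysinfo` and `sysconf` are named in this text:

```fortran
! Sketch - wrapper names are assumptions. The C side might be:
!     #include <sys/sysinfo.h>
!     #include <unistd.h>
!     long cTotalRam(void) { struct sysinfo s; sysinfo(&s); return s.totalram * s.mem_unit; }
!     long cNumCores(void) { return sysconf(_SC_NPROCESSORS_ONLN); }
MODULE nodeCaps_mod
    USE, INTRINSIC :: ISO_C_BINDING
    IMPLICIT NONE
    INTERFACE
        FUNCTION cTotalRam() BIND(C, NAME='cTotalRam') RESULT(bytes)
            IMPORT :: C_LONG
            INTEGER(C_LONG) :: bytes
        END FUNCTION cTotalRam
        FUNCTION cNumCores() BIND(C, NAME='cNumCores') RESULT(n)
            IMPORT :: C_LONG
            INTEGER(C_LONG) :: n
        END FUNCTION cNumCores
    END INTERFACE
END MODULE nodeCaps_mod
```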
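For the `SELECT TYPE` point, a sketch of the dispatch inside an implementation of `receiveMsg`; `msgBase` and `msgTest` are assumed names for the message virtual base class and the example test message:

```fortran
! Sketch - msgBase / msgTest / myTestApp are assumed names.
SUBROUTINE myReceiveMsg(this, msg, sender, tag)
    CLASS(myTestApp), INTENT(INOUT) :: this
    CLASS(msgBase), INTENT(IN)      :: msg     ! polymorphic: concrete type unknown here
    INTEGER, INTENT(IN)             :: sender  ! rank that sent the message
    INTEGER, INTENT(IN)             :: tag     ! same type, different meanings
    SELECT TYPE (msg)
        TYPE IS (msgTest)                      ! resolve the concrete type
            CALL handleTest(this, msg, sender, tag)
        CLASS DEFAULT
            PRINT *, 'unexpected message type from rank', sender
    END SELECT
END SUBROUTINE myReceiveMsg
```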
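And the demonstration round trip itself might look roughly like this; `broadcastMsg`, `sendMsg`, `handleTest` and the `rank` component are assumed names, while `startApp`, `stop` and `MsgTerminate` are described above:

```fortran
! Sketch of the demonstration flow - broadcastMsg / sendMsg / rank are
! assumed names; startApp, stop and MsgTerminate are described in the text.
SUBROUTINE myStartApp(this)                        ! invoked on rank 0 only
    CLASS(myTestApp), INTENT(INOUT) :: this
    TYPE(msgTest) :: msg
    CALL this%broadcastMsg(msg, tag=0)             ! test message to all nodes
END SUBROUTINE myStartApp

SUBROUTINE handleTest(this, msg, sender, tag)      ! called from myReceiveMsg
    CLASS(myTestApp), INTENT(INOUT) :: this
    TYPE(msgTest), INTENT(IN)       :: msg
    INTEGER, INTENT(IN)             :: sender, tag
    IF (this%rank == 0) THEN                       ! the "control node"
        CALL this%stop()                           ! sends MsgTerminate everywhere
    ELSE
        CALL this%sendMsg(msg, dest=sender, tag=tag)   ! echo back to rank 0
    END IF
END SUBROUTINE handleTest
```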
This Fortran version of the MPI + IoC idea requires significant features only available in the more recent Fortran standards (2003 and 2008).
I developed this using version 7.1 of the `gfortran` compiler (together with
OpenMPI version 2.1.2rc2 and `mpif90`), running on Linux.
I'm not sure which earlier versions of the compiler would be sufficient. YMMV.
I've also used CMake, and tried the build early on using the Intel 17 compiler with Intel MPI, which seemed to work fine.