Fortran version of the MPI + IoC idea

The sources listed here include a simple test routine to demonstrate the sending and receiving of a message defined by the user. The newest and best version of this will be on my GitHub page.

The sources:

Demonstration test application sources:

    TestMpiIocFor.f90
        Fully functional example / demonstration application
    MsgTest.f90
        Example test message used by the demonstration application

Abstraction Layer sources (none of these is intended to be modified by the user):

    MpiIocForAppBase.f90
        Application base which the user extends; requires the user to implement three virtual functions
    MpiIocForAppCallback.f90
        Application callback; boilerplate that allows the application base to call back to the user's application
    MpiIocForLayer.f90
        Application base MPI layer; home of the actual MPI functions
    MpiIocForBinaryTree.f90
        Special-purpose binary tree; stores a copy of all registered objects instead of storing just their constructors
    MpiAssist.f90
        MPI routines for making messages; see the example test message for usage
    MsgBase.f90
        Message virtual base class; requires the implementation to supply its virtual functions, and allows polymorphic treatment of message objects
    MsgTerminate.f90
        Internal message; used to terminate the main MPI loop and return control to the user
    MsgCapabilities.f90
        Capabilities of the node; contains information that the user application may use to divide the problem
    CCapabilities.c
        C access to struct sysinfo for getting installed memory etc. (a Fortran binding sketch follows this list)
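As mentioned in the last entry, here is a hedged sketch of how such a C helper might be reached from Fortran via ISO_C_BINDING. The function name getTotalRam and its signature are invented for illustration; the real entry points in CCapabilities.c may differ.

    MODULE CapabilitiesSketch
        USE, INTRINSIC :: ISO_C_BINDING, ONLY : C_LONG
        IMPLICIT NONE

        INTERFACE
            ! Hypothetical C-side wrapper around struct sysinfo;
            ! CCapabilities.c may expose something different.
            FUNCTION getTotalRam() BIND(C, NAME='getTotalRam') RESULT(ram)
                IMPORT :: C_LONG
                INTEGER(C_LONG) :: ram
            END FUNCTION getTotalRam
        END INTERFACE
    END MODULE CapabilitiesSketch

On the C side, such a wrapper would amount to calling sysinfo() and returning the totalram field of struct sysinfo (scaled by mem_unit if exact bytes are wanted).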

This Fortran version of the idea is significantly more complicated than the Python version, and requires the user to know a fair bit about Object-Oriented Fortran. For instance, the creator of an application that uses this code must extend the user-defined type declared in MpiIocForAppBase.f90 and supply a pointer to their application object (the instance of their extended type) to the initMpiLayer method of the base class. See the example PROGRAM in TestMpiIocFor.f90 for details; a stand-in sketch of the pattern follows.
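Here is a self-contained sketch of that shape. Everything in it is a stand-in: AppBase, userStep, MyApp, and the body of initMpiLayer are invented for illustration, and only the general pattern (an abstract base with deferred bindings, plus an initMpiLayer method that receives a pointer to the user's extension) reflects the description above.

    MODULE SketchMod
        IMPLICIT NONE

        TYPE, ABSTRACT :: AppBase                      ! stand-in for the type in MpiIocForAppBase.f90
        CONTAINS
            PROCEDURE(step_if), DEFERRED :: userStep   ! one deferred binding; the real base requires three
            PROCEDURE                    :: initMpiLayer
        END TYPE AppBase

        ABSTRACT INTERFACE
            SUBROUTINE step_if(this)
                IMPORT :: AppBase
                CLASS(AppBase), INTENT(INOUT) :: this
            END SUBROUTINE step_if
        END INTERFACE

        TYPE, EXTENDS(AppBase) :: MyApp                ! the user's extension
        CONTAINS
            PROCEDURE :: userStep => myStep
        END TYPE MyApp

    CONTAINS

        SUBROUTINE initMpiLayer(this, app)             ! the base receives a pointer to the extension...
            CLASS(AppBase), INTENT(IN)          :: this
            CLASS(AppBase), POINTER, INTENT(IN) :: app
            CALL app%userStep()                        ! ...and calls back into the user's code polymorphically
        END SUBROUTINE initMpiLayer

        SUBROUTINE myStep(this)
            CLASS(MyApp), INTENT(INOUT) :: this
            PRINT *, 'user code reached through the base class'
        END SUBROUTINE myStep

    END MODULE SketchMod

    PROGRAM TestSketch
        USE SketchMod
        IMPLICIT NONE
        CLASS(AppBase), POINTER :: app

        ALLOCATE(MyApp :: app)                         ! allocate the extension, hold it through a base pointer
        CALL app%initMpiLayer(app)                     ! supply the pointer to the base class method
        DEALLOCATE(app)
    END PROGRAM TestSketch

The real base class naturally does more behind initMpiLayer (the main MPI loop lives in the layer sources), but the ALLOCATE / pass-the-pointer / DEALLOCATE shape is the part the user supplies.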

A note about style: I've chosen to keep all "reserved words" in majuscule, as that's what I learned (in 1972), and it makes it easier for me to scan the code (which my IDE, Eclipse, colours nicely; VS Code does too). I also allow lines to extend as far as the 132-character limit, and only indent by four spaces at each level. I also occasionally use BLOCKs so that memory allocations are cleaned up before the program exits, which allows valgrind to report the memory situation properly when running the MPI application; without this, the program would end with the application pointer still allocated. There are other solutions to this problem, but I prefer this one. A minimal illustration of the BLOCK trick follows.
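The illustration below uses invented names; the point is that an unsaved ALLOCATABLE declared inside a BLOCK is automatically deallocated at END BLOCK, i.e. before the program terminates, so valgrind sees nothing still allocated at exit.

    PROGRAM CleanExit
        IMPLICIT NONE

        BLOCK                              ! scope ends before the program does
            INTEGER, ALLOCATABLE :: work(:)
            ALLOCATE(work(1000))
            work = 0                       ! stand-in for real work
        END BLOCK                          ! work is deallocated automatically here,
                                           ! so valgrind reports no live allocation at exit
    END PROGRAM CleanExit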

Execution Sequence

This Fortran version of the MPI + IoC idea depends on features that are only available in the more recent Fortran standards. I developed it using version 7.1 of the gfortran compiler (together with OpenMPI version 2.1.2rc2 and mpif90), running on Linux Mint. I'm not sure which earlier compiler versions would be sufficient. YMMV.

I've also used cmake, and early on I tried the build with the Intel ifort version 17 compiler and Intel MPI, which seemed to work fine.