Fortran version of the MPI + IoC idea

The Fortran sources should be available on the GitHub site shortly. The sources listed here include a simple test routine that demonstrates sending and receiving a user-defined message.

The sources:

Demonstration test application sources:
- Fully functional example / demonstration application
- Example test message used by the demonstration application

Abstraction Layer sources (not intended to be modified by the user):
- Application base, which the user extends; requires the user to implement three virtual functions
- Application callback: boilerplate that allows the application base to call back into the user's application
- Application base MPI layer: home of the actual MPI functions
- Special-purpose binary tree: stores a copy of each registered object rather than just its constructor
- MPI routines for building messages; see the example test message for usage
- Message virtual base class: requires each concrete message type to implement its virtual functions, and allows polymorphic treatment of message objects
- Internal message: used to terminate the main MPI loop and return control to the user
- Node capabilities: information that the user application may use to divide the problem
- C access to the struct sysinfo for getting installed memory &c.
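The message virtual base class above follows the standard object-oriented Fortran pattern of an ABSTRACT type with DEFERRED type-bound procedures. The sketch below illustrates that pattern only; the actual type, procedure names, and signatures in the sources will differ.

```fortran
! Sketch of the abstract-message pattern. All names here
! (MessageBase, pack, unpack) are illustrative, NOT the real ones.
MODULE MessageBaseSketch
    IMPLICIT NONE

    TYPE, ABSTRACT :: MessageBase
    CONTAINS
        ! Each concrete message must implement these deferred
        ! ("virtual") procedures; dispatch is then polymorphic
        ! through CLASS(MessageBase) references.
        PROCEDURE(packInterface),   DEFERRED :: pack
        PROCEDURE(unpackInterface), DEFERRED :: unpack
    END TYPE MessageBase

    ABSTRACT INTERFACE
        SUBROUTINE packInterface(this, buffer)
            IMPORT :: MessageBase
            CLASS(MessageBase), INTENT(IN) :: this
            INTEGER, ALLOCATABLE, INTENT(OUT) :: buffer(:)
        END SUBROUTINE packInterface

        SUBROUTINE unpackInterface(this, buffer)
            IMPORT :: MessageBase
            CLASS(MessageBase), INTENT(INOUT) :: this
            INTEGER, INTENT(IN) :: buffer(:)
        END SUBROUTINE unpackInterface
    END INTERFACE
END MODULE MessageBaseSketch
```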

This Fortran version of the idea is significantly more complicated than the Python version, and requires the user to know a fair bit about object-oriented Fortran. For instance, the creator of an application that uses this code must extend the user-defined type in MpiIocForAppBase.f90 for their application, and supply a pointer to their application object (the instance of their extended type) to the initMpiLayer method of the base class. See the TestMpiIocFor.f90 example PROGRAM for details.
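In outline, the extension looks roughly like the sketch below. Only MpiIocForAppBase and initMpiLayer come from the sources above; the module names, the names of the three overridden procedures, and all signatures are assumptions on my part (TestMpiIocFor.f90 is the authoritative example).

```fortran
! Illustrative sketch only -- procedure names and signatures are guesses.
MODULE MyAppMod
    USE MpiIocForAppBaseMod, ONLY : MpiIocForAppBase    ! assumed module name
    IMPLICIT NONE

    TYPE, EXTENDS(MpiIocForAppBase) :: MyApp
    CONTAINS
        ! The three virtual functions the base class requires
        ! the user to implement (illustrative names):
        PROCEDURE :: registerMessages => myRegisterMessages
        PROCEDURE :: runMaster        => myRunMaster
        PROCEDURE :: runWorker        => myRunWorker
    END TYPE MyApp
CONTAINS
    SUBROUTINE myRegisterMessages(this)
        CLASS(MyApp), INTENT(INOUT) :: this
        ! register the application's message types here
    END SUBROUTINE myRegisterMessages

    SUBROUTINE myRunMaster(this)
        CLASS(MyApp), INTENT(INOUT) :: this
        ! master-rank work goes here
    END SUBROUTINE myRunMaster

    SUBROUTINE myRunWorker(this)
        CLASS(MyApp), INTENT(INOUT) :: this
        ! worker-rank work goes here
    END SUBROUTINE myRunWorker
END MODULE MyAppMod

PROGRAM MySim
    USE MyAppMod, ONLY : MyApp
    IMPLICIT NONE
    BLOCK
        CLASS(MyApp), POINTER :: app
        ALLOCATE(app)
        CALL app%initMpiLayer(app)    ! hand the base class a pointer to the application
        DEALLOCATE(app)               ! freed before the program exits
    END BLOCK
END PROGRAM MySim
```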

A note about style: I've chosen to keep all "reserved words" in majuscule, as that's what I learned (in 1972), and it makes the code easier for me to scan (my IDE, Eclipse, colours it nicely). I allow lines to extend as far as the 132-character limit, and indent by only four spaces at each level. I also occasionally use BLOCKs so that memory allocations are cleaned up before the program exits; this allows valgrind to properly report the memory situation when running the MPI application. Were this not done, the program would end with the application pointer still in existence. There are other solutions to this problem, but I prefer this one.
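A minimal illustration of the BLOCK idiom (not the actual code): ALLOCATABLE variables declared inside a BLOCK are deallocated automatically when execution reaches END BLOCK, so nothing remains allocated when the program terminates and valgrind reports a clean summary.

```fortran
PROGRAM CleanExit
    IMPLICIT NONE
    BLOCK
        REAL, ALLOCATABLE :: work(:)
        ALLOCATE(work(1000))
        work = 1.0
        PRINT *, 'sum =', SUM(work)
    END BLOCK    ! work is deallocated automatically here,
                 ! before the program terminates
END PROGRAM CleanExit
```

Note that only ALLOCATABLE locals get this automatic cleanup; a POINTER (as used for the application object) must still be explicitly DEALLOCATEd inside the BLOCK.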

The version on GitHub (when I have it uploaded) may supersede this one.

Execution Sequence

This Fortran version of the MPI + IoC idea requires features only available in the more recent Fortran standards. I developed it using version 7.1 of the gfortran compiler (together with OpenMPI version 2.1.2rc2 and mpif90) on Linux Mint. I'm not sure which earlier compiler versions would suffice. YMMV.

I also use CMake for the build, and tried it early on with the Intel ifort version 17 compiler and Intel MPI, which seemed to work fine.