How to call parallel Fortran code from Julia correctly?

Hi All,

I am new to Julia. I would like to call MPI Fortran code from Julia. However, when the Fortran code involves MPI_Init, I cannot run my code correctly. A minimal demo is shown below.

-------------Fortran code: main.f90--------------------
function rank()
implicit none
include 'mpif.h'
integer::ierr,rank
rank=-1
!call MPI_INIT(ierr)
!call MPI_Comm_rank(MPI_COMM_WORLD,rank,ierr)
print*, "hello world",rank
!call MPI_FINALIZE(ierr)
end function rank

-------------Julia script: test.jl--------------------
ccall((:rank_, "main.so"), Int64, ())

I compile the Fortran code with
"mpif90 -shared -fPIC -o main.so main.f90"
and run the Julia script with
"mpirun -np 1 julia test.jl".
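
For reference, here is a slightly more explicit sketch of the same call. This is only a sketch: the explicit library path, the Libdl-based loading, and the Cint return type (which matches a Fortran default integer) are assumptions rather than part of the original one-liner.

-------------Julia script (sketch): test_explicit.jl--------------------
# Minimal sketch: load the shared library explicitly and call rank().
# Assumption: main.so sits in the current working directory.
using Libdl

lib  = Libdl.dlopen(joinpath(pwd(), "main.so"))  # library built with mpif90
fptr = Libdl.dlsym(lib, :rank_)                  # gfortran appends a trailing underscore
r    = ccall(fptr, Cint, ())                     # no arguments; Cint matches Fortran's default integer
println("rank() returned ", r)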

As you can see in the Fortran code, when I comment out the MPI calls (MPI_Init, MPI_Comm_rank, MPI_Finalize), I get the correct output. However, when those lines are uncommented, something goes wrong.


The error output is shown below.
[username:04407] mca_base_component_repository_open: unable to open mca_patcher_overwrite: /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi/mca_patcher_overwrite.so: undefined symbol: mca_patcher_base_patch_t_class (ignored)
[username:04407] mca_base_component_repository_open: unable to open mca_shmem_sysv: /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi/mca_shmem_sysv.so: undefined symbol: opal_show_help (ignored)
[username:04407] mca_base_component_repository_open: unable to open mca_shmem_posix: /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi/mca_shmem_posix.so: undefined symbol: opal_shmem_base_framework (ignored)
[username:04407] mca_base_component_repository_open: unable to open mca_shmem_mmap: /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi/mca_shmem_mmap.so: undefined symbol: opal_show_help (ignored)

It looks like opal_init failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during opal_init; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here’s some additional information (which may only be relevant to an
Open MPI developer):

opal_shmem_base_select failed
--> Returned value -1 instead of OPAL_SUCCESS
