MPI has only master nodes - multicore

I'm trying to use MPI on my 4-core processor.
I have followed this tutorial : http://debianclusters.org/index.php/MPICH:_Starting_a_Global_MPD_Ring
But at the end, when I run the hello.out program, I only get the server process (master node) output:
mpiexec -np 4 ./hello.out
Hello MPI from the server process!
Hello MPI from the server process!
Hello MPI from the server process!
Hello MPI from the server process!
I have searched all around the web but couldn't find any clues for this problem.
Here is my mpdtrace result :
[nls@debian] ~ $ mpd --ncpus=4 --daemon
[nls@debian] ~ $ mpdtrace -l
debian_52063 (127.0.0.1)
Shouldn't I get one trace line per core?
Thanks for your help,
Malchance

95% of the time, when you see this problem -- MPI tasks not getting the "right" rank IDs, usually all ending up as rank zero -- it means there's a mismatch in MPI libraries. Either the mpiexec doing the launching isn't from the same MPI installation as the mpicc (or whatever) used to compile the program, or the MPI libraries the child processes pick up at launch (if linked dynamically) are different from the ones intended. So I'd start by double-checking those things.
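A quick way to double-check, assuming hello.out is the binary from the question (the paths will of course differ on your system):
which mpiexec
which mpicc
mpicc -show              # MPICH-style wrapper; Open MPI uses mpicc --showme
ldd ./hello.out | grep -i mpi
If which reports binaries from two different MPI installations, or ldd shows the program loading libraries from a different MPI than the one it was compiled with, that is the mismatch.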

Related

Supervisor kills Prefect agent with SIGTERM unexpectedly

I'm using a Raspberry Pi 4, v10 (Buster).
I installed supervisor per the instructions here: http://supervisord.org/installing.html
Except I changed "pip" to "pip3" because I want to monitor processes that run under Python 3.
I'm using Prefect, and the supervisord.conf is running the program with command=/home/pi/.local/bin/prefect "agent local start" (I tried this with and without double quotes)
Looking at the supervisord.log file, it seems like the Prefect agent does start; I see the ASCII art that normally shows up when I start it from the command line. But then the log shows it was terminated by SIGTERM; not expected, WARN received SIGTERM indicating exit request.
I saw this post: Supervisor gets a SIGTERM for some reason, quits and stops all its processes, but I don't even have the 10Periodic file it references.
Anyone know why/how Supervisor processes are getting killed by SIGTERM?
It could be that your process exits immediately because your command has no API key, which is required to connect the agent to the Prefect Cloud API. Additionally, it's a best practice to always give your agents a unique label; below is an example with "raspberry" as the label.
You can also check the logs/status:
supervisorctl status
Here is a command you can try. You can also specify a working directory in your supervisor config (I'm not sure whether the environment variables are needed, but I have seen them used by other Raspberry Pi supervisor users):
[program:prefect-agent]
command=prefect agent local start -l raspberry -k YOUR_API_KEY --no-hostname-label
directory=/home/pi/.local/bin/prefect
user=pi
environment=HOME="/home/pi/.local/bin/prefect",USER="pi"
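After editing supervisord.conf, reload the configuration and check the agent status with the usual supervisorctl commands (prefect-agent is the program name from the section above):
supervisorctl reread
supervisorctl update
supervisorctl status prefect-agent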

Slurm: Why use srun inside sbatch?

In an sbatch script, you can launch programs or scripts directly (for example an executable file myapp), but in many tutorials people use srun myapp instead.
Despite reading some documentation on the topic, I do not understand the difference and when to use each of those syntaxes.
I hope this question is precise enough (1st question on SO), thanks in advance for your answers.
The srun command is used to create job 'steps'.
First, it brings better reporting of resource usage; the sstat command provides real-time resource usage for processes that are started with srun, and each step (each call to srun) is reported individually in the accounting.
Second, it can be used to pack many instances of a serial program (a program that uses only one CPU) into a single job and micro-schedule those programs inside the job allocation.
Finally, for parallel jobs, srun also plays the important role of starting the parallel program and setting up the parallel environment. It starts as many instances of the program as were requested with the --ntasks option, on the CPUs that were allocated for the job. In the case of an MPI program, it also handles the communication between the MPI library and Slurm.
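To illustrate, here is a minimal sbatch sketch (prepare_input and myapp are placeholder executables); each srun call becomes its own job step and is reported separately by sstat and sacct:
#!/bin/bash
#SBATCH --ntasks=4
srun --ntasks=1 ./prepare_input   # step 0: serial pre-processing on a single task
srun ./myapp                      # step 1: started on all 4 allocated tasks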

Matlab/Simulink: run batch of simulations in parallel?

I have to run a series of simulations and save the results. Since by default Matlab only uses one core, I wonder if it is possible to open multiple worker tasks and assign different simulation runs to them?
You could run each simulation in a separate MATLAB instance and let the OS handle the process-to-core assignment.
One master MATLAB instance could synchronize the child instances, for example by checking whether the simulation result files exist yet.
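For example, a minimal sketch from the shell, where run_sim is a hypothetical MATLAB function taking a simulation index (adapt the call to your own scripts):
# start 4 headless MATLAB instances, one per simulation; the OS spreads them over the cores
matlab -nodisplay -nosplash -r "run_sim(1); exit" &
matlab -nodisplay -nosplash -r "run_sim(2); exit" &
matlab -nodisplay -nosplash -r "run_sim(3); exit" &
matlab -nodisplay -nosplash -r "run_sim(4); exit" &
wait    # returns only after all background instances have finished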
I also have the same problem, but I did not manage to figure out how to do it in MATLAB. The MATLAB documentation was too advanced for me to work it out.
Since I am working on Ubuntu, I found a way to do it by calling the unix command from MATLAB and using GNU parallel.
So I managed to run my simulations in parallel on 4 cores.
unix('parallel --progress -j4 flow > /dev/null :::: Pool.txt','-echo')
You can find more info in this linked question:
Shell, run four processes parallel
Details of the syntax can be found at https://www.gnu.org/software/parallel/
but briefly:
--progress shows the progress status
-j4 sets the number of jobs to run in parallel
flow is the name of my simulator
/dev/null was just to keep the simulator's screen output from showing up
Pool.txt is a file I made with the required simulator input, which is basically the path to and the name of the main simulator file.
'-echo' I do not remember now what it was for :D (in MATLAB's unix function, '-echo' displays the command output in the Command Window)
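For completeness, a made-up example of what Pool.txt could look like (one simulator input path per line), followed by the same command run straight from the shell instead of through MATLAB's unix():
/home/user/sims/case01/main.dat
/home/user/sims/case02/main.dat
/home/user/sims/case03/main.dat
/home/user/sims/case04/main.dat
parallel --progress -j4 flow > /dev/null :::: Pool.txt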

MPI+OpenMP job submission script on LSF

I am very new to LSF. I have 4 nodes with 2 sockets per node. Each node has 8 cores. I have developed a hybrid MPI+OpenMP code. I am submitting the job like the following, which asks each core to perform one MPI task, so I lose the power of OpenMP.
##BSUB -n 64
I wish to submit the job so that each socket, rather than each core, runs one MPI task, so that the cores inside the socket can be used for OpenMP. How can I build a job submission script that exploits the hybrid parallelism in my code?
First of all, the BSUB sentinels have to be preceded by a single # sign, otherwise they are skipped over as regular comments.
The correct way to start a hybrid job with older LSF versions is to pass the span resource request and to request the nodes exclusively. To start a job with 8 MPI processes and 4 OpenMP threads each, you should use the following:
#BSUB -n 8
#BSUB -x
#BSUB -R "span[ptile=2]"
The parameters are as follows:
-n 8 - requests 8 slots for MPI processes
-x - requests nodes exclusively
-R "span[ptile=2]" - instructs LSF to span the job over two slots per node
You should request the nodes exclusively; otherwise LSF will schedule other jobs onto the same nodes, since only two slots per node will be used.
Then you have to set the OMP_NUM_THREADS environment variable to 4 (the number of cores per socket), tell the MPI library to pass the variable to the MPI processes, and make the library limit each MPI process to its own CPU socket. This is unfortunately very implementation-specific, e.g.:
Open MPI 1.6.x or older:
export OMP_NUM_THREADS=4
mpiexec -x OMP_NUM_THREADS --bind-to-socket --bysocket ./program.exe
Open MPI 1.7.x or newer:
export OMP_NUM_THREADS=4
mpiexec -x OMP_NUM_THREADS --bind-to socket --map-by socket ./program.exe
Intel MPI (not sure about this one as I don't use IMPI very often):
mpiexec -genv OMP_NUM_THREADS 4 -genv I_MPI_PIN 1 \
-genv I_MPI_PIN_DOMAIN socket -genv I_MPI_PIN_ORDER scatter \
./program.exe
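Putting it together for the Open MPI 1.7+ case, a minimal submission script sketch (the job name and output file are illustrative; swap the launch line for your MPI library as shown above):
#!/bin/bash
#BSUB -J hybrid_job
#BSUB -o hybrid_job.%J.out
#BSUB -n 8
#BSUB -x
#BSUB -R "span[ptile=2]"
export OMP_NUM_THREADS=4
mpiexec -x OMP_NUM_THREADS --bind-to socket --map-by socket ./program.exe
Remember to submit it with bsub < job.sh so that the #BSUB lines are actually parsed.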

Boot process "scheduler"

Where in the boot process does the "scheduler" get created and when created how can its instructions be accessed?
That depends on the OS you use, but several things should be clear:
before the first switch to userland
before any kernel threads are started or any other multi-programming (multi-tasking) is done
Obviously that is relatively early in the boot process.
What exactly do you mean by "how can its instructions be accessed?"?
The default I/O scheduler is set in the kernel config:
adrian@adrian: ~ $ grep cfq /boot/config-2.6.36.2-desktop-2mnb
CONFIG_DEFAULT_IOSCHED="cfq"
At boot time you can override it on the kernel command line, for example:
kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/sda2 elevator=deadline
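On a running system you can also inspect and change the active I/O scheduler per block device through sysfs (assuming a disk named sda; the entry shown in [brackets] is the one currently in use):
cat /sys/block/sda/queue/scheduler
echo deadline > /sys/block/sda/queue/scheduler    # switch at run time, as root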