Will valgrind work for daemon programs?

Running valgrind on foreground programs is easy, but will valgrind work for daemon programs and give its output after the program executes? And how do I do that?
Thanks

Yes, valgrind will certainly work for daemon programs.
Many daemons have some sort of debug mode, for example the -X switch to apache, which causes them not to fork or go into the background. In that case the easiest way to run them under valgrind may be to use that mode, so that they stay attached to the terminal.
In other cases you will still be able to use valgrind, but you will probably want to use --log-file or one of the other logging options to send the output to a suitable location, and you may also need --trace-children to cause valgrind to follow child processes when the daemon forks.
Output that is only produced when the program ends, such as memory-leak reports, should appear as normal when the daemon is shut down.
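For example, a typical invocation might look like this (a sketch only; "mydaemon" and the log path are placeholders, not anything from the question):

valgrind --leak-check=full --trace-children=yes --log-file=/var/log/valgrind/mydaemon.%p.log ./mydaemon

The %p in the log file name expands to the process ID, so when the daemon forks, each child writes to its own log file.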

Related

How does Supervisord get process and system information?

I'm learning about Supervisord. When I look at the source code, I can't figure out how it gets information about processes, such as a process's uptime, or how it manages them. How does it start processes? Does it run specific commands through the terminal? If not, how does it do it?
Thanks!

Why are there several crond processes on CentOS?

I have a server whose CPU usage is very high, and I see many crond processes running on it. I can't understand why this occurs. Does anyone know the reason?
This is what I see when I run "ps aux | grep crond" on the server:
(screenshot of the ps output)
crond forks a process for each job it executes. In your case, it looks like several jobs are being started every five minutes. All of them, though, appear to be waiting for I/O (that's the meaning of the "D" process state in the 8th column of ps output, according to the ps man page), and thus are not contributing to CPU load.
If you want to know what's eating the CPU, start with top.
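For example, you can sort the process list by CPU usage (a sketch assuming a procps-style ps, as shipped with CentOS):

ps aux --sort=-%cpu | head -n 10

This shows the ten processes with the highest CPU usage; the STAT column is the process state mentioned above (D, R, S, and so on).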
crond is not the cause of the problem. I suggest you use top to check which processes have a high CPU cost.

Matlab process termination in Slurm

I have two questions that to me seem related:
First, is it necessary to explicitly terminate Matlab in my sbatch command? I have looked through several online slurm tutorials, and in some cases the authors include an exit command:
http://www.umbc.edu/hpcf/resources-tara-2013/how-to-run-matlab.html
And in some they don't:
http://www.buffalo.edu/ccr/support/software-resources/compilers-programming-languages/matlab/PCT.html
Second, when creating a parallel pool in a job, I almost always get the following warning:
Warning: Found 4 pre-existing communicating job(s) created by pool that are
running, and 2 communicating job(s) that are pending or queued. You can use
'delete(myCluster.Jobs)' to remove all jobs created with profile local. To
create 'myCluster' use 'myCluster = parcluster('local')'
Why is this happening, and is there any way to prevent it from happening to me, or to other users because of my jobs?
It depends on how you launch Matlab. Note that your two examples use distinct methods for running a matlab script; the first one uses the -r option
matlab -nodisplay -r "matrixmultiply, exit"
while the second one uses stdin redirection from a file
matlab < runjob.m
In the first solution, the Matlab process is left running after the script finishes, which is why the exit command is needed there. In the second solution, the Matlab process terminates because stdin closes when the end of the file is reached.
If you do not end the Matlab process yourself, Slurm will kill it when the maximum allocation time is reached, as defined by the --time option in your submission script or by the cluster's (or partition's) default value.
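For reference, a minimal submission script using the first style might look like this (a sketch; the job name, time limit, and script name are placeholders):

#!/bin/bash
#SBATCH --job-name=matlab_job
#SBATCH --time=01:00:00
matlab -nodisplay -r "myscript, exit"

The exit inside the -r string ensures Matlab terminates as soon as myscript finishes, rather than idling until the --time limit is reached.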
To avoid the warning you mention, make sure to systematically run matlabpool close at the end of your job. If you have several instances of Matlab running on the same node and a shared home directory, you will probably get the warning anyhow, as I believe the information about open Matlab pools is stored in a hidden folder in your home directory. Rebooting will probably not help, but finding those files and removing them will (be careful, though, and ask the system administrator).
To avoid your warning, you have to delete the
.matlab/local_cluster_jobs/
directory.
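For example (a sketch assuming the folder is in the default location under your home directory; double-check the path before deleting anything):

rm -rf ~/.matlab/local_cluster_jobs/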

Running command line PHP through PHP-FPM

Currently I use PHP-FPM with NGINX for front-end requests, but I also run some background processes through a long-running PHP script that uses exec to run other scripts with the command-line PHP. What I'm thinking, though, is that this would be more efficient if these were also run through PHP-FPM? Any ideas on how I would do this? Thanks.
FPM is a tool to Manage FastCGI Processes. Just shuffle the letters. While it manages long-running PHP processes, it does so only under the mental umbrella of FastCGI.
Because you're creating a background work queue, you want something designed to manage a background work queue and running processes.
Gearman is an excellent choice for the work-queue half. It's platform and language agnostic, and can scale to the heavens and back. The PECL extension works well.
For keeping those long-running processes going, take a look at Supervisor.
The two make a great duo. Check out this blog post by PHP hacker Matthew Weier O'Phinney that documents some of his exploration with Gearman and Supervisor.
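As a sketch of the Supervisor half, a hypothetical program section for a PHP worker might look like this (the program name and script path are invented for illustration):

[program:php-worker]
command=/usr/bin/php /var/www/jobs/worker.php
autostart=true
autorestart=true

With that in place, Supervisor starts the worker when it boots and restarts it whenever it exits.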
Very late to this question (4 years), but the correct answer is cgi-fcgi, which will let you pass commands and execute code in the already-in-memory php-fpm.
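A sketch of that approach, assuming php-fpm listens on a Unix socket (the socket and script paths are placeholders):

SCRIPT_FILENAME=/var/www/jobs/worker.php \
REQUEST_METHOD=GET \
cgi-fcgi -bind -connect /var/run/php-fpm.sock

This sends a single FastCGI request to the running php-fpm pool, so the code executes in the already-warm workers rather than in a fresh CLI process.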

What's a good strategy to restart downed FastCGI processes automatically?

I've got a Perl-based FastCGI app that rarely goes down. However, when it does go down, the restart is not automatic. Restarting Apache manually always does the trick, but that does not address improving the uptime of the app.
I'm thinking of using a cron job in conjunction with a script that uses WWW::Mechanize to periodically check on the app and restart it as required, as suggested by the folks at Perl Monks:
Keep FastCGI Processes Up and Running
Before I do that, I want to know if anyone knows of a better way to monitor a FastCGI process and restart it automatically when it dies, or is the method suggested above the optimal one?
Thanks.
Monit is a nice monitoring daemon that can do automatic restarts and/or notification.
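For instance, a minimal Monit rule might look like this (a sketch; the process name, pidfile, and start/stop commands are placeholders for your setup):

check process fastcgi-app with pidfile /var/run/fastcgi-app.pid
    start program = "/etc/init.d/fastcgi-app start"
    stop program = "/etc/init.d/fastcgi-app stop"

Monit checks the pidfile on each polling cycle and runs the start program if the process has died.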
How about not having the process supervised by Apache, but using a mechanism similar to the way init(8) starts getty processes? I have found daemon to be quite useful.
Most web servers already offer this as a configuration option.