How to kill a process redirected to /dev/null - perl

I am trying to start the execution of monitor.pl using the following pipe mechanism:
$cpid = open($fh, '-|', "./monitor.pl >/dev/null") or die "can not open pipe\n";
The output of monitor.pl is redirected to /dev/null.
The problem I am facing is that I am not able to kill the process even after using the following code:
kill ('INT', $cpid) if defined $cpid;
close $fh if defined $fh;
Can anyone suggest how to kill the process monitor.pl >/dev/null?

/dev/null is not a process; it is a special file.
To kill the command
monitor.pl >/dev/null
find its PID and kill it:
ps -aef | grep monitor.pl
kill -9 <PID of the process>
These are shell commands, but you can easily adapt them in Perl code.
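A rough Perl equivalent of those commands might look like this (a sketch; it assumes the script shows up in the ps output as monitor.pl and, like the shell version, it signals every matching PID):
# Find every PID whose ps line mentions monitor.pl and kill it.
# In `ps -aef` output the PID is the second column.
my @pids = map  { (split ' ')[1] }
           grep { /monitor\.pl/ }
           `ps -aef`;
kill 'KILL', @pids if @pids;   # same effect as kill -9 <PID>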

Related

Perl script is returning incorrect output of zombie process on Linux box

my $threshold = 5;
$number_of_defuncts = `ps -ef | grep defunct |grep -v grep|wc -l`;
if ( $number_of_defuncts > $threshold )
{
print("number of defunct is [$number_of_defuncts] \n");
}
When checked manually via the ps command, zombie processes are always zero, but the Perl script gives an erroneous output of 7, 8 or a similarly high number.
(Linux only)
my $zombie_count = do { local *ARGV; @ARGV = </proc/[0-9]*/stat>; grep /Z[^)]*$/, <> };
Just grepping for defunct in the ps output is broken, because a process may put defunct in its command line just to break your script. More robust (but still not portable [1]) solutions are ps -eo state | grep Z or ps -eo s | grep Z.
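In Perl, that more robust check might look like this (a sketch; it assumes a procps-style ps that accepts -eo state, which is Linux-specific):
# Count processes whose state column is Z (zombie); the header line
# printed by ps never starts with Z, so it is not counted.
my $zombie_count = grep { /^Z/ } `ps -eo state`;
print "number of defunct is [$zombie_count]\n" if $zombie_count > 5;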
In your case, your perl script is probably creating the extra zombies, which disappear when it terminates. Unlike the shell, perl will not greedily reap its children as soon as they die; it's up to you to wait() for them, either directly or indirectly:
$ perl -e 'my $pid = open my $fh, "echo yup|"; exec "ps", $pid'
PID TTY STAT TIME COMMAND
6840 pts/11 Z+ 0:00 [echo] <defunct>
$ perl -e 'my $pid = open my $fh, "echo yup|"; undef $fh; exec "ps", $pid'
PID TTY STAT TIME COMMAND
$ perl -e 'my $pid = open my $fh, "echo yup|"; wait; exec "ps", $pid'
PID TTY STAT TIME COMMAND
$ perl -e 'my $pid = open FH, "echo yup|"; close FH; exec "ps", $pid'
PID TTY STAT TIME COMMAND
[1] no, ps -ef is not portable, either.

How to stop stunnel in linux server(using terminal only), other than killing pid

We have configured stunnel properly on Ubuntu 16.04, and it starts properly; our application is receiving its data from the stunnel server. However, I cannot find a proper way to stop stunnel. I tried killing the PID of stunnel, but killing the PID is not a proper way to stop it.
Thanks
There could be several stunnel processes:
[root@someserver ~]# ps aux | grep stunnel | grep -v grep | awk '{print $2}'
13527
13528
13529
13530
13531
13532
The following bash one-liner can kill them all:
if kill $(ps aux | grep stunnel | grep -v grep | awk '{print $2}'); then echo "Done"; else echo "No ps left"; fi
Killing the PID sounds pretty bad, but it is the common way to stop processes on Linux. "Kill" is just another name for "send a signal". If you issue kill $pid, a SIGTERM is sent to that process.
The process can then handle the signal and perform a clean shutdown. This is also the way many programs implement a configuration reloading functionality, they often use SIGHUP for that (kill -SIGHUP $pid).
So, as long as you don't use kill -SIGKILL $pid (or in short: kill -9 $pid), the program can handle the signal and shut down gracefully.
More about signals on linux: https://en.wikipedia.org/wiki/Signal_(IPC)#List_of_signals

Can we create bash instance in perl script?

I am trying to use Perl to create a process running bash and then create a file sample.txt, but after the bash command I can't see any output on the console or any sample.txt file in the directory. Can somebody help me fix the following code?
my $var = `bash -l`;
system($var);
print "Done!";
my $filename = 'sample.txt';
open(my $fh, '>', $filename) or die "Could not open file '$filename' $!";
chmod(0777, "sample.txt");
print $fh "hello";
close $fh;
print "Done 1!..";
Bash's -l argument makes it a login shell and, with no command to run, it stays interactive. Running:
perl -e 'print `bash -l`'
on its own leaves the bash process bound to stdin interactively, but the subprocess's output is captured by perl and only printed when bash exits, which it will only do when you press Control-D, issue exit or logout, etc.
You probably wanted to start with $var = 'bash -l';. That will start bash interactively at first and, when you exit, continue with the remainder of the program. To me it's unusual to want to do this, and I expect you should instead give bash something to run that exits on its own, probably with the -c argument.
Replacing your first two lines of code with:
system("bash", "-c", "echo Hello World!");
accomplishes this and the remainder of the program executes normally. I'm not sure what you wanted bash to do for you, however; an example case like this would be better accomplished with just
system("echo", "Hello World!"); or simply print "Hello World!";

How can I get the process ID of a UNIX command I am triggering in a Perl script?

I am triggering a UNIX command in a Perl script.
I need the process ID of the UNIX command.
For example, if I trigger the UNIX command below:
# padv -s adv.cfg > adv.out &
[1] 4550
My process ID is 4550.
# ps -ef | grep padv
root 4550 2810 0 16:28 pts/5 00:00:00 padv -s adv.cfg
root 4639 2810 0 16:29 pts/5 00:00:00 grep padv
How to capture that process ID in my Perl Script?
For example, I am triggering my command in the Perl script like this:
#!/usr/bin/perl
use strict;
use warnings;
qx(padv -s adv.cfg > adv.out &);
You could use open()
Open returns nonzero on success, the undefined value otherwise. If the open involved a pipe, the return value happens to be the pid of the subprocess.
my $pid = open(my $ph, "-|", "padv -s adv.cfg > adv.out") or die $!;
Or read the output from the $ph file handle instead of using an output redirect:
my $pid = open(my $ph, "-|", "padv -s adv.cfg") or die $!;
Call fork to create a child process. The process ID of the child process is returned to the parent process. The child process can then call exec to execute the program you want.
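A minimal sketch of that approach, assuming padv and adv.cfg as in your example:
use strict;
use warnings;

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # child: redirect STDOUT to adv.out, then replace this process with padv
    open(STDOUT, '>', 'adv.out') or die "cannot open adv.out: $!";
    exec('padv', '-s', 'adv.cfg') or die "exec padv failed: $!";
}

# parent: $pid is padv's process ID
print "started padv with PID $pid\n";
# later, for example: kill 'TERM', $pid; waitpid($pid, 0);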

Open file in perl with sudo

I'd like to write data to a file, but the file handle should be opened with sudo, or else I get a permission denied error. It looks like something like the following is not possible in Perl?
sudo open (FH, "> $filename") or die "$!\n";
sudo is a Linux command, not a Perl function. You can run the whole Perl script with sudo (sudo perl script.pl), or you can change your user ID in Perl by assigning to the $< and $> special variables (see perlvar - Perl predefined variables), which will only be possible with extra privileges anyway.
BTW, open sets $! on failure, not $#.
open(my $pipe_fh, '-|', 'sudo', 'cat', $filename) or die "Unable to open pipe: $!\n";
This creates another process, though, to work around a problem that may be better solved by running the script with the correct rights in the first place.
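A variation on the same idea for writing rather than reading, not shown in the answer above, is to pipe the data out through sudo tee (a sketch; note that tee also echoes what it writes to your script's stdout):
# Write to a root-owned file by piping the data to `sudo tee`.
open(my $pipe_fh, '|-', 'sudo', 'tee', $filename)
    or die "Unable to open pipe: $!\n";
print $pipe_fh "data to store\n";
close $pipe_fh or warn "sudo tee failed: $?\n";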