In JMeter, HTTP request executed from command line fails, but passes in GUI mode - command-line

I have multiple HTTP requests under a Thread Group that were always passing until yesterday, whether executed in GUI or command-line mode on my Mac system.
Now, when executing in non-GUI (command-line) mode, one URL (launching the home page) always fails when run on the slave systems from the master system,
but it works when executed on the master system itself.
I was trying some changes in jmeter.properties; I am not sure whether that has anything to do with the error I now face.
My command-line instruction is as below:
sh Jmeter.sh -n -t R3Performance_Fragment.jmx -G ucount=5 -l Results/r1.csv -R 192.168.X.XX,192.168.X.XX
Not sure if I am missing something here, please let me know.
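For reference, a property passed with -G is sent to the remote (slave) servers listed with -R, and is read inside the test plan with the __P function; if ucount drives the Thread Group's user count, the reference would look something like this (the default value of 1 is an assumption):
${__P(ucount,1)}
By contrast, -J would set the property only on the controlling (master) JMeter.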

Related

Rundeck: see what is actually executed on the commandline

I'm just getting started with Rundeck and trying to find out how it works.
I created a simple job that should install some packages on the remote node from a pre-selected list (Option).
When I select more than one option, the command fails. I want to find out why it fails, but (even with debug mode enabled) I can see nowhere which command is actually being executed on the remote node.
My command looks like yum install -y "${option.package}" and the unexpected response is e.g.: no package [selected options] available ... I have selected (space) as the delimiter.
How can I see what is executed on the remote host?
Update:
In the meantime I found out why my options did not work as expected; I had to use the unquoted variant for the command line. But the main question still stays the same ...
Right now the only way to see the exact executed command is to run the job in debug mode. Just select "Run with Debug output" and you can see the command dispatched in the middle of the execution output.
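For illustration, here is the difference the quoting makes to yum when two options are selected (the package names vim and htop are hypothetical, with a space delimiter):
yum install -y "vim htop"   -> yum sees a single argument and reports "No package vim htop available"
yum install -y vim htop     -> yum sees two separate arguments and resolves both packages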

SFTP from web service through Cygwin fails

I have a web page running on Apache which uses a mature set of Perl files for monitoring our workplace servers and applications. One of those tests goes through Cygwin's SFTP, lists files there, and assesses them.
The problem I have is with SFTP itself: when I run part of the test manually from cmd as D:\cygwin\bin\bash.exe -c "/usr/bin/sftp -oIdentityFile=[privateKeyPath] -oStrictHostKeyChecking=no -b /cygdrive/d/WD/temp/list_SFTP.sh [user]@[hostname]", or invoke the very same set of Perl files from the command line, it works OK (returns the list of files as it should). When exactly the same code is run through the web page, it fails quickly and does not tell me anything. The only thing I have is error code 255 and "Connection closed" - no error stream, no verbose output, nothing, no matter what way I have used to capture an error.
To cut a long story short, the culprit was the HOME path.
When run manually, either directly from cmd or through Perl, D:\cygwin\bin\bash.exe -c "env" would report HOME as HOME=/cygdrive/c/Users/[username]/, BUT the same command, when run through the web page, reports HOME=/ i.e. root, apparently losing the home somewhere along the way.
With this knowledge the solution is simple: prepend the SFTP command with the proper home path (e.g. D:\cygwin\bin\bash.exe -c "export HOME=/cygdrive/c/Users/%USERNAME%/ ; /usr/bin/sftp -oIdentityFile=[privateKeyPath] -oStrictHostKeyChecking=no -b /cygdrive/d/WD/temp/list_SFTP.sh [user]@[hostname]") and you are good to go.
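If the inline export gets unwieldy, the same fix can be kept in a small wrapper script on the Cygwin side that the web-facing Perl calls instead. This is only a sketch following the paths in the question; the wrapper filename and its positional arguments are assumptions:
#!/bin/bash
# /cygdrive/d/WD/temp/list_sftp_wrapper.sh - set HOME explicitly, then run the batch SFTP
# usage: list_sftp_wrapper.sh <windows username> <private key path> <user@hostname>
export HOME=/cygdrive/c/Users/$1/
exec /usr/bin/sftp -oIdentityFile="$2" -oStrictHostKeyChecking=no -b /cygdrive/d/WD/temp/list_SFTP.sh "$3"
It would be called from the web side as D:\cygwin\bin\bash.exe /cygdrive/d/WD/temp/list_sftp_wrapper.sh [username] [privateKeyPath] [user]@[hostname].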

Perl script file run manually but not in crontab

I have a Perl script file that was running fine in crontab, but suddenly it stopped running without any modification.
cd /home/user/public_html/crons && ./script.pl 2>&1 >/dev/null
The top of the script file is #!/usr/bin/perl -X
The output expected from this script is changes in the database.
I have another script file with the same modification and it still works fine.
When I run the file in the browser it works fine and executes all lines without any problem.
I tried the full path /usr/bin/perl but it didn't work.
I tried perl at the beginning but it didn't work.
I ran the command from SSH using PuTTY but nothing happened.
I checked the log file /var/log/cron but there are no errors at all.
I created a temporary log file with cd /home/user/public_html/crons && ./script.pl > /tmp/temp.log 2>&1 to see the errors, but the log is empty.
Here is the solution:
I found the issue: there was a stuck process for the same cron file, so I killed this process and it's fixed.
You can find your file's process like this:
ps aux | grep 'your cron file here'
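Once the stuck process is cleared, one way to keep the same pile-up from recurring is to wrap the cron entry in flock (the util-linux locking helper); the schedule and lock-file path below are assumptions:
*/10 * * * * flock -n /tmp/script.pl.lock -c 'cd /home/user/public_html/crons && ./script.pl'
With -n, a new run exits immediately if the previous one still holds the lock instead of stacking up behind it.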
This is a really common antipattern people seem to tend toward with cron.
Cron sends you an email with the output of your script, if it generates any output. People often redirect output to /dev/null to prevent cron from sending the email. This is bad because now the output of your script is lost entirely. Even if the script has some built-in logging, it might generate errors before it gets the log file opened and those are lost. It also might crash in a way that doesn't get written to the logging mechanism.
At a bare minimum, you should just remove 2>&1 >/dev/null to start receiving the email (and also test your mail setup using a temporary cron job like 1 * * * * echo "Test").
The next better solution is to change it to >> /var/log/myscript/current.log, then also set up something to rotate the log files (like logrotate), and make sure to create that directory with permissions that allow the script's user to write to it. By redirecting only STDOUT of the script, any errors or warnings it writes to STDERR cause you to get an email, and if there are no errors/warnings the output goes to the log file and no email gets sent.
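A concrete sketch of that crontab entry (the schedule is an assumption; the paths follow the question):
*/10 * * * * cd /home/user/public_html/crons && ./script.pl >> /var/log/myscript/current.log
STDOUT is appended to the log, while STDERR is left alone, so warnings and errors still arrive as cron mail.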
Neither of those changes solve the root problem though, which is that when cron runs your script it does so with a different environment than you have on the command line. What you really want is a way to run the script with a consistent environment, and log it. The "ultimate solution" is to define your task in some kind of service manager, and then use cron to occasionally start it. For instance, you could use systemd and define a service that doesn't restart, then use systemctl start my_custom.service in your cron job. Now you can test independent of cron, and your tests will have the same exact environment, and be logged by the service manager. As extra bonuses, you are protected from accidentally running your script twice at once, and you get a clean way to stop a running cron job without the danger of stale pid files.
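A minimal sketch of that arrangement, assuming a systemd-based distro (the unit name, user, schedule, and description are invented for illustration):

[Unit]
Description=Run script.pl once, started by cron

[Service]
Type=oneshot
User=user
ExecStart=/home/user/public_html/crons/script.pl

Save it as /etc/systemd/system/my_custom.service, and the crontab entry becomes
0 2 * * * systemctl start my_custom.service
The script's output is then captured by the journal and can be read with journalctl -u my_custom.service.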
I don't particularly advocate systemd myself, but thankfully there are lots of alternatives:
Runit: http://smarden.org/runit/runsvdir.8.html
S6: https://skarnet.org/software/s6/
Perp: http://b0llix.net/perp/site.cgi?page=perpd.8
(but installing and configuring a service manager is a bigger task than just using systemd if your distro is based on systemd) Each of these allows you to define a service that doesn't restart. Then you use a shell command to issue a "run once" directive to the supervisor, which runs the task as a child. Now you can easily launch the jobs yourself and see all the errors in the log, and then add that command to the crontab and know that it will run identically when cron starts it.
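For example, with runit the "run once" directive mentioned above is issued like this (the service directory name is an assumption):
sv once /etc/service/my_custom_job
runsv then starts the service's ./run script as a child, and does not restart it when it exits.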
Back to your original problem: once you get some logging, you are likely to discover it is a permission problem or an upgraded module in the system perl.

Substitute user with long command doesn't work

I'm having trouble starting a service as a specific user (under Ubuntu 14.04) and I'm unsure what the problem is. I use the following command to autostart a jar file on startup:
nohup ${JAVA_EXEC} -jar ${MICROSERVICE_HOME}/bin/${MICROSERVICE_JAR} server ${MICROSERVICE_CONF} 2>> /dev/null &
That works perfectly, therefore there is no problem with the variables and so on. Well, this script gets executed by the actual user, which in this case is root. Since I don't want to take any risks, I want to execute it as a specific (already existing) user. Normally my approach would be to change the command to:
nohup su some_user -c "${JAVA_EXEC} -jar ${MICROSERVICE_HOME}/bin/${MICROSERVICE_JAR} server ${MICROSERVICE_CONF}"
But this doesn't work. I don't get any error messages (of course I left out the redirection of stderr for test purposes) and the nohup.out is empty.
I have already tried different versions, e.g. replacing the double quotes with single quotes and escaping the "$" inside the command. According to this thread, it should work with this syntax.
None of the solutions in that thread work. E.g.:
su some_user -c "nohup ${JAVA_EXEC} -jar ${MICROSERVICE_HOME}/bin/${MICROSERVICE_JAR} server ${MICROSERVICE_CONF}" -> doesn't work
nohup runuser some_user -c "nohup ${JAVA_EXEC} -jar ${MICROSERVICE_HOME}/bin/${MICROSERVICE_JAR} server ${MICROSERVICE_CONF}" -> doesn't work (the runuser command doesn't exist).
What do I miss?
Any help is very appreciated!

Unable to take user input in Perl

I am having a strange issue. I have written a script which basically runs a Perl script on a remote server using SSH.
This script works fine, but after completion of the above operation it asks the user to choose the next operation.
It shows the options in the command prompt, but when I give any input it is not shown on the screen, and even after hitting Enter it remains the same.
I am not sure what the exact issue is, but it seems there is some issue with the ssh command, because if I comment out the ssh command it works fine.
OPERATION:
print "1: run the script in remote server \n2: Exit\n\nEnter your choice:";
my $input=<STDIN>;
chomp($input);
..........
sub run_script()
{
    # run the script on the remote server over SSH, discarding its output
    my $com = "sshg3.exe server -q --user=user --password=pass -exec script >/dev/null";
    system("$com");
    goto OPERATION;
}
After the ssh script completes, the screen shows:
1: run remote script
2: exit
Enter your choice:
but when I give any input it is not displayed on the screen unless I exit using Ctrl-C.
Can anyone please help with what the issue might be here?
One of the classic gotchas with ssh is this: it normally runs interactively, and as such will attach to STDIN by default.
This can result in STDIN being consumed by ssh rather than your script.
Try it with ssh -n instead.
If the -n option is not available to you, you can instead redirect the script's standard input from the null device when building the command.
Try this one; it might work for you:
system("$com < NUL");
(That is the Windows null device; on a Unix shell the equivalent would be < /dev/null.)
As per https://support.ssh.com/manuals/client-user/62/sshg3.html, there is an option for redirecting input: use --dev-null (*nix) or --null (Windows).
-n, --dev-null (Unix), -n, --null (Windows)
Redirects input from /dev/null (Unix) and from NUL (Windows).
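Applied to the command built in run_script(), the fix would look something like this (assuming the Windows sshg3 client, hence --null; on Unix the equivalent is --dev-null or -n):
sshg3.exe --null server -q --user=user --password=pass -exec script >/dev/null
With input coming from NUL instead of the terminal, sshg3 no longer swallows the STDIN that the "Enter your choice:" prompt is waiting on.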