SFTP from web service through Cygwin fails - perl

I have a web page running on Apache which uses a mature set of Perl files for monitoring our workplace servers and applications. One of those tests goes through Cygwin's SFTP, lists files there and assesses them.
The problem I have is with SFTP itself: when I run that part of the test manually from cmd as D:\cygwin\bin\bash.exe -c "/usr/bin/sftp -oIdentityFile=[privateKeyPath] -oStrictHostKeyChecking=no -b /cygdrive/d/WD/temp/list_SFTP.sh [user]@[hostname]", or invoke the very same set of Perl files from the command line, it works OK (returns the list of files as it should). When exactly the same code is run through the web page, it fails quickly and tells me nothing. All I get is error code 255 and "Connection closed" - no error stream, no verbose output, nothing, no matter what method I have used to capture an error.

To cut a long story short, the culprit was the HOME path.
When run manually, either directly from cmd or through Perl, D:\cygwin\bin\bash.exe -c "env" reports HOME as HOME=/cygdrive/c/Users/[username]/, BUT the same command, when run through the web page, reports HOME=/ (i.e. root), apparently losing the home directory somewhere along the way.
With this knowledge the solution is simple: prepend the SFTP command with the proper home path (e.g. D:\cygwin\bin\bash.exe -c "export HOME=/cygdrive/c/Users/%USERNAME%/ ; /usr/bin/sftp -oIdentityFile=[privateKeyPath] -oStrictHostKeyChecking=no -b /cygdrive/d/WD/temp/list_SFTP.sh [user]@[hostname]") and you are good to go.
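The same fix can be applied when the command is built in Perl. Here is a minimal sketch, assuming the placeholders from above and using qx as just one way to capture output and exit status:

use strict;
use warnings;

# Sketch only: [privateKeyPath], [user] and [hostname] are the same placeholders as above,
# and the web server's USERNAME may differ from your own account.
my $bash = 'D:\cygwin\bin\bash.exe';
my $home = "/cygdrive/c/Users/$ENV{USERNAME}/";
my $cmd  = qq{export HOME=$home ; /usr/bin/sftp -oIdentityFile=[privateKeyPath] }
         . qq{-oStrictHostKeyChecking=no -b /cygdrive/d/WD/temp/list_SFTP.sh [user]\@[hostname]};
my $output = qx{"$bash" -c "$cmd" 2>&1};   # capture stdout and stderr together
my $status = $? >> 8;
die "sftp failed with exit code $status:\n$output" if $status;
print $output;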

Related

Perl script file runs manually but not in crontab

I have a Perl script that was running fine in crontab, but it suddenly stopped running without any modification.
cd /home/user/public_html/crons && ./script.pl 2>&1 >/dev/null
The top of the script file is #!/usr/bin/perl -X
The output expected from this script is a set of changes in the database.
I have another script file with the same setup and it still works fine.
When I run the file in the browser it works fine and executes all lines without any problem.
I tried the full path /usr/bin/perl but it didn't work.
I tried putting perl at the beginning of the command but it didn't work.
I ran the command over SSH using PuTTY but nothing happened.
I checked the log file /var/log/cron but there were no errors at all.
I created a temporary log file (cd /home/user/public_html/crons && ./script.pl > /tmp/temp.log 2>&1) to see the errors, but the log is empty.
Here is the solution:
I found the issue: there was a stuck process for the same cron file, so I killed that process and it's fixed.
You can find your file's process like this:
ps aux | grep 'your cron file here'
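To keep a hung run from blocking future ones, it can also help to make the script refuse to start while a previous instance is still alive. A minimal sketch using flock (the lock file path is an assumption):

use strict;
use warnings;
use Fcntl qw(LOCK_EX LOCK_NB);

# Hypothetical lock file; pick a path the script's user can write to.
open my $lock, '>', '/tmp/script.pl.lock' or die "Cannot open lock file: $!";
flock($lock, LOCK_EX | LOCK_NB)
    or die "Previous run is still active, exiting\n";
# ... rest of the script; the lock is released automatically when the process exits.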
This is a really common antipattern people seem to tend toward with cron.
Cron sends you an email with the output of your script, if it generates any output. People often redirect output to /dev/null to prevent cron from sending the email. This is bad because now the output of your script is lost entirely. Even if the script has some built-in logging, it might generate errors before it gets the log file opened and those are lost. It also might crash in a way that doesn't get written to the logging mechanism.
At a bare minimum, you should just remove 2>&1 >/dev/null to start receiving the email (and also test your mail setup using a temporary cron job like 1 * * * * echo "Test").
The next better solution is to change it to >> /var/log/myscript/current.log, then also set up something to rotate the log files (like logrotate), and make sure to create that directory with permissions that let the script's user write to it. By redirecting only STDOUT of the script, any errors or warnings it writes to STDERR cause you to get an email, and if there are no errors or warnings the output goes to the log file and no email gets sent.
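For example, the crontab entry from the question might become something like this (the schedule and log path are illustrative):

0 * * * * cd /home/user/public_html/crons && ./script.pl >> /var/log/myscript/current.log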
Neither of those changes solve the root problem though, which is that when cron runs your script it does so with a different environment than you have on the command line. What you really want is a way to run the script with a consistent environment, and log it. The "ultimate solution" is to define your task in some kind of service manager, and then use cron to occasionally start it. For instance, you could use systemd and define a service that doesn't restart, then use systemctl start my_custom.service in your cron job. Now you can test independent of cron, and your tests will have the same exact environment, and be logged by the service manager. As extra bonuses, you are protected from accidentally running your script twice at once, and you get a clean way to stop a running cron job without the danger of stale pid files.
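As an illustration only (the unit name, user, and paths are assumptions), such a service could look like this:

# /etc/systemd/system/my_custom.service
[Unit]
Description=Site maintenance task, started periodically by cron

[Service]
Type=oneshot
User=user
WorkingDirectory=/home/user/public_html/crons
ExecStart=/home/user/public_html/crons/script.pl

# crontab entry
0 * * * * systemctl start my_custom.service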
I don't particularly advocate systemd myself, but thankfully there are lots of alternatives:
Runit : http://smarden.org/runit/runsvdir.8.html
S6 : https://skarnet.org/software/s6/
Perp : http://b0llix.net/perp/site.cgi?page=perpd.8
(but installing and configuring a service manager is a bigger task than just using systemd if your distro is based on systemd) Each of these allows you to define a service that doesn't restart. Then you use a shell command to issue a "run once" directive to the supervisor, which runs the task as a child. Now you can easily launch the jobs yourself and see all the errors in the log, and then add that command to the crontab and know that it will run identically when cron starts it.
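With runit, for example, the cron entry could issue the run-once directive through sv (the service name here is hypothetical):

0 * * * * sv once my_custom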
Back to your original problem: once you get some logging, you are likely to discover it is a permission problem or an upgraded module in the system perl.

Get user machine's current working directory from Perl CGI

I am trying to get the current working directory path using Perl.
When I execute from Ubuntu: root@ubuntu:/var/test/geek# firefox http://localhost/test.html, I get /var/cgi-bin as output in the Perl CGI page instead of /var/test/geek.
The Perl code used:
use Cwd;
my $pwd = cwd();
# bla bla
print "<h1>$pwd</h1>";
The above code gives the path of test.pl, not the user's working directory.
Edit: When I run the script alone from the terminal it works fine. For example:
root@ubuntu:/var/test/geek# /var/cgi-bin/test.pl
I get /var/test/geek. But when I call the script from the HTML page using a submit button, it gives the path of the Perl script.
Each process has its own working directory that it inherits from its parent when it gets created.
cwd() returns the current process's working directory.
For a CGI script, the browser doesn't pass its working directory to the server as part of the request. To obtain that, you need to have code running on the client system that submits it. That might be an application that the user downloads, or possibly, but unlikely, some in-browser code, like JavaScript or a Java applet (this info is likely hidden from in-browser code for security reasons, though).
(The rest assumes Linux, it will likely differ on other operating systems)
The part below assumes that you are looking for the working directory of a user on the server:
In order to get a specific user's working directory in a specific shell, you would need to identify the PID of that shell and read the working directory from the /proc/<pid>/cwd symlink (to read these, the process must belong to the user running the code, or the code must run as root, which is a bad idea for a CGI script). To get the PID of the shell, you likely need to start from the w command output, or its data source, /var/run/utmp. Sys::Utmp might be useful for this. You might then also need to retrieve a whole lot of extra info to find all the processes that might have the working directory that you are looking for.
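Once you have the PID of the user's shell, the /proc lookup itself is short. A minimal sketch (the PID is hard-coded here purely for illustration):

use strict;
use warnings;

my $pid = 12345;  # hypothetical PID of the user's shell, e.g. found via Sys::Utmp or ps
my $cwd = readlink("/proc/$pid/cwd")
    or die "Cannot read /proc/$pid/cwd: $!";
print "Working directory of $pid: $cwd\n";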
I think you are mixing up the web server and the local user. The web server has its own working directory when it runs the script, and that is the one cwd() returns.

Continue/run commands after ssh into VM

I have to take quite a few steps before I get to the file I need to be in, which is why I'm trying to set up an alias in my terminal that gets me to the file by running that alias.
The following steps are needed to arrive where I have to be:
cd Sites
vagrant ssh
cd /var/www/miniportal.billetten.dk/logs/
sudo -s
cd /etc/apache2/sites-available/
nano 25-av_miniportal.conf
Edit line 33 in that file (I guess it's possible to jump to that line)
I tried setting up an alias like this, but the problem is that it stops running the rest of the command after I SSH into Vagrant. If I manually exit Vagrant, it continues the command (and of course returns an error, because there is no such folder).
The question is: How do I make sure that everything from step 3 is executed AFTER step 2 is done logging in through SSH?
My ultimate goal is to set up an Apple Automator program that lets me put in a value that gets entered on line 33, but I'm fine with just an alias for now.
I know I asked this question a long time ago, but in the meantime I found a solution and forgot I had posted this question.
My alias in my .zshrc file looks like this:
alias changeCust='ssh -t root@192.168.56.101 "nano +32 /etc/apache2/sites-enabled/25-av_miniportal.conf && service apache2 reload"'
In other words, it SSHes into the Vagrant VM as root (it asks for my password), opens the file in nano at line 32 (or whatever line you need), and then, once the file is saved and nano exits, it reloads apache2 so the changes are applied.
Just use the one below and change the values.
alias AliasName='ssh -t root@your.ip.address.here "nano +lineNumber /path/to/file"'
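If you would rather not SSH into the VM as root directly, the same chain can probably be driven through Vagrant itself. A sketch along the lines of the question's setup (paths as in the question, untested):

alias changeCust='cd ~/Sites && vagrant ssh -c "sudo nano +33 /etc/apache2/sites-available/25-av_miniportal.conf && sudo service apache2 reload"'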

Run perl script on remote server

Is it possible to run a Perl script, which is located on a remote server, on that server from Windows? There is a job on the remote server that I want to get done every time I do something on Windows.
You have to have something listening for an instruction to run the script, and then you have to send the instruction.
There are lots of approaches you could take to that, including:
Running an SSH server and then connecting to it from an ssh client on the Windows machine
Running an HTTP server, running the script through FastCGI, and then requesting the URL for it from curl or a browser on the Windows machine
Writing a custom protocol, listening on a socket, and then writing a custom client that you run on the Windows machine
Absolutely.
You can use plink to run commands on the server from Windows, assuming the server is running sshd.
plink user@a.domain.ext echo hi
This will print "hi\n" to the standard output.
Substitute /path/to/perl/script for echo above and replace hi with any command-line arguments that the script needs.
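For example (the script path and argument are placeholders):

plink user@a.domain.ext /path/to/perl/script some_argument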
plink is available here: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
One cautionary personal note from doing this many times is that the environment in which the perl script will be run is much less complete than what you would experience when logging in via a full SSH session and running the command interactively. Many environment variables you would normally expect are unset.
For instance using "set | wc -l" in the command above produces only 39 environment variables defined, but from an interactive SSH session, there are 57 environment variables defined. You have to make sure your perl script isn't depending on an environment variable that hasn't been set. For instance, you may need to use full paths for any modules that it uses, or by using the -I flag in the shebang line, because #INC may not be what you expect it to be.
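For instance (the library path is a placeholder), the script can pin its module search path explicitly:

#!/usr/bin/perl -I/home/user/perl5/lib/perl5
use strict;
use warnings;
# or, equivalently, inside the script body:
use lib '/home/user/perl5/lib/perl5';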

Is it possible to have Perl run shell script aliases?

Is it possible to have a Perl script run shell aliases? I am running into a situation where we've got a Perl module I don't have access to modify and one of the things it does is logs into multiple servers via SSH to run some commands remotely. Sadly some of the systems (which I also don't have access to modify) have a buggy SSH server that will disconnect as soon as my system tries to send an SSH public key. I have the SSH agent running because I need it to connect to some other servers.
My initial solution was to set up an alias that turns ssh into ssh -o PubkeyAuthentication=no, but Perl runs the ssh binary it finds in the PATH instead of trying to use the alias.
It looks like the only solutions are to disable the SSH agent while I am connecting to the problem servers, or to override the Perl module that does the actual connection.
Perhaps you could put a command called ssh in your PATH, ahead of the real ssh, which runs ssh the way you want it to be run.
Alter the PATH before you run the Perl script, or use this in your .ssh/config:
Host *
    PubkeyAuthentication no
Why don't you skip the alias and just create a shell script called ssh in a directory somewhere, then change the path to put that directory before the one containing the real ssh?
I had to do this recently with iostat because the new version output a different format that a third-party product couldn't handle (it scanned the output to generate a report).
I just created an iostat shell script which called the real iostat (with hardcoded path, but you could be more sophisticated), passing the output through an awk script to massage it into the original format. Then, I changed the path for the third-party program and it started working fine.
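For the ssh case, such a wrapper can be just a couple of lines. A sketch in Perl (the install location and the real ssh path are assumptions):

#!/usr/bin/perl
# Hypothetical wrapper: save as e.g. ~/bin/ssh and put ~/bin before /usr/bin in PATH.
exec '/usr/bin/ssh', '-o', 'PubkeyAuthentication=no', @ARGV
    or die "exec /usr/bin/ssh failed: $!";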
You could declare a function in .bashrc (or .profile or whatever) with that name. It could look like this (might break):
function ssh {
    /usr/bin/ssh -o PubkeyAuthentication=no "$@"
}
But using a config file might be the best solution in your case.