How to create script which activates virtualenv? - virtualenv

I'd like to have a script which activates a particular virtualenv.
I did this:
file: go.sh
#!/bin/bash
source environments/admin2/bin/activate
but after executing it, the virtualenv is not activated.
Perhaps it dies after the script finishes. I need a script that can do the activation in the current bash/zsh, because right now I'm forced into this ugly routine.
After signing in to the ssh session I do:
cat go.sh
and then copy-paste the result O_o

Try executing go.sh this way:
source go.sh
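If you need this often, a small function in your ~/.bashrc or ~/.zshrc (the function name and absolute path below are only examples) gives you a one-word command; because shell functions run in the current shell, the activation sticks:
# example helper; adjust the path to wherever your environments live
workon_admin2() {
    source "$HOME/environments/admin2/bin/activate"
}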

Related

Perl script file run manually but not in crontab

I have a Perl script file that was running fine in crontab, but suddenly it stopped running without any modification.
cd /home/user/public_html/crons && ./script.pl 2>&1 >/dev/null
The top of the script file is #!/usr/bin/perl -X
The expected output from this script is changes in the database.
I have another script file with the same modifications and it still works fine.
When I run the file in the browser it works fine and executes all lines without any problem.
I tried the full path /usr/bin/perl but it didn't work.
I tried putting perl at the beginning but it didn't work.
I ran the command over SSH using PuTTY but nothing happened.
I checked the log file /var/log/cron but there were no errors at all.
I created a temporary log file with cd /home/user/public_html/crons && ./script.pl > /tmp/temp.log 2>&1 to see the errors, but the log is empty.
Here is the solution:
I found the issue. There was a stuck process for the same cron file, so I killed this process and it's fixed.
You can find your script's process like this:
ps aux | grep 'your cron file here'
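Not part of the answer above, but a common way to keep a stuck run from silently blocking future ones is to have the cron command take a lock with the standard flock utility and give up if a previous run still holds it (a sketch, reusing the paths from the question):
cd /home/user/public_html/crons && flock -n /tmp/script.pl.lock ./script.pl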
This is a really common antipattern people seem to tend toward with cron.
Cron sends you an email with the output of your script, if it generates any output. People often redirect output to /dev/null to prevent cron from sending the email. This is bad because now the output of your script is lost entirely. Even if the script has some built-in logging, it might generate errors before it gets the log file opened and those are lost. It also might crash in a way that doesn't get written to the logging mechanism.
At a bare minimum, you should just remove 2>&1 >/dev/null to start receiving the email. (And also, test your mail setup using a temporary cron job like 1 * * * * echo "Test".)
The next better solution is to change it to >> /var/log/myscript/current.log, set up something to rotate the log files (like logrotate), and make sure to create that directory with permissions that let the script's user write to it. By only redirecting STDOUT of the script, any errors or warnings it writes to STDERR cause you to get an email, and if there are no errors/warnings the output goes to the log file and no email gets sent.
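For example, the entry from the question might become something like this (the log path is only an illustration; use whatever directory you set up):
cd /home/user/public_html/crons && ./script.pl >> /var/log/myscript/current.log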
Neither of those changes solves the root problem though, which is that when cron runs your script it does so with a different environment than you have on the command line. What you really want is a way to run the script with a consistent environment, and log it. The "ultimate solution" is to define your task in some kind of service manager, and then use cron to occasionally start it. For instance, you could use systemd and define a service that doesn't restart, then use systemctl start my_custom.service in your cron job. Now you can test independently of cron, and your tests will have the same exact environment, and be logged by the service manager. As extra bonuses, you are protected from accidentally running your script twice at once, and you get a clean way to stop a running cron job without the danger of stale pid files.
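A minimal sketch of that arrangement (the unit name, script path, and schedule are assumptions, not taken from the question):
# /etc/systemd/system/my_custom.service
[Unit]
Description=Cron task run under a service manager

[Service]
Type=oneshot
ExecStart=/home/user/public_html/crons/script.pl
and the crontab entry then only triggers the unit:
0 3 * * * systemctl start my_custom.service
Because the unit is Type=oneshot with no Restart= setting, it runs once per start, and its output lands in the journal where journalctl -u my_custom.service can read it.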
I don't particularly advocate systemd myself, but thankfully there are lots of alternatives:
Runit : http://smarden.org/runit/runsvdir.8.html
S6 : https://skarnet.org/software/s6/
Perp : http://b0llix.net/perp/site.cgi?page=perpd.8
(but installing and configuring a service manager is a bigger task than just using systemd if your distro is based on systemd) Each of these allows you to define a service that doesn't restart. Then you use a shell command to issue a "run once" directive to the supervisor, which runs the task as a child. Now you can easily launch the jobs yourself and see all the errors in the log, and then add that command to the crontab and know that it will run identically when cron starts it.
Back to your original problem: once you get some logging, you are likely to discover it is a permission problem or an upgraded module in the system perl.

Execute User Function on Remote Server

When I log into the server, I can execute a user-defined command, runfw, from the fish_user_key_bindings.fish file and it runs fine. But when I try to execute
ssh user@myipaddress "cd ~/mydir; runfw"
it replies unknown command "runfw". Of course I can source the key bindings in the command like this:
ssh user@myipaddress "cd ~/mydir; source ~/.config/fish/functions/fish_user_key_bindings.fish; runfw"
then runfw works, but other issues occur. So the question is, how can I remotely execute a command with the fully loaded fish environment for the user?
When not running interactively, fish does not execute fish_user_key_bindings, which means that file never gets sourced.
When you then try to run the function, it hasn't already been defined (by sourcing fish_user_key_bindings.fish), so fish tries sourcing "runfw.fish", which doesn't exist. So it can't find the function.
So either source fish_user_key_bindings manually, or put the function into its own dedicated file called "runfw.fish".
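A sketch of the second option, in fish syntax (this assumes the runfw body is self-contained and can simply be copied out of the key-bindings file):
# ~/.config/fish/functions/runfw.fish
function runfw
    # body of runfw, copied unchanged from fish_user_key_bindings.fish
end
With the function in a file named after itself under ~/.config/fish/functions, fish's autoloader finds it even in a non-interactive session, so ssh user@myipaddress "cd ~/mydir; runfw" works without any extra sourcing.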

Perl script works but not via CRON

I have a perl script (which syncs delicious to wp) which:
runs via the shell but
does not run via cron (and I don't get an error)
The only thing I can think of is that it reads the config file wrongly, but... it is defined via the full path (I think).
I read my config file as:
my $config = Config::Simple->import_from('/home/12345/data/scripts/delicious/wpds.ini',
                                         \my %config);
(I am hosted on mediatemple)
Does anybody have a clue?
update 1: HERE is the complete code: http://plugins.svn.wordpress.org/wordpress-23-compatible-wordpress-delicious-daily-synchronization-script/trunk/ (the only difference is that I have added the full path to the configuration file location, as above)
update 2: crossposted on https://forums.mediatemple.net/viewtopic.php?pid=31563#p31563
update 3: the full path did the trick, solved
The difference between a cron job and a job run from the shell is 'environment'. The primary difference is that your profile and the like are not run for a cron job, so any environment variables you have set in your normal shell environment are not set the same in the cron environment - no extensions to PATH, no environment variables identifying where Delicious and/or WP are hosted, etc.
Suggestion: create a cron job that simply reports the environment to a known file:
env > /home/27632/tmp/env.27632
Then see what is set in your own shell environment in comparison. Chances are, that will reveal the trouble.
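One quick way to make that comparison, assuming a bash login shell (the dump file path is the one from the suggestion above):
diff <(sort /home/27632/tmp/env.27632) <(env | sort)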
Failing that, other environmental differences are that a cron job has no terminal, and has /dev/null for input and output - so interactive stuff does not work well.
It seems the problem is not in running perl, but in locating the Config library.
You should try:
perl -e "print @INC"
and run a similar perl script in cron, and read the output.
It is possible that they differ.
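A sketch of that "similar perl script in cron" idea: a temporary cron entry that dumps @INC to a file you can compare against the interactive output (the schedule and output path are only placeholders):
* * * * * /usr/bin/perl -e 'print join("\n", @INC), "\n"' > /tmp/cron_inc.log 2>&1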
I suggest looking at my answer to How to simulate the environment cron executes a script with?
This is similar to Jonathan's answer but goes a bit further.
Based on your crontab, and depending on your installation, the problem might be the "perl". As others note the environment, particularly the $PATH variable, is different for cron. perl may not be in the path so you need to put the full path to perl in the cron command.
You can determine the path with the command $ type perl
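For example, if type perl reports /usr/local/bin/perl, the cron command might look like this (both paths are only placeholders for illustration):
/usr/local/bin/perl /full/path/to/your_script.pl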
I ran into the same problem ...
Perl script works but not via CRON => error: "perl: command not found"
... after an update from Plesk 12.0 to Plesk 12.5, but the existing answers were not very helpful for me.
It took some time, but then I found this thread in the Odin forum which helped me: https://talk.plesk.com/threads/scheduled-tasks-always-fail.331821/
They suggest the following:
/usr/local/psa/bin/server_pref -u -crontab-secure-shell ""
That deletes the following line from the files in /var/spool/cron/crontabs:
SHELL="/opt/psa/bin/chrootsh"
After that, my cron jobs run without any errors.
(Ubuntu 14.04 with Plesk 12.5)
If the Perl script runs fine manually, but not from crontab, then some environment path needed by a package is not getting through cron. Suppose your cron entry is like:
* 13 * * * /usr/bin/perl /home/username/public_html/cron.pl >/dev/null 2>&1
Run your command as follows:
env - /home/username/public_html/cron.pl
The output will show you the missing package. Export that package's path in your $PATH variable.

Why does my command-line not run from cron?

I have a perl script (part of the XMLTV family of "grabbers", specifically tv_grab_oztivo).
I can successfully run it like this:
/sw/bin/perl /path/to/tv_grab_oztivo --output /path/to/tv.xml
I use the full paths to everything to eliminate issues with the Working Directory. Permissions shouldn't be a problem.
So, if I run it from the Terminal (Mac OSX) it works just fine.
But when I set it to run via a cron job, nothing appears to happen at all. No output is created etc.
There isn't anything wrong with the crontab as far as I can see, because if I substitute a helloworld.pl for the actual script, it runs just fine at the right time.
So, what can I do to debug? I can see from looking at %ENV in the two cases that the environment is very different, but what other approaches can I take to debugging? How can I see the output of the cron job, which might be some kind of perl "die" message or "not found" message from the shell or whatever?
Or should I be trying to somehow give the cron version of the command the same environment as when it's running as me?
It's often because you don't get the full environment when running under cron. Best bet is to capture the output by using the command:
( /sw/bin/perl /path/to/tv_grab_oztivo ... ) >/tmp/qq 2>&1
and then have a look at /tmp/qq.
If it does turn out to be a missing environment, then you may need to put:
. ~/.profile
or something similar, into the execution chain of your cron job, such as:
( . ~/.profile ; /sw/bin/perl /path/to/tv_grab_oztivo ... ) >/tmp/qq 2>&1
If you're looking at %ENV in the two cases, I'd suggest that, as a first step in your perl script, you set %ENV to what it is in a cron job, and then try running it from the command line. You may need to exec yourself once for this to take full control:
BEGIN {
    if (exists $ENV{something_in_your_env_not_in_cron}) {
        %ENV = (...);
        exec $^X, $0, @ARGV;
    }
}
Now try running it, and seeing if there's anything you can do to debug it (including running under perl -d if required). Most likely, you'll find that you end up adding items back into %ENV one at a time until it magically starts working (LD_LIBRARY_PATH is a good one for this, but ORACLE_HOME or DB2HOME for Oracle or DB2 apps might be good choices, too). Then you can either set the variable in your script, or in the crontab.
I'd run a simple shell script by absolute path from the cron command.
Inside that script, I'd ensure that I trapped stdout and stderr to a known (or knowable) file. I'd also ensure that enough of your environment is set. On Unix, you get almost no environment set at all when you run a command via cron - I'm not sure about MacOS X. The standard culprit for problems is PATH. I have a separate .cronfile that sets my working environment enough that I usually don't have problems - that's an analogue of .profile.
On occasion if you can't figure out what's going wrong with your command line, the simplest way to fix it is to turn the whole thing into a shell script. Ideally you shouldn't have to do this, but it can be the fastest way to solve the problem.
File: /files/cron1.sh
#!/bin/sh
/sw/bin/perl /path/to/tv_grab_oztivo --output /path/to/tv.xml
And then in cron:
/files/cron1.sh
This allows you to test the script independent of cron. Remember though that your login shell runs with different environment variables than cron does.
cron usually captures the output of stdout and stderr and e-mails any output to the crontab owner.
Did you double check your crontab entry to make sure it's valid and will execute at the right time?
Make sure that the script does not need any environment variables set. Otherwise wrap it in another (bash) script, where you can set the environment variables that the other script expects.

Why can't my Perl script load a module when run by cron?

I have a bunch of Perl scripts that all run fine, yet need to have use Plibdata; up top.
I set up a cron job that runs (I get the confirmation email from root) and it spits back the following error message:
Can't locate Plibdata.pm in @INC (@INC contains: /install/lib /opt/perl58/lib/5.8.8/IA64.ARCHREV_0-thread-multi /opt/perl58/lib/5.8.8 /opt/perl58/lib/site_perl/5.8.8/IA64.ARCHREV_0-thread-multi /opt/perl58/lib/site_perl/5.8.8 /opt/perl58/lib/site_perl .) at ./x line 5.
BEGIN failed--compilation aborted at ./x line 5.
Line 5 is... you guessed it.... use Plibdata;
I am also attempting to set the environment as such:
use lib "$ENV{CARSPATH}/install/lib";
so maybe if I found the location of this plibdata, I could explicitly direct it that way?
My cron commands will be executed using /usr/bin/sh, according to crontab...
Any suggestions?
This script works from the command line.
You don't say what Plibdata is. You also don't state if this works at your command prompt. I assume that it does.
Try this:
perl -MPlibdata -e 1
Assuming that doesn't spit the same error, try this:
perl -MPlibdata -le 'print $INC{"Plibdata.pm"}'
That will tell you where. (It's probably in your PERL5LIB env var if this works.) Then you can just add the appropriate "use lib" line pointing at the directory Plibdata.pm is in.
Also, be sure you're using the same perl in both locations - command line ("which perl") and in the cron job (try "BEGIN { print $^X }" at the top of your script).
Cron uses a different user env than your env when logged in. Are you able to run the script from the command line? If so, just set your env variables inside the cron above your current commands.
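For example (the values below are placeholders), crontab lets you set variables above the job lines, and they apply to every job that follows:
CARSPATH=/opt/carsi
PERL5LIB=/path/that/contains/Plibdata
30 1 * * * /home/username/x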
Clearly, Plibdata.pm is not installed in the default module paths on your system:
/install/lib /opt/perl58/lib/5.8.8/IA64.ARCHREV_0-thread-multi /opt/perl58/lib/5.8.8 /opt/perl58/lib/site_perl/5.8.8/IA64.ARCHREV_0-thread-multi /opt/perl58/lib/site_perl/5.8.8 /opt/perl58/lib/site_perl
You have three choices:
Install Plibdata.pm in a known Perl system path (site_perl is the classic option).
Make the PERL5LIB shell environment (or the equivalent command line -I option for Perl) include the installation path of the module (a short sketch of this follows after the list).
Use use lib in your script. Remember that the use lib action is done at compile time, so your variable in the path may not be initialised. Try using the variable in a BEGIN block like this:
my $env;
BEGIN {
    $env = $ENV{CARSPATH};
}
use lib "$env/install/lib";
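For option 2 above, a sketch (the module directory is a placeholder for wherever Plibdata.pm actually lives):
# via the -I switch:
perl -I/path/that/contains/Plibdata ./x
# or via the environment:
PERL5LIB=/path/that/contains/Plibdata ./x
Either form works from the command line; in a crontab the PERL5LIB form is usually the more convenient one.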
Running your program from a wrapper script as others have suggested is probably my preferred method, but there may be a few other solutions:
If you're using a modern cron you may be able to do something like this in your crontab entry:
* * * * * CARSPATH=/opt/carsi x
replacing the asterisks with the appropriate schedule designators.
This will set CARSPATH for the x process and allow the use lib statement that uses the environment variable to work.
You can also, depending on your shell and cron implementation, store your environment setup in a file and do something like:
* * * * * source specialenv.sh && x
Where specialenv.sh contains lines like (for bash)
export CARSPATH=/opt/carsi
You may also be able to set environment variables directly in the crontab, should you choose to do so.
cron does not setup an environment for you when it runs your code, so the environment variable $CARSPATH does not exist. I suggest only running shell scripts from cron, setting the environment inside of the shell script and then running the program you really wanted to run.
example wrapper script:
#!/bin/bash
source ~username/.bash_profile
cd ~username
./script.pl
If you are using ksh or sh you may need to say
#!/bin/sh
. ~username/.profile
cd ~username
./script.pl
Remember to replace username with your username on the system. Also, if the script is not in your home directory you will want to execute it with the path to it, not ./.
You use source, or dot space (". "), to load a given shell script in the current shell. You load it in the current shell so that any environment settings stay with the current shell.
~/.bash_profile, ~/.bashrc, ~/.profile, /etc/profile, etc. are all files that commonly hold your environment setup. Which one you are using depends heavily on your OS and who set it up.
Although it's not an 'answer', I resolved this issue by using DBI instead of Plibdata.
Which is kind of milky because now I will have to change a couple of scripts around... ahhhhh I wish there was something I could do to make Plibdata work.
I'm still going to try Chas. Owens' answer to see if that works.
Didn't work for me... "interpreter "/bin/bash" not found". Maybe it would work for people who have that interpreter.
* * * * * CARSPATH=/opt/carsi ./x
works