Rundeck: Using node environment variables inside a scheduled job - rundeck

I have a scheduled job on Rundeck (2.6.2).
This job runs a script that needs a node environment variable (like $HOME, $USER or $PWD, or a custom one) to be available for all users on the node/nodes.
I could use job options to solve this if I wanted to trigger the job from the API (or manually; Rundeck asks me for the option), but this is a scheduled job. I can't use Options -> Default Value because the job could run on nodes with different values for this environment variable.
Is there any way to offer all/some node environment variables to Rundeck so they can be used inside scheduled jobs?
(I have thought about using Options -> Allowed Values -> Remote URL, but it's a mess. Too complicated for my requirement.)
Thanks.

The easy way in my case has been to customize /etc/rundeck/profile, adding everything I wanted into it.
Seems a pretty good solution to me.
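As a minimal sketch of that customization (MY_APP_HOME and MY_APP_ENV are hypothetical variables, not anything Rundeck defines):
# appended to /etc/rundeck/profile
export MY_APP_HOME=/opt/myapp
export MY_APP_ENV=production
Jobs executed by the Rundeck service should then inherit these variables.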

I succeeded in doing this by adding the following lines:
set -a
. /etc/environment
. /etc/profile
1) put those lines into the file /etc/rundeck/profile
2) put those lines into a script step
Remark: I'm using only script steps in my Rundeck setup, and I always put this line as the first line of each script step:
#!/usr/bin/env bash
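Putting those pieces together, a complete script step might begin like this; MY_CUSTOM_VAR is a hypothetical node-level variable standing in for whatever your nodes define in /etc/environment:
#!/usr/bin/env bash
set -a                # auto-export everything the sourced files define
. /etc/environment
. /etc/profile
set +a
echo "Running as $USER with MY_CUSTOM_VAR=$MY_CUSTOM_VAR"   # hypothetical variable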

Related

How does one get the job ID of an interactive job from an environment variable when using condor?

I usually get the job id from an environment variable:
MY_CONDOR_JOB_ID
but I don't see it set if it's an interactive job. Is there a way to set it? When I am given the resources, I can see that there is a job id for my job. Is there a way to get it?
Here is what I see when the job is submitted:
Submitting job(s).
1 job(s) submitted to cluster 4869.
Waiting for job to start...
HTCondor proper doesn't set MY_CONDOR_JOB_ID, so either your submit file or your administrator has set this up.
If your submit file contains
environment = CONDOR_JOB_ID=$(Cluster)
Then HTCondor will insert the job cluster id into the environment variable CONDOR_JOB_ID. To get this into a condor_submit -i session, you'll need to pass the name of this submit file to condor_submit. So, try putting that line into a submit file, maybe named env.sub, and run
condor_submit -i env.sub
Or, if you already have a submit file which sets this, pass the name of that submit file to condor_submit -i.
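As a concrete sketch, assuming a minimal submit file is acceptable at your site (some pools may require more attributes than this):
# create a minimal submit file, then start the interactive job with it
cat > env.sub <<'EOF'
environment = CONDOR_JOB_ID=$(Cluster)
queue
EOF
condor_submit -i env.sub
Inside the interactive session, echo $CONDOR_JOB_ID should then print the cluster id.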

Azure DevOps: How can I run PowerShell on on-prem servers after deployment of all IIS sites

How can I run a PowerShell script after all stages have completed deployment? I have currently selected a deployment group job but am not 100% sure if this is what I need. I have included the script as part of the solution that is being deployed so that it will be available on all machines. Based on what I can find in the UI, there seem to be 2 tasks that could work.
The first option would be to execute the task "PowerShell Script", but it asks for a path in the drop directory. The problem with this is that the file I am interested in is inside a zip file, and there does not seem to be a way to specify a file within the zip.
The other task I see is "PowerShell on Target Machines", which asks for a list of target machines. I am not sure what needs to be entered here, as I want to run the PowerShell script on the current machine in the deployment group. It seems like this task was intended to run PowerShell scripts from the deployment machine against another remote machine. As a result, this option does not seem to fit my use case.
The answers I have come across either talk about how to do this as part of an Azure site using something called "Kudu" (not relevant), don't answer my other questions related to these tasks, or seem out of date.
A deployment group job will run on all of the servers specified in that deployment group. Based on what you have indicated, it sounds like that is what you are looking for.
Since you indicated that the file in question is a zip, you are actually going to need to use 2 separate tasks.
Extract Files - use this to extract the zip file so that you can execute the script
Powershell script - use this to execute the script. You can set the working directory for the script to execute in if necessary (under Advanced options). Also remember that you don't have to use the file/folder selector 'helper', as it won't work in your case since the file is inside a zip. It is just used to populate the text box, which you can fill in manually, starting with the $(System.DefaultWorkingDirectory) variable and adding the necessary path to the script.
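For example, with a hypothetical artifact layout (substitute your own folder names), the script path text box could be filled in manually as:
$(System.DefaultWorkingDirectory)/drop/scripts/PostDeploy.ps1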

Can Bamboo variables be overridden by a Script task?

I'm interested in using a script task to override one of these Bamboo plan variables for subsequent tasks, but I'm not sure if it's possible or how to go about doing so. It appears that Bamboo allows for various levels of variable overrides for build plans, all the way down to particular branches; however, they all seem to require defining the values within the Bamboo UI. The problem with this is that it requires admin privileges to modify these variables, whereas some of them need to be modified by developers who do not have this level of access. As a solution, I want to be able to specify some variable overrides in files that exist in the source repository itself.
Attempt 1: Overriding environment variables
I've attempted to set the environment variables exposed by Bamboo using a PowerShell script, specifying something like $env:bamboo_xyz = 'ABC', but it doesn't seem to have an effect past the task context in which it was specified. Presumably Bamboo must be re-setting the environment variables individually for each task, or executing the tasks within their own contexts, but it's not clear to me exactly from the documentation.
UPDATE: It appears from some testing that environment variables set in one Script task are not available in subsequent Script tasks in the same Job. This leaves me with no apparent way to override variables based on anything other than hard coded values in Bamboo.
Attempt 2: Using the Bamboo Inject Variables Plugin task
I've tried using the Bamboo Inject Variables Plugin task to override variables, but because of what appears to be a required namespace parameter, it only seems to be able to define new variables, not override existing ones.
Environment variables are only valid in the current session. So if Bamboo starts one script (one PowerShell session), completes it, and then starts a new PowerShell script (a new session), the environment variable will not be kept.
So there are a few options: set the variable in each script,
or set it via the registry at the start of the process, and if necessary set it back to its default value in the last step/script.
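A minimal PowerShell sketch of that registry-based approach; bamboo_xyz is the hypothetical variable from the question, machine scope typically requires an elevated agent, and whether later tasks see the change depends on how the Bamboo agent spawns its sessions:
# Persist the variable at machine scope (backed by the registry),
# unlike the process-scoped $env: drive, which dies with the session.
[Environment]::SetEnvironmentVariable('bamboo_xyz', 'ABC', 'Machine')
# ... and in the last step, remove it or restore the default if necessary:
[Environment]::SetEnvironmentVariable('bamboo_xyz', $null, 'Machine')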

Perl script works but not via CRON

I have a Perl script (which syncs Delicious to WordPress) which:
runs via the shell, but
does not run via cron (and I don't get an error).
The only thing I can think of is that it reads the config file wrongly, but... the config file is referenced via its full path (I think).
I read my config file as:
my $config = Config::Simple->import_from('/home/12345/data/scripts/delicious/wpds.ini',
\my %config);
(I am hosted on mediatemple)
Does anybody have a clue?
update 1: HERE is the complete code: http://plugins.svn.wordpress.org/wordpress-23-compatible-wordpress-delicious-daily-synchronization-script/trunk/ (but I have added the path as above to the configuration file location as difference)
update 2: crossposted on https://forums.mediatemple.net/viewtopic.php?pid=31563#p31563
update 3: the full path did the trick, solved
The difference between a cron job and a job run from the shell is 'environment'. The primary difference is that your profile and the like are not run for a cron job, so any environment variables you have set in your normal shell environment are not set the same in the cron environment - no extensions to PATH, no environment variables identifying where Delicious and/or WP are hosted, etc.
Suggestion: create a cron job that simply reports the environment to a known file:
env > /home/27632/tmp/env.27632
Then see what is set in your own shell environment in comparison. Chances are, that will reveal the trouble.
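A throwaway crontab entry along these lines would do it (the every-minute schedule is just for testing; remove the entry afterwards):
* * * * * env > /home/27632/tmp/env.27632 2>&1
Then compare it against your login shell with something like: diff <(sort /home/27632/tmp/env.27632) <(env | sort)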
Failing that, other environmental differences are that a cron job has no terminal, and has /dev/null for input and output - so interactive stuff does not work well.
It seems the problem is not in running perl, but in locating the Config library.
You should try:
perl -e "print @INC"
and run a similar Perl one-liner in cron, and read the output.
It's possible that they differ.
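For an easier comparison, a small sketch that dumps @INC one entry per line from both contexts (the /tmp paths are arbitrary):
# from your login shell
perl -e 'print join("\n", @INC), "\n"' > /tmp/inc.shell
# and as a temporary crontab entry (every minute, just for testing)
* * * * * perl -e 'print join("\n", @INC), "\n"' > /tmp/inc.cron 2>&1
Running diff /tmp/inc.shell /tmp/inc.cron will then show any paths missing under cron.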
I suggest looking at my answer to How to simulate the environment cron executes a script with?
This is similar to Jonathan's answer but goes a bit further.
Based on your crontab, and depending on your installation, the problem might be "perl" itself. As others note, the environment, particularly the $PATH variable, is different under cron. perl may not be on the path, so you need to put the full path to perl in the cron command.
You can determine the path with the command $ type perl
I ran into the same problem ...
Perl script works but not via CRON => error: "perl: command not found"
... after an update from Plesk 12.0 to Plesk 12.5, and the existing answers were not very helpful for me.
It took some time, but then I found this thread in the Odin forum, which helped me: https://talk.plesk.com/threads/scheduled-tasks-always-fail.331821/
They suggest the following:
/usr/local/psa/bin/server_pref -u -crontab-secure-shell ""
That removes the following line from the files in /var/spool/cron/crontabs:
SHELL="/opt/psa/bin/chrootsh"
After that, my cron jobs ran without any error.
(Ubuntu 14.04 with Plesk 12.5)
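To verify the change took effect, something like this (run as root; the path is the one named above) should show no remaining chrootsh line:
grep -n 'chrootsh' /var/spool/cron/crontabs/*   # expect no output after the fix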
If the Perl script runs fine manually but not from crontab, then some environment setting needed by one of its packages is not getting through cron. Suppose your cron entry looks like:
* 13 * * * /usr/bin/perl /home/username/public_html/cron.pl >/dev/null 2>&1
Then run your command with an empty environment to reproduce cron's conditions:
env - /home/username/public_html/cron.pl
The output will show you which package or path is missing. Export that path in your $PATH variable.
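A common follow-up fix is to set PATH at the top of the crontab itself; most cron implementations (Vixie cron included) accept variable assignments there, and the directories below are only an example:
PATH=/usr/local/bin:/usr/bin:/bin
* 13 * * * /usr/bin/perl /home/username/public_html/cron.pl >/dev/null 2>&1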

Why does my command-line not run from cron?

I have a perl script (part of the XMLTV family of "grabbers", specifically tv_grab_oztivo).
I can successfully run it like this:
/sw/bin/perl /path/to/tv_grab_oztivo --output /path/to/tv.xml
I use the full paths to everything to eliminate issues with the Working Directory. Permissions shouldn't be a problem.
So, if I run it from the Terminal (Mac OSX) it works just fine.
But when I set it to run via a cron job, nothing appears to happen at all. No output is created etc.
There isn't anything wrong with the crontab as far as I can see, because if I substitute a helloworld.pl for the actual script, it runs just fine at the right time.
So, what can I do to debug? I can see from looking at %ENV in the two cases that the environment is very different, but what other approaches can I take to debugging? How can I see the output of the cron job, which might be some kind of perl "die" message or "not found" message from the shell or whatever?
Or should I be trying to somehow give the cron version of the command the same environment as when it's running as me?
It's often because you don't get the full environment when running under cron. Best bet is to capture the output by using the command:
( /sw/bin/perl /path/to/tv_grab_oztivo ... ) >/tmp/qq 2>&1
and then have a look at /tmp/qq.
If it does turn out to be a missing environment, then you may need to put:
. ~/.profile
or something similar, into the execution chain of your cron job, such as:
( . ~/.profile ; /sw/bin/perl /path/to/tv_grab_oztivo ... ) >/tmp/qq 2>&1
If you're looking at %ENV in the two cases, I'd suggest that, as a first step in your Perl script, you set %ENV to what it is in a cron job, and then try to run it from the command line. You may need to exec yourself once for this to take full effect:
BEGIN {
    if (exists $ENV{something_in_your_env_not_in_cron}) {
        %ENV = (...);   # replace with the environment cron actually provides
        exec $^X, $0, @ARGV;   # re-run this script under the reduced environment
    }
}
Now try running it, and seeing if there's anything you can do to debug it (including running under perl -d if required). Most likely, you'll find that you end up adding items back into %ENV one at a time until it magically starts working (LD_LIBRARY_PATH is a good one for this, but ORACLE_HOME or DB2HOME for Oracle or DB2 apps might be good choices, too). Then you can either set the variable in your script, or in the crontab.
I'd run a simple shell script by absolute path from the cron command.
Inside that script, I'd ensure that I trapped stdout and stderr to a known (or knowable) file. I'd also ensure that enough of your environment is set. On Unix, you get almost no environment set at all when you run a command via cron - I'm not sure about MacOS X. The standard culprit for problems is PATH. I have a separate .cronfile that sets my working environment enough that I usually don't have problems - that's an analogue of .profile.
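A minimal sketch of that wrapper idea, assuming a hypothetical ~/.cronfile that sets PATH and whatever else the grabber needs, and a log location of your choosing:
#!/bin/sh
# wrapper run from cron: set up the environment, then trap stdout and stderr
. "$HOME/.cronfile"              # analogue of .profile, kept minimal for cron
exec >>/tmp/tv_grab.log 2>&1     # send all output to a knowable file
/sw/bin/perl /path/to/tv_grab_oztivo --output /path/to/tv.xml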
On occasion if you can't figure out what's going wrong with your command line, the simplest way to fix it is to turn the whole thing into a shell script. Ideally you shouldn't have to do this, but it can be the fastest way to solve the problem.
File: /files/cron1.sh
#!/bin/sh
/sw/bin/perl /path/to/tv_grab_oztivo --output /path/to/tv.xml
And then in cron:
/files/cron1.sh
This allows you to test the script independent of cron. Remember though that your login shell runs with different environment variables than cron does.
cron usually captures stdout and stderr and e-mails any output to the crontab owner.
Did you double check your crontab entry to make sure it's valid and will execute at the right time?
Make sure that the script does not need any environment variables set. Otherwise wrap it in another (bash) script, where you can set the environment variables that the other script expects.