I've been reading this great post:
https://serverfault.com/questions/449651/why-is-my-crontab-not-working-and-how-can-i-troubleshoot-it
And I decided to modify a line of my cron task to capture my echoes and any problems it might encounter. My crontab line looks like this:
30 08 * * * /root/scripts_server/backup_daily.sh &>/var/log/bkp_daily.log
The script runs correctly (I can confirm that the backups were made and transferred) and the output file (bkp_daily.log) is created, but it is empty.
Can anyone point out the problem?
EDIT:
This is an example of a line in the script:
echo "--------------Synchronizing remote git repos to local-----------------------"
I think &> is a bash extension; try the standard shell syntax instead:
30 08 * * * /root/scripts_server/backup_daily.sh >/var/log/bkp_daily.log 2>&1
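If you want to keep the bash-only &> form instead, one alternative (just a sketch, assuming your cron daemon honors the SHELL variable, as Vixie cron and its derivatives do) is to tell cron to run the jobs under bash:
# run crontab commands with bash so that &> redirects both stdout and stderr
SHELL=/bin/bash
30 08 * * * /root/scripts_server/backup_daily.sh &>/var/log/bkp_daily.log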
I have a command line that looks like this:
for /r %%v in (*.max) do start %%v
It opens any Max file in the same folder - great.
I also want it to tell Max to run any number of scripts once each file has opened. There are guides on how to do this in the 3dsMax help, e.g.:
-U MAXScript = this will open MAXScript and run a given script at the end of a fresh 3dsmax command-line load.
However, this does not work when appended to the initial code I need to use.
I have been researching how this could work for 2 days but keep going in circles.
Please help.
Adam
try for /r %%v in (*.max) do START cmd.exe /C %%v
You might want to take a look at using 3dsmaxbatch.exe instead of 3dsmax.exe
Here is a link to the 2019 documentation:
https://knowledge.autodesk.com/support/3ds-max/learn-explore/caas/CloudHelp/cloudhelp/2019/ENU/3DSMax-Batch/files/GUID-48A78515-C24B-4E46-AC5F-884FBCF40D59-htm.html
The command line to load a max file and then execute a script should look like this:
3dsmaxbatch.exe -sceneFile C:/some/path/to/maxfile.max C:/some/path/to/script.ms
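To combine that with the loop from the question, a rough sketch (untested; it assumes 3dsmaxbatch.exe is on your PATH and reuses the example script path from above) would be:
REM open every .max file under the current folder and run the same script on each
for /r %%v in (*.max) do 3dsmaxbatch.exe -sceneFile "%%v" C:/some/path/to/script.ms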
I have a cron job that runs several shell scripts:
30 1 * * 1-5 /ufs/00/home/usr/bin/ConsentforFoo.sh "prd"
15 1 * * 1-5 /ufs/00/home/usr/bin/apptTvoxforFoo.sh
the first shell script looks like this:
#!/bin/bash
# ConsentforFoo.sh - set different environments, set path to perl scripts, calls script
TMP_HOME=/home/localweb/htdocs/cgi-bin/usr/CFoodir
if [ "$1" = "dev" ] || [ "$1" = "uat" ] || [ "$1" = "prd" ]
then
cd $TMP_HOME/$1
My-Consent-Cron.pl
else
echo "Val Not Set: $1"
fi
this script works flawlessly... However, the second shell script looks like this:
#!/bin/bash
# apptTvoxforFoo.sh - sends MHT population and patients with multiple appointments to west
TMP_HOME=/home/localweb/htdocs/cgi-bin/usr/CFoodir
cd $TMP_HOME
TvoxCron.pl #adding './' works here
but when it runs, I get an error saying: "sh: /ufs/00/home/usr/bin/apptTvoxforFoo.sh: cannot execute"
I added a "pwd" to the shell script, and it's getting into the right directory and the file is there...
The weirdest thing is that when I add "./" to it, it works... but in the first shell script I don't have to...
Any ideas why taking the if/then/else out would force me to prefix the Perl script with "./"?
Thanks for any help you can provide.
Did you check the file permissions on the directories and all the files? Can you add "./" in front of the file name to make sure you are not picking up a different copy from your PATH?
./TvoxCron.pl
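If it does turn out to be a permissions problem, a quick check from the shell (paths copied from the question) would look something like this:
# the script cron says it cannot execute
ls -l /ufs/00/home/usr/bin/apptTvoxforFoo.sh
# the Perl script it calls
ls -l /home/localweb/htdocs/cgi-bin/usr/CFoodir/TvoxCron.pl
# add the execute bit if it is missing
chmod +x /ufs/00/home/usr/bin/apptTvoxforFoo.sh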
Say I had a script that checked honeypot locations using md5sum.
#!/bin/bash
#cryptocheck.sh
#Designed to check md5 CRC's of honeypot files located throughout the filesystem.
#Must develop file with specific hashes and create crypto.chk using following command:
#/opt/bin/md5sum * > crypto.chk
#After creating file, copy honeypot folder out to specific folders
locations=("/share/ConfData" "/share/ConfData/Archive" "/share/ConfData/Application"
"/share/ConfData/Graphics")
for i in "${locations[@]}"
do
cd "$i/aaaCryptoAudit"
/opt/bin/md5sum -c /share/homes/admin/crypto.chk
done
And the output looked like this:
http://pastebin.com/b4AU4s6k
Where would you start if you wanted to recognize a 'FAILED' in that output and perhaps trigger some sort of response from the system?
I've worked a bit with Perl trying to parse log files before, but my attempts typically failed miserably for one reason or another.
This may not be the proper way to go about it, but I'd want to put this script into a cron job that runs every minute. Some people have told me that an inotify-based job or script (which I'm not familiar with) would be better than doing it this way.
Any suggestions?
--- edit
I made another script to call the script above and send the output to a file. The new script then runs a grep -q on 'FAILED' and if it picks anything up, it sounds the alarm (tbd what the alarm will be).
#!/bin/bash
#cryptocheckinit.sh
#
#rm /share/homes/admin/cryptoalert.warn
/share/homes/admin/cryptocheck.sh > /share/homes/admin/cryptoalert.warn
grep -q "FAILED" /share/homes/admin/cryptoalert.warn && echo "LIGHT THE SIGNAL FIRES"
Use:
if ! /opt/bin/md5sum -c /share/homes/admin/crypto.chk
then
# Do something
fi
Or pipe the output of the loop:
for i in "${locations[@]}"
do
cd "$i/aaaCryptoAudit"
/opt/bin/md5sum -c /share/homes/admin/crypto.chk
done | grep -q FAILED && echo "LIGHT THE SIGNAL FIRES"
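If you do end up looking at the inotify route mentioned in the question instead of polling from cron every minute, a rough sketch using inotifywait (from the inotify-tools package, assuming you can install it on your system) could look like this:
#!/bin/bash
#cryptowatch.sh - hypothetical event-driven variant of cryptocheck.sh
#Requires inotifywait from the inotify-tools package.
locations=("/share/ConfData" "/share/ConfData/Archive" "/share/ConfData/Application"
"/share/ConfData/Graphics")
dirs=()
for i in "${locations[@]}"
do
dirs+=("$i/aaaCryptoAudit")
done
# -m keeps watching forever; re-run the md5 check whenever a watched file changes
inotifywait -m -e modify,create,delete "${dirs[@]}" |
while read -r dir event file
do
if ! (cd "$dir" && /opt/bin/md5sum -c /share/homes/admin/crypto.chk >/dev/null 2>&1)
then
echo "LIGHT THE SIGNAL FIRES: $event in $dir"
fi
done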
I know you can redirect the output of a cron job via ">" to overwrite and ">>" to append. However, I was wondering if there is any way to have the output of a cron job overwrite the log file each time the job is run, but then append the output within that particular run?
When you use >, it overwrites anything previously in the file each time there is a linebreak in the output of the command, so you don't see the historical output from that particular job.
If I understand it correctly, you want to create a new log file every time the job is run, so in crontab you use ">" like this:
* * * * * /home/myhome/some_cron_job.sh > /home/myhome/cron_job_output
Now, within some_cron_job.sh, you use ">>" to append to the log file:
(within shell script)
echo "Testing" >> /home/myhome/cron_job_output
Does that help?
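If you would rather not add ">>" to every line inside the script, a variant of the same idea (just a sketch, reusing the file names above) is to truncate the log once at the top of the script and then redirect everything with exec:
#!/bin/bash
# some_cron_job.sh - start a fresh log on every run, append within the run
LOG=/home/myhome/cron_job_output
: > "$LOG"            # truncate once per run
exec >> "$LOG" 2>&1   # everything below appends to this run's log
echo "Testing"
echo "More output from the same run"
With that in place, the crontab entry does not need any redirection at all:
* * * * * /home/myhome/some_cron_job.sh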
I have an entry in my crontab that looks like this:
0 3 * * * pg_dump mydb | gzip > ~/backup/db/$(date +%Y-%m-%d).psql.gz
That script works perfectly when I execute it from the shell, but it doesn't seem to be running every night. I'm assuming there's something wrong with the permissions; maybe cron is running under a different user or something. How can I debug this? I'm on a shared hosting environment (WebFaction).
You need to escape "%" characters in crontab entries with backslashes; see the crontab(5) manpage. I've had exactly the same problem.
For example:
0 7 * * * mysqldump usblog | bzip2 -c > usblog.$(date --utc +\%Y-\%m-\%dT\%H-\%M-\%SZ).sql.bz2
Do you not get emails of cron errors? Not even if you put "MAILTO=you@example.com" in the crontab?
You may also need to set PATH in your crontab if pg_dump or gzip isn't on the system default path (use "type pg_dump" to check where they are; cron usually only searches /bin and /usr/bin by default).
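Putting those pieces together, the crontab for the original job might look something like this (the MAILTO address is a placeholder, and PATH should match whatever "type pg_dump" and "type gzip" report on your host):
MAILTO=you@example.com
PATH=/usr/local/bin:/usr/bin:/bin
# % must be escaped in crontab entries; otherwise cron treats everything after it as stdin
0 3 * * * pg_dump mydb | gzip > ~/backup/db/$(date +\%Y-\%m-\%d).psql.gz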
Always use full paths in crontab entries. For example, /usr/bin/gzip. You will also need to do that for pg_dump and date.
When you say it doesn't work, what do you mean? Does it not generate the file at all or is it empty?
If your system is set up correctly, crontab should send you an email if your command generated any output.
Try something like this to verify cron is running. It will touch the file every minute.
* * * * * touch /tmp/foo
And check your paths like James mentioned.
If this is in something like /etc/crontab, make sure the user is included:
0 3 * * * <user_goes_here> pg_dump mydb | gzip > ~/backup/db/$(date +\%Y-\%m-\%d).psql.gz