Raspberry Pi with NOOBS: crontab does nothing

I have a Raspberry Pi with NOOBS. I am trying to run a script every 5 minutes with crontab, but it is not working, so I made a test command and added it to my crontab file.
I typed "crontab -e"
then added "* * * * * date >> /Documents/crontab logs/crontab_test_log.txt"
My understanding is that this should get the date and time every minute and append it to a test log file. After rebooting the Pi and waiting for tens of minutes, nothing has happened. What am I doing wrong?
Thanks for your help.
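Two details in that entry are worth checking. The redirection target contains an unquoted space, so the shell redirects to /Documents/crontab and passes logs/crontab_test_log.txt as an argument to date. Also, /Documents is an absolute path from the filesystem root, where it almost certainly does not exist; on a default install the folder would be /home/pi/Documents, and cron does not create missing directories. A corrected entry, assuming the default pi user and that the "crontab logs" directory already exists, would be:
* * * * * date >> "/home/pi/Documents/crontab logs/crontab_test_log.txt"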

Related

Not able to execute script from crontab

I am able to execute the script from the command line.
I'm executing it like this:
/path/to/script run
But when executing it from cron as below, the page does not come back:
55 11 * * 2-6 /path/to/script.pl run >> /tmp/script.log 2>&1
The line that fetches the webpage uses LWP::Simple:
my $site = get("http://sever.com/page");
I'm not modifying anything. The page is valid and accessible.
I'm getting an empty page only when I execute this script from crontab. I am able to execute it from the command line!
The crontab is owned by root, and the job is executed as root.
Thanks in advance for any clue!
It's difficult to say what might be causing this, but there are differences between your environment and the environment created by crontab.
You could try running it through a shell invoked with appropriate arguments to reconstruct your user environment:
55 11 * * 2-6 /bin/tcsh -c '/path/to/script.pl run' >> /tmp/script.log 2>&1
(Unlike bash, tcsh sources your ~/.tcshrc even for non-interactive shells, so the command sees much of your usual environment.)
I'm assuming you are running it by cron with your own user ID of course. If you aren't, then obviously you should try running it manually with the user ID that cron is using to run it.
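One quick way to see exactly what environment cron provides is a temporary job that dumps it to a file, which you can then diff against the output of env in your login shell:
* * * * * env > /tmp/cron_env.txt 2>&1
Typical differences are PATH, HOME and proxy variables such as http_proxy, all of which a web-fetching script can be sensitive to.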
If it's not a difference in environment variables (e.g. those that specify a proxy to use), I believe you are running afoul of SELinux. Among other things, it prevents background applications (e.g. cron jobs) from accessing the internet unless you explicitly allow them to do so. I don't know how to do so, but you should be able to find out quite easily.
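To test the SELinux theory quickly, check the current mode and, as root, switch to permissive for one run; if the cron job then succeeds, SELinux was the blocker:
getenforce    # prints Enforcing, Permissive, or Disabled
setenforce 0  # temporarily switch to permissive (root only)
setenforce 1  # restore enforcing mode afterwards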

Run a script from 8:00 AM to 8:00 PM and, if it does not end, kill it at 8:00 PM

I am trying to simulate enterprise clients connecting to the SQL server and selecting data from it. So far, I have managed to set up Postgres on Windows 7 and wrote a script that selects some data from the server from a remote machine:
#!/bin/bash
data="$(psql -h 10.0.0.2 -d test -U postgres -c $'SELECT * FROM tablename;')"
#echo $data
Also, I measure its time with the time command.
What I need now is to be able to run the script at 8:00 in the morning and stop it at 8:00 in the evening, so that it works the whole day. At night the server does nothing (the script is not running), and the next day I need to run it again, and so on, for a given number of days.
I played with cron a while ago and it seems a good solution, but my biggest concern is this: let's say I run the script at 8:00 am and it finishes at 9:00 am - how can I know how long it will run, so that I can start it again and simulate a whole day of traffic with no interruptions? Note that I don't know how long selecting the data takes - the content of the data is not important to me, I only have to select it, and that's all.
You could wrap your cronjob up in a script like this, and call it something like psql_load_runner:
#!/bin/bash
# Loop forever: as soon as one query round-trip finishes, start the next,
# so the simulated client stays busy with no fixed schedule needed.
while true
do
    data="$(psql -h 10.0.0.2 -d test -U postgres -c $'SELECT * FROM tablename;')"
    #echo $data
done
Then set that to run at 08:00 each day - it will keep looping all day, which seems to match your requirement.
Then have another cron job run at 20:00 each day which just kills this one off, perhaps with a command like killall -9 psql_load_runner.
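As a concrete sketch, that pairing could be two crontab entries like the following, assuming the wrapper lives at /home/user/bin/psql_load_runner (a hypothetical location):
0 8 * * * /home/user/bin/psql_load_runner
0 20 * * * killall -9 psql_load_runner
The first entry starts the loop at 08:00; the second kills it at 20:00, which stops the loop and prevents any further queries from being started.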

Use cygwin to run a batch file and email results

I am new to Cygwin and don't really understand how its scripting works. Currently I am running it on Windows 7 and using Task Scheduler to do this inefficiently.
What I want to do is run an existing .bat file that runs tests on the command line, then take the results of those tests and email them to people.
Some side notes:
1. It doesn't HAVE to be a batch file; from my reading I think a .sh might be easier to run with bash. Being able to run it on CentOS would be even better, so that others can run it if I leave.
2. This needs to run daily. I would like to run the batch file at around 10 am and give it an hour until the emailed results are sent, unless you can trigger the email when the .bat is done.
3. Every time the .bat file runs, it saves the results to a .htm file, overwriting the previous one.
Thank you
That could go in the crontab of a CentOS server (/etc/crontab):
0 10 * * * user cd /path/ && /bin/bash file.sh >> result_file
Is that what you needed? Also, you can install cron as a Windows service with cygrunsrv.
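As a minimal sketch of what file.sh could look like under Cygwin, assuming the batch file lives at /cygdrive/c/tests/run_tests.bat, writes its results to results.htm (both hypothetical names), cmd.exe is on the PATH, and a command-line mailer such as mailx is installed and configured:
#!/bin/bash
# Run the existing batch file; cmd /c returns only when the .bat finishes,
# so the mail step below runs as soon as the tests are done.
cd /cygdrive/c/tests || exit 1
cmd /c run_tests.bat
# Mail the .htm results as the message body.
mailx -s "Daily test results $(date +%F)" team@example.com < results.htm
This also covers side note 2: no fixed one-hour delay is needed, because the email is triggered the moment the .bat is done.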

perl based cron job won't write to mounted cifs/windows share ONLY after long inactivity

I'm not sure how to title that more succinctly and still have it be meaningful.
(Note that this works fine when run mid-day, via cron or manually, so I "know" the script itself is sound.)
I have a cron job (Ubuntu 13.04).
It runs as my user (not root).
The job itself runs at 6:00 in the morning. It's the first 'business level' job to run each day.
1 6 * * 1-5 /home/me/bin/run_perl_job
run_perl_job is just:
#!/bin/bash
cd /home/me/bin
./script.pl
The script copies a file to "/mnt/shared_drive/outputfile.xls"
The mount point is defined in fstab as:
//fileserver/share /mnt/shared_drive cifs user=domain/me%password,iocharset=utf8,gid=1000,uid=1000,sec=ntlm,file_mode=0777,dir_mode=0777 0 0
Now. Given that:
When I run the script in a normal shell, it works fine.
When I look at the mount point first thing in the morning (via a normal terminal) it shows up (and is writeable) without event.
When I copy the crontab line and set it to run in a couple minutes, to see the symptom, it works fine (creates the file quite happily.)
The ONLY time this fails is when it runs in its normal time slot (6:01). The rest of the script functions (the file itself has to be pulled down via sftp, etc.), so I know it's not dying.
It's driving me batty because the test cycle is 24 hours.
I just added the following couple of lines to the beginning of the 'run_perl_job' script, hoping it exposes something tomorrow:
cd /mnt/shared_drive
ls -lrt >> /home/me/bin/process.log
But I'm stumped. "It's almost as though" the mount point had gotten stale overnight and is waiting for some kind of access attempt before remounting. I'd run "mount -a" at the top of the 'run_perl_job' script if I could reasonably do it. But given that it's got to be sudo'ed, that doesn't seem reasonable to me.
Thoughts? I'm running out of ideas and this test cycle is awful.
How about putting a
umount -f -v /mnt/shared_drive
mount -v -a
into a root cron job just before your script runs? That way you don't need to sudo in your script and have the password in plain sight. The -v might give you a hint on what is happening to make it stale.
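As a concrete sketch, in root's crontab (sudo crontab -e) that could be a single entry a few minutes before the 06:01 job, with the verbose output captured for inspection:
55 5 * * 1-5 ( umount -f -v /mnt/shared_drive ; mount -v -a ) >> /tmp/remount.log 2>&1
The semicolon rather than && matters here: if the share turns out not to be mounted, umount fails, but mount -a should still run.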

Crontab configuration on CentOS

I have a Magento installation running on a VPS that runs CentOS. I've been trying to implement a backup solution using the script found here: https://github.com/zertrin. It worked fine, and my next step was to automate it. Despite all my efforts, the cron job is not running. Here is what I have in /etc/crontab:
* 20 * * * root echo "Cron Worked $(date)" >> /tmp/cronworked.txt
#
* 16 * * 1-6 root /root/duplicity-backup.sh -c /etc/duplicity-backup.conf -b
#
* 4 * * 7 root /root/duplicity-backup.sh -c /etc/duplicity-backup.conf -f
#
* 20 * * 7 root /root/duplicity-backup.sh -c /etc/duplicity-backup.conf -n
#
* 20 * * * root echo "Cron Worked $(date)" >> /tmp/cronworked3.txt
Both my test cron jobs (the first one and the last one) work fine, but not the commands in the middle. Those work fine if I issue them as standalone commands, but for some reason not as cron jobs.
Can anyone guide me to figure out why this is not working?
There are a couple of things you can check:
Make sure /root/duplicity-backup.sh is executable
If you have a local mail server configured, you should receive an email about the output of the cron jobs, which might tell you what's going wrong
If you don't receive emails from the cron job, then redirect stdout AND stderr to a file; that should help figure out what's going wrong (see the sketch at the end of this answer)
Add bash in front of the script name to make sure it's run with bash and not something else, like this:
* 4 * * 7 root bash /root/duplicity-backup.sh -c /etc/duplicity-backup.conf -f
Having the script output and error messages should help. If they don't, please paste them here.
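For the redirection suggestion, a sketch of what one of the non-working entries could look like, keeping the user field since this is /etc/crontab:
* 16 * * 1-6 root /root/duplicity-backup.sh -c /etc/duplicity-backup.conf -b >> /tmp/duplicity-backup.log 2>&1
The 2>&1 sends stderr to the same file as stdout, so any error messages from the script land in /tmp/duplicity-backup.log as well.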