I have an entry in my crontab that looks like this:
0 3 * * * pg_dump mydb | gzip > ~/backup/db/$(date +%Y-%m-%d).psql.gz
That command works perfectly when I execute it from the shell, but it doesn't seem to be running every night. I'm assuming there's something wrong with the permissions, or maybe cron is running under a different user. How can I debug this? I'm on a shared hosting environment (WebFaction).
You need to escape "%" characters in crontab entries with backslashes; see the crontab(5) manpage. I've had exactly the same problem.
For example:
0 7 * * * mysqldump usblog | bzip2 -c > usblog.$(date --utc +\%Y-\%m-\%dT\%H-\%M-\%SZ).sql.bz2
Do you not get emails of cron errors? Not even if you put "MAILTO=you@example.com" in the crontab?
You may also need to set PATH in your crontab if pg_dump or gzip isn't on the system default path (use "type pg_dump" to check where they are; cron's default PATH is usually just /usr/bin and /bin).
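For example, most crons accept plain VAR=value lines above the entries, so a header along these lines would cover both the MAILTO and the PATH suggestions (the directories shown are only an assumption for illustration; use whatever "type pg_dump" and "type gzip" report on your host):
MAILTO=you@example.com
PATH=/usr/local/bin:/usr/bin:/bin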
Always use full paths in crontab entries. For example, /usr/bin/gzip. You will also need to do that for pg_dump and date.
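Applied to the entry from the question (keeping the escaped % signs from the earlier answer), that would be something like the line below; the /usr/bin and /bin locations are assumptions, so check them with "type pg_dump", "type gzip" and "type date" on your host:
0 3 * * * /usr/bin/pg_dump mydb | /bin/gzip > ~/backup/db/$(/bin/date +\%Y-\%m-\%d).psql.gz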
When you say it doesn't work, what do you mean? Does it not generate the file at all or is it empty?
If your system is set up correctly, crontab should send you an email if your command generated any output.
Try something like this to verify crontab is running. It will touch the file every minute.
* * * * * touch /tmp/foo
And check your paths like James mentioned.
If this is in something like /etc/crontab, make sure the user is included:
0 3 * * * <user_goes_here> pg_dump mydb | gzip > ~/backup/db/$(date +%Y-%m-%d).psql.gz
I'm using my Windows PC, and I'm trying to import a "dump.sql" into the database "TEST" created with Postgres, using command prompt. I do it in two steps:
Step 1:
cd C:\Program Files\PostgreSQL\12\bin
Step 2:
psql -U username -d TEST < C:\Users\Username\Desktop\University\Politechnic\III year\INFORMATIC TECHNOLOGIES FOR THE WEB\PDF SL\SL\Materials\TIW_IOL_ServletJSP\db\dump.sql
Long path, I know. But the result is: "Impossible to find specified file".
What can I do?
Not sure how security is set up where you are, but can you attempt to write to a file with a simpler destination, so that you take out any possibility of spaces and/or length being the issue? Then you will at least be able to rule those variables out, or narrow the problem down to them, depending on the outcome. Note that the max path length is 260 characters.
(From comment on question)
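A rough sketch of that idea at the Command Prompt (C:\temp is an assumed short, space-free location; any similar directory will do):
copy "C:\Users\Username\Desktop\University\Politechnic\III year\INFORMATIC TECHNOLOGIES FOR THE WEB\PDF SL\SL\Materials\TIW_IOL_ServletJSP\db\dump.sql" C:\temp\dump.sql
psql -U username -d TEST < C:\temp\dump.sql
If the import succeeds from there, the spaces in the original path were the problem, and quoting the path after the < redirect should also work.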
I've been reading this great post:
https://serverfault.com/questions/449651/why-is-my-crontab-not-working-and-how-can-i-troubleshoot-it
And I decided to modify a line of my cron task to capture my echoes and any problems it might encounter. My crontab line looks like this:
30 08 * * * /root/scripts_server/backup_daily.sh &>/var/log/bkp_daily.log
The script runs correctly (I can confirm that the backups were made and transferred) and the output file is created (bkp_daily.log), but it is empty.
Can anyone point out the problem?
EDIT:
This is an example of a line in the script:
echo "--------------Sincronización de git remotos a locales-----------------------"
I think &> is a bash extension, try using the standard shell syntax:
30 08 * * * /root/scripts_server/backup_daily.sh >/var/log/bkp_daily.log 2>&1
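Alternatively, if you want to keep the bash-only &> syntax, most crons (Vixie cron and its derivatives, at least) let you pick the shell with a SHELL line at the top of the crontab. A sketch of that approach, assuming bash lives at /bin/bash on your system:
SHELL=/bin/bash
30 08 * * * /root/scripts_server/backup_daily.sh &>/var/log/bkp_daily.log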
I am trying to log a complete session in psql into a .txt file. The command given to me was initially this:
psql db_name | tee file_name.txt
However, my SSH client then does nothing until I quit it. That is, it does not seem to recognize any command; it behaves more like a document, and nothing happens no matter what I write. So far, only '\q' is recognised, which lets me get out of it. Any idea what is happening? How am I supposed to write queries if the shell will not read anything? Also, I tried the following (this is before connecting to the database):
script filename.txt
It does show the message "script started, file is filename.txt", but I don't know where this file is stored or how to retrieve it.
Any help with the above will be welcome and really appreciated! Thanks a lot :)
There is an option to psql for logging queries and results:
-L filename
--log-file filename
Write all query output into file filename, in addition to the normal output destination.
Try this:
psql db_name -L file_name.txt
Say I had a script that checked honeypot locations using md5sum.
#!/bin/bash
#cryptocheck.sh
#Designed to check md5 CRC's of honeypot files located throughout the filesystem.
#Must develop file with specific hashes and create crypto.chk using following command:
#/opt/bin/md5sum * > crypto.chk
#After creating file, copy honeypot folder out to specific folders
locations=("/share/ConfData" "/share/ConfData/Archive" "/share/ConfData/Application"
"/share/ConfData/Graphics")
for i in "${locations[@]}"
do
cd "$i/aaaCryptoAudit"
/opt/bin/md5sum -c /share/homes/admin/crypto.chk
done
And the output looked like this:
http://pastebin.com/b4AU4s6k
Where would you start to try to parse that output and perhaps trigger some sort of response from the system if there is a 'FAILED'?
I've worked a bit with Perl trying to parse log files before, but my attempts typically failed miserably for one reason or another.
This may not be the proper way to go about it, but I'd want to put this script into a cron job that runs every minute. Some people told me that an inotify job or script (I'm not familiar with this) would be better than doing it this way.
Any suggestions?
--- edit
I made another script to call the script above and send the output to a file. The new script then runs a grep -q on 'FAILED' and if it picks anything up, it sounds the alarm (tbd what the alarm will be).
#!/bin/bash
#cryptocheckinit.sh
#
#rm /share/homes/admin/cryptoalert.warn
/share/homes/admin/cryptocheck.sh > /share/homes/admin/cryptoalert.warn
grep -q "FAILED" /share/homes/admin/cryptoalert.warn && echo "LIGHT THE SIGNAL FIRES"
Use:
if ! /opt/bin/md5sum -c /share/homes/admin/crypto.chk
then
# Do something
fi
Or pipe the output of the loop:
for i in "${locations[@]}"
do
cd "$i/aaaCryptoAudit"
/opt/bin/md5sum -c /share/homes/admin/crypto.chk
done | grep -q FAILED && echo "LIGHT THE SIGNAL FIRES"
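As for the inotify approach mentioned in the question, here is a rough sketch using inotifywait from inotify-tools (assuming the package is available on your box; the directory list mirrors the one in your script). It reacts to changes in the honeypot folders as they happen instead of re-hashing from cron every minute:
#!/bin/bash
# honeypotwatch.sh - sketch only: raise the alarm as soon as anything in a
# honeypot folder is modified, created, deleted or moved, instead of polling.
dirs=("/share/ConfData/aaaCryptoAudit"
      "/share/ConfData/Archive/aaaCryptoAudit"
      "/share/ConfData/Application/aaaCryptoAudit"
      "/share/ConfData/Graphics/aaaCryptoAudit")
inotifywait -m -e modify,create,delete,move "${dirs[@]}" |
while read -r dir events file; do
    echo "LIGHT THE SIGNAL FIRES: $events on ${dir}${file}"
done
Note that this watches for any change rather than verifying checksums, so it complements the md5sum check rather than replacing it.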
I have a DOS batch file I want to use to invoke a TSQL program.
I want to pass the names of the databases to use. This seems to work.
I want to pass the PREFIXES for the names of the tables I want to work with.
So for test runs I want to pass a prefix that makes the script use the test tables.
set svr=myserver
rem set db=myTESTdatabasename
set db=mydatabasename
rem set tp=TEST
set tp=
sqlcmd -S %svr% -d somename -i test01.sql
test01.sql looks like this:
use $(db)
go
select top 10 * into $(db).dbo.$(tp)dsttbl from $(db).dbo.$(tp)srctbl
It works fine for the test stuff, but for the real stuff, I just want to set the value of tp to null so that it will use the real table name and not the bogus table name.
The reason I'm doing this is because I don't know the names of everything that will be used on the actual databases. I'm trying to make it generic so I don't have to do a bunch of search replaces on what will be a very large sql program (the real sql program is already hundreds of lines).
In the test case, this would resolve to
select top 10 * into myTESTdatabasename.dbo.TESTdsttbl from myTESTdatabasename.dbo.TESTsrctbl
For the production runs, it should resolve to
select top 10 * into mydatabasename.dbo.dsttbl from mydatabasename.dbo.srctbl
The problem seems to be that it doesn't like null values for $(tp), or perhaps that it's seeing an undefined variable.
I experimented a bit with the syntax and, as Preet Sangha pointed out, you should use the -v command line option.
The reason is that setting a variable to the empty string in a batch script undefines it.
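You can see that behaviour directly at a cmd prompt (just an illustration, not part of the batch file):
set tp=
if defined tp (echo tp is defined) else (echo tp is not defined)
This prints "tp is not defined", so once the variable is cleared there is nothing in the environment for sqlcmd's $(tp) to resolve to.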
If you want to set the database name at the top of the batch file, you can still use set, like this:
set db_to_use=
Then you can use this (undefined) variable in the sqlcmd line using the -v option:
sqlcmd -S %svr% -d somename -v db="%db_to_use%" -i test01.sql
...or you can just set the value directly in the sqlcmd line:
sqlcmd -S %svr% -d somename -v db="" -i test01.sql