Full Weekly Backup & Daily Incremental Backup - CentOS

I have CentOS 6.5 and I have tried different backup scripts, but I have failed each time. I only have a small amount of experience with Linux; I've only used it to set up a server and so on, so I don't know how to do proper backups. I have a 100GB FTP server connected to my Linux server that I can use for backups.
I need a script that takes a weekly backup and also a daily incremental backup. I only need to back up certain directories, e.g. /home, /etc and so on. It should also execute automatically every week/day, take a backup and put it on the FTP server.
Is there anyone who has a proper and working script for this?

Assuming that you have installed CentOS, you already have the crond tool. Put your routines into the crontab, and cron will execute any script at the specified time:
su #login as root
crontab -e
This will run the FTP upload every day at hh:mm:
mm hh * * * curl --upload-file testfile.zip ftp://user:password@ftp.domain.com/
But I find it more useful to use direct filesystem access for creating backups (you need to configure SSH public-key access beforehand):
mm hh * * day_of_week_number rsync -avh -e ssh --update --delete /source remote.host:/dest

Adding a line like the one below to the crontab will take a weekly backup of local directories:
mm hh * * dow rsync -avh --update --delete /source /destination
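Putting the two together, here is a minimal sketch of a daily cron job that takes a full backup once a week and an incremental backup on the other days, using GNU tar's --listed-incremental snapshot file and a curl upload. The directories, staging path, FTP host and credentials are placeholders you will need to adapt.
#!/bin/bash
# backup.sh - weekly full / daily incremental backup sketch (adapt all paths and credentials)
BACKUP_DIRS="/home /etc"                        # directories to back up
WORK_DIR=/var/backups                           # local staging area
SNAPSHOT="$WORK_DIR/backup.snar"                # tar incremental snapshot file
FTP_URL="ftp://user:password@ftp.domain.com/"   # placeholder FTP target
STAMP=$(date +%Y%m%d)

if [ "$(date +%u)" -eq 7 ]; then
    # Sunday: remove the snapshot file so tar starts a fresh full (level-0) backup
    rm -f "$SNAPSHOT"
    ARCHIVE="$WORK_DIR/full-$STAMP.tar.gz"
else
    # other days: tar only stores what changed since the snapshot was last updated
    ARCHIVE="$WORK_DIR/incr-$STAMP.tar.gz"
fi

tar --listed-incremental="$SNAPSHOT" -czf "$ARCHIVE" $BACKUP_DIRS
curl --upload-file "$ARCHIVE" "$FTP_URL"
One crontab entry is then enough, e.g. 0 2 * * * /root/backup.sh to run it every night at 02:00; the script itself decides whether the run is full or incremental.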

Related

Crontab doesn't back up Postgres DB in Docker, but if I run the script by hand it works properly

I'm trying to set up an automatic backup for a Postgres database. Postgres runs in Docker, so my backup script is:
docker-compose exec postgres pg_dump -U user database_name | gzip > "/var/server/my_service/data/backup-db/db_backup.sql.gz"
And it works fine if I run it manually. I wrote the following job for the crontab (every 5 minutes, just for testing):
*/5 * * * * cd /var/server/my_service && sh /var/server/my_service/data/backup/backup_script
This command also works fine; if I run it manually it creates valid DB backups that I can use.
But crontab just creates an empty archive, without any data. I just can't understand why.
My guess is that the output stream that gzip captures is generated normally in manual mode, but is completely empty when crontab runs the command.
I thought there were problems with access rights and put the job in the root crontab, but it didn't help.
UPD:
so... the problem is in backup_script; the error in the logs says "the input device is not a TTY"
I tried googling it and adding -T, but that didn't help either
Update your /var/server/my_service/data/backup/backup_script with the following:
Prefix your script with the following 3 lines:
#!/bin/bash
source ~/.bash_profile
cd /var/server/my_service
#
# rest of your script
#
Your crontab line should be (at 04:44 on every day-of-month):
44 4 */1 * * /var/server/my_service/data/backup/backup_script
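For reference, a minimal sketch of what the whole backup_script could then look like, combining the prefix above with the docker-compose command from the question (the service name postgres, the user and the database name are taken from the question; -T is assumed here because cron provides no TTY):
#!/bin/bash
source ~/.bash_profile          # make sure docker-compose is on PATH under cron
cd /var/server/my_service
# -T disables pseudo-TTY allocation; cron has no TTY, which is what triggers
# "the input device is not a TTY" otherwise
docker-compose exec -T postgres pg_dump -U user database_name \
    | gzip > /var/server/my_service/data/backup-db/db_backup.sql.gz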

PostgreSQL COPY pipe output to gzip and then to STDOUT

The following command works well
$ psql -c "copy (select * from foo limit 3) to stdout csv header"
# output
column1,column2
val1,val2
val3,val4
val5,val6
However the following does not:
$ psql -c "copy (select * from foo limit 3) to program 'gzip -f --stdout' csv header"
# output
COPY 3
Why do I have COPY 3 as the output from this command? I would expect that the output would be the compressed CSV string, after passing through gzip.
The command below works, for instance:
$ psql -c "copy (select * from foo limit 3) to stdout csv header" | gzip -f -c
# output (this garbage is just the compressed string and is as expected)
߉T`M�A �0 ᆬ}6�BL�I+�^E�gv�ijAp���qH�1����� FfВ�,Д���}������+��
How to make a single SQL command that directly pipes the result into gzip and sends the compressed string to STDOUT?
When you use COPY ... TO PROGRAM, the PostgreSQL server process (backend) starts a new process and pipes the file to the process's standard input. The standard output of that process is lost. It only makes sense to use COPY ... TO PROGRAM if the called program writes the data to a file or similar.
If your goal is to compress the data that go across the network, you could use sslmode=require sslcompression=on in your connect string to use the SSL network compression feature I built into PostgreSQL 9.2. Unfortunately this has been deprecated and most OpenSSL binaries are shipped with the feature disabled.
There is currently a native network compression patch under development, but it is questionable whether that will make v14.
Other than that, you cannot get what you want at the moment.
copy is running gzip on the server and not forwarding the STDOUT from gzip on to the client.
You can use \copy instead, which would run gzip on the client:
psql -q -c "\copy (select * from foo limit 3) to program 'gzip -f --stdout' csv header"
This is fundamentally the same as piping to gzip, which you show in your question.
If the goal is to compress the output of copy so it transfers faster over the network, then...
psql "postgresql://ip:port/dbname?sslmode=require&sslcompression=1"
It should display "compression active" if it's enabled. That probably requires some server config variable to be enabled though.
Or you can simply use ssh:
ssh user@dbserver "psql -c \"copy (select * from foo limit 3) to stdout csv header\" | gzip -f -c" >localfile.csv.gz
But... of course, you need ssh access to the db server.
If you don't have ssh access to the db server, maybe you have ssh access to another box in the same datacenter with a fast network link to the db server; in that case you can ssh to that box instead of the db server. Data will be transferred uncompressed between that box and the database, compressed on the box, and piped via ssh to your local machine. That will even save CPU on the database server since it won't be doing the compression.
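A rough example of that relay setup, assuming jumpbox is the intermediate machine and dbserver is reachable from it (both names are placeholders):
ssh user@jumpbox "psql -h dbserver -c \"copy (select * from foo limit 3) to stdout csv header\" | gzip -c" > localfile.csv.gz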
If that doesn't work, well then, why not put the ssh command into the "to program" and have the server send it via ssh to your machine? You'll have to set up your router and open a port, but you can do that. Of course you'll have to find a way to put the password on the ssh command line, which is usually a big no-no, but maybe just for once. Or just use netcat instead, which doesn't require a password.
Also, if you want speed, please, use zstd instead of gzip.
Here's an example with netcat. I just tested it and it worked.
On destination machine which is 192.168.0.1:
nc -lp 65001 | zstd -d >file.csv
In another terminal:
psql -c "copy (select * from foo) to program 'zstd -9 |nc -N 192.168.0.1 65001' csv header" test
Note -N option for netcat.
You can use copy to PROGRAM:
COPY foo_table to PROGRAM 'gzip > /tmp/foo_table.csv' delimiters',' CSV HEADER;

Option to exclude files in pg_basebackup command Postgres

When cloning a standby, how can I prevent pg_basebackup from copying postgresql.conf and pg_hba.conf from the master to the /var/lib/pgsql/9.6/data directory?
Currently I am using this command
[root@xyz..]# pg_basebackup -h {master ipAddr} -D /var/lib/pgsql/9.6/data -U postgres -v -P
According to the docs:
The backup will include all files in the data directory and
tablespaces, including the configuration files and any additional
files placed in the directory by third parties. But only regular files
and directories are copied. Symbolic links (other than those used for
tablespaces) and special device files are skipped.
So there is no such option. If you still want to force it, move the config files out of the data directory (and optionally symlink them back into the data directory, since symbolic links are skipped).
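For example (the target paths are illustrative), on the master:
mkdir -p /etc/pgsql
mv /var/lib/pgsql/9.6/data/postgresql.conf /etc/pgsql/postgresql.conf
mv /var/lib/pgsql/9.6/data/pg_hba.conf /etc/pgsql/pg_hba.conf
ln -s /etc/pgsql/postgresql.conf /var/lib/pgsql/9.6/data/postgresql.conf
ln -s /etc/pgsql/pg_hba.conf /var/lib/pgsql/9.6/data/pg_hba.conf
# pg_basebackup skips symbolic links, so the two config files will no longer be copied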
This answer is for Postgres 14. pg_basebackup takes a backup of the entire data directory. https://www.postgresql.org/docs/14/app-pgbasebackup.html states that the backup utility will skip any directory or file that is a symbolic link. So that could be a workaround to get only the desired content into the tar ball.
I had faced a similar situation where I wanted to exclude the content of multiple directories like pg_replslot, pg_dynshmem, pg_notify etc. I made the tar ball the usual way: pg_basebackup -D /backup/ -F t -P -v. After the tar ball was made, and before restoring it to another server, I updated the tar manually by removing the content of all the unwanted directories.
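As a rough illustration of that manual clean-up with GNU tar (the tar ball and directory names follow the example above; a pattern that matches nothing will just make tar complain):
# remove the contents of the unwanted directories from the uncompressed tar ball
tar --delete --wildcards -f /backup/base.tar 'pg_replslot/*' 'pg_dynshmem/*' 'pg_notify/*'
# check what remains
tar -tf /backup/base.tar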

Register and run PostgreSQL 9.0 as Windows Service

For a while I have had my db running in a command window because I can't figure out how to run it as a Windows service.
Since I have downloaded the zip file version, how can I register the pg_ctl command as a Windows service?
By the way, I'm using the following line to start the server:
"D:/Program Files/PostgreSQL/9.0.4/bin/pg_ctl.exe" -D "D:/Program Files/PostgreSQL/9.0.4/db_data" -l logfile start
Thanks in advance.
Use the register parameter for the pg_ctl program.
The data directory should not be stored in Program Files; the location pointed to by %ProgramData% is a good choice, for example.
pg_ctl.exe register -N PostgreSQL -U some_windows_username -P windows_password -D "%ProgramData%/db_data" ...
In newer versions of Postgres, a separate Windows account is no longer necessary, so the following is also sufficient
pg_ctl.exe register -N PostgreSQL -D "%ProgramData%/db_data" ...
Details are in the manual: http://www.postgresql.org/docs/current/static/app-pg-ctl.html
You need to make sure the directory D:/Program Files/PostgreSQL/9.0.4/db_data has the correct privileges for the Windows user you specify with the -U flag.
Btw: it is a bad idea to store program data in Program Files. You should move the data directory somewhere outside of Program Files, because Program Files is usually highly restricted for regular users, and for a very good reason.
Just run 'Command Prompt' as a Windows administrator and run the command below:
pg_ctl.exe register -N PostgreSQL -D "D:/Program Files/PostgreSQL/9.0.4/db_data"
You don't need to specify a User and Password, as previous answers have suggested.
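Once registered (under the service name PostgreSQL in the examples above), the service can be controlled from an elevated prompt like any other Windows service, and removed again with pg_ctl unregister:
net start PostgreSQL
net stop PostgreSQL
pg_ctl.exe unregister -N PostgreSQL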

How do I copy symbolic links between servers?

We are moving web servers. (LAMP)
Webserver 1 has hundreds of symbolic links pointing to files in a different directory (e.g. ../../files/001.png). When we moved over to the new server (I downloaded the site to my computer and then re-uploaded it to Webserver 2 using the SFTP client Transmit), the symbolic links were not copied...
Is there a better way to get the symbolic links from one server to another apart from recreating them on the new server?
rsync -a from one server to the other will preserve file attributes and symlinks. Run it on Webserver 1, since rsync cannot have both the source and the destination remote:
rsync -av /path/to/source user@server2:/path/to/target
Something like the following? On Webserver 1:
tar czf - the_directory | (ssh Webserver2 "cd /path/to/wherever && tar xzf -")
This creates a tar of the stuff to copy to STDOUT, and pipes it to an untar on the other server over ssh. It can be faster than a recursive ssh copy too.