I'm looking for a way to back up a database through a LAN on a mounted drive on a workstation. Basically a Bash script on the workstation or the server to dump a single database into a path on that volume. The volume isn't mounted normally, so I'm not clear as to which box to put the script on, given username/password and mounted volume permissions/availability.
The problem I currently have is permissions on the workstation:
myfile="/volumes/Dragonfly/PG_backups/serverbox_PG_mydomain5myusername_$(date +%Y_%m_%d_%H_%M).sql"
pg_dump -h serverbox.local -U adminuser -w dbname > "$myfile"
Is there a syntax I can provide for this? I've read the docs and there is no provision for a password, which is kind of expected. I also don't want to echo the password and keep it in a shell script. Or is there another way of doing this using rsync after the backups are done locally? Cheers
First, note the pg_dump command you are using includes the -w option, which means pg_dump will not issue a password prompt. This is indeed what you want for unattended backups (i.e. performed by a script). But you just need to make sure you have authentication set up properly. The options here are basically:
Set up a ~/.pgpass file on the host the dump is running from. Based on what you have written, you should keep this file in the home directory of the machine this backup job runs on, not stored somewhere on the mounted volume. The line in this file should look like:
serverbox.local:5432:database:adminuser:password
Remember that the database field must match the name of the database you are backing up (dbname in your example pg_dump command); you can also use * here to match any database. A minimal setup sketch follows this list of options.
Fool with your Postgres server's pg_hba.conf file so that connections from your backup machine as your backup user don't require a password, using something like trust or ident authentication. Be careful here, of course: if you don't fully trust the host your backups are running on (e.g. it's a shared machine), this isn't a good idea.
Set environment variables on the server such as PGPASSWORD that are visible to your backup script. Using a ~/.pgpass file is generally recommended instead for security reasons.
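For the first option, here is a minimal sketch, assuming the script runs on the workstation, the volume is already mounted, and the server listens on the default port 5432; the literal password and paths are placeholders:

#!/bin/bash
# Minimal sketch of the ~/.pgpass approach; port, paths and the literal
# password below are placeholders for your own values.

# One-time setup: libpq ignores ~/.pgpass unless it is readable only by you.
echo 'serverbox.local:5432:dbname:adminuser:yourpassword' >> ~/.pgpass
chmod 0600 ~/.pgpass

# Unattended dump: -w never prompts, the password is read from ~/.pgpass.
myfile="/volumes/Dragonfly/PG_backups/serverbox_PG_$(date +%Y_%m_%d_%H_%M).sql"
pg_dump -h serverbox.local -U adminuser -w dbname > "$myfile"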
Or is there another way of doing this using rsync after the backups are done locally?
Not sure what you are asking here -- you of course have to specify credentials for pg_dump before the backup can take place, not afterwards. And pg_dump is just one of many backup options; there are other methods that would work if you have SSH/rsync access to the Postgres server, such as file-system level backups. These kinds of backups (aka "physical" level) are complementary to pg_dump ("logical" level), and you could use either or both methods depending on your level of paranoia and sophistication.
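If the idea is instead to dump locally on the server and then pull the file across the LAN, a hedged sketch (the host names, paths and user names below are assumptions, not your exact setup):

# Sketch: dump locally on the Postgres server, then pull new dumps to the
# workstation with rsync over SSH.

# On the server (e.g. from cron), credentials coming from ~/.pgpass:
backup_dir=/var/backups/postgres
pg_dump -U adminuser -w dbname > "$backup_dir/dbname_$(date +%Y_%m_%d_%H_%M).sql"

# On the workstation, copy anything new onto the mounted volume:
rsync -av --ignore-existing adminuser@serverbox.local:/var/backups/postgres/ \
      /volumes/Dragonfly/PG_backups/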
Got it to work with ~/.pgpass, pg_hba.conf on the server, and a script that set the TERM environment variable (xterm) and used the full path to pg_dump.
There is no login environment for the crontab, even for the current admin user, so the job runs a bit blind.
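For reference, a hedged example of what such a cron entry might look like; the script path, log path and schedule below are assumptions:

# crontab -e on the box running the backup; cron provides almost no
# environment, so set what the script needs and use absolute paths.
TERM=xterm
PATH=/usr/local/bin:/usr/bin:/bin
30 2 * * * /usr/local/bin/pg_backup.sh >> /var/log/pg_backup.log 2>&1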
I am new to programming. I used PostgreSQL because I needed a database program.
I made a data file using initdb.exe
initdb.exe -U postgres -A password -E utf8 -W -D D:\Develop\postgresql-10.17-2-windows-x64-binaries\data
This is what the documentation calls a database cluster.
I have put a lot of data into it.
Now I want to transfer the data to another computer and use it.
How do I import and use files created using a cluster?
I want to register and use it in pgAdmin4.
What should I do?
I am using Windows 10. I need a way to load this cluster on the other computer.
As long as you want to transfer to another 64-bit Windows system, you can just shut down the server and copy the data directory. Register a service with pg_ctl register if you want.
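A rough sketch of that route, reusing the data directory from your initdb command; the target path D:\pgdata and the service name are assumptions, and the same PostgreSQL 10 binaries must be installed on the new machine:

REM 1. Stop the server cleanly on the old machine:
pg_ctl stop -D D:\Develop\postgresql-10.17-2-windows-x64-binaries\data
REM 2. Copy that whole data directory to the new machine (e.g. to D:\pgdata),
REM    then register and start it as a service there:
pg_ctl register -N postgresql-10 -D D:\pgdata
net start postgresql-10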
To copy the data to a different operating system, you have to use pg_dumpall and restore with psql. pgAdmin won't help you there (it is not an administration tool, as its name would suggest).
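If the target machine runs a different operating system or architecture, a sketch of the dump-and-restore route; run the dump on the old machine and the restore against a freshly initialized cluster on the new one (the -U user is an assumption):

REM On the old machine: dump the whole cluster, including roles, to a SQL file.
pg_dumpall -U postgres -f cluster.sql
REM On the new machine: replay it with psql.
psql -U postgres -d postgres -f cluster.sql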
We are loading data from S3 to Redshift, but providing the Redshift username and password on the command line.
Can we do this in a role-based way instead? Hard-coding the username and password in code is a security vulnerability.
psql -h $redshift_jdbc_url -U $redshift_db_username -d $redshift_dbname -p $port_number -c "copy $destinationTable$columnList from '$s3fileName' credentials 'aws_iam_role=arn:aws:iam::$account_number:role/$s3role;master_symmetric_key=$master_key' region '$s3region' format as json '$jsonPathFile' timeformat 'auto' GZIP TRUNCATECOLUMNS maxerror $maxError";
Though this question has nothing to do specifically with Redshift, there are multiple options to avoid a username/password being checked into a code repository (CVS, Git, etc.) by mistake or otherwise shared.
Not sure whether what we do (as stated below) is best practice or not, but here is how we do it, and I think it's safe.
We use environment variables in our case; they live outside the source code repository, and the shell script reads them only from the particular instance's environment.
For example, a shell script that executes the above command would load the environment file like below (e.g. psql.sh):
#!/bin/bash
echo "Loading environment variables"
. "$HOME/.env"
# ... your other commands (e.g. the psql COPY above) go here
The env file could have variables like below,
#!/bin/bash
export REDSHIFT_USER="xxxxxxxxx"
export REDSHIFT_PASSWORD="xxxxxx"
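The script can then pass those variables on to psql. Since psql has no password flag, the usual route is the PGPASSWORD variable (or a ~/.pgpass file); a hedged continuation of psql.sh, where the remaining variable names and the SQL file are assumptions:

# Continuing psql.sh after the env file has been sourced:
export PGPASSWORD="$REDSHIFT_PASSWORD"
psql -h "$redshift_jdbc_url" -U "$REDSHIFT_USER" -d "$redshift_dbname" \
     -p "$port_number" -f copy_command.sql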
There are other options too, but not sure if they work well with Redshift.
A .pgpass file to store the password; refer to the link below.
http://www.postgresql.org/docs/current/static/libpq-pgpass.html
"trust authentication" for that specific user, refer below link.
http://www.postgresql.org/docs/current/static/auth-methods.html#AUTH-TRUST
Hope that answers your question.
Approach 1:
Generate a temporary username/password with a TTL as part of your script, and use those temporary credentials to connect to the DB.
Reference From AWS documentation
https://docs.aws.amazon.com/cli/latest/reference/redshift/get-cluster-credentials.html
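A hedged sketch with the AWS CLI; the cluster identifier, user and database names are placeholders, and jq is assumed to be available for parsing the JSON response:

# Request short-lived credentials (TTL set via --duration-seconds):
creds=$(aws redshift get-cluster-credentials \
    --cluster-identifier my-redshift-cluster \
    --db-user temp_copy_user --db-name mydb --duration-seconds 900)
# The response contains DbUser and DbPassword; hand them to psql:
export PGUSER=$(echo "$creds" | jq -r .DbUser)
export PGPASSWORD=$(echo "$creds" | jq -r .DbPassword)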
Approach 2:
Use the AWS Secrets Manager service.
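For example, the script could fetch the password at run time instead of hard-coding it; the secret name and its JSON key below are assumptions:

# Pull the password from Secrets Manager and expose it only to this process:
export PGPASSWORD=$(aws secretsmanager get-secret-value \
    --secret-id redshift/copy_user \
    --query SecretString --output text | jq -r .password)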
We are trying to encrypt Postgres data at rest. I can't find any documentation on encrypting the Postgres data folder using LUKS with dm-crypt.
No special instructions are necessary – PostgreSQL will use the opened encrypted filesystem just like any other file system. Just point initdb to a directory in the opened file system, and it will create a PostgreSQL cluster there.
Automatic server restarts will fail, because you need to enter the passphrase.
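For illustration, a hedged sketch of the setup on a dedicated partition; the device name, mapper name and mount point are assumptions:

# Create and open the LUKS volume, then initialize Postgres inside it.
cryptsetup luksFormat /dev/sdb1
cryptsetup open /dev/sdb1 pgcrypt            # prompts for the passphrase
mkfs.ext4 /dev/mapper/pgcrypt
mkdir -p /mnt/pgcrypt
mount /dev/mapper/pgcrypt /mnt/pgcrypt
mkdir /mnt/pgcrypt/data && chown postgres:postgres /mnt/pgcrypt/data
sudo -u postgres initdb -D /mnt/pgcrypt/data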
Of all the ways to protect a database, encrypting the file system is the least useful:
Usually, attacks on a database happen via the client, normally with SQL injection. Encrypting the file system won't help.
The other common attack vector is backups: backups taken with pg_dump or pg_basebackup are not encrypted.
But I guess you know why you need it.
We are developing an app on OpenShift.
We recently upgraded it and made it scalable, separating PostgreSQL onto a gear separate from the Node.js one.
In the app, the user can choose a CSV file and upload it to the server ($OPENSHIFT_DATA_DIR).
We then execute from within Node.js:
copy uploaded_data FROM '/var/lib/openshift/our_app_id/app-root/data/uploads/table.csv' WITH CSV HEADER
Since the upgrade, the above COPY command is broken; we are getting this error:
[error: could not open file "/var/lib/openshift/our_app_id/app-root/data/uploads/table.csv" for reading: No such file or directory]
I suppose that because PostgreSQL is now on a separate gear, it cannot access $OPENSHIFT_DATA_DIR.
Can I make this folder visible to PostgreSQL (even though it is on a separate gear)?
Is there any other folder that is visible to both the DB and the app (each on its own gear)?
Can you suggest alternative ways to achieve similar functionality?
There is currently no shared disk space between gears within the same scaled application on OpenShift Online. If you want to store a file and access it on multiple gears, the best way would probably be to store it on Amazon S3 or some other shared file storage service that is accessible by all of your gears, or, as you have stated, store the data in the database and access it wherever you need it.
You can do this by using \COPY and psql. e.g.
First put your SQL commands in a file (file.sql), then run:
psql -h yourremotehost -d yourdatabase -p thedbport -U username -w -f file.sql
The -w option eliminates the password prompt. If you need a password, you can't supply it on the command line; instead, set the environment variable PGPASSWORD to your password. (The use of PGPASSWORD is discouraged in the docs, but it still works.)
You can do this with rhc
rhc set-env PGPASSWORD=yourpassword -a yourapp
Here is a sample file.sql; the data file it copies from (table.csv, the uploaded CSV from the question) must exist alongside it:
CREATE TABLE junk(id integer, val float, name varchar(100));
\COPY junk FROM 'table.csv' WITH CSV HEADER
Notice there is NO semicolon at the end of the second line.
If you're running this command from a script in your application, the file that contains your data and file.sql must both be in your application's data directory,
i.e. app-root/data
I would like to back up my production database before and after running a database migration, from my deploy server (not the database server). I've got a PostgreSQL 8.4 server sitting on a CentOS 5 machine. The website accessing the database is on a Windows 2008 server running an MVC.Net application; it checks out changes in the source code, compiles the project, runs any DB changes, then deploys to IIS.
I have the DB server set up to do a crontab job backup for daily backups, but I also want a way of calling a backup from the deploy server during the deploy process. From what I can figure out, there isn't a way to tell the database from a client connection to back itself up. If I call pg_dump from the web server as part of the deploy script it will create the backup on the web server (not desirable). I've looked at the COPY command, and it probably won't give me what I want. MS SQLServer lets you call the BACKUP command from within a DB Connection which will put the backups on the database machine.
I found this post about MySQL, and that it's not a supported feature in MySQL. Is Postgres the same? Remote backup of MySQL database
What would be the best way to accomplish this? I thought about creating a small application that makes an SSH connection to the DB Server, then calls pg_dump? This would mean I'm storing SSH connection information on the server, which I'd really rather not do if possible.
Create a database user pgbackup and assign him read-only privileges to all your database tables.
Set up a new OS user pgbackup on the CentOS server with a /bin/bash shell.
Log in as pgbackup, create a pair of SSH authentication keys without a passphrase, and allow this user to log in using the generated key:
su - pgbackup
ssh-keygen -q -t rsa -f ~/.ssh/id_rsa -N ""
cp -a ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
Create a file ~pgbackup/.bash_profile:
exec pg_dump databasename --file=`date +databasename-%F-%H-%M-%S-%N.sql`
Set up your script on Windows to connect using SSH and authenticate with the private key. It will not be able to do anything besides creating a database backup, so it should be reasonably safe.
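On the Windows side, the deploy step then only needs an SSH client and a copy of the private key generated above (here called deploy_backup_key, an assumed name; plink.exe from PuTTY would work the same way). The key can do nothing except trigger the .bash_profile above, which runs pg_dump on the DB server:

ssh -i deploy_backup_key pgbackup@centos-db-server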
I think this could be possible if you create a trigger that uses the PostgreSQL module dblink to make a remote database connection from within PL/pgSQL.
I'm not sure what you mean but I think you can just use pg_dump from your Windows computer:
pg_dump --host=centos-server-name > backup.sql
You'd need to install the Windows version of PostgreSQL there so that pg_dump.exe is available, but you don't need to start the PostgreSQL service or even create a database cluster there.
Hi Mike, you are correct.
Using pg_dump, we can save the backup only on the local system. In our case we created a script on the DB server for taking the base backup, and an expect script on another server that runs that script on the database server.
All our servers are Linux servers; we have done this using shell scripts.