I have a question about a PostgreSQL cluster

I am new to programming. I used PostgreSQL because I needed a database program.
I created a data directory using initdb.exe:
initdb.exe -U postgres -A password -E utf8 -W -D D:\Develop\postgresql-10.17-2-windows-x64-binaries\data
The directory created this way is called a database cluster.
I have put a lot of data into this cluster.
Now I want to transfer the data to another computer and use it there.
How do I import and use the files created for the cluster?
I want to register and use it in pgAdmin 4.
What should I do?
I am using a Windows 10 operating system. I need a way to load this cluster on the other machine.

As long as you are transferring to another 64-bit Windows system, you can just shut down the server and copy the data directory. Register a service with pg_ctl register if you want.
To copy the data to a different operating system, you have to use pg_dumpall and restore with psql. pgAdmin won't help you there (it is not an administration tool, as its name would suggest).
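For the cross-OS case, a minimal sketch of the pg_dumpall route; the file name and -U user below are placeholders, not from the question:

```shell
# On the source machine (old cluster still running): dump all databases
# plus global objects such as roles into one SQL script.
pg_dumpall -U postgres -f all_databases.sql

# On the target machine, after running initdb there: replay the script
# into the new cluster with psql.
psql -U postgres -d postgres -f all_databases.sql
```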

Related

How to backup and restore PostgreSQL programmatically without using pg_dump and psql?

I have a Kubernetes space where the PostgreSQL DB is on a different pod than the executor where my TypeScript code runs, which means I do not have access to pg_dump and psql. Is there a TypeScript npm library that can back up and restore the DB programmatically and remotely, or a special client I can install in the project? I could not find such an npm library or client on the internet.
Another thing I can think of is some "universal" SQL script that performs the backup, and another one for the restore, if such a thing exists at all.

PGAdmin restore remote database [duplicate]

This question already has answers here:
Export and import table dump (.sql) using pgAdmin
(6 answers)
Closed 1 year ago.
Let me first state that I am not a DBA guy, but I do have a question regarding restoring remote databases using pgAdmin.
I have pgAdmin (v4.27) running in a Docker container and use it to maintain two separate PostgreSQL databases, each also running in a Docker container. I installed pgAgent in both database containers and run scheduled daily backups, defined via pgAdmin and stored in the container of each corresponding database. So far so good.
Now I want to restore one of these databases from the latest daily backup file (*.sql), but the Restore dialog of pgAdmin only looks for files stored locally (in the pgAdmin container).
Whatever I tried or searched for on the internet, it seems impossible to show a list of remote backup files in pgAdmin or to manually run a remote SQL file. Is this even possible in pgAdmin? Running psql in the query editor is not possible (duh ...), and since I cannot reach the remote SQL restore file, I have no clue how to run this code within pgAdmin against the corresponding remote database container.
The only solution I can think of so far is scheduling a restore with no calendar that is triggered manually when needed, but it's not the prettiest solution.
Am I missing something, did I overlook the right documentation, or have I created a silly, unmaintainable setup?
Thanks in advance for thinking along and kind regards,
Aad Dijksman
You cannot restore a plain format dump (an SQL script) with pgAdmin. You will have to use psql, the command line client.
COPY statements and data are mixed in such a dump, and that would make pgAdmin choke.
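A minimal sketch of that restore, assuming the dump sits inside the database container; the container name, database, user, and path are placeholders:

```shell
# Run psql inside the database container, where the backup file lives.
# pgAdmin cannot replay a plain-format (SQL script) dump, but psql can.
docker exec -i my_db_container psql -U postgres -d targetdb -f /backups/daily_backup.sql
```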
The solution by @Laurenz Albe points out that it is best to use the command-line psql here, and that would be my first go-to.
However, if for whatever reason you don't have access to the command line and can only connect to this database via pgAdmin, there is another solution, which you can find here:
Export and import table dump (.sql) using pgAdmin
I recommend looking at the solution by Tomas Greif.

PostgreSQL pg_dump through LAN

I'm looking for a way to back up a database through a LAN on a mounted drive on a workstation. Basically a Bash script on the workstation or the server to dump a single database into a path on that volume. The volume isn't mounted normally, so I'm not clear as to which box to put the script on, given username/password and mounted volume permissions/availability.
The problem I currently have is permissions on the workstation:
myfile="/volumes/Dragonfly/PG_backups/serverbox_PG_mydomain5myusername_$(date +%Y_%m_%d_%H_%M).sql"
pg_dump -h serverbox.local -U adminuser -w dbname > "$myfile"
Is there a syntax I can use for this? I read the docs, and there is no provision for a password on the command line, which is kind of expected. I also don't want to echo the password and keep it in a shell script. Or is there another way of doing this using rsync after the backups are done locally? Cheers
First, note the pg_dump command you are using includes the -w option, which means pg_dump will not issue a password prompt. This is indeed what you want for unattended backups (i.e. performed by a script). But you just need to make sure you have authentication set up properly. The options here are basically:
Set up a ~/.pgpass file on the host the dump is running from. Based on what you have written, you should keep this file in the home directory of the server this backup job runs on, not stored somewhere on the mounted volume. Based on the info in your example, the line in this file should look like:
serverbox.local:5432:database:adminuser:password
Remember to specify the database name that you are backing up! This was not specified in your example pg_dump command.
Fool with your Postgres server's pg_hba.conf file so that connections from your backup machine as your backup user don't require a password, but use something like trust or ident authentication. Be careful here of course, if you don't fully trust the host your backups are running on (e.g. it's a shared machine), this isn't a good idea.
Set environment variables on the server such as PGPASSWORD that are visible to your backup script. Using a ~/.pgpass file is generally recommended instead for security reasons.
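For the pg_hba.conf route, a line like the following would allow passwordless connections; the client address and database name here are assumptions, and trust should only be used for hosts you fully control:

```
# TYPE  DATABASE  USER       ADDRESS           METHOD
host    dbname    adminuser  192.168.1.10/32   trust
```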
Or is there another way of doing this using rsync after the backups are done locally?
Not sure what you are asking here -- you of course have to specify credentials for pg_dump before the backup can take place, not afterwards. And pg_dump is just one of many backup options, there are other methods that would work if you have SSH/rsync access to the Postgres server, such as file-system level backups. These kinds of backups (aka "physical" level) are complementary to pg_dump ("logical" level), you could use either or both methods depending on your level of paranoia and sophistication.
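Put together, a minimal unattended-backup sketch under those assumptions; the mount point, host, user, and database name are all placeholders taken from the question:

```shell
#!/bin/bash
# Requires a matching ~/.pgpass entry (chmod 600) on this machine so that
# pg_dump's -w (never prompt for a password) flag can succeed.
backup_dir='/volumes/Dragonfly/PG_backups'
stamp=$(date +%Y_%m_%d_%H_%M)          # e.g. 2021_07_08_02_30
myfile="${backup_dir}/serverbox_PG_mydomain5myusername_${stamp}.sql"
pg_dump -h serverbox.local -U adminuser -w dbname > "$myfile"
```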
Got it to work with ~/.pgpass, pg_hba.conf on the server, and a script that included the TERM environment variable (xterm), and a path to pg_dump.
There is no login for the crontab, even as the current admin user. So it's running a bit blind.

Use the PostgreSQL COPY command on OpenShift ($OPENSHIFT_DATA_DIR) from within a Node.js program

We are developing an app on OpenShift.
We recently upgraded it and made it scalable, separating PostgreSQL into a different gear than the Node.js one.
In the app, a user can choose a CSV file and upload it to the server ($OPENSHIFT_DATA_DIR).
We then execute from within Node.js:
copy uploaded_data FROM '/var/lib/openshift/our_app_id/app-root/data/uploads/table.csv' WITH CSV HEADER
Since the upgrade, the above COPY command is broken; we are getting this error:
[error: could not open file "/var/lib/openshift/our_app_id/app-root/data/uploads/table.csv" for reading: No such file or directory]
I suppose that because PostgreSQL is now on a separate gear, it cannot access $OPENSHIFT_DATA_DIR.
Can I make this folder visible to PostgreSQL (though it is on a separate gear)?
Is there any other folder that is visible to both the DB and the app (each on its own gear)?
Can you suggest alternative ways to achieve similar functionality?
There is currently no shared disk space between gears within the same scaled application on OpenShift Online. If you want to store a file and access it on multiple gears, the best way would probably be to store it on Amazon S3 or some other shared file storage service that is accessible by all of your gears, or, as you have stated, store the data in the database and access it wherever you need it.
You can do this by using \COPY and psql, e.g.
First, put your SQL commands in a file (file.sql), then run:
psql -h yourremotehost -d yourdatabase -p thedbport -U username -w -f file.sql
The -w flag eliminates the password prompt. If you need a password, you can't supply it on the command line. Instead, set the environment variable PGPASSWORD to your password. (The use of PGPASSWORD has been deprecated, but it still works.)
You can do this with rhc
rhc set-env PGPASSWORD=yourpassword -a yourapp
Here is a sample file.sql (note that values is a reserved word, so it must be quoted as a column name):
CREATE TABLE junk(id integer, "values" float, name varchar(100));
\COPY junk FROM 'table.csv' WITH CSV HEADER
Notice there is NO semicolon at the end of the second line.
If you're running this command from a script in your application, the file that contains your data and file.sql must both be in your application's data directory, i.e. app-root/data.
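Putting the pieces above together, the invocation from the Node.js gear might look like this; the OPENSHIFT_POSTGRESQL_* variable names follow the cartridge's convention, and the database and user names are placeholders:

```shell
# file.sql (in app-root/data) holds the CREATE TABLE and \COPY commands.
# \COPY reads the CSV client-side, so the file only needs to exist on
# this gear, not on the database gear.
psql -h "$OPENSHIFT_POSTGRESQL_DB_HOST" -p "$OPENSHIFT_POSTGRESQL_DB_PORT" \
     -U username -d yourdatabase -w -f "$OPENSHIFT_DATA_DIR/file.sql"
```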

Remote trigger a postgres database backup

I would like to back up my production database before and after running a database migration from my deploy server (not the database server). I've got a PostgreSQL 8.4 server sitting on a CentOS 5 machine. The website accessing the database is on a Windows 2008 server running an MVC.NET application; it checks out changes in the source code, compiles the project, runs any DB changes, then deploys to IIS.
I have the DB server set up to do a crontab job backup for daily backups, but I also want a way of calling a backup from the deploy server during the deploy process. From what I can figure out, there isn't a way to tell the database from a client connection to back itself up. If I call pg_dump from the web server as part of the deploy script it will create the backup on the web server (not desirable). I've looked at the COPY command, and it probably won't give me what I want. MS SQLServer lets you call the BACKUP command from within a DB Connection which will put the backups on the database machine.
I found this post about MySQL, and that it's not a supported feature in MySQL. Is Postgres the same? Remote backup of MySQL database
What would be the best way to accomplish this? I thought about creating a small application that makes an SSH connection to the DB Server, then calls pg_dump? This would mean I'm storing SSH connection information on the server, which I'd really rather not do if possible.
Create a database user pgbackup and grant it read-only privileges on all your database tables.
Setup a new OS user pgbackup on CentOS server with a /bin/bash shell.
Log in as pgbackup, create a pair of SSH authentication keys without a passphrase, and allow this user to log in using the generated key:
su - pgbackup
ssh-keygen -q -t rsa -f ~/.ssh/id_rsa -N ""
cp -a ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
Create a file ~pgbackup/.bash_profile:
exec pg_dump databasename --file=`date +databasename-%F-%H-%M-%S-%N.sql`
Set up your script on Windows to connect using ssh and authenticate using the private key. It will not be able to do anything besides creating a database backup, so it should be reasonably safe.
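With that in place, the deploy-side trigger reduces to a single SSH call; the key path and host name below are assumptions:

```shell
# Because ~pgbackup/.bash_profile exec's pg_dump, this session does
# nothing but write the dump on the DB server and then exit.
ssh -i /path/to/pgbackup_id_rsa pgbackup@db-server.example.com
```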
I think this could be possible if you create a trigger that uses the PostgreSQL module dblink to make a remote database connection from within PL/pgSQL.
I'm not sure what you mean but I think you can just use pg_dump from your Windows computer:
pg_dump --host=centos-server-name > backup.sql
You'd need to install the Windows version of PostgreSQL there, so pg_dump.exe is available, but you don't need to start the PostgreSQL service or even initialize a database cluster there.
Hi Mike, you are correct.
Using pg_dump, we can save the backup only on the local system. In our case, we created a script on the DB server that takes the base backup, and an expect script on another server that runs that script on the database server.
All our servers are Linux servers; we did this using a shell script.