Connect to a remote PostgreSQL server using FTP

I exported a database (using pg_dump) to a folder on a local FTP server in order to save some space on my computer. Now I want to connect to the generated db.sql file without importing it again.
In pgAdmin I created a new database, opened the psql tool, and executed \i /path/todb/db.sql, which imports the database onto my computer. However, what I need is a remote connection over FTP.
I'd appreciate any help.
Thanks.

Related

Is there a way to dump all Postgres databases from localhost to a remote server without restoration in Ubuntu?

I want to dump all my PostgreSQL databases from my localhost directly to a remote server as a simple .sql file, and create a cron job for it. Is this possible?
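No answer is recorded here, but one plausible approach is to stream pg_dumpall straight to the remote machine over ssh, so no dump file is kept locally; the user, host, and paths below are assumptions:
#!/bin/sh
# Dump every database in the local cluster and stream the output
# over ssh to the remote server; nothing is written to local disk.
pg_dumpall -U postgres | ssh backup@remote.example.com "cat > /backups/all-databases-$(date +%F).sql"
A crontab entry such as 0 2 * * * /usr/local/bin/dump-to-remote.sh (script name is hypothetical) would then run it nightly.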

How to COPY a local file to a remote database

I have a remote PostgreSQL database and a local CSV file which I need to add to the database. I'm trying to do it with PyCharm.
Thus, I'm trying to copy data from a local file to a remote database.
If the database is local, then this command works:
COPY master_relationsextra(code, serial_number, member_type, characteristic, price_list)
FROM '/Users/name/Desktop/AUTOI.csv' with CSV HEADER delimiter ';' encoding 'ISO-8859-1';
But for the remote database it doesn't work.
Any advice on how I can do that?
Since I'm using PyCharm, I did it with PyCharm's help; PyCharm generated all the queries and commands for me. I did it as follows:
I connected to the remote database from PyCharm's database pane
Right-clicked the table and then chose Import from file
Chose all the rules and imported
That did the trick for me.
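Outside of PyCharm, the same client-side load can be done with psql's \copy, which reads the file on your machine and streams it over the connection to the remote server. A minimal sketch, reusing the table, columns, and options from the question; the host, user, and database names are placeholders:
psql -h remote-host -U username -d mydatabase -c "\copy master_relationsextra(code, serial_number, member_type, characteristic, price_list) from '/Users/name/Desktop/AUTOI.csv' with csv header delimiter ';' encoding 'ISO-8859-1'"
Unlike server-side COPY, \copy needs no file access on the database host, so it works against a remote database.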

How to connect to PostgreSQL from files?

OK, so I have a client who has a PostgreSQL database. He sent me the files (they look like this: http://prntscr.com/fbyz2l).
I have pgAdmin 4 on my Windows 10 box. I also have Postgres installed locally.
I have the database name and login information... but I can't figure out how to connect to the database.
I am guessing it is pretty simple, but I am having a tough time googling the right thing to get some help.
Update
I am still hunting this, but my feeling is the files are not in the right format to import or bring onto my localhost. I am asking the client for a backup file that pgAdmin can make. If anyone has input on this, I am all ears.
Update 2
So I copied all the files to C:\Program Files\PostgreSQL\data\pg96\base
Restarted the server. When I do a
psql -h "C:\Program Files\PostgreSQL\data\pg96\base" -l
I get this:
http://prntscr.com/fbzxf2
I can connect to template1 and postgres, but neither of them is my database (184429). Ugh...
Thanks!
These files are basically a copy of the Postgres data directory. You need to restore it on your machine.
You can find the default path of the Postgres data directory at C:\Program Files\PostgreSQL\some version\data. Place the root of these files within that directory and restart your Postgres service.
After that you will find the new database in pgAdmin.
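A minimal sketch of that stop/copy/start cycle using pg_ctl, assuming the files really are a complete data-directory copy from the same PostgreSQL major version and platform (the path follows the question; adjust it to your install):
pg_ctl stop -D "C:/Program Files/PostgreSQL/data/pg96"
# copy the client's files into the data directory here (e.g. under base/)
pg_ctl start -D "C:/Program Files/PostgreSQL/data/pg96"
Note that copying individual database subdirectories out of base/ between installations generally won't work; a file-level restore needs the whole cluster directory.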

Remote trigger a postgres database backup

I would like to back up my production database before and after running a database migration from my deploy server (not the database server). I've got a PostgreSQL 8.4 server sitting on a CentOS 5 machine. The website accessing the database is on a Windows 2008 server running an MVC.NET application; it checks out changes in the source code, compiles the project, runs any DB changes, then deploys to IIS.
I have the DB server set up with a crontab job for daily backups, but I also want a way of triggering a backup from the deploy server during the deploy process. From what I can figure out, there isn't a way to tell the database, from a client connection, to back itself up. If I call pg_dump from the web server as part of the deploy script, it will create the backup on the web server (not desirable). I've looked at the COPY command, and it probably won't give me what I want. MS SQL Server lets you call the BACKUP command from within a DB connection, which puts the backups on the database machine.
I found this post about MySQL, which says it's not a supported feature in MySQL. Is Postgres the same? Remote backup of MySQL database
What would be the best way to accomplish this? I thought about creating a small application that makes an SSH connection to the DB server and then calls pg_dump, but that would mean storing SSH connection information on the web server, which I'd really rather not do if possible.
Create a database user pgbackup and assign it read-only privileges on all your database tables.
Set up a new OS user pgbackup on the CentOS server with a /bin/bash shell.
Log in as pgbackup and create an ssh key pair without a passphrase, then allow this user to log in using the generated key:
su - pgbackup
ssh-keygen -q -t rsa -f ~/.ssh/id_rsa -N ""
cp -a ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
Create a file ~pgbackup/.bash_profile:
exec pg_dump databasename --file=`date +databasename-%F-%H-%M-%S-%N.sql`
Set up your script on Windows to connect using ssh and authenticate with the private key. It will not be able to do anything besides creating a database backup, so it should be reasonably safe.
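Because .bash_profile execs pg_dump, simply logging in as pgbackup runs the backup on the DB server and then disconnects. The Windows-side trigger can therefore be a single ssh call; the key path and host name here are assumptions:
# logging in runs the exec'd pg_dump on the DB server, then the session ends
ssh -i C:/deploy/keys/id_rsa pgbackup@centos-server-name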
I think this could be possible if you create a trigger that uses the PostgreSQL module dblink to make a remote database connection from within PL/pgSQL.
I'm not sure what you mean but I think you can just use pg_dump from your Windows computer:
pg_dump --host=centos-server-name databasename > backup.sql
You'd need to install the Windows version of PostgreSQL there so that pg_dump.exe is available, but you don't need to start the PostgreSQL service or even create a database cluster there.
Hi Mike, you are correct.
Using pg_dump, we can save the backup only on the local system. In our case we created a script on the DB server that takes the base backup, and an expect script on another server that runs the script on the database server.
All our servers are Linux servers; we did this using shell scripts.

Can PostgreSQL COPY read CSV from a remote location?

I've been using JDBC with a local Postgres DB to copy data from CSV files into the database with the Postgres COPY command. I use Java to parse the existing CSV file into a CSV format that matches the tables in the DB, save the parsed CSV to my local disk, and then have JDBC execute a COPY command loading the parsed CSV into my local DB. Everything works as expected.
Now I'm trying to perform the same process on a Postgres DB on a remote server using JDBC. However, when JDBC tries to execute the COPY I get:
org.postgresql.util.PSQLException: ERROR: could not open file "C:\data\datafile.csv" for reading: No such file or directory
Am I correct in understanding that the COPY command tells the DB to look for this file locally, i.e. the remote server is looking on its own C: drive (where the file doesn't exist)?
If this is the case, is there any way to tell the COPY command to look on my computer rather than "locally" on the remote machine? Reading through the COPY documentation I didn't find anything that indicated this functionality.
If the functionality doesn't exist, I'm thinking of just populating the whole database locally and then copying the database to the remote server, but I wanted to check that I wasn't missing anything.
Thanks for your help.
Create your SQL file as follows on your client machine:
COPY testtable (column1, c2, c3) FROM STDIN WITH CSV;
1,2,3
4,5,6
\.
Then execute, on your client:
psql -U postgres -f /mylocaldrive/copy.sql -h remoteserver.example.com
If you use JDBC, the best solution for you is to use the PostgreSQL COPY API
http://jdbc.postgresql.org/documentation/publicapi/org/postgresql/copy/CopyManager.html
Otherwise (as already noted by others) you can use \copy from psql, which allows accessing local files on the client machine.
To my knowledge, COPY FROM 'file' can only read files on the machine where the database server is running; COPY FROM STDIN, by contrast, reads data streamed over the client connection (which is what psql's \copy uses).
You could make a shell script where you run your Java conversion, then use psql to issue a \copy command, which reads from a file on the client machine.
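A minimal sketch of such a wrapper script, reusing the table from the example above; the converter jar, connection details, and file names are assumptions:
#!/bin/sh
# convert the raw CSV into the table's format (csv-converter.jar is hypothetical)
java -jar csv-converter.jar datafile.csv parsed.csv
# \copy reads parsed.csv here on the client and streams it to the remote server
psql -h remoteserver.example.com -U postgres -d mydb -c "\copy testtable (column1, c2, c3) from 'parsed.csv' with csv"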