Bulk loading into PostgreSQL from a remote client

I need to bulk load a large file into PostgreSQL. I would normally use the COPY command, but this file needs to be loaded from a remote client machine. With MSSQL, I can install the local tools and use bcp.exe on the client to connect to the server.
Is there an equivalent way for PostgreSQL? If not, what is the recommended way of loading a large file from a client machine if I cannot copy the file to the server first?
Thanks.

The COPY command is supported in PostgreSQL protocol v3.0 (PostgreSQL 7.4 or newer).
The only thing you need to use COPY from a remote client is a libpq-enabled client, such as the psql command-line utility.
From the remote client run:
$ psql -d dbname -h 192.168.1.1 -U uname < yourbigscript.sql
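For this to stream the data over the connection, the script itself can use the COPY ... FROM STDIN form with the rows inline. A minimal sketch of what yourbigscript.sql might contain (the table and columns here are hypothetical):
COPY mytable (id, name) FROM STDIN WITH CSV;
1,alpha
2,beta
\.
psql sends the rows following the COPY line to the server and stops at the \. terminator; this is the same layout pg_dump emits in plain-text dumps.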

You can use the \copy command from the psql tool, like:
psql -h IP_REMOTE_POSTGRESQL -d DATABASE -U USER_WITH_RIGHTS -c "\copy TABLE(FIELD_LIST_SEPARATED_BY_COMMAS) from 'FILE_ON_CLIENT_MACHINE' with csv header"
(The file path is resolved on the client machine, relative to the directory psql is run from.)
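For example, with a hypothetical table users(id, name, email) and a users.csv sitting in the current directory on the client, the command might look like:
psql -h 192.168.1.1 -d mydb -U loader -c "\copy users(id, name, email) from 'users.csv' with csv header"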

Assuming you have some sort of client in order to run the query, you can use the COPY FROM STDIN form of the COPY command: http://www.postgresql.org/docs/current/static/sql-copy.html
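For example (table, host, and file names are hypothetical): psql feeds its own standard input to a COPY ... FROM STDIN statement, so you can redirect a local file into it:
psql -h 192.168.1.1 -d mydb -U uname -c "COPY mytable FROM STDIN WITH CSV HEADER" < /path/to/local/mytable.csv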

Use psql's \copy command to load the data:
$ psql -h <IP> -p <port> -U <username> -d <database>
database# \copy schema.tablename from '/home/localdir/bulkdir/file.txt' delimiter as '|'
database# \copy schema.tablename from '/home/localdir/bulkdir/file.txt' with csv header

Related

copy (pg_dump) .sql file to postgresql using Cygwin

I have downloaded Yelp's SQL files and tried to import them into my local PostgreSQL.
I am running PostgreSQL on Windows 10 and using Cygwin to execute commands that I found on Google (it took me forever to settle on Cygwin instead of the Windows psql shell).
Anyhow, Yelp provides the data schema as SQL and the data itself as SQL. You can find them at the link below:
https://www.yelp.com/dataset/download
So basically my plan was to create empty tables from Yelp's schema, then copy all of Yelp's data into those tables.
pg_dump -h localhost -p 5433 -U postgres -s mydb < D:/YELP/yelp_schema.sql
pg_dump -h localhost -p 5433 -U postgres -d mydb < D:/YELP/yelp_sql-2.tar
I checked my database and nothing has changed; I do not see the tables.
(Screenshot of the Cygwin terminal output not reproduced here.)
There is nothing in my PostgreSQL database.
Please let me know what I have missed.
Thanks a lot
Your link asks for an email address to download, so I did not check whether Yelp offers any advice on their data sets. Anyway: pg_dump extracts data from a database; it does not load it. To import the resulting files, use psql for plain SQL files and pg_restore for custom-format archives.
E.g.:
psql -h localhost -p 5433 -U postgres -d mydb -f D:/YELP/yelp_schema.sql
pg_restore -h localhost -p 5433 -U postgres -d mydb -Ft D:/YELP/yelp_sql-2.tar
https://www.postgresql.org/docs/current/static/app-pgdump.html
pg_dump — extract a PostgreSQL database into a script file or other
archive file
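Putting that together for this question, a sketch of the full sequence might be (assuming the target database is called mydb and that yelp_sql-2.tar really is a pg_dump tar archive):
createdb -h localhost -p 5433 -U postgres mydb
psql -h localhost -p 5433 -U postgres -d mydb -f D:/YELP/yelp_schema.sql
pg_restore -h localhost -p 5433 -U postgres -d mydb D:/YELP/yelp_sql-2.tar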

Postgresql Database export to .sql file

I want to export my database as a .sql file.
Can someone help me? The solutions I have found don't work.
A detailed description, please. I am on Windows 7.
pg_dump defaults to a plain SQL export, including both data and structure.
Open a command prompt and run:
pg_dump -U username -h localhost databasename > sqlfile.sql
Passing -h localhost is preferable because, without it, you will often hit an error like: ...FATAL: Peer authentication failed for user ...
On Windows, first make sure the PostgreSQL bin directory is on the PATH environment variable:
C:\Program Files\PostgreSQL\12\bin
After adding it to the path, restart cmd and run:
pg_dump -U username -p portnumber -d dbname -W -f location
This command exports both schema and data (-W just forces a password prompt).
For schema only, add -s; for data only, add -a.
Replace each placeholder (username, portnumber, dbname, and location) according to your situation.
Everything is case-sensitive, so make sure you type everything correctly.
To import:
psql -h hostname -p port_number -U username -f your_file.sql databasename
Make sure your database already exists, or that a creation query is present in the .sql file.
Documentation: https://www.postgresql.org/docs/current/app-pgdump.html
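For instance, a concrete round trip might look like this (the paths, names, and port are hypothetical, and mydb_copy must be created beforehand, e.g. with CREATE DATABASE mydb_copy;):
pg_dump -U postgres -p 5432 -d mydb -W -f C:\backup\mydb.sql
psql -h localhost -p 5432 -U postgres -d mydb_copy -f C:\backup\mydb.sql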
Go to your command line and run
pg_dump -U userName -h localhost -d databaseName > ~/Desktop/cmsdump.sql

Backing up PostgreSQL Database in Linux

I have to create a PostgreSQL database backup (schema only) on Linux and restore it on a Windows machine. I backed up the database with the -E option, but I was not able to restore it on the Windows machine.
Here is the command that I used to back up the database:
pg_dump -s -E SQL_ASCII books > books.backup
Below is the error message that I received when I tried to restore it:
C:/Program Files/PostgreSQL/9.3/bin\pg_restore.exe --host localhost --port 5432 --username "postgres" --dbname "books" --role "sa" --no-password --list "C:\progress_db\test1"
pg_restore: [archiver] input file appears to be a text format dump. Please use psql.
Am I supposed to use a different command or am I missing something? Your help is greatly appreciated.
The first lines in the official docs about restoring say:
The text files created by pg_dump are intended to be read in by the
psql program. The general command form to restore a dump is
psql dbname < infile
where infile is the file output by the pg_dump command
Or simply read the error message.
The option -E SQL_ASCII only sets the character encoding; it has nothing to do with the format of the dump. By default, pg_dump generates a text file that contains the SQL statements to regenerate the database; in this case, to restore the database you only need to execute the SQL commands in the file, as in a psql terminal - that's why a simple psql dbname < mydump is the way to go.
pg_restore is to be used with pg_dump's alternative "binary", PostgreSQL-specific formats (custom, directory, or tar).
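For this question, that means either restoring the existing text dump with psql, or redoing the dump in the custom format so pg_restore can consume it. A sketch of the latter (flags follow the question; the target database is assumed to exist):
pg_dump -Fc -s books > books.backup
pg_restore --host localhost --port 5432 --username postgres --dbname books books.backup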

how to backup and restore postgresql data from remote to local system in single command

I have to take a database dump from a remote server, but I want to download it to my local system and then restore it directly on my local system using a single command. My remote server's disk is full, so I can't take the backup there.
You can use a pipe for this:
pg_dump ... | psql ...
or:
pg_dump -Fc ... | pg_restore -d dbname ...
pg_dump writes to stdout by default, and both psql and pg_restore read from stdin when no input file is given, so the dump never has to land on disk. (Note that pg_restore's -f option names an output file, not an input file, so it is not what you want here; give pg_restore the target database with -d and it will read the archive from stdin.)
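Concretely, a remote-to-local copy in one command might look like this (the hostnames, users, and database names are hypothetical, and the local database must already exist):
pg_dump -h remote.example.com -U remoteuser remotedb | psql -h localhost -U localuser -d localdb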
Install pg_dump on your local system and run the following command from there:
pg_dump -h remote_server_ip -U username database_name > /home/user/Desktop/dump_db.sql
I think this will help you.

How can I use DBI to execute a "\copy from remote table" command in Postgres?

I need to copy from a remote PostgreSQL server to a local one. I cannot use any ETL tools; it must be done using Perl with DBI. The data will be large, so I don't want to use "select from source" and "insert into local". I was looking at using COPY to create a file, but that file would be created on the remote server, and I can't do that either. I want to use \COPY instead.
How can I use DBI to execute a "\copy from remote table" command and create a local file using DBI in Perl?
You can do it in Perl with DBD::Pg; details can be found here:
https://metacpan.org/pod/DBD::Pg#COPY-support
You definitely want to use the COPY FROM and COPY TO commands to get the data in and out of the databases efficiently; they are orders of magnitude faster than iterating over rows of data. You may also want to drop the indexes while you're copying data into the target table, then recreate them (and let them build) when the copy is complete.
Assuming you are simply connecting to the listener ports of the two databases: open a connection to the source database, copy the table(s) out to a file, then open a connection to the destination database and copy the file back into the target table.
Hmm. \copy to ... is a psql directive, not SQL, so it won't be understood by DBI or by the PostgreSQL server at the other end.
I see that the PostgreSQL's SQL COPY command has FROM STDIN and TO STDOUT options -- but I doubt that DBI has a way to perform the "raw reads" necessary to access the result data. (I'm sure TO STDOUT is how psql internally implements \copy to ....)
So: In your case, I would mount a folder on your source box back to your target box using e.g. samba or nfs, and use plain old COPY TO '/full/path/to/mounted/folder/data.txt' ....
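As an aside, if shelling out from Perl is acceptable, the STDOUT/STDIN forms of COPY can also be chained between two psql invocations with no intermediate file at all (the hosts, databases, and tables here are hypothetical):
psql -h source.example.com -d sourcedb -c "COPY remote_table TO STDOUT" | psql -h localhost -d localdb -c "COPY local_table FROM STDIN"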
I got it to work using \copy (select * from remote_table) to '/local/file.txt' ... then \copy local_table from '/local/file.txt' to load the file into the local db. I executed the \copy command from a psql script.
Here's my script
export PGUSER=remoteuser
export PGPASSWORD=remotepwd
/opt/PostgreSQL/8.3/bin/psql -h xx.xx.xx -p 5432 -d remotedb -c "\COPY (select * from remote_table where date(reccreationtime) = date((current_date - interval '4 day'))) TO '/local/copied_from_remote.txt' DELIMITER '|'"
export PGUSER=localuser
export PGPASSWORD=localpwd
/opt/PostgreSQL/8.3/bin/psql -h xx.xx.xx.xx -p 5432 -d localdb -c "\COPY local_table FROM '/local/copied_from_remote.txt' DELIMITER '|'"
You could use ~/.pgpass and save yourself the export PGUSER stuff, and keep the password out of the environment... (always a good idea from a security perspective)
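A ~/.pgpass sketch for this script might look like the following (one hostname:port:database:username:password line per connection; the values mirror the script above, and the file must be chmod 600 or libpq will ignore it):
xx.xx.xx:5432:remotedb:remoteuser:remotepwd
xx.xx.xx.xx:5432:localdb:localuser:localpwd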