pg_dump and restore - pg_restore hangs on Windows - postgresql

I have around 50 GB of data in a PostgreSQL database on my laptop (Mac) that I need to transfer to my new PC (Windows). I've generated tar dumps of the schemas I need to transfer using pg_dump, but pg_restore just hangs.
To eliminate problems with the size of the file and the fact that the source is a Mac, I've boiled it down to the simplest test case I can find: create a new table in a new schema on my PC, export it using pg_dump, and then try to restore it back into the same database. Even with something this simple, pg_restore just hangs. I'm clearly missing something - probably quite obvious. Any ideas?
D:\Share\dbexport>psql -U postgres
Password for user postgres:
psql (12.1)
WARNING: Console code page (850) differs from Windows code page (1252)
8-bit characters might not work correctly. See psql reference
page "Notes for Windows users" for details.
Type "help" for help.
postgres=# create schema new_schema
postgres-# create table new_schema.new_table(id numeric);
CREATE SCHEMA
postgres=# insert into new_schema.new_table values(1);
INSERT 0 1
postgres=# commit;
WARNING: there is no transaction in progress
COMMIT
postgres=# exit
The schema is created with the new table and one row. So, export:
D:\Share\dbexport>pg_dump -U postgres -n new_schema -f new_schema_sql.sql
Password:
D:\Share\dbexport>more new_schema_sql.sql
--
-- PostgreSQL database dump
--
-- Dumped from database version 12.1
-- Dumped by pg_dump version 12.1
SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;
--
-- Name: new_schema; Type: SCHEMA; Schema: -; Owner: postgres
--
CREATE SCHEMA new_schema;
ALTER SCHEMA new_schema OWNER TO postgres;
SET default_tablespace = '';
SET default_table_access_method = heap;
--
-- Name: new_table; Type: TABLE; Schema: new_schema; Owner: postgres
--
CREATE TABLE new_schema.new_table (
id numeric
);
ALTER TABLE new_schema.new_table OWNER TO postgres;
--
-- Data for Name: new_table; Type: TABLE DATA; Schema: new_schema; Owner: postgres
--
COPY new_schema.new_table (id) FROM stdin;
1
\.
--
-- PostgreSQL database dump complete
--
So the file has been created and has the expected content. I connect back to the database and drop the new schema before attempting the restore.
D:\Share\dbexport>psql -U postgres
Password for user postgres:
psql (12.1)
WARNING: Console code page (850) differs from Windows code page (1252)
8-bit characters might not work correctly. See psql reference
page "Notes for Windows users" for details.
Type "help" for help.
postgres=# drop schema new_schema cascade;
NOTICE: drop cascades to table new_schema.new_table
DROP SCHEMA
postgres=# select * from new_schema.new_table;
ERROR: relation "new_schema.new_table" does not exist
LINE 1: select * from new_schema.new_table;
^
postgres=# exit
D:\Share\dbexport>pg_restore -U postgres -f new_schema_sql.sql
And it just hangs at this last line. I'm a bit lost - I can't get pg_restore to output anything; I've tried verbose mode etc. but got nothing.
Does anyone know where I should be looking next?
David

So I will buy myself a dunce hat.
The issue, as pointed out by @a_horse_with_no_name, is that I misused the -f flag: it specifies the output file, not the input file, so pg_restore was sitting there waiting to read a dump from standard input.
Using
pg_restore -U postgres -d postgres -n new_schema new_schema_custom.sql
fixed the issue. Thank you
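For anyone else who lands here: pg_restore only reads the non-plain archive formats (custom, directory, tar), while a plain-format dump like the one written above with -f is just a SQL script, which you replay with psql. A minimal sketch of both paths (file names are illustrative):
psql -U postgres -d postgres -f new_schema_sql.sql
pg_dump -U postgres -Fc -n new_schema -f new_schema_custom.dump postgres
pg_restore -U postgres -d postgres -n new_schema new_schema_custom.dump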

Related

Is it possible to make psql's \copy treat a line inside a CSV file as a comment?

I'm successfully importing CSV files into PostgreSQL with the following command:
\COPY tablename(col1, col2, col3) FROM '/home/user/mycsv.txt' WITH CSV HEADER DELIMITER ';' NULL AS 'null';
However, I'd like to store some repeatable metadata inside this CSV file. I know I could create a separate file for it, but I think it would be a lot more convenient to keep the metadata in the same CSV file as the bulk of the data. I imagine a file like the following:
-- commented line with some metadata
col1;col2;col3
value;value;value
value;value;value
value;value;value
I've tried using --, /* */ and # as comment markers, but the \copy command fails to import the data when I do that. Is there any way to tell the psql \copy command to treat specific lines as comments, so I can keep lines in the file that are not part of the CSV data? Is it possible?
Use the FROM PROGRAM construct to have something else filter them out:
\COPY tablename(col1, col2, col3) FROM PROGRAM 'egrep -v "^-- " mycsv.txt' WITH CSV HEADER DELIMITER ';' NULL AS 'null'
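If egrep isn't available, any line filter will do; for example, the same idea with sed (assuming sed is on the PATH, and keeping the answer's quoting style of double quotes inside the single-quoted command string):
\COPY tablename(col1, col2, col3) FROM PROGRAM 'sed "/^-- /d" mycsv.txt' WITH CSV HEADER DELIMITER ';' NULL AS 'null'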
You can see how pg_dump itself does it - its comment lines sit outside the COPY data block, so psql simply skips them as SQL comments:
pg_dump -d test -U postgres -t orders -a -f test.sql
cat test.sql
--
-- PostgreSQL database dump
--
-- Dumped from database version 12.3
-- Dumped by pg_dump version 12.3
SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;
--
-- Data for Name: orders; Type: TABLE DATA; Schema: public; Owner: postgres
--
COPY public.orders (order_id, total, order_date, user_id) FROM stdin;
1 100 2020-06-20 00:00:00 1
2 250 2020-06-20 00:00:00 2
\.
--
-- Name: orders_order_id_seq; Type: SEQUENCE SET; Schema: public; Owner: postgres
--
SELECT pg_catalog.setval('public.orders_order_id_seq', 2, true);
--
-- PostgreSQL database dump complete
--
psql -d test -U postgres -f test.sql
Null display is "NULL".
SET
SET
SET
SET
SET
set_config
------------
(1 row)
SET
SET
SET
SET
COPY 2
setval
--------
2
(1 row)

PostgreSQL dump loading succeeds but nothing is written to the database

I've tried to load a dump into a new database and all seems to work:
user@vpsXXXX:~$ pg_dump -U user -d database < mydump.sql
--
-- PostgreSQL database dump
--
-- Dumped from database version 10.6 (Ubuntu 10.6-0ubuntu0.18.04.1)
-- Dumped by pg_dump version 10.6 (Ubuntu 10.6-0ubuntu0.18.04.1)
SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET client_min_messages = warning;
SET row_security = off;
--
-- Name: plpgsql; Type: EXTENSION; Schema: -; Owner:
--
CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog;
--
-- Name: EXTENSION plpgsql; Type: COMMENT; Schema: -; Owner:
--
COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';
--
-- PostgreSQL database dump complete
--
When I look at the tables in a client like Postico, there are no tables except the Postgres ones, even though the dump looks complete when I inspect the SQL file.
Do you know a way to find out what happened?
Thanks!
Dump and restore operations are best performed as the postgres user; the easiest way to achieve this is to become the postgres UNIX user.
The initial command, however, confused pg_dump with psql: pg_dump ignores the redirected input and simply writes a dump of the (still empty) database to stdout - which is exactly what the output above shows - whereas psql is the tool that executes an SQL script.
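A corrected sketch, with the same connection details as the question:
psql -U user -d database < mydump.sql
or, if your system has sudo and a postgres system user:
sudo -u postgres psql -d database -f mydump.sql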

pg_dump returning no rows

Usually I have good luck getting dumps of table data with pg_dump, but for some reason it always returns zero rows for one particular table.
Here is the command I use that usually works
pg_dump -d foo_db -h localhost -p 8888 -U postgres -t foo_table --column-inserts --data-only > foo.sql
The port, database and login are all good.
Here are the results:
--
-- PostgreSQL database dump
--
SET statement_timeout = 0;
SET lock_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = off;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET escape_string_warning = off;
SET search_path = public, pg_catalog;
--
-- Data for Name: foo_table; Type: TABLE DATA; Schema: public; Owner: postgres
--
--
-- Name: foo_table_id_seq; Type: SEQUENCE SET; Schema: public; Owner: postgres
--
SELECT pg_catalog.setval('foo_table_id_seq', 179, true);
--
-- PostgreSQL database dump complete
--
I can log into the db and see that the rows are there, but pg_dump can't get them for some reason:
psql -U postgres -h localhost -p 8888 -d foo_db
You are now connected to database "foo_db" as user "postgres".
foo_db=# select count(*) from foo_table;
count
-------
116
(1 row)
Time: 30.972 ms

Can't copy table to another database with pg_dump

I'm trying to copy a table from one database to another database (NOT schema). The code I used in terminal is as below:
pg_dump -U postgres -t OldSchema.TableToCopy OldDatabase | psql -U postgres -d NewDatabase
When I press Enter it asks for the postgres password; I enter my pass, and then it asks for the psql password. I enter it and press Enter. I then receive lots of:
invalid command \N
ERROR: relation "TableToCopy" does not exist
Both databases have UTF8 encoding. Am I doing something wrong?
OS: windows XP
Error output:
psql:TblToCopy.sql:39236: invalid command \N
psql:TblToCopy.sql:39237: invalid command \N
psql:TblToCopy.sql:39238: invalid command \N
.
.
.
After Hundreds of above errors, the terminal echoes:
psql:TblToCopy.sql:39245: ERROR: syntax error at or near "509"
LINE 1: 509 some gibberish words and letters here
And finally:
psql:TblToCopy.sql:39245: ERROR: relation "TableToCopy" does not exist
EDIT
I read this response to the same problem, "\N error with psql"; it says to use INSERT instead of COPY, but pg_dump generated COPY in the file. How do I tell pg_dump to use INSERT instead of COPY?
I converted the file to UTF-8 with iconv. That error has now gone, but I have a new one: when I use psql to import the data, the table gets created but without data. It says:
SET
SET
SET
SET
SET
SET
SET
SET
CREATE TABLE
ALTER TABLE
psql:tblNew.sql:39610: ERROR: value too long for type character(3)
CONTEXT: COPY words, line 1, column first_two_letters: "سر"
ALTER TABLE
ALTER TABLE
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE TRIGGER
I've tried creating a database with encoding UTF8, creating a table, and inserting the two UTF-8 encoded characters that the COPY command is trying to insert - and it works when using INSERT:
CREATE DATABASE test
WITH OWNER = postgres
ENCODING = 'UTF8'
TABLESPACE = pg_default
LC_COLLATE = 'English_United States.1252'
LC_CTYPE = 'English_United States.1252'
CONNECTION LIMIT = -1;
CREATE TABLE x
(
first_two_letters character(3)
)
WITH (
OIDS=FALSE
);
ALTER TABLE x
OWNER TO postgres;
INSERT INTO x(
first_two_letters)
VALUES ('سر');
According to http://rishida.net/tools/conversion/ for the failing COPY the Unicode code points are:
U+0633 U+0631
which is two characters, meaning you should be able to store them in a column defined as character(3), since that type holds strings up to 3 characters (not bytes) in length,
and if we try to INSERT, it succeeds:
INSERT INTO x(
first_two_letters)
VALUES (U&'\0633\0631');
From the pg_dump documentation, you can use INSERT instead of COPY via the --inserts option:
--inserts
Dump data as INSERT commands (rather than COPY). This will make restoration very slow; it is mainly useful for making dumps that can
be loaded into non-PostgreSQL databases. However, since this option
generates a separate command for each row, an error in reloading a row
causes only that row to be lost rather than the entire table contents.
Note that the restore might fail altogether if you have rearranged
column order. The --column-inserts option is safe against column order
changes, though even slower.
Try to use this instead for Step 1:
pg_dump -U postgres -t OldSchema."TableToCopy" --inserts OldDatabase > Table.sql
I've also tried COPYing from a table to a file and using COPY to import it back, and for me it works.
Are you sure your client and server database encodings are UTF8?
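To check both, a quick look from psql (these are standard run-time settings):
test=# SHOW client_encoding;
test=# SHOW server_encoding;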
Firstly, export the table named "x" from schema "public" on database "test" to a plain text SQL file:
pg_dump -U postgres -t public."x" test > x.sql
which creates the x.sql file that contains:
--
-- PostgreSQL database dump
--
SET statement_timeout = 0;
SET lock_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET search_path = public, pg_catalog;
SET default_tablespace = '';
SET default_with_oids = false;
--
-- Name: x; Type: TABLE; Schema: public; Owner: postgres; Tablespace:
--
CREATE TABLE x (
first_two_letters character(3)
);
ALTER TABLE public.x OWNER TO postgres;
--
-- Data for Name: x; Type: TABLE DATA; Schema: public; Owner: postgres
--
COPY x (first_two_letters) FROM stdin;
سر
\.
--
-- PostgreSQL database dump complete
--
Secondly, import with:
psql -U postgres -d test -f x.sql
The table name should be quoted, as follows (PostgreSQL folds unquoted identifiers to lowercase, so an unquoted TableToCopy is looked up as tabletocopy):
pg_dump -U postgres -t OldSchema."TableToCopy" OldDatabase | psql -U postgres -d NewDatabase
And I suggest you do the job in two steps
Step 1
pg_dump -U postgres -t OldSchema."TableToCopy" OldDatabase > Table.sql
If step 1 goes OK, then do step 2.
Step 2
psql -U postgres -d NewDatabase -f Table.sql

Dot (.) in Schema name makes pg_dump unusable

I have a schema named 2sample.sc. When I want to pg_dump some of its tables, the following error appears:
pg_dump: No matching tables were found
My pg_dump command:
pg_dump -U postgres -t 2sample.sc."error_log" --inserts games > dump.sql
My pg_dump works fine on other schemas like 2sample.
What I did:
I tried to escape the dot (.), with no success.
Use "schema.name.with.dots.in.it"."table.name.with.dots.in.it" to specify the schema.table:
-- test schema with a dot in its name
DROP SCHEMA "tmp.tmp" CASCADE;
CREATE SCHEMA "tmp.tmp" ;
SET search_path="tmp.tmp";
CREATE TABLE nononono
( dont SERIAL NOT NULL
);
insert into nononono
SELECT generate_series(1,10)
;
$ pg_dump -t \"tmp.tmp\".\"nononono\" --schema-only -U postgres the_database
Output (snipped):
SET search_path = "tmp.tmp", pg_catalog;
SET default_tablespace = '';
SET default_with_oids = false;
--
-- Name: nononono; Type: TABLE; Schema: tmp.tmp; Owner: postgres; Tablespace:
--
CREATE TABLE nononono (
dont integer NOT NULL
);
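Applied to the schema from the question, the invocation would look something like this (untested; the exact quote-escaping depends on your shell):
$ pg_dump -U postgres -t \"2sample.sc\".\"error_log\" --inserts games > dump.sql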
BTW: why would you want to add a dot to a schema (or table) name? It is asking for trouble. The same for MixedCaseNames. Underscores work just fine.