I have a dump where the data and the structure are in the public schema. I want to restore it into a schema with a custom name. How can I do that?
EDIT V 2:
My dump file is from Heroku and looks like this at the beginning:
PGDMP
(binary custom-format archive header and table-of-contents data; only the SQL fragments embedded in the archive are reproduced below)

SET client_encoding = 'UTF8';
SET standard_conforming_strings = 'off';

CREATE DATABASE d6rq1i7f3kcath WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
DROP DATABASE d6rq1i7f3kcath;

CREATE SCHEMA public;
DROP SCHEMA public;
COMMENT ON SCHEMA public IS 'standard public schema';

CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog;
DROP EXTENSION plpgsql;
COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';

CREATE FUNCTION _final_mode(anyarray) RETURNS anyelement
    LANGUAGE sql IMMUTABLE
    AS $_$
      SELECT a
      FROM unnest($1) a
      GROUP BY 1
      ORDER BY COUNT(1) DESC, 1
      LIMIT 1;
$_$;
DROP FUNCTION public._final_mode(anyarray);

CREATE AGGREGATE mode(anyelement) (
    SFUNC = array_append,
    STYPE = anyarray,
    INITCOND = '{}',
    FINALFUNC = _final_mode
);
DROP AGGREGATE public.mode(anyelement);

CREATE TABLE advert_candidate_collector_fails (
    id integer NOT NULL,
    advert_candidate_collector_status_id integer,
    exception_message text,
    stack_trace text,
    url text,
    created_at timestamp without time zone,
    updated_at timestamp without time zone
);
DROP TABLE public.advert_candidate_collector_fails;

CREATE SEQUENCE advert_candidate_collector_fails_id_seq
    START WITH 1
    INCREMENT BY 1
    NO MINVALUE
    NO MAXVALUE
    CACHE 1;
DROP SEQUENCE public.advert_candidate_collector_fails_id_seq;
ALTER SEQUENCE advert_candidate_collector_fails_id_seq OWNED BY advert_candidate_collector_fails.id;
SELECT pg_catalog.setval('advert_candidate_collector_fails_id_seq', 13641, true);

CREATE TABLE advert_candidate_collector_statuses (
    id integer NOT NULL,
    data_source_id character varying(120),
    state character varying(15) DEFAULT 'Queued'::character varying,
    source_name character varying(30),
    collector_type character varying(30),
    started_at timestamp without time zone,
    ended_at timestamp without time zone,
    times_failed integer DEFAULT 0,
    created_at timestamp without time zone,
    updated_at timestamp without time zone
);
DROP TABLE public.advert_candidate_collector_statuses;

CREATE SEQUENCE advert_candidate_collector_statuses_id_seq
    START WITH 1
    INCREMENT BY 1
    NO MINVALUE
    NO MAXVALUE
    CACHE 1;
DROP SEQUENCE public.advert_candidate_collector_statuses_id_seq;
ALTER SEQUENCE advert_candidate_collector_statuses_id_seq OWNED BY advert_candidate_collector_statuses.id;
SELECT pg_catalog.setval('advert_candidate_collector_statuses_id_seq', 133212, true);

CREATE TABLE adverts (
    id integer NOT NULL,
    car_id integer NOT NULL,
    source_name character varying(20),
    url text,
    first_extraction timestamp without time zone,
    last_observed_at timestamp without time zone,
    created_at timestamp without time zone,
    updated_at timestamp without time zone,
    source_id character varying(255),
    deactivated_at timestamp without time zone,
    seller_id integer NOT NULL,
    data_source_id character varying(100),
    price integer,
    availability_state character varying(15)
);
DROP TABLE public.adverts;

CREATE SEQUENCE adverts_id_seq
    START WITH 1
    INCREMENT BY 1
    ...
@Tometzky's solution isn't quite right, at least with 9.2's pg_dump. It will create the table in the new schema, but pg_dump schema-qualifies the ALTER TABLE ... OWNER TO statements, so those will fail:
postgres=# CREATE DATABASE demo;
CREATE DATABASE
postgres=# \c demo
You are now connected to database "demo" as user "postgres".
demo=# CREATE TABLE public.test ( dummy text );
CREATE TABLE
demo=# \d
List of relations
Schema | Name | Type | Owner
--------+------+-------+----------
public | test | table | postgres
(1 row)
demo=# \q
$
$ pg_dump -U postgres -f demo.sql demo
$ sed -i 's/^SET search_path = public, pg_catalog;$/SET search_path = testschema, pg_catalog;/' demo.sql
$ grep testschema demo.sql
SET search_path = testschema, pg_catalog;
$ dropdb -U postgres demo
$ createdb -U postgres demo
$ psql -U postgres -c 'CREATE SCHEMA testschema;' demo
CREATE SCHEMA
$ psql -U postgres -f demo.sql -v ON_ERROR_STOP=1 -v QUIET=1 demo
psql:demo.sql:40: ERROR: relation "public.test" does not exist
$ psql demo
demo=> \d testschema.test
Table "testschema.test"
Column | Type | Modifiers
--------+------+-----------
dummy | text |
You will also need to edit the dump to remove the schema-qualification on public.test or change it to the new schema name. sed is a useful tool for this.
I could've sworn the correct way to do this was with pg_dump -Fc -n public -f dump.dbbackup then pg_restore into a new schema, but I can't seem to find out exactly how right now.
Update: Nope, it looks like sed is your best bet. See I want to restore the database with a different schema
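For a custom-format archive like the Heroku dump shown in the question (the PGDMP header), sed can't be applied directly; one approach is to convert it to plain SQL with pg_restore first and then do the rewrites. A rough sketch, assuming the archive is named latest.dump, the target schema is myschema and the target database is targetdb (all three names are placeholders):
# Sketch: turn the custom-format archive into plain SQL, then rewrite the schema references.
pg_restore -O -x latest.dump > dump.sql
sed -i 's/^SET search_path = public, pg_catalog;$/SET search_path = myschema, pg_catalog;/' dump.sql
sed -i 's/^CREATE SCHEMA public;$/CREATE SCHEMA myschema;/' dump.sql
sed -i 's/\bpublic\./myschema./g' dump.sql
psql -U postgres -d targetdb -v ON_ERROR_STOP=1 -f dump.sql
The global public. replacement is blunt: it will also touch any data rows that happen to contain the string "public.", so review the result (or restrict the pattern to ALTER TABLE / ALTER SEQUENCE lines) before loading it.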
Near the beginning of a dump file (created with pg_dump databasename) is a line:
SET search_path = public, pg_catalog;
Just change it to:
SET search_path = your_schema_name, pg_catalog;
Also you'll need to search for
ALTER TABLE public.
and replace with:
ALTER TABLE your_schema_name.
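If you prefer to script both edits rather than do them by hand, a one-shot sed along these lines should work (a sketch; dump.sql and your_schema_name are placeholders):
sed -i \
    -e 's/^SET search_path = public, pg_catalog;/SET search_path = your_schema_name, pg_catalog;/' \
    -e 's/^ALTER TABLE public\./ALTER TABLE your_schema_name./' \
    dump.sql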
Related
I'm trying to import a dump created by pg_dump 2.9 into Postgres 13.4, however it fails on the ALTER TABLE ... IDENTITY ... SEQUENCE NAME statement:
CREATE TABLE admin.bidtype (
bidtype_id integer NOT NULL,
title character varying(50) NOT NULL,
created_by integer,
created_date timestamp without time zone,
updated_by integer,
updated_date timestamp without time zone,
deleted_by integer,
deleted_date timestamp without time zone
);
ALTER TABLE admin.bidtype OWNER TO app_bidhq;
--
-- Name: bidtype_bidtype_id_seq; Type: SEQUENCE; Schema: admin; Owner: postgres
--
ALTER TABLE admin.bidtype ALTER COLUMN bidtype_id ADD GENERATED ALWAYS AS IDENTITY (
    SEQUENCE NAME admin.bidtype_bidtype_id_seq   -- <-- highlighted by DataGrip
    START 1                                      -- <-- highlighted by DataGrip
    INCREMENT BY 1
    NO MINVALUE
    NO MAXVALUE
    CACHE 1
    CYCLE
);
The error shown is "[42601] ERROR: syntax error at end of input Position: 132" and DataGrip highlights errors at the marked positions. I'm new to Postgres, but have checked the documentation https://www.postgresql.org/docs/13/sql-altersequence.html. The syntax looks correct to me.
This is running on RDS for Postgres
After defining the table, the script generated by pgAdmin via right-click -> SCRIPT -> CREATE SCRIPT looks like this:
-- DROP TABLE public.arr;
CREATE TABLE public.arr
(
rcdno integer NOT NULL DEFAULT nextval('arr_rcdno_seq'::regclass),
rcdstate character varying(10) COLLATE pg_catalog."default" DEFAULT 'ACTIVE'::character varying,
rcdserial integer NOT NULL DEFAULT 0,
msrno integer NOT NULL DEFAULT nextval('arr_msrno_seq'::regclass),
mobileid character varying(50) COLLATE pg_catalog."default" NOT NULL,
edittimestamp timestamp without time zone DEFAULT now(),
editor character varying(20) COLLATE pg_catalog."default" NOT NULL,
CONSTRAINT pk_arr PRIMARY KEY (mobileid, msrno, rcdserial)
)
After running pg_dump -s -U webmaster -W -F p Test > c:\temp\Test.sql, the table definition in the script OMITS the defaults for the 2 original "serial" columns. This means that the pg_dump script doesn't create the table correctly when it is run!
--
-- Name: arr; Type: TABLE; Schema: public; Owner: webmaster
--
CREATE TABLE public.arr (
rcdno integer NOT NULL, -- default missing!
rcdstate character varying(10) DEFAULT 'ACTIVE'::character varying,
rcdserial integer DEFAULT 0 NOT NULL,
msrno integer NOT NULL, -- default missing!
mobileid character varying(50) NOT NULL,
edittimestamp timestamp without time zone DEFAULT now(),
editor character varying(20) NOT NULL
);
ALTER TABLE public.arr OWNER TO webmaster;
--
-- Name: arr_msrno_seq; Type: SEQUENCE; Schema: public; Owner: webmaster
--
CREATE SEQUENCE public.arr_msrno_seq
AS integer
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;
ALTER TABLE public.arr_msrno_seq OWNER TO webmaster;
--
-- Name: arr_msrno_seq; Type: SEQUENCE OWNED BY; Schema: public; Owner: webmaster
--
ALTER SEQUENCE public.arr_msrno_seq OWNED BY public.arr.msrno;
--
-- Name: arr_rcdno_seq; Type: SEQUENCE; Schema: public; Owner: webmaster
--
CREATE SEQUENCE public.arr_rcdno_seq
AS integer
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;
ALTER TABLE public.arr_rcdno_seq OWNER TO webmaster;
--
-- Name: arr_rcdno_seq; Type: SEQUENCE OWNED BY; Schema: public; Owner: webmaster
--
ALTER SEQUENCE public.arr_rcdno_seq OWNED BY public.arr.rcdno;
EDIT: After the generated script has been run, the statement
insert into public.arr(mobileid, editor)
values
('12345', 'nolaspeaker')
generates
ERROR: null value in column "rcdno" violates not-null constraint DETAIL: Failing row contains (null, ACTIVE, 0, null, 12345, 2020-08-18 08:54:41.34052, nolaspeaker). SQL state: 23502
Which indicates that I am correct to assert that the script omits the default values for the rcdno column!
The ALTER TABLE statements that set the default values are towards the end of the dump, close to where the ALTER SEQUENCE ... OWNED BY are.
If they don't get restored, that might be because there was a problem creating the sequences.
Look at the error messages from the restore, particularly the first ones (later ones are often consequences of the earlier ones).
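One way to surface that first error is to run the restore with ON_ERROR_STOP, which makes psql halt at the first failing statement instead of ploughing on and burying it under follow-up errors (a sketch using the file name from the question; targetdb is a placeholder):
psql -U webmaster -d targetdb -v ON_ERROR_STOP=1 -f Test.sql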
The statements that assign the default values to the applicable columns come much later in the script
--
-- Name: arr rcdno; Type: DEFAULT; Schema: public; Owner: webmaster
--
ALTER TABLE ONLY public.arr ALTER COLUMN rcdno SET DEFAULT nextval('public.arr_rcdno_seq'::regclass);
--
-- Name: arr msrno; Type: DEFAULT; Schema: public; Owner: webmaster
--
ALTER TABLE ONLY public.arr ALTER COLUMN msrno SET DEFAULT nextval('public.arr_msrno_seq'::regclass);
So you have to be careful if you cut-and-paste a table definition out of the script!
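A quick way to see everything the dump contains for one table, including those later SET DEFAULT statements, is to search the file for the qualified table name (a sketch; adjust the path, and on Windows findstr works in place of grep):
grep -n "public\.arr" Test.sql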
I did some research but can't find the exact answer that I am looking for. Currently I have a primary key column 'id' which is set to serial, but I want to change it to bigserial to map to Long in the Java layer. What is the best way to achieve this, considering that this is an existing table? I think my Postgres version is 10.5. Also, I am aware that serial and bigserial are not real data types.
In Postgres 9.6 or earlier the sequence created by a serial column already returns bigint. You can check this using psql:
drop table if exists my_table;
create table my_table(id serial primary key, str text);
\d my_table
Table "public.my_table"
Column | Type | Collation | Nullable | Default
--------+---------+-----------+----------+--------------------------------------
id | integer | | not null | nextval('my_table_id_seq'::regclass)
str | text | | |
Indexes:
"my_table_pkey" PRIMARY KEY, btree (id)
\d my_table_id_seq
Sequence "public.my_table_id_seq"
Type | Start | Minimum | Maximum | Increment | Cycles? | Cache
--------+-------+---------+---------------------+-----------+---------+-------
bigint | 1 | 1 | 9223372036854775807 | 1 | no | 1
Owned by: public.my_table.id
So you only need to alter the type of the column:
alter table my_table alter id type bigint;
The behavior has changed in Postgres 10:
Also, sequences created for SERIAL columns now generate positive 32-bit wide values, whereas previous versions generated 64-bit wide values. This has no visible effect if the values are only stored in a column.
Hence in Postgres 10+:
alter sequence my_table_id_seq as bigint;
alter table my_table alter id type bigint;
-- backup table first
CREATE TABLE tablenamebackup as select * from tablename ;
--add new column idx
alter table tablename add column idx bigserial not null;
-- copy id to idx
update tablename set idx = id ;
-- drop id column
alter table tablename drop column id ;
-- rename idx to id
alter table tablename rename column idx to id ;
-- Reset Sequence to max + 1
SELECT setval(pg_get_serial_sequence('tablename', 'id'), coalesce(max(id)+1, 1), false) FROM tablename ;
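One caveat with this approach: if id was the primary key, dropping the old column also drops the primary-key constraint (and would fail outright if foreign keys reference it), so the constraint has to be re-created afterwards. A minimal sketch:
-- re-create the primary key on the renamed column
alter table tablename add primary key (id);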
I am in the process of switching from MariaDB to Postgres and have run into a small issue. There are times when I need to establish the next AUTO_INCREMENT value prior to making an INSERT. This is because the INSERT has an impact on a few other tables that would be quite messy to repair if done post the INSERT itself. In mySQL/MariaDB this was easy. I simply did
"SELECT AUTO_INCREMENT
FROM information_schema.tables
WHERE table_name = 'users'
AND table_schema = DATABASE( ) ;";
and used the returned value to pre-correct the other tables prior to making the actual INSERT. I am aware that with pgSQL one can use RETURNING with SELECT, INSERT and UPDATE statements. However, this would require a post-INSERT correction to the other tables, which in turn would involve breaking code that has been tested and proven to work. I imagine that there is a way to find the next AUTO_INCREMENT, but I have been unable to find it. Amongst other things I tried nextval('users_id_seq'), which did not do anything useful.
To port my original MariaDB schema over to Postgres I edited the SQL emitted by Adminer with the MariaDB version to ensure it works with Postgres. This mostly involved changing INT(11) to INTEGER, TINYINT(3) to SMALLINT, VARCHAR to CHARACTER VARYING, etc. With the auto-increment columns I read up a bit and concluded that I needed to use SERIAL instead. So the typical SQL I fed to Postgres was like this:
CREATE TABLE "users"
(
"id" SERIAL NOT NULL,
"bid" INTEGER NOT NULL DEFAULT 0,
"gid" INTEGER NOT NULL DEFAULT 0,
"sid" INTEGER NOT NULL DEFAULT 0,
"s1" character varying(64)NOT NULL,
"s2" character varying(64)NOT NULL,
"name" character varying(64)NOT NULL,
"apik" character varying(128)NOT NULL,
"email" character varying(192)NOT NULL,
"gsm" character varying(64)NOT NULL,
"rights" character varying(64)NOT NULL,
"managed" character varying(256)NOT NULL DEFAULT
'M_BepHJXALYpLyOjHxVGWJnlAMqxv0KNENmcYA,,',
"senior" SMALLINT NOT NULL DEFAULT 0,
"refs" INTEGER NOT NULL DEFAULT 0,
"verified" SMALLINT NOT NULL DEFAULT 0,
"vkey" character varying(64)NOT NULL,
"lang" SMALLINT NOT NULL DEFAULT 0,
"leader" INTEGER NOT NULL
);
This SQL run from Adminer works correctly. However, when I then try to get Adminer to export the new users table in Postgres it gives me
CREATE TABLE "public"."users"
(
"id" integer DEFAULT nextval('users_id_seq') NOT NULL,
"bid" integer DEFAULT 0 NOT NULL,
It is perhaps possible that I have gone about things incorrectly when porting over the AUTO_INCREMENT columns - in which case there is still time to correct the error.
If you used serial in the column definition then you have a sequence named TABLE_COLUMN_seq in the same namespace as the table (where TABLE and COLUMN are, respectively, the names of the table and the column). You can just do:
SELECT nextval('TABLE_COLUMN_seq');
I see you have tried that; can you show your CREATE TABLE statement so that we can check that all the names are OK?
As documented in the manual, serial is not a "real" data type; it's just a shortcut for a column that takes its default value from a sequence.
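In other words, a serial column is roughly expanded into an integer column plus an owned sequence. A minimal sketch of the equivalence, abbreviated to the id column of the users table from the question:
-- "id" SERIAL NOT NULL is roughly equivalent to:
CREATE SEQUENCE users_id_seq;
CREATE TABLE users (
    id integer NOT NULL DEFAULT nextval('users_id_seq')
    -- ... remaining columns as in the question ...
);
ALTER SEQUENCE users_id_seq OWNED BY users.id;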
If you need the generated value in your code before inserting, use nextval() then use the value you got in your insert statement:
In PL/pgSQL this would be something like the following. The exact syntax obviously depends on the programming language you use:
declare
l_userid integer;
begin
l_userid := nextval('users_id_seq');
-- do something with that value
insert into users (id, ...)
values (l_userid, ...);
end;
It is important that you never pass a value to the insert statement that was not generated by the sequence. Postgres will not automagically sync the sequence values with "manually" provided values.
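If you would rather not hard-code the sequence name, pg_get_serial_sequence() resolves it from the table and column names; a small sketch in plain SQL:
-- reserve the next id without hard-coding the sequence name
SELECT nextval(pg_get_serial_sequence('users', 'id'));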
You can select last_value+1 from the sequence itself, e.g.:
t=# create table so109(i serial,n int);
CREATE TABLE
Time: 2.585 ms
t=# insert into so109(n) select i from generate_series(1,22,1) i;
INSERT 0 22
Time: 1.236 ms
t=# select * from so109_i_seq ;
sequence_name | last_value | start_value | increment_by | max_value | min_value | cache_value | log_cnt | is_cycled | is_called
---------------+------------+-------------+--------------+---------------------+-----------+-------------+---------+-----------+-----------
so109_i_seq | 22 | 1 | 1 | 9223372036854775807 | 1 | 1 | 11 | f | t
(1 row)
Or use currval, e.g.:
t=# select currval('so109_i_seq')+1;
?column?
----------
23
(1 row)
UPDATE
While this answer gives an idea of how to determine the next auto_increment value before an INSERT in Postgres (which is the title), the proposed methods would not fit the needs of the post itself. If you are looking for a "replacement" for the RETURNING clause of an INSERT statement, the better way is actually "reserving" the value with nextval, just as @fog proposed, so that concurrent transactions would not get the same value twice...
Is there a way to change an existing primary key column's type from int to serial without dropping the table? I already have a lot of data in the table and I don't want to delete it.
Converting an int to a serial more or less just means adding a sequence-backed default to the column. So, to make it a serial:
Pick a starting value for the serial, greater than any existing value in the table
SELECT MAX(id)+1 FROM mytable
Create a sequence for the serial (tablename_columnname_seq is a good name)
CREATE SEQUENCE test_id_seq MINVALUE 3 (assuming you want to start at 3)
Alter the default of the column to use the sequence
ALTER TABLE test ALTER id SET DEFAULT nextval('test_id_seq')
Alter the sequence to be owned by the table/column;
ALTER SEQUENCE test_id_seq OWNED BY test.id
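As an alternative to hard-coding the starting value with MINVALUE, you can create the sequence with default bounds and point it just past the current maximum with setval() (a sketch against the same test table):
-- position the sequence just past the current maximum id
SELECT setval('test_id_seq', COALESCE((SELECT MAX(id) FROM test), 0) + 1, false);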
A very simple SQLfiddle demo.
And as always, make a habit of running a full backup before running altering SQL queries from random people on the Internet ;-)
-- temp schema for testing
-- ----------------------------
DROP SCHEMA tmp CASCADE;
CREATE SCHEMA tmp ;
SET search_path=tmp;
CREATE TABLE bagger
( id INTEGER NOT NULL PRIMARY KEY
, tralala varchar
);
INSERT INTO bagger(id,tralala)
SELECT gs, 'zzz_' || gs::text
FROM generate_series(1,100) gs
;
DELETE FROM bagger WHERE random() <0.9;
-- SELECT * FROM bagger;
-- CREATE A sequence and tie it to bagger.id
-- -------------------------------------------
CREATE SEQUENCE bagger_id_seq;
ALTER TABLE bagger
ALTER COLUMN id SET NOT NULL
, ALTER COLUMN id SET DEFAULT nextval('bagger_id_seq')
;
ALTER SEQUENCE bagger_id_seq
OWNED BY bagger.id
;
SELECT setval('bagger_id_seq', MAX(ba.id))
FROM bagger ba
;
-- Check the result
-- ------------------
SELECT * FROM bagger;
\d bagger
\d bagger_id_seq