I can see the error is about the brackets in dragon.sql (CREATE TABLE dragon ...), but I don't know how or what to fix. Thanks.
Below are the terminal log, configure_db.sh, and the dragon.sql file.
> backend@1.0.0 configure
> sh ./bin/configure_db.sh
Configuring dragonstackdb
ERROR: syntax error at or near ")"
LINE 4: );
^
ERROR: syntax error at or near ")"
LINE 7: );
^
dragonstackdb configured
------------------------------------
configure_db.sh
#!/bin/bash
echo "Configuring dragonstackdb"
dropdb -U node_user dragonstackdb
createdb -U node_user dragonstackdb
psql -U node_user dragonstackdb < ./bin/sql/generation.sql
psql -U node_user dragonstackdb < ./bin/sql/dragon.sql
echo "dragonstackdb configured"
-------------------------------
dragon.sql
CREATE TABLE dragon (
id SERIAL PRIMARY KEY,
birthdate TIMESTAMP NOT NULL,
nickname VARCHAR(64) NOT NULL,
"generationId" INTEGER,
FOREIGN KEY ("generationId") REFERENCES generation(id),
);
FOREIGN KEY ("generationId") REFERENCES generation(id),
Remove the comma at the end. A trailing comma means another entry (i.e. another column definition) is expected, so the closing ) triggers the syntax error.
This should be the create table statement:
CREATE TABLE dragon (
id SERIAL PRIMARY KEY,
birthdate TIMESTAMP NOT NULL,
nickname VARCHAR(64) NOT NULL,
"generationId" INTEGER,
FOREIGN KEY ("generationId") REFERENCES generation(id)
);
Here's an example of the matching generation.sql that the FOREIGN KEY depends on (a sketch; its exact columns are an assumption, and it must be created before dragon.sql runs, which configure_db.sh already does):
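-- generation.sql (a sketch; the columns are an assumption -- only the
-- id primary key matters for the FOREIGN KEY reference in dragon.sql)
CREATE TABLE generation (
id SERIAL PRIMARY KEY,
expiration TIMESTAMP NOT NULL
);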
Related
I'm trying to import a dump created by pg_dump 2.9 into Postgres 13.4; however, it fails on the ALTER TABLE ... IDENTITY ... SEQUENCE NAME statement.
CREATE TABLE admin.bidtype (
bidtype_id integer NOT NULL,
title character varying(50) NOT NULL,
created_by integer,
created_date timestamp without time zone,
updated_by integer,
updated_date timestamp without time zone,
deleted_by integer,
deleted_date timestamp without time zone
);
ALTER TABLE admin.bidtype OWNER TO app_bidhq;
--
-- Name: bidtype_bidtype_id_seq; Type: SEQUENCE; Schema: admin; Owner: postgres
--
ALTER TABLE admin.bidtype ALTER COLUMN bidtype_id ADD GENERATED ALWAYS AS IDENTITY (
    SEQUENCE NAME admin.bidtype_bidtype_id_seq  -- DataGrip highlights two positions on this line
    START 1                                     -- DataGrip highlights a position on this line
    INCREMENT BY 1
    NO MINVALUE
    NO MAXVALUE
    CACHE 1
    CYCLE
);
The error shown is "[42601] ERROR: syntax error at end of input Position: 132", and DataGrip highlights errors at the positions noted in the comments above. I'm new to Postgres, but I have checked the documentation (https://www.postgresql.org/docs/13/sql-altersequence.html) and the syntax looks correct to me.
This is running on RDS for Postgres.
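For comparison, the form I understand from the documentation (paraphrased from the ALTER TABLE reference page, which is where ADD GENERATED ... AS IDENTITY is described; not copied verbatim):
ALTER TABLE table_name
ALTER [ COLUMN ] column_name
ADD GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY [ ( sequence_options ) ]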
I am trying to restore a database using psql, but psql seems to always fail when it encounters a CONSTRAINT. After looking into the dump, I found that the child table (the table that holds the FOREIGN KEY) is created before the parent table.
Here is the snippet...
DROP TABLE IF EXISTS "answer";
DROP SEQUENCE IF EXISTS answer_id_seq;
CREATE SEQUENCE answer_id_seq INCREMENT 1 MINVALUE 1 MAXVALUE 2147483647 START 71 CACHE 1;
CREATE TABLE "public"."answer" (
"id" integer DEFAULT nextval('answer_id_seq') NOT NULL,
"text" character varying NOT NULL,
"weight" double precision NOT NULL,
"questionId" integer NOT NULL,
"baseCreated" timestamp DEFAULT now() NOT NULL,
"baseUpdated" timestamp DEFAULT now() NOT NULL,
CONSTRAINT "PK_9232db17b63fb1e94f97e5c224f" PRIMARY KEY ("id"),
CONSTRAINT "FK_a4013f10cd6924793fbd5f0d637" FOREIGN KEY ("questionId") REFERENCES question(id) ON DELETE CASCADE NOT DEFERRABLE
) WITH (oids = false);
The psql command:
psql -h 0.0.0.0 -p 5432 -U foobar -1 foobar < foobar.sql
And the error:
NOTICE: table "answer" does not exist, skipping
DROP TABLE
NOTICE: sequence "answer_id_seq" does not exist, skipping
DROP SEQUENCE
CREATE SEQUENCE
ERROR: relation "question" does not exist
ERROR: relation "answer" does not exist
ERROR: relation "answer" does not exist
LINE 1: INSERT INTO "answer" ("id", "text", "weight", "questionId", ...
I have been searching around Stack Overflow for relevant problems, but did not find any.
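To illustrate the ordering problem with a minimal sketch (simplified columns and a made-up constraint name, not the real dump):
-- Fails: the referenced (parent) table does not exist yet
CREATE TABLE "answer" (
"id" integer PRIMARY KEY,
"questionId" integer NOT NULL,
CONSTRAINT "FK_answer_question" FOREIGN KEY ("questionId") REFERENCES question(id)
);
-- ERROR: relation "question" does not exist

-- Works: create the parent table first
CREATE TABLE "question" ("id" integer PRIMARY KEY);
CREATE TABLE "answer" (
"id" integer PRIMARY KEY,
"questionId" integer NOT NULL,
CONSTRAINT "FK_answer_question" FOREIGN KEY ("questionId") REFERENCES question(id)
);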
I have a table in SQL on this format (call this file create_table.sql):
CREATE TABLE object (
id BIGSERIAL PRIMARY KEY,
name_c VARCHAR(10) NOT NULL,
create_timestamp TIMESTAMP NOT NULL,
change_timestamp TIMESTAMP NOT NULL,
full_id VARCHAR(10),
mod VARCHAR(10) NOT NULL CONSTRAINT mod_enum CHECK (mod IN ('original', 'old', 'delete')),
status VARCHAR(10) NOT NULL CONSTRAINT status_enum CHECK (status IN ('temp', 'good', 'bad')),
vers VARCHAR(10) NOT NULL REFERENCES vers (full_id),
frame_id BIGINT NOT NULL REFERENCES frame (id),
name VARCHAR(10),
definition VARCHAR(10),
order_ref BIGINT REFERENCES order_ref (id),
UNIQUE (id, name_c)
);
This table is stored in Google Cloud. I have about 200000 insert statements, where I use an "insert block" method. It looks like this (call this file object_name.sql):
INSERT INTO object(
name_c,
create_timestamp,
change_timestamp,
full_id,
mod,
status,
vers,
frame_id,
name)
VALUES
('Element', current_timestamp, current_timestamp, 'Element:1', 'current', 'temp', 'V1', (SELECT id FROM frame WHERE frame_id='Frame:data'), 'Description to element 1'),
('Element', current_timestamp, current_timestamp, 'Element:2', 'current', 'temp', 'V1', (SELECT id FROM frame WHERE frame_id='Frame:data'), 'Description to element 2'),
...
...
('Element', current_timestamp, current_timestamp, 'Element:200000', 'current', 'temp', 'V1', (SELECT id FROM frame WHERE frame_id='Frame:data'), 'Description to object 200000');
I have a bash script where a psql command is used to upload the data in object_name.sql to the table in Google Cloud:
PGPASSWORD=password psql -d database --username username --port 1234 --host 11.111.111 << EOF
BEGIN;
\i object_name.sql
COMMIT;
EOF
(Source: single transaction)
When I run this, I get this error:
BEGIN
psql:object_name.sql:60002: SSL SYSCALL error: EOF detected
psql:object_name.sql:60002: connection to server was lost
The current "solution" I have now is to chunk the file so that each file has at most 10000 insert statements. Running the psql command on these files works, but it takes around 7 minutes.
Instead of having one file with 200000 insert statements, I divided them into 12 files, each with at most 10000 insert statements.
My questions:
1. Is there a limit on how large a file can be?
2. I also saw this post about how to speed up inserts, but I could not get COPY to work.
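For reference, my understanding of the COPY approach looks roughly like this (a sketch; the CSV path is an assumption, and the current_timestamp calls and the frame_id subselect from my INSERT blocks would have to be materialized as literal values in the CSV, which may be why I could not get it to work):
-- Sketch: COPY reads literal values from the file, so computed values
-- must be resolved before the CSV is exported.
\copy object (name_c, create_timestamp, change_timestamp, full_id, mod, status, vers, frame_id, name) FROM '/tmp/object_name.csv' WITH (FORMAT csv, HEADER true)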
Hope someone out there has time to help me 🙂
I have Postgres databases generated with EclipseLink. There are no differences between these databases, but when I run Liquibase to generate a diffChangeLog, it produces changesets with dropPrimaryKey and addPrimaryKey. I don't understand why it generates these records for all primary keys of all tables. The names and the order of columns are the same in both tables.
Example of a changeset:
<changeSet author="michal2 (generated)" id="1436872322297-8">
<dropPrimaryKey tableName="country_translation"/>
<addPrimaryKey columnNames="country_id, translations_id" constraintName="country_translation_pkey" tableName="country_translation"/>
</changeSet>
SQL of the original table:
CREATE TABLE country_translation
(
country_id bigint NOT NULL,
translations_id bigint NOT NULL,
CONSTRAINT country_translation_pkey PRIMARY KEY (country_id, translations_id),
CONSTRAINT fk_country_translation_country_id FOREIGN KEY (country_id)
REFERENCES country (id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION,
CONSTRAINT fk_country_translation_translations_id FOREIGN KEY (translations_id)
REFERENCES translation (id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION
)
WITH (
OIDS=FALSE
);
ALTER TABLE country_translation
OWNER TO hotels;
SQL of the reference table:
CREATE TABLE country_translation
(
country_id bigint NOT NULL,
translations_id bigint NOT NULL,
CONSTRAINT country_translation_pkey PRIMARY KEY (country_id, translations_id),
CONSTRAINT fk_country_translation_country_id FOREIGN KEY (country_id)
REFERENCES country (id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION,
CONSTRAINT fk_country_translation_translations_id FOREIGN KEY (translations_id)
REFERENCES translation (id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION
)
WITH (
OIDS=FALSE
);
ALTER TABLE country_translation
OWNER TO hotels;
Liquibase command with arguments:
./liquibase \
--driver=org.postgresql.Driver \
--classpath=/home/michal2/tools/postgresql-jdbc-driver/postgresql-jdbc.jar \
--changeLogFile=changelog-hotels.xml \
--url="jdbc:postgresql://localhost/hotels" \
--username=hotels \
--password=hotels \
--defaultSchemaName=public \
--logLevel=info \
diffChangeLog \
--referenceUrl="jdbc:postgresql://localhost/hotels_liquibase" \
--referenceUsername=hotels \
--referencePassword=hotels \
--referenceDefaultSchemaName=public
I'm using version 3.4.0.
This has been fixed in Liquibase 3.4.1: https://liquibase.jira.com/browse/CORE-2416
I have a dump where the data and the structure are in the public schema. I want to restore it into a schema with a custom name. How can I do that?
EDIT v2:
My dump file is from Heroku, and looks like this at the beginning:
PGDMP
[binary custom-format archive; the readable SQL fragments interleaved with the binary data include:]
SET client_encoding = 'UTF8';
SET standard_conforming_strings = 'off';
CREATE DATABASE d6rq1i7f3kcath WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
DROP DATABASE d6rq1i7f3kcath;
CREATE SCHEMA public;
DROP SCHEMA public;
COMMENT ON SCHEMA public IS 'standard public schema';
CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog;
DROP EXTENSION plpgsql;
COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';
CREATE FUNCTION _final_mode(anyarray) RETURNS anyelement
LANGUAGE sql IMMUTABLE
AS $_$
SELECT a
FROM unnest($1) a
GROUP BY 1
ORDER BY COUNT(1) DESC, 1
LIMIT 1;
$_$;
DROP FUNCTION public._final_mode(anyarray);
CREATE AGGREGATE mode(anyelement) (
SFUNC = array_append,
STYPE = anyarray,
INITCOND = '{}',
FINALFUNC = _final_mode
);
DROP AGGREGATE public.mode(anyelement);
CREATE TABLE advert_candidate_collector_fails (
id integer NOT NULL,
advert_candidate_collector_status_id integer,
exception_message text,
stack_trace text,
url text,
created_at timestamp without time zone,
updated_at timestamp without time zone
);
DROP TABLE public.advert_candidate_collector_fails;
CREATE SEQUENCE advert_candidate_collector_fails_id_seq
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;
DROP SEQUENCE public.advert_candidate_collector_fails_id_seq;
ALTER SEQUENCE advert_candidate_collector_fails_id_seq OWNED BY advert_candidate_collector_fails.id;
SELECT pg_catalog.setval('advert_candidate_collector_fails_id_seq', 13641, true);
CREATE TABLE advert_candidate_collector_statuses (
id integer NOT NULL,
data_source_id character varying(120),
state character varying(15) DEFAULT 'Queued'::character varying,
source_name character varying(30),
collector_type character varying(30),
started_at timestamp without time zone,
ended_at timestamp without time zone,
times_failed integer DEFAULT 0,
created_at timestamp without time zone,
updated_at timestamp without time zone
);
DROP TABLE public.advert_candidate_collector_statuses;
CREATE SEQUENCE advert_candidate_collector_statuses_id_seq
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;
DROP SEQUENCE public.advert_candidate_collector_statuses_id_seq;
ALTER SEQUENCE advert_candidate_collector_statuses_id_seq OWNED BY advert_candidate_collector_statuses.id;
SELECT pg_catalog.setval('advert_candidate_collector_statuses_id_seq', 133212, true);
CREATE TABLE adverts (
id integer NOT NULL,
car_id integer NOT NULL,
source_name character varying(20),
url text,
first_extraction timestamp without time zone,
last_observed_at timestamp without time zone,
created_at timestamp without time zone,
updated_at timestamp without time zone,
source_id character varying(255),
deactivated_at timestamp without time zone,
seller_id integer NOT NULL,
data_source_id character varying(100),
price integer,
availability_state character varying(15)
);
DROP TABLE public.adverts;
CREATE SEQUENCE adverts_id_seq
START WITH 1
INCREMENT BY 1
[excerpt cut off here]
@Tometzky's solution isn't quite right, at least with 9.2's pg_dump. It'll create the table in the new schema, but pg_dump schema-qualifies the ALTER TABLE ... OWNER TO statements, so those will fail:
postgres=# CREATE DATABASE demo;
CREATE DATABASE
postgres=# \c demo
You are now connected to database "demo" as user "postgres".
demo=# CREATE TABLE public.test ( dummy text );
CREATE TABLE
demo=# \d
List of relations
Schema | Name | Type | Owner
--------+------+-------+----------
public | test | table | postgres
(1 row)
demo=# \q
$
$ pg_dump -U postgres -f demo.sql demo
$ sed -i 's/^SET search_path = public, pg_catalog;$/SET search_path = testschema, pg_catalog;/' demo.sql
$ grep testschema demo.sql
SET search_path = testschema, pg_catalog;
$ dropdb -U postgres demo
$ createdb -U postgres demo
$ psql -U postgres -c 'CREATE SCHEMA testschema;' demo
CREATE SCHEMA
$ psql -U postgres -f demo.sql -v ON_ERROR_STOP=1 -v QUIET=1 demo
psql:demo.sql:40: ERROR: relation "public.test" does not exist
$ psql demo
demo=> \d testschema.test
Table "testschema.test"
Column | Type | Modifiers
--------+------+-----------
dummy | text |
You will also need to edit the dump to remove the schema-qualification on public.test or change it to the new schema name. sed is a useful tool for this.
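For example, something like this (a sketch, reusing the demo.sql dump and testschema from above):
$ sed -i 's/public\.test/testschema.test/g' demo.sql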
I could've sworn the correct way to do this was with pg_dump -Fc -n public -f dump.dbbackup then pg_restore into a new schema, but I can't seem to find out exactly how right now.
Update: Nope, it looks like sed is your best bet. See "I want to restore the database with a different schema".
Near the beginning of a dump file (created with pg_dump databasename) is a line:
SET search_path = public, pg_catalog;
Just change it to:
SET search_path = your_schema_name, pg_catalog;
Also you'll need to search for
ALTER TABLE public.
and replace with:
ALTER TABLE your_schema_name.
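Both replacements can be done in one pass with sed (a sketch; "dump.sql" stands in for your dump file's name):
$ sed -i -e 's/^SET search_path = public, pg_catalog;/SET search_path = your_schema_name, pg_catalog;/' \
-e 's/ALTER TABLE public\./ALTER TABLE your_schema_name./' dump.sql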