Alter postgres table primary key UUID to Character Varying - postgresql

I want to convert my PostgreSQL table's primary key from UUID to character varying:
ALTER TABLE payment_authorization ALTER COLUMN id TYPE VARCHAR;
When I run the above command it fails with an error, because foreign key constraints reference this column. My database has 200 tables. Is there an easy way to change the primary key type in all of them?

Changing all the tables in place will probably be slow and cumbersome.
The easiest solution might be:
export the database with
pg_dump -F p -f dumpfile.sql dbname
replace uuid with text in the dump, for example with sed:
sed --in-place -e 's/uuid/text/g' dumpfile.sql
drop and re-create the database:
DROP DATABASE dbname;
CREATE DATABASE dbname;
import the dump:
psql -U postgres -d dbname -1 -f dumpfile.sql
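For reference, this is roughly what the in-place change involves for a single referencing table (a sketch with hypothetical constraint and column names; every foreign key has to be dropped, both columns altered, and the constraint re-created):
-- hypothetical example: payment.authorization_id references payment_authorization.id
ALTER TABLE payment DROP CONSTRAINT payment_authorization_id_fkey;
ALTER TABLE payment_authorization ALTER COLUMN id TYPE varchar USING id::varchar;
ALTER TABLE payment ALTER COLUMN authorization_id TYPE varchar USING authorization_id::varchar;
ALTER TABLE payment
    ADD CONSTRAINT payment_authorization_id_fkey
    FOREIGN KEY (authorization_id) REFERENCES payment_authorization (id);
Repeating that for every foreign key across 200 tables is why the dump-and-reload route is usually easier.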

Related

NextVal of postgresql usage

CREATE SEQUENCE :schema.empseq;
CREATE TABLE emp(empid bigint NOT NULL DEFAULT NEXTVAL(':schema.empseq'));
I execute it like this: psql -d dbname -U username -f emp.sql -v schema=post
and get the error:
schema ":schema" does not exist
The documentation here talks about how psql interpolates values into SQL.
CREATE SEQUENCE :schema.empseq;
CREATE TABLE emp(empid bigint NOT NULL DEFAULT NEXTVAL(:'schema' || '.empseq'));
might work for you.
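For reference, a small sketch of how psql interpolates the variable, assuming -v schema=post:
-- :schema    becomes the bare value:       post
-- :'schema'  becomes a string literal:     'post'
-- :"schema"  becomes a quoted identifier:  "post"
So the rewritten default interpolates to nextval('post' || '.empseq'), which should resolve to the sequence post.empseq when rows are inserted. Inside the original single-quoted string, :schema is never interpolated at all, which is why the literal text ":schema" showed up in the error.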

Create tables in a specific schema using psql?

I have a batch file that lets me create a database with psql and then create tables.
psql -f "%cd%\db.sql" postgres
echo database created !
pause
psql -f "%cd%\db_table.sql" mabase postgres
echo tables created !
the db.sql file:
CREATE DATABASE mabase;
and here is the db_table.sql file:
create table myTable(
idTable INT4 not null,
nom VARCHAR(254) null,
date DATE null,
constraint PK_idTable primary key (idTable)
);
This batch runs like a charm and creates the database as well as the table.
My problem is that the table is created in the default public schema. What I would like to do is create a schema first, which I can do with one more line in my batch:
psql -f "%cd%\schema.sql" mabase postgres
schema.sql file:
CREATE SCHEMA mabase_schema;
Now the problem is to create the table in the new schema and not in the public schema.
I tried it with the line:
psql -f "%cd%\db_table.sql" mabase_schema.mabase postgres
but without success.
How will I be able to define the schema in my batch file?
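One way this could be done (a sketch, not from the original thread): set the search_path, or schema-qualify the table name, inside db_table.sql itself rather than on the psql command line, which only takes a database name:
-- db_table.sql (sketch): create the table inside mabase_schema
SET search_path TO mabase_schema;
create table myTable(
idTable INT4 not null,
nom VARCHAR(254) null,
date DATE null,
constraint PK_idTable primary key (idTable)
);
The call then stays psql -f "%cd%\db_table.sql" mabase postgres, with mabase (the database) on the command line and the schema handled inside the script.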

How do I drop a very specific table in Postgresql (public.pg_ts_parser vs. pg_catalog.pg_ts_parser)?

I am having a problem upgrading my PostgreSQL 9.2 database to 9.4. The problem is that these tables are incompatible with the upgrade:
public.pg_ts_dict.dict_init
public.pg_ts_dict.dict_lexize
public.pg_ts_parser.prs_start
public.pg_ts_parser.prs_nexttoken
public.pg_ts_parser.prs_end
public.pg_ts_parser.prs_headline
public.pg_ts_parser.prs_lextype
PostgreSQL says that I should delete these tables and then the upgrade should work. I am currently trying to figure out how to do this.
PostgreSQL has pg_catalog.pg_ts_parser and pg_catalog.pg_ts_dict. Those are system catalogs and absolutely cannot be removed, nor do I want to remove them. I want to remove the public.pg_ts_* tables.
More specifically, I want to dump the tables, upgrade the database, and then restore both public.pg_ts_parser and public.pg_ts_dict. However, every time I try to dump or drop the tables, the command defaults to the system catalog, which I don't want. How can I target these exact tables? Thanks for any help in advance.
-------EDIT------
Here are the commands I am running to dump the tables.
pg_dump -Fc -t public.pg_ts_dict -t public.pg_ts_parser > file.dump
pg_dump: No matching tables were found
Here is a variation
pg_dump -Fc -t pg_ts_dict -t pg_ts_parser > file.dump
The second variation dumps the system catalogs pg_ts_dict and pg_ts_parser, not the public versions. It is also very confusing because the contents of file.dump include these lines of code among ^ and # signs.
DROP TABLE pg_catalog.pg_ts_dict;
^#^#^#pg_catalog^#^#^#^#^#^#^H^#^#^#postgres^#^D^#^#^#true^A^A^#^#^#^C^#^#^#^#^#^#^#^#^# ^G^#^#^#^#^#^#^#^#^A^#^#^#0^#^A^#^#^#0^#
^#^#^#pg_ts_dict^#^C^#^#^#ACL^#^A^#^#^#^#<86>^#^#^#REVOKE ALL ON TABLE pg_ts_dict FROM PUBLIC;
REVOKE ALL ON TABLE pg_ts_dict FROM postgres;
GRANT SELECT ON TABLE pg_ts_dict TO PUBLIC;
^#^#^#^#^#^A^A^#^#^#^#
^#^#^#pg_catalog^A^A^#^#^#^#^H^#^#^#postgres^#^E^#^#^#false^#^B^#^#^#54^A^A^#^#^#^C^#^#^#^#^#^#^#^#^#7^#^#^#^#^#^#^#^#^#^D^#^#^#1259^#^D^#^#^#3601^#^L^#^#^#pg_ts_parser^#^E^#^#^#TABLE^#^B^#^#^#^#ö^#^#^#CREATE TABLE pg_ts_parser (
prsname name NOT NULL,
prsnamespace oid NOT NULL,
prsstart regproc NOT NULL,
prstoken regproc NOT NULL,
prsend regproc NOT NULL,
prsheadline regproc NOT NULL,
prslextype regproc NOT NULL
Not sure what to make of this.
Your call should actually work as is:
pg_dump -Fc -t public.pg_ts_dict -t public.pg_ts_parser > file.dump
You can use a wildcard to include all tables starting with pg_ts_.
pg_dump -Fc -t 'public.pg_ts_*' > file.dump
On the Linux shell, you may need the extra quotes. (Related question on dba.SE.) Remove the quotes in Windows.
To make it abundantly clear, you could exclude the same tables from pg_catalog explicitly. Normally this is not necessary, but something seems to be abnormal in your case.
pg_dump -Fc -t 'public.pg_ts_*' -T 'pg_catalog.pg_ts_*' > file.dump
The documentation:
Also, you must write something like -t sch.tab to select a table in a
particular schema, rather than the old locution of -n sch -t tab.
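Once the dump succeeds, the drop can be schema-qualified the same way, for example (a sketch):
DROP TABLE public.pg_ts_dict;
DROP TABLE public.pg_ts_parser;
With the schema spelled out, only the copies in public are touched, never the system catalogs.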

Can't copy table to another database with pg_dump

I'm trying to copy a table from one database to another database (NOT schema). The command I used in the terminal is below:
pg_dump -U postgres -t OldSchema.TableToCopy OldDatabase | psql -U postgres -d NewDatabase
When I press Enter it asks for the postgres password; I enter it, and then it asks for the psql password. I enter that and press Enter. I then receive lots of:
invalid command \N
ERROR: relation "TableToCopy" does not exist
Both tables have UTF8 encoding. Am I doing something wrong?
OS: Windows XP
Error output:
psql:TblToCopy.sql:39236: invalid command \N
psql:TblToCopy.sql:39237: invalid command \N
psql:TblToCopy.sql:39238: invalid command \N
.
.
.
After hundreds of these errors, the terminal echoes:
psql:TblToCopy.sql:39245: ERROR: syntax error at or near "509"
LINE 1: 509 some gibberish words and letters here
And finally:
psql:TblToCopy.sql:39245: ERROR: relation "TableToCopy" does not exist
EDIT
I read this response to the same problem, "\N error with psql"; it says to use INSERT instead of COPY, but pg_dump put COPY in the file it created. How do I tell pg_dump to use INSERT instead of COPY?
I converted the file to UTF-8 with iconv. That error is gone now, but I have a new one: when I use psql to import the data into the database, the table gets created but without data. It says:
SET
SET
SET
SET
SET
SET
SET
SET
CREATE TABLE
ALTER TABLE
psql:tblNew.sql:39610: ERROR: value too long for type character(3)
CONTEXT: COPY words, line 1, column first_two_letters: "سر"
ALTER TABLE
ALTER TABLE
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE TRIGGER
I've tried creating a database with encoding UTF8, creating a table, and inserting the two UTF-8 encoded characters that the COPY command is trying to insert, and it works when using INSERT.
CREATE DATABASE test
WITH OWNER = postgres
ENCODING = 'UTF8'
TABLESPACE = pg_default
LC_COLLATE = 'English_United States.1252'
LC_CTYPE = 'English_United States.1252'
CONNECTION LIMIT = -1;
CREATE TABLE x
(
first_two_letters character(3)
)
WITH (
OIDS=FALSE
);
ALTER TABLE x
OWNER TO postgres;
INSERT INTO x(
first_two_letters)
VALUES ('سر');
According to http://rishida.net/tools/conversion/, the Unicode code points for the failing COPY are:
U+0633 U+0631
which are two characters, so you should be able to store them in a column defined as character(3), which holds strings up to 3 characters (not bytes) long.
and if we try to INSERT, it succeeds:
INSERT INTO x(
first_two_letters)
VALUES (U&'\0633\0631');
From the pg_dump documentation, you can get INSERT instead of COPY by using the --inserts option:
--inserts
Dump data as INSERT commands (rather than COPY). This will make restoration very slow; it is mainly useful for making dumps that can
be loaded into non-PostgreSQL databases. However, since this option
generates a separate command for each row, an error in reloading a row
causes only that row to be lost rather than the entire table contents.
Note that the restore might fail altogether if you have rearranged
column order. The --column-inserts option is safe against column order
changes, though even slower.
Try to use this instead for Step 1:
pg_dump -U postgres -t OldSchema."TableToCopy" --inserts OldDatabase > Table.sql
I've also tried COPYing from a table to a file and using COPY to import it, and for me it works.
Are you sure your client and server database encodings are UTF8?
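A quick way to check both (a sketch; run it in psql against the source and the target databases):
SHOW server_encoding;
SHOW client_encoding;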
Firstly, export the table named "x" from schema "public" on database "test" to a plain text SQL file:
pg_dump -U postgres -t public."x" test > x.sql
which creates the x.sql file that contains:
--
-- PostgreSQL database dump
--
SET statement_timeout = 0;
SET lock_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET search_path = public, pg_catalog;
SET default_tablespace = '';
SET default_with_oids = false;
--
-- Name: x; Type: TABLE; Schema: public; Owner: postgres; Tablespace:
--
CREATE TABLE x (
first_two_letters character(3)
);
ALTER TABLE public.x OWNER TO postgres;
--
-- Data for Name: x; Type: TABLE DATA; Schema: public; Owner: postgres
--
COPY x (first_two_letters) FROM stdin;
سر
\.
--
-- PostgreSQL database dump complete
--
Secondly, import with:
psql -U postgres -d test -f x.sql
The table name should be quoted, as follows:
pg_dump -U postgres -t OldSchema."TableToCopy" OldDatabase | psql -U postgres -d NewDatabase
I also suggest you do the job in two steps.
Step 1
pg_dump -U postgres -t OldSchema."TableToCopy" OldDatabase > Table.sql
If step 1 goes OK, then do step 2.
Step 2
psql -U postgres -d NewDatabase -f Table.sql

Dot (.) in Schema name makes pg_dump unusable

I have a schema named 2sample.sc. When I want to pg_dump some of its tables, the following error appears:
pg_dump: No matching tables were found
My pg_dump command:
pg_dump -U postgres -t 2sample.sc."error_log" --inserts games > dump.sql
My pg_dump works fine on other schemas like 2sample.
What I did:
I tried to escape the dot (.), but with no success.
Use "schema.name.with.dots.in.it"."table.name.with.dots.in.it" to specify the schema.table:
-- test schema with a dot in its name
DROP SCHEMA "tmp.tmp" CASCADE;
CREATE SCHEMA "tmp.tmp" ;
SET search_path="tmp.tmp";
CREATE TABLE nononono
( dont SERIAL NOT NULL
);
insert into nononono
SELECT generate_series(1,10)
;
$ pg_dump -t \"tmp.tmp\".\"nononono\" --schema-only -U postgres the_database
Output (snipped):
SET search_path = "tmp.tmp", pg_catalog;
SET default_tablespace = '';
SET default_with_oids = false;
--
-- Name: nononono; Type: TABLE; Schema: tmp.tmp; Owner: postgres; Tablespace:
--
CREATE TABLE nononono (
dont integer NOT NULL
);
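Applied to the schema from the question, the call would look something like this on a POSIX shell (a sketch; quoting rules differ on the Windows shell):
pg_dump -U postgres -t \"2sample.sc\".\"error_log\" --inserts games > dump.sql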
BTW: why would you want to add a dot to a schema (or table) name? It is asking for trouble. The same goes for MixedCaseNames. Underscores work just fine.