Postgres database pg_dump - dump file verbosity explanation

I created a postgres database dump with pg_dump. I would now like to know why the backup file is so verbose.
a) What is the purpose of this config:
SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;
SET default_tablespace = '';
SET default_table_access_method = heap;
b) Why is the creation of a table so verbose:
CREATE TABLE public.tag (
id integer NOT NULL,
name character varying(100) NOT NULL,
description text
);
ALTER TABLE public.tag OWNER TO postgres;
CREATE SEQUENCE public.tag_id_seq
AS integer
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;
ALTER TABLE public.tag_id_seq OWNER TO postgres;
ALTER SEQUENCE public.tag_id_seq OWNED BY public.tag.id;
ALTER TABLE ONLY public.tag ALTER COLUMN id SET DEFAULT nextval('public.tag_id_seq'::regclass);
Why not simply the way I created it originally:
CREATE TABLE tag (
id SERIAL PRIMARY KEY,
name VARCHAR(100) NOT NULL,
description TEXT
);
c) Why is COPY ... FROM stdin; used rather than INSERT INTO ... VALUES ?
COPY public.tag (id, name, description) FROM stdin;
1 vegetarian No animals.
2 vegan No animal products.
3 vegetable
\.
SELECT pg_catalog.setval('public.tag_id_seq', 3, true);
ALTER TABLE ONLY public.tag
ADD CONSTRAINT tag_pkey PRIMARY KEY (id);

I would suggest spending some time with the pg_dump documentation. In the meantime:
a) Those settings set up the session environment to be compatible with the database you dumped from. The line SELECT pg_catalog.set_config('search_path', '', false); is also a security feature; see CVE-2018-1058 for more on that.
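For example, with the search path emptied, unqualified names no longer resolve, so a malicious object planted in another schema cannot shadow what the restore script references (a quick illustration using the tag table from the dump above):
SELECT pg_catalog.set_config('search_path', '', false);
SELECT * FROM tag;        -- ERROR: relation "tag" does not exist
SELECT * FROM public.tag; -- works: every reference must be schema-qualified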
b) This is because a dump has three parts: pre-data, data, and post-data. See the pg_dump documentation linked above for a complete explanation. By default it dumps all three, but keeps the parts separate. This also allows things like triggers, which work best if restored separately and last, to stay out of the way while the data is being loaded.
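If you want the parts individually, the --section option selects them; a quick sketch, with mydb standing in for your database name:
pg_dump --section=pre-data mydb > pre-data.sql   # CREATE TABLE and friends
pg_dump --section=data mydb > data.sql           # the COPY blocks
pg_dump --section=post-data mydb > post-data.sql # indexes, constraints, triggers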
c) COPY is orders of magnitude faster than INSERT. You can ask for INSERT statements instead (see the example below), but I would recommend against it unless you are moving the data to another SQL database that does not understand COPY.
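For reference, those options look like this (mydb again a placeholder):
pg_dump --inserts mydb > dump.sql        # one INSERT per row instead of COPY
pg_dump --column-inserts mydb > dump.sql # INSERT with explicit column names; slowest, most portable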

Related

pg_dump success, but no data in backup file

I use
pg_dump -d xx_sjcbhs --table=wd555.ft_bjgzflcbmxb --column-inserts > e:\temp\ontable2.sql
to back up a table. The command runs OK, but the backup file has no INSERT statements and no data.
--
-- PostgreSQL database dump
--
-- Dumped from database version 14.5
-- Dumped by pg_dump version 14.5
SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;
SET default_tablespace = '';
--
-- Name: ft_bjgzflcbmxb; Type: TABLE; Schema: wd555; Owner: postgres
--
CREATE TABLE wd555.ft_bjgzflcbmxb (
xxbsm text NOT NULL,
xxmc text NOT NULL,
tyshxydm text NOT NULL,
id text NOT NULL,
yyyymm text NOT NULL,
bjdm text NOT NULL,
cbflbm text NOT NULL,
cbfl text NOT NULL,
je numeric(18,2) NOT NULL,
zgh text NOT NULL,
rysxdm text NOT NULL,
ftbz text,
ftsm text,
dftje numeric(18,2) NOT NULL,
zrs numeric(18,2) NOT NULL,
bjrs numeric(18,2) NOT NULL
)
PARTITION BY LIST (yyyymm);
--
-- PostgreSQL database dump complete
You got what you requested, because a partitioned table itself contains no data. You have to dump the partitions as well if you want to dump the data. To make that easier, note that the --table option accepts a pattern as its argument, so you don't have to enumerate all the partitions.
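For example, something like this should pick up the partitions too (quote the pattern so your shell does not expand the asterisk):
pg_dump -d xx_sjcbhs --table="wd555.ft_bjgzflcbmxb*" --column-inserts > e:\temp\ontable2.sql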

Postgres - DB backup without data but PK is not autoincrementing correctly

I wanted to back up my DB with no data, but my PK is not autoincrementing in order. On some tables it goes 1,3,5,6,7.., on some it is 1,3,4.., and some even start from 2.
Here is my sql code example:
SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;
CREATE SCHEMA "Codes";
ALTER SCHEMA "Codes" OWNER TO postgres;
CREATE TABLE "Codes"."VehicleColor" (
"ID" integer NOT NULL,
"Name" character varying(255) NOT NULL,
"Code" character varying(30),
"Note" character varying(2000),
"Active" boolean NOT NULL
);
ALTER TABLE "Codes"."VehicleColor" OWNER TO postgres;
ALTER TABLE "Codes"."VehicleColor" ALTER COLUMN "ID" ADD GENERATED ALWAYS AS IDENTITY (
SEQUENCE NAME "Codes"."VehicleColor_ID_seq"
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1
);
ALTER TABLE ONLY "Codes"."VehicleColor"
ADD CONSTRAINT "VehicleColor_pkey" PRIMARY KEY ("ID");
SELECT pg_catalog.setval('"Codes"."VehicleColor_ID_seq"', 1, true);
With this last line (SELECT pg_catalog.setval...), the VehicleColor PK sequence went 1,3,5,6,7..; without this last line, it goes 1,3,4..
Does anyone have an idea why this happens?
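For reference, the third argument of setval controls whether the given value counts as already used, which shifts where the next generated ID starts (sequence name taken from the dump above):
SELECT pg_catalog.setval('"Codes"."VehicleColor_ID_seq"', 1, true);  -- 1 is used; next nextval() returns 2
SELECT pg_catalog.setval('"Codes"."VehicleColor_ID_seq"', 1, false); -- 1 not used yet; next nextval() returns 1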

Create unique constraint initially disabled

This is my table:
CREATE TABLE [dbo].[TestTable]
(
[Name1] varchar(50) COLLATE French_CI_AS NOT NULL,
[Name2] varchar(255) COLLATE French_CI_AS NULL,
CONSTRAINT [TestTable_uniqueName1] UNIQUE ([Name1]),
CONSTRAINT [TestTable_uniqueName1Name2] UNIQUE ([Name1], [Name2])
)
ALTER TABLE [dbo].[TestTable]
ADD CONSTRAINT [TestTable_uniqueName1]
UNIQUE NONCLUSTERED ([Name1])
ALTER TABLE [dbo].[TestTable]
ADD CONSTRAINT [TestTable_uniqueName1Name2]
UNIQUE NONCLUSTERED ([Name1], [Name2])
GO
ALTER INDEX [TestTable_uniqueName1]
ON [dbo].[TestTable]
DISABLE
GO
My idea is to enable/disable one or the other unique constraint depending on the customer application. That way, I can catch the thrown exception in my C# code and display a specific error message in the GUI.
Now, my problem is that I need to alter the collation of columns Name1 and Name2 to make them case sensitive (French_CS_AS). To alter these fields, I have to drop the two constraints and recreate them. Given the schema above, I cannot create an enabled constraint and then disable it, because for some customers there are duplicate keys for one or the other constraint.
For my update script, my idea number 1 was:
1. Save the names of the enabled constraints in a temp table
2. Drop the constraints
3. Alter the columns
4. Create DISABLED unique constraints
5. Enable specific constraints according to the values saved in step 1
My problem is with step 4: I can't find a way to create a disabled unique constraint with an ALTER TABLE statement. Is it possible to create it directly in the sys.indexes table?
My idea number 2 was:
1. Rename TestTable to TestTableCopy
2. Recreate TestTable with the new field collation, and otherwise the same schema (indexes, FKs, triggers, ...)
3. Disable specific unique constraints in TestTable
4. Migrate data from TestTableCopy to TestTable
5. Drop TestTableCopy
With this approach, my fear is losing links with other tables/dependencies, because it is a central table in my database.
Is there any other way to achieve my goal?
If necessary, I can use unique indexes instead of unique constraints.
It is impossible to create a unique index on a column that already has duplicate values. So, rather than having a disabled unique index, either:
not have an index at all (which, from the query processor's point of view, is the same as having a disabled index),
or create a non-unique index.
For those instances where your client has unique data, create a unique index; for those instances where your client has non-unique data, create a non-unique index.
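A minimal sketch of that per-customer choice (the index name is illustrative):
-- Customers whose data is unique: enforce it
CREATE UNIQUE NONCLUSTERED INDEX [IX_TestTable_Name1]
    ON [dbo].[TestTable] ([Name1]);
-- Customers with duplicates: same index shape, without the uniqueness
CREATE NONCLUSTERED INDEX [IX_TestTable_Name1]
    ON [dbo].[TestTable] ([Name1]);
Alternatively, the check can live in a stored procedure, for example: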
CREATE PROCEDURE [dbo].[spUsers_AddUsers]
    @Name1 varchar(50),
    @Name2 varchar(50),
    @Unique bit
AS
declare @err int
begin tran
if @Unique = 1 begin
    -- (Name1, Name2) must be unique together
    if not exists (SELECT * FROM Users WHERE Name1 = @Name1 and Name2 = @Name2)
    begin
        INSERT INTO Users (Name1, Name2)
        VALUES (@Name1, @Name2)
        set @err = @@ERROR
    end else
    begin
        UPDATE Users
        set Name1 = @Name1,
            Name2 = @Name2
        where Name1 = @Name1 and Name2 = @Name2
        set @err = @@ERROR
    end
end else begin
    -- only Name1 must be unique
    if not exists (SELECT * FROM Users WHERE Name1 = @Name1)
    begin
        INSERT INTO Users (Name1, Name2)
        VALUES (@Name1, @Name2)
        set @err = @@ERROR
    end else
    begin
        UPDATE Users
        set Name1 = @Name1,
            Name2 = @Name2
        where Name1 = @Name1
        set @err = @@ERROR
    end
end
if @err = 0 commit tran
else rollback tran
So first you check whether you need Name1 and Name2 to be unique together, or just Name1; then you do an insert/update based on which constraint you have.
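A hypothetical call, with @Unique chosen per customer:
EXEC [dbo].[spUsers_AddUsers] @Name1 = 'Dupont', @Name2 = 'Jean', @Unique = 1;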

TSQL Alter PRIMARY KEY Clustered Index MSSQL2008r2

Is it possible to ALTER a PRIMARY KEY CLUSTERED index on an existing table without losing the data?
If so, what is the ALTER command for this please?
EDIT
I want to add an additional column to the PRIMARY KEY CLUSTERED Index
Thanks
Here is what I've done in the past to change a primary key on a table:
BEGIN TRANSACTION doStuff
DECLARE @isValid bit
SET @isValid = 1
DECLARE @pkName varchar(50)
SET @pkName = (
    SELECT TOP 1 name
    FROM sys.key_constraints
    WHERE type = 'PK'
    AND OBJECT_NAME(parent_object_id) = N'TableName'
)
DECLARE @sql nvarchar(2000)
SET @sql = N'
    ALTER TABLE dbo.TableName
    DROP CONSTRAINT ' + @pkName
EXEC (@sql)
IF (@@ERROR <> 0)
BEGIN
    PRINT 'Error deleting primary key'
    SET @isValid = 0
END
ALTER TABLE dbo.TableName
ADD PRIMARY KEY (primary key columns separated by comma)
IF (@@ERROR <> 0)
BEGIN
    PRINT 'Error creating primary key'
    SET @isValid = 0
END
IF (@isValid = 1)
BEGIN
    PRINT 'Commit'
    COMMIT TRANSACTION doStuff
END
ELSE
BEGIN
    PRINT 'Rollback'
    ROLLBACK TRANSACTION doStuff
END
Note, as pointed out in Best way to change clustered index (PK) in SQL 2005, this will reorder the data in your table during the operation, so depending on the size of the table it could take a significant amount of time.
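For the specific case in the EDIT (adding a column to the clustered primary key), the re-create step would look something like this (table and column names are illustrative):
ALTER TABLE dbo.TableName
ADD CONSTRAINT PK_TableName PRIMARY KEY CLUSTERED (ExistingColumn, NewColumn);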

Changing primary key int type to serial

Is there a way to change an existing primary key's type from int to serial without dropping the table? I already have a lot of data in the table and I don't want to delete it.
Converting an int to a serial more or less just means adding a sequence default to the column. So, to make it a serial:
1. Pick a starting value for the serial, greater than any existing value in the table:
SELECT MAX(id)+1 FROM mytable;
2. Create a sequence for the serial (tablename_columnname_seq is a good name):
CREATE SEQUENCE test_id_seq MINVALUE 3; -- assuming you want to start at 3
3. Alter the default of the column to use the sequence:
ALTER TABLE test ALTER id SET DEFAULT nextval('test_id_seq');
4. Alter the sequence to be owned by the table/column:
ALTER SEQUENCE test_id_seq OWNED BY test.id;
A very simple SQLfiddle demo.
And as always, make a habit of taking a full backup before running schema-altering SQL from random people on the Internet ;-)
-- temp schema for testing
-- ----------------------------
DROP SCHEMA tmp CASCADE;
CREATE SCHEMA tmp ;
SET search_path=tmp;
CREATE TABLE bagger
( id INTEGER NOT NULL PRIMARY KEY
, tralala varchar
);
INSERT INTO bagger(id,tralala)
SELECT gs, 'zzz_' || gs::text
FROM generate_series(1,100) gs
;
DELETE FROM bagger WHERE random() <0.9;
-- SELECT * FROM bagger;
-- CREATE A sequence and tie it to bagger.id
-- -------------------------------------------
CREATE SEQUENCE bagger_id_seq;
ALTER TABLE bagger
ALTER COLUMN id SET NOT NULL
, ALTER COLUMN id SET DEFAULT nextval('bagger_id_seq')
;
ALTER SEQUENCE bagger_id_seq
OWNED BY bagger.id
;
SELECT setval('bagger_id_seq', MAX(ba.id))
FROM bagger ba
;
-- Check the result
-- ------------------
SELECT * FROM bagger;
\d bagger
\d bagger_id_seq