Postgres how to create table with automatic create_by - postgresql

If I want to create a table with a column create_by that is automatically filled with the user who creates the entry, what would the DDL look like?
I wonder whether Postgres can do this similarly to create_at, e.g.
create_at TIMESTAMP NOT NULL DEFAULT NOW()
kind of thing.

SQL Fiddle
PostgreSQL 9.6 Schema Setup:
CREATE TABLE foo
(
id serial primary key
, "bar" varchar(1)
, created_by text NOT NULL DEFAULT current_user
, created_at timestamp DEFAULT current_timestamp
)
;
INSERT INTO foo
("bar")
VALUES
('a'),
('b'),
('c')
;
Query 1:
select *
from foo
Results:
| id | bar | created_by | created_at |
|----|-----|---------------|-----------------------------|
| 1 | a | user_17_3a66a | 2017-11-04T05:05:18.161681Z |
| 2 | b | user_17_3a66a | 2017-11-04T05:05:18.161681Z |
| 3 | c | user_17_3a66a | 2017-11-04T05:05:18.161681Z |
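
One caveat worth adding (not part of the original answer): current_user is the database role of the session that runs the INSERT. If the application connects as one shared role, a common pattern is to default the column from a custom session setting instead. A minimal sketch, where the setting name app.current_user is just an assumption for illustration:
-- Sketch: capture an application-level user from a session setting.
-- The setting name app.current_user is a made-up example.
CREATE TABLE foo2
(
  id serial primary key
, "bar" varchar(1)
, created_by text NOT NULL DEFAULT current_setting('app.current_user', true)
, created_at timestamp DEFAULT current_timestamp
);
-- The application sets the value once per session (or per transaction):
SET app.current_user = 'alice';
INSERT INTO foo2 ("bar") VALUES ('x');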

Related

psycopg copy_expert doesn't work if id is not in csv

I want to insert data into Postgres. My table's columns:
id, col_1, col_2, col_3, col_4, col_5, col_6, col_7
The id column is auto-incremented.
My Python insert code:
copy_table_query = "COPY my_table (col_1, col_2, col_3, col_4, col_5, col_6, col_7) FROM STDIN WITH (DELIMITER '\t');"
curs.copy_expert(copy_table_query, data)
But it tries to insert col_1 into id, and of course it fails with psycopg2.errors.InvalidTextRepresentation: invalid input syntax for type bigint, because col_1 is a string.
How can I let Postgres generate ids while I just insert data from CSV?
An example that shows it works. There is really no need to use copy_expert; you can use [copy_from](https://www.psycopg.org/docs/cursor.html#cursor.copy_from). By default the separator is tab. You specify the columns with the columns parameter.
cat csv_test.csv
test f
test2 t
test3 t
\d csv_test
Table "public.csv_test"
Column | Type | Collation | Nullable | Default
--------+-------------------+-----------+----------+--------------------------------------
id | integer | | not null | nextval('csv_test_id_seq'::regclass)
col1 | character varying | | |
col2 | boolean | | |
with open('csv_test.csv') as csv_file:
    cur.copy_from(csv_file, 'csv_test', columns=['col1', 'col2'])
con.commit()
select * from csv_test ;
id | col1 | col2
----+-------+------
1 | test | f
2 | test2 | t
3 | test3 | t
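
For reference (not spelled out in the original answer), copy_from ends up sending roughly this COPY statement to the server; because only the data columns are listed, id falls back to its nextval() default:
-- Roughly what the copy_from call above issues (sketch):
COPY csv_test (col1, col2) FROM STDIN WITH (DELIMITER E'\t');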

Query Clarification on multiple table insert

I have a table populated by CSV raw data
| NNAME   | DateDriven | username |
|---------|------------|----------|
| Thunder | 1-1-1999   | mickey   |

And an existing MSSQL database

> Tables

Drivers
| ID | username |
|----|----------|
| 1  | mickey   |
| 2  | jonny    |
| 3  | ryan     |

Cars
| ID | NNAME | DateDriven |
|----|-------|------------|
|    |       |            |

Car_Drivers Table
| Cars_ID | Driver_ID |
|---------|-----------|
|         |           |
How can I take the CSV table data and insert it into the above? I am very lost!
The Cars IDs are identity(1,1). The Car_Drivers table has a composite primary key of two foreign keys.
What I think I need to do is create a join to convert username to ID, but I am getting lost completing the insert query.
Desired outcome
Cars Table
| ID | NNAME   | DateDriven |
|----|---------|------------|
| 1  | Thunder | 1-1-1999   |

Car_Drivers Table
| Cars_ID | Driver_ID |
|---------|-----------|
| 1       | 1         |
The following ought to do what you need. The problem is that you need to keep some temporary data around as rows are inserted into Cars, but some of the data is from a different table. Merge provides the answer:
-- Create the test data.
declare @CSVData as Table ( NName NVarChar(16), DateDriven Char(8), Username NVarChar(16) );
insert into @CSVData ( NName, DateDriven, Username ) values
  ( N'Thunder', '1-1-1999', N'mickey' );
select * from @CSVData;

declare @Drivers as Table ( Id SmallInt Identity, Username NVarChar(16) );
insert into @Drivers ( Username ) values
  ( N'mickey' ), ( N'jonny' ), ( N'ryan' );
select * from @Drivers;

declare @Cars as Table ( Id SmallInt Identity, NName NVarChar(16), DateDriven Char(8) );
declare @CarDrivers as Table ( Cars_Id SmallInt, Driver_Id SmallInt );

-- Temporary data needed for the @CarDrivers table.
declare @NewCars as Table ( Username NVarChar(16), Cars_Id SmallInt );

-- Merge the new data into @Cars.
-- MERGE allows the use of OUTPUT with references to columns not inserted,
-- e.g. Username.
merge into @Cars
  using ( select NName, DateDriven, Username from @CSVData ) as CSVData
  on 1 = 0
  when not matched by target then
    insert ( NName, DateDriven ) values ( CSVData.NName, CSVData.DateDriven )
  output CSVData.Username, Inserted.Id into @NewCars;

-- Display the results.
select * from @Cars;
-- Display the temporary data.
select * from @NewCars;

-- Add the connections.
insert into @CarDrivers ( Cars_Id, Driver_Id )
  select NewCars.Cars_Id, Drivers.Id
    from @NewCars as NewCars inner join
      @Drivers as Drivers on Drivers.Username = NewCars.Username;

-- Display the results.
select * from @CarDrivers;
DBFiddle.
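
As an aside (a sketch, not part of the answer above): in PostgreSQL the same "remember the generated ids" trick can be written with a data-modifying CTE. The table and column names below are assumed lower-case versions of the ones in the question, and the join back relies on NNAME uniquely identifying a CSV row, since RETURNING, like OUTPUT on a plain INSERT, cannot reference source columns:
-- Sketch only: PostgreSQL equivalent using INSERT ... RETURNING in a CTE.
WITH new_cars AS (
    INSERT INTO cars (nname, datedriven)
    SELECT nname, datedriven FROM csv_data
    RETURNING id, nname
)
INSERT INTO car_drivers (cars_id, driver_id)
SELECT nc.id, d.id
FROM new_cars AS nc
JOIN csv_data AS c ON c.nname = nc.nname
JOIN drivers AS d ON d.username = c.username;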

Transaction confirmation received, but table remains empty

In my application, I write transactions to the Postgres schema prod.
In order to debug, I have been using the psql command line client on OSX.
In my table, the only fields I have to fill are the message field (a JSON blob) and the status field (text).
Here is what the schema looks like:
Table "prod.suggestions"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
------------------+--------------------------+-----------+----------+--------------------+----------+--------------+-------------
id | uuid | | not null | uuid_generate_v4() | plain | |
message | jsonb | | not null | | extended | |
status | text | | not null | | extended | |
transaction_hash | text | | | | extended | |
created_at | timestamp with time zone | | | CURRENT_TIMESTAMP | plain | |
updated_at | timestamp with time zone | | | CURRENT_TIMESTAMP | plain | |
Indexes:
"suggestions_pkey" PRIMARY KEY, btree (id)
Triggers:
update_updated_at_on_prod_suggestions BEFORE UPDATE ON prod.suggestions FOR EACH ROW EXECUTE PROCEDURE update_updated_at()
Here is the function the trigger executes:
create function update_updated_at()
returns trigger
as
$body$
begin
new.updated_at = current_timestamp;
return new;
end;
$body$
language plpgsql;
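(For completeness, the trigger shown in the \d output above would have been attached with a statement along these lines; this is reconstructed from that output, not quoted from the original post.)
-- Reconstructed from the Triggers line in the \d output:
CREATE TRIGGER update_updated_at_on_prod_suggestions
    BEFORE UPDATE ON prod.suggestions
    FOR EACH ROW
    EXECUTE PROCEDURE update_updated_at();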
Here is the query to write the message:
INSERT INTO prod.suggestions (message, status) VALUES ('{"name": "Paint house", "tags": ["Improvements", "Office"], "finished": true}' , 'rcvd');
It returns INSERT 0 1, which I assume is a success.
However, when I query the table, it doesn't return anything.
select * from prod.suggestions;
I would appreciate any pointers on this.
This had nothing to do with Postgres. I had another worker thread that was deleting all the data from the table.
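
One way to catch this sort of thing (a sketch, not part of the original answer) is to watch the table's cumulative tuple statistics, which count deletes regardless of which session issues them:
-- Sketch: if n_tup_del keeps climbing while your own session never deletes,
-- some other client is removing the rows.
SELECT n_tup_ins, n_tup_del
FROM pg_stat_user_tables
WHERE schemaname = 'prod' AND relname = 'suggestions';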

Outdated row doesn't move out from foreign table partition postgres

I'm trying to learn how sharding is configured in Postgres.
My Postgres setup has a temperature table which has 4 partitions, each covering a different range of the "timestamp" value.
postgres=# \d+ temperature
Partitioned table "public.temperature"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
-----------+-----------------------------+-----------+----------+-----------------------------------------+---------+--------------+-------------
id | bigint | | not null | nextval('temperature_id_seq'::regclass) | plain | |
city_id | integer | | not null | | plain | |
timestamp | timestamp without time zone | | not null | | plain | |
temp | numeric(5,2) | | not null | | main | |
Partition key: RANGE ("timestamp")
Partitions: temperature_201901 FOR VALUES FROM ('2019-01-01 00:00:00') TO ('2019-02-01 00:00:00'),
temperature_201902 FOR VALUES FROM ('2019-02-01 00:00:00') TO ('2019-03-01 00:00:00'),
temperature_201903 FOR VALUES FROM ('2019-03-01 00:00:00') TO ('2019-04-01 00:00:00'),
temperature_201904 FOR VALUES FROM ('2019-04-01 00:00:00') TO ('2019-05-01 00:00:00')
temperature_201904 table, in particular, is a foreign table
postgres=# \d+ temperature_201904
Foreign table "public.temperature_201904"
Column | Type | Collation | Nullable | Default | FDW options | Storage | Stats target | Description
-----------+-----------------------------+-----------+----------+-----------------------------------------+-------------+---------+--------------+-------------
id | bigint | | not null | nextval('temperature_id_seq'::regclass) | | plain | |
city_id | integer | | not null | | | plain | |
timestamp | timestamp without time zone | | not null | | | plain | |
temp | numeric(5,2) | | not null | | | main | |
Partition of: temperature FOR VALUES FROM ('2019-04-01 00:00:00') TO ('2019-05-01 00:00:00')
Partition constraint: (("timestamp" IS NOT NULL) AND ("timestamp" >= '2019-04-01 00:00:00'::timestamp without time zone) AND ("timestamp" < '2019-05-01 00:00:00'::timestamp without time zone))
Server: shard02
Insert works as expected. If I insert the following value and check from the remote host shard02, then the value exists. Fantastic!
postgres=# select * from temperature_201904;
id | city_id | timestamp | temp
----+---------+---------------------+-------
1 | 1 | 2019-04-02 00:00:00 | 12.30
(1 row)
However, if I update the timestamp of this row such that it's no longer valid for the range defined for the partition, I'd expect it to get moved out and placed into the correct partition, temperature_201901, but it's not.
postgres=# update temperature set timestamp = '2019-01-04' where id=1;
UPDATE 1
postgres=# select * from temperature_201904 ;
id | city_id | timestamp | temp
----+---------+---------------------+-------
1 | 1 | 2019-01-04 00:00:00 | 12.30
Again, just to reiterate, this table has a range temperature_201904 FOR VALUES FROM ('2019-04-01 00:00:00') TO ('2019-05-01 00:00:00') and is a foreign table.
Feels like I'm missing something here.
Is this an expected behavior? If so, is there a way to configure such that data are automatically moved between nodes as their partition constraints are changed?
Thanks in advance!
postgres=# SELECT version();
version
------------------------------------------------------------------------------------------------------------------
PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
This seems to be expected. From the docs:
While rows can be moved from local partitions to a foreign-table partition (provided the foreign data wrapper supports tuple routing), they cannot be moved from a foreign-table partition to another partition.
Now I would have expected an ERROR rather than silently violating the implied constraint, but I wouldn't expect this to have worked the way you want it to.
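
If the row does need to end up in the right partition, a manual workaround (a sketch, not from the answer) is to delete it from the foreign partition and re-insert it through the parent, which re-runs tuple routing:
-- Sketch: relocate the stray row by hand. postgres_fdw supports
-- DELETE ... RETURNING, and inserting via the parent routes the row
-- to the partition matching its new timestamp.
BEGIN;
WITH moved AS (
    DELETE FROM temperature_201904 WHERE id = 1
    RETURNING id, city_id, "timestamp", temp
)
INSERT INTO temperature (id, city_id, "timestamp", temp)
SELECT id, city_id, "timestamp", temp FROM moved;
COMMIT;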

How to retrieve column name and datatype from \d in postgresql

I am new to PostgreSQL and working on a project which takes a snapshot of a relation. I want to retrieve the first 2 column names and data types from the \d+ command in PostgreSQL and then use this result to create another table with only the first 2 columns.
I am stuck on this. Can someone guide me on this?
Column | Type | Modifiers | Storage | Stats target | Description
--------------+-----------------------------+------------------------------------------------------------+----------+--------------+-------------
i | integer | | plain | |
updated_time | timestamp without time zone | default '2000-01-01 00:00:00'::timestamp without time zone | plain | |
version | numeric | default '0'::numeric | main | |
is_updated | boolean | default false | plain | |
name | character varying(20) | | extended | |
I would just use PL/pgSQL here, e.g.:
t=# do
$$
begin
execute format('create table so as select %s from pg_database',(select string_agg(column_name,',') from information_schema.columns where table_name = 'pg_database' and ordinal_position <=2));
end;
$$
;
DO
t=# \d so
Table "public.so"
Column | Type | Modifiers
---------+------+-----------
datname | name |
datdba | oid |
t=# \d pg_database
Table "pg_catalog.pg_database"
Column | Type | Modifiers
---------------+-----------+-----------
datname | name | not null
datdba | oid | not null
encoding | integer | not null
datcollate | name | not null
datctype | name | not null
datistemplate | boolean | not null
datallowconn | boolean | not null
datconnlimit | integer | not null
datlastsysoid | oid | not null
datfrozenxid | xid | not null
datminmxid | xid | not null
dattablespace | oid | not null
datacl | aclitem[] |
Indexes:
"pg_database_datname_index" UNIQUE, btree (datname), tablespace "pg_global"
"pg_database_oid_index" UNIQUE, btree (oid), tablespace "pg_global"
Tablespace: "pg_global"
Update
The above is easily modifiable for other options if needed, e.g.:
t=# drop table so;
DROP TABLE
t=# do
$$
begin
execute format('create table so (%s) ',(select string_agg(column_name||' '||data_type||' '||case when is_nullable = 'NO' then 'NOT NULL' else '' end,',') from information_schema.columns where table_name = 'pg_database' and ordinal_position <=2));
end;
$$
;
DO
t=# \d so
Table "public.so"
Column | Type | Modifiers
---------+------+-----------
datname | name | not null
datdba | oid | not null
to include some modifiers...
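
Both DO blocks boil down to this information_schema lookup, shown standalone for clarity (same example table as above):
-- The core query the format() calls aggregate over:
select column_name, data_type, is_nullable
from information_schema.columns
where table_name = 'pg_database'
  and ordinal_position <= 2
order by ordinal_position;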
Update 2
Lastly, if you want to use the exact result from the \d meta-command, you can build your dynamic query from the one used by psql for \d:
-bash-4.2$ psql -E -c "\d pg_database"
********* QUERY **********
SELECT c.oid,
n.nspname,
...
and so forth