I have the following table:
CREATE TABLE Kundendaten (
beschreiben_knr INTEGER REFERENCES Kunde(knr) DEFERRABLE INITIALLY DEFERRED,
erstelldatum DATE,
anschrift VARCHAR(40),
sonderrabat INTEGER,
PRIMARY KEY (erstelldatum, beschreiben_knr)
);
If I run this query:
select * from Kundendaten ORDER BY erstelldatum DESC;
I get:
beschreiben_knr | erstelldatum | anschrift | sonderrabat
-----------------+--------------+---------------+-------------
1 | 2015-11-01 | Winkelgasse 5 | 0
2 | 2015-11-01 | Badeteich 7 | 10
3 | 2015-11-01 | Senfgasse 7 | 15
1 | 2015-10-30 | Sonnenweg 3 | 5
But I need to get only the rows with the highest date if there is more than one. In this case the last row should not appear.
How can I achieve this in PostgreSQL?
You want something like WHERE erstelldatum = MAX(erstelldatum), but that doesn't work because aggregate functions are not allowed in WHERE. You can use a subquery to get the newest date:
SELECT *
FROM Kundendaten
WHERE erstelldatum = (
SELECT MAX(erstelldatum) FROM Kundendaten
);
Postgres will optimize that subquery so it is only run once, but you'll want to make sure erstelldatum is indexed.
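For example, a minimal sketch (the index name is made up; adjust it to your naming convention):
-- hypothetical index to speed up both the MAX() subquery and the equality filter
CREATE INDEX kundendaten_erstelldatum_idx ON Kundendaten (erstelldatum);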
Related
I have the following table:
CREATE TABLE tree_node (
id serial primary key,
name character varying(255),
parent_id integer references tree_node (id)
);
The table contains many trees with up to about 1000 nodes.
(I'm able to query a tree and its descendants efficiently with a recursive query).
However, I need to be able to copy a single tree in one operation. Say I have a tree with 3 nodes, ids 1,2,3 (this is potentially a large tree). I would like to make a copy of it i.e. creating new nodes with new ids. (Here the copied tree is ids 4,5,6):
id | name | parent_id
----+-----------------+-----------
1 | NodeA |
2 | NodeA.1 | 1
3 | NodeA.1.1 | 2
4 | NodeA(copy) |
5 | NodeA.1(copy) | 4
6 | NodeA.1.1(copy) | 5
Is there a way to copy a tree and its descendants more efficiently than inserting each tree node separately (because the new parent_id is needed)?
There you go:
\i tmp.sql
CREATE TABLE tree_node (
id serial primary key
, name varchar
, parent_id integer references tree_node (id)
);
INSERT INTO tree_node(name, parent_id) VALUES
( 'Node-A', NULL)
, ( 'Node-A.1', 1)
, ( 'Node-A.1.1', 2)
;
SELECT * FROM tree_node;
-- Find the top value of the sequence
-- and use it as an increment on all the copies
WITH top(val) AS
(select currval('tree_node_id_seq')
)
INSERT INTO tree_node(id, name, parent_id)
SELECT id+top.val
, name|| '(copy)'
, parent_id + top.val
FROM tree_node
CROSS JOIN top
;
SELECT * FROM tree_node;
-- bump the sequence
WITH nxt AS (
select max(id) mx from tree_node
)
SELECT setval('tree_node_id_seq', (select mx FROM nxt) )
;
Output:
DROP SCHEMA
CREATE SCHEMA
SET
CREATE TABLE
INSERT 0 3
id | name | parent_id
----+------------+-----------
1 | Node-A |
2 | Node-A.1 | 1
3 | Node-A.1.1 | 2
(3 rows)
INSERT 0 3
id | name | parent_id
----+------------------+-----------
1 | Node-A |
2 | Node-A.1 | 1
3 | Node-A.1.1 | 2
4 | Node-A(copy) |
5 | Node-A.1(copy) | 4
6 | Node-A.1.1(copy) | 5
(6 rows)
setval
--------
6
(1 row)
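The script above copies every row in the table. If you only want to copy a single tree, as the question asks, you can restrict the rows with a recursive CTE and apply the same offset trick. A sketch, not from the original answer, assuming the root of the tree to copy has id 1 and using max(id) as the offset to avoid collisions:
WITH RECURSIVE subtree AS (
    SELECT id, name, parent_id
    FROM tree_node
    WHERE id = 1              -- root of the tree to copy
  UNION ALL
    SELECT c.id, c.name, c.parent_id
    FROM tree_node c
    JOIN subtree s ON c.parent_id = s.id
)
, top(val) AS (
    SELECT max(id) FROM tree_node
)
INSERT INTO tree_node(id, name, parent_id)
SELECT s.id + top.val
     , s.name || '(copy)'
     -- the copied root keeps its original parent (NULL for a top-level tree);
     -- every other node points at the copy of its parent
     , CASE WHEN s.id = 1 THEN s.parent_id ELSE s.parent_id + top.val END
FROM subtree s
CROSS JOIN top
;
-- don't forget to bump the sequence afterwards, as above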
I want to create a web app which will use tree data structures. Users will be able to create, update and delete trees. I have the following table, called nodes, in a PostgreSQL database:
CREATE TABLE nodes (
id INTEGER PRIMARY KEY,
name VARCHAR(50) NOT NULL UNIQUE,
parent_id INTEGER NULL REFERENCES nodes(id)
);
Getting data
I want to get data in the following form:
id | name | children
---|------|--------------
1 | a | [2,3]
2 | b | []
3 | c | [4]
4 | d | []
I created a query which returns data in this form:
id | name | parent_id
---|------|--------------
1 | a |
2 | b | 1
3 | c | 1
4 | d | 3
And here is the code:
WITH RECURSIVE nodes_cte(id, name, parent_id, level) AS (
SELECT nodes.id, nodes.name, nodes.parent_id, 0 AS level
FROM nodes
WHERE name = 'a'
UNION ALL
SELECT nodes.id, nodes.name, nodes.parent_id, level+1
FROM nodes
JOIN nodes_cte
ON nodes_cte.id = nodes.parent_id
)
SELECT * FROM nodes_cte;
Can I change the SQL code to get what I want, or should I do that in the app?
Inserting data
I want to know the ways to insert data into the table. I think the following approach will work for me:
create sequence in database
increase sequence for number of elements in tree
manually compute ids in app and insert elements in the table
Are there better ways?
CREATE TABLE nodes
( id INTEGER PRIMARY KEY
, name VARCHAR(50) NOT NULL UNIQUE
, parent_id INTEGER NULL REFERENCES nodes(id)
);
-- sample data from the question
INSERT INTO nodes(id,name,parent_id)VALUES
( 1 , 'a' , NULL)
,( 2 , 'b' , 1)
,( 3 , 'c' , 1)
,( 4 , 'd' , 3)
;
SELECT p.id, p.name
, array_agg(c.id) AS children
FROM nodes p
LEFT JOIN nodes c ON c.parent_id = p.id
GROUP BY p.id, p.name
;
Result:
id | name | children
----+------+----------
1 | a | {2,3}
2 | b | {NULL}
3 | c | {4}
4 | d | {NULL}
(4 rows)
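Note the {NULL} entries for the leaf nodes. If you want empty arrays instead, matching the desired output above, you can filter the aggregate; a sketch (FILTER requires PostgreSQL 9.4 or later):
SELECT p.id, p.name
     , COALESCE(array_agg(c.id) FILTER (WHERE c.id IS NOT NULL), '{}') AS children
FROM nodes p
LEFT JOIN nodes c ON c.parent_id = p.id
GROUP BY p.id, p.name
ORDER BY p.id
;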
Extra: using generate_series() to insert a bunch of records, each record having id/3 as its parent (except when that is zero).
INSERT INTO nodes(id,name,parent_id)
SELECT gs, 'zzz_'|| gs::text, NULLIF(gs/3 , 0)
FROM generate_series ( 5,25) gs
;
INSERTING/UPDATING DATA
Normally, your front-end should not mess with sequences, but leave that to the DBMS. You already have a UNIQUE constraint on name, because it is a natural key. So, your front-end should use that key to address rows in the nodes table, like in:
CREATE TABLE nodes2
( id SERIAL NOT NULL PRIMARY KEY
, name VARCHAR(50) NOT NULL UNIQUE
, parent_id INTEGER NULL REFERENCES nodes2(id)
);
INSERT INTO nodes2(name,parent_id)
SELECT 'Omg_'|| gs::text, NULLIF(gs/3 , 0)
FROM generate_series ( 1,15) gs
;
PREPARE upd (text, text) AS
-- child, parent
UPDATE nodes2 c
SET parent_id = p.id
FROM nodes2 p
WHERE p.name = $2 -- parent
AND c.name = $1 -- child
;
EXECUTE upd( 'Omg_12', 'Omg_11');
EXECUTE upd( 'Omg_15', 'Omg_11');
Result (the children listing at the end comes from running the same array_agg query against nodes2):
CREATE TABLE
INSERT 0 15
PREPARE
UPDATE 1
UPDATE 1
id | name | children
----+--------+-----------
1 | Omg_1 | {3,4,5}
2 | Omg_2 | {6,7,8}
3 | Omg_3 | {9,10,11}
4 | Omg_4 | {13,14}
5 | Omg_5 | {NULL}
6 | Omg_6 | {NULL}
7 | Omg_7 | {NULL}
8 | Omg_8 | {NULL}
9 | Omg_9 | {NULL}
10 | Omg_10 | {NULL}
11 | Omg_11 | {15,12}
12 | Omg_12 | {NULL}
13 | Omg_13 | {NULL}
14 | Omg_14 | {NULL}
15 | Omg_15 | {NULL}
(15 rows)
I am working with a PostgreSQL 8.4.13 database.
Recently I had around 86.5 million records in a table. I deleted almost all of them - only 5000 records are left now. I ran:
vacuum full
after deleting the rows, and that returned disk space to the OS (thanks to a suggestion from a fellow SO member).
But I see that my id numbers are still stuck at millions. For ex:
id | date_time | event_id | secs_since_1970 | value
---------+-------------------------+----------+-----------------+-----------
61216287 | 2013/03/18 16:42:42:041 | 6 | 1363646562.04 | 46.4082
61216289 | 2013/03/18 16:42:43:041 | 6 | 1363646563.04 | 55.4496
61216290 | 2013/03/18 16:42:44:041 | 6 | 1363646564.04 | 40.0553
61216291 | 2013/03/18 16:42:45:041 | 6 | 1363646565.04 | 38.5694
In an attempt to start the id value at 1 again for the remaining rows, I tried:
cluster mytable_pkey on mytable;
where mytable is the name of my table. But that did not help.
So, my question(s) is/are:
Is there a way to get the index (id value) to start at 1 again?
If I add or update the table with a new record, will it start from 1 or pick up the next highest integer value (say 61216292 in the above example)?
My table description is as follows: There is no FK constraint and no sequence in it.
jbossql=> \d mytable;
Table "public.mytable"
Column | Type | Modifiers
-----------------+------------------------+-----------
id | bigint | not null
date_time | character varying(255) |
event_id | bigint |
secs_since_1970 | double precision |
value | real |
Indexes:
"mydata_pkey" PRIMARY KEY, btree (id) CLUSTER
Drop the primary key first and create a temporary sequence.
alter table mytable drop constraint mydata_pkey;
create temporary sequence temp_seq;
Use the sequence to update:
update mytable
set id = nextval('temp_seq');
Recreate the primary key and drop the sequence:
alter table mytable add primary key (id);
drop sequence temp_seq;
If there is a foreign key dependency on this table then you will have to deal with it first and the update will be a more complex procedure.
Is your primary key defined using a serial? If so, that creates an implicit sequence. You can use ALTER SEQUENCE (see http://www.postgresql.org/docs/8.2/static/sql-altersequence.html for syntax) to change the starting number back to 1.
Since you still have records left (the 5000 remaining rows), you DO NOT want to reset that number to a value below the highest remaining ID, because the sequence would then generate numbers that collide with existing ones. The point of using a sequence is that it gives you a transactional way to increment a number and guarantees that successive operations get unique, incremented numbers.
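For example, a sketch assuming the implicit sequence is named mytable_id_seq (the default name for a serial column id on table mytable):
-- jump the sequence past the highest remaining id:
ALTER SEQUENCE mytable_id_seq RESTART WITH 61216292;
-- or compute it from the table:
SELECT setval('mytable_id_seq', (SELECT max(id) FROM mytable));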
DROP SCHEMA tmp CASCADE;
CREATE SCHEMA tmp ;
SET search_path=tmp;
--
-- Note: the PK is declared "deferrable initially deferred" so that the
-- id-shuffling UPDATE below cannot hit transient uniqueness violations
-- (constraints are NOT deferrable by default)
--
CREATE TABLE funky
( id SERIAL NOT NULL PRIMARY KEY DEFERRABLE INITIALLY DEFERRED
, tekst varchar
);
-- create some data with gaps in it
INSERT INTO funky(id, tekst)
SELECT gs, 'Number:' || gs::text
FROM generate_series(1,100,10) gs
;
-- set the sequence to the max occurring id
SELECT setval('funky_id_seq' , mx.mx)
FROM (SELECT max(id) AS mx FROM funky) mx
;
SELECT * FROM funky ;
-- compress the keyspace, making the ids consecutive
UPDATE funky xy
SET id = self.newid
FROM (
SELECT id AS id
, row_number() OVER (ORDER BY id) AS newid
FROM funky
) self
WHERE self.id = xy.id
;
-- set the sequence to the new max occurring id
SELECT setval('funky_id_seq' , mx.mx)
FROM (SELECT max(id) AS mx FROM funky) mx
;
SELECT * FROM funky ;
Result:
CREATE TABLE
INSERT 0 10
setval
--------
91
(1 row)
id | tekst
----+-----------
1 | Number:1
11 | Number:11
21 | Number:21
31 | Number:31
41 | Number:41
51 | Number:51
61 | Number:61
71 | Number:71
81 | Number:81
91 | Number:91
(10 rows)
UPDATE 10
setval
--------
10
(1 row)
id | tekst
----+-----------
1 | Number:1
2 | Number:11
3 | Number:21
4 | Number:31
5 | Number:41
6 | Number:51
7 | Number:61
8 | Number:71
9 | Number:81
10 | Number:91
(10 rows)
WARNING WARNING WARNING WARNING WARNING WARNING ACHTUNG:
Changing key values is generally a terrible idea. Avoid it at all cost.
I have a table that contains data for every day in 2002, but it has some missing dates: namely, 354 records for 2002 instead of 365. For my calculations, I need the missing dates to be present in the table with NULL values:
+-----+------------+------------+
| ID | rainfall | date |
+-----+------------+------------+
| 100 | 110.2 | 2002-05-06 |
| 101 | 56.6 | 2002-05-07 |
| 102 | 65.6 | 2002-05-09 |
| 103 | 75.9 | 2002-05-10 |
+-----+------------+------------+
You can see that 2002-05-08 is missing. I want my final table to look like:
+-----+------------+------------+
| ID | rainfall | date |
+-----+------------+------------+
| 100 | 110.2 | 2002-05-06 |
| 101 | 56.6 | 2002-05-07 |
| 102 | | 2002-05-08 |
| 103 | 65.6 | 2002-05-09 |
| 104 | 75.9 | 2002-05-10 |
+-----+------------+------------+
Is there a way to do that in PostgreSQL?
It doesn't matter if I have the result just as a query result (not necessarily an updated table)
date is a reserved word in standard SQL and the name of a data type in PostgreSQL. PostgreSQL allows it as an identifier, but that doesn't make it a good idea. I use thedate as the column name instead.
Don't rely on the absence of gaps in a surrogate ID. That's almost always a bad idea. Treat such an ID as a unique number without meaning, even if it seems to carry certain other attributes most of the time.
In this particular case, as #Clodoaldo commented, thedate seems to be a perfect primary key and the column id is just cruft - which I removed:
CREATE TEMP TABLE tbl (thedate date PRIMARY KEY, rainfall numeric);
INSERT INTO tbl(thedate, rainfall) VALUES
('2002-05-06', 110.2)
, ('2002-05-07', 56.6)
, ('2002-05-09', 65.6)
, ('2002-05-10', 75.9);
Query
Full table by query:
SELECT x.thedate, t.rainfall -- rainfall automatically NULL for missing rows
FROM (
SELECT generate_series(min(thedate), max(thedate), '1d')::date AS thedate
FROM tbl
) x
LEFT JOIN tbl t USING (thedate)
ORDER BY x.thedate;
Similar to what #a_horse_with_no_name posted, but simplified and ignoring the pruned id.
Fills in gaps between the first and last date found in the table. If there can be leading / lagging gaps, extend accordingly. You can use date_trunc() like #Clodoaldo demonstrated - but his query can be simpler.
INSERT missing rows
The fastest and most readable way to do it is a NOT EXISTS anti-semi-join.
INSERT INTO tbl (thedate, rainfall)
SELECT x.thedate, NULL
FROM (
SELECT generate_series(min(thedate), max(thedate), '1d')::date AS thedate
FROM tbl
) x
WHERE NOT EXISTS (SELECT 1 FROM tbl t WHERE t.thedate = x.thedate);
Just do an outer join against a query that returns all dates in 2002:
with all_dates as (
select date '2002-01-01' + i as date_col
from generate_series(0, extract(doy from date '2002-12-31')::int - 1) as i
)
select row_number() over (order by ad.date_col) as id,
t.rainfall,
ad.date_col as date
from all_dates ad
left join your_table t on ad.date_col = t.date
order by ad.date_col;
This will not change your table, it will just produce the result as desired.
Note that the generated id column will not contain the same values as the ID column in your table as it is merely a counter in the result set.
You could also replace the row_number() function with extract(doy from ad.date_col).
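That variant would look like this, reusing the same all_dates CTE; only the id expression changes:
with all_dates as (
  select date '2002-01-01' + i as date_col
  from generate_series(0, extract(doy from date '2002-12-31')::int - 1) as i
)
select extract(doy from ad.date_col)::int as id,
       t.rainfall,
       ad.date_col as date
from all_dates ad
left join your_table t on ad.date_col = t.date
order by ad.date_col;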
To fill the gaps. This will not reorder the IDs:
insert into t (rainfall, "date")
select null, s.d::date
from generate_series(
    (select date_trunc('year', min("date")) from t)::timestamp,
    (select max("date") from t),
    '1 day'
) s(d)
left join t on t."date" = s.d::date
where t."date" is null;
You have to fully re-create your table, as the indexes have to change.
The better way to do it is to use your preferred DBI language: make a loop ignoring the ID and put the values into a new table with new serialized IDs.
for day in (whole needed calendar)
value = select rainfall from oldbrokentable where date = day
insert into newcleanedtable date=day, rainfall=value, id=serialized
(That's not real code! Just a concept, to be adapted to your preferred scripting language.)
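For reference, the same idea works set-based in plain SQL, without a client-side loop. A sketch using the hypothetical table names from the pseudocode above (column types assumed):
CREATE TABLE newcleanedtable
( id serial PRIMARY KEY
, "date" date
, rainfall numeric
);

INSERT INTO newcleanedtable ("date", rainfall)
SELECT s.d::date, t.rainfall   -- rainfall stays NULL for missing days
FROM generate_series(date '2002-01-01', date '2002-12-31', interval '1 day') s(d)
LEFT JOIN oldbrokentable t ON t."date" = s.d::date
ORDER BY s.d;                  -- serial ids follow the sort order here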
Related to - PostgreSQL DISTINCT ON with different ORDER BY
I have a table purchases (product_id, purchased_at, address_id).
Sample data:
| id | product_id | purchased_at | address_id |
| 1 | 2 | 20 Mar 2012 21:01 | 1 |
| 2 | 2 | 20 Mar 2012 21:33 | 1 |
| 3 | 2 | 20 Mar 2012 21:39 | 2 |
| 4 | 2 | 20 Mar 2012 21:48 | 2 |
The result I expect is the most recent purchased product (full row) for each address_id and that result must be sorted in descendant order by the purchased_at field:
| id | product_id | purchased_at | address_id |
| 4 | 2 | 20 Mar 2012 21:48 | 2 |
| 2 | 2 | 20 Mar 2012 21:33 | 1 |
Using query:
SELECT DISTINCT ON (address_id) purchases.address_id, purchases.*
FROM "purchases"
WHERE "purchases"."product_id" = 2
ORDER BY purchases.address_id ASC, purchases.purchased_at DESC
I'm getting:
| id | product_id | purchased_at | address_id |
| 2 | 2 | 20 Mar 2012 21:33 | 1 |
| 4 | 2 | 20 Mar 2012 21:48 | 2 |
So the rows are the same, but the order is wrong. Any way to fix it?
Quite a clear question :)
SELECT t1.* FROM purchases t1
LEFT JOIN purchases t2
ON t1.address_id = t2.address_id AND t1.purchased_at < t2.purchased_at
WHERE t2.purchased_at IS NULL
ORDER BY t1.purchased_at DESC
And most likely a faster approach:
SELECT t1.* FROM purchases t1
JOIN (
SELECT address_id, max(purchased_at) max_purchased_at
FROM purchases
GROUP BY address_id
) t2
ON t1.address_id = t2.address_id AND t1.purchased_at = t2.max_purchased_at
ORDER BY t1.purchased_at DESC
Your ORDER BY is used by DISTINCT ON to pick which row to produce for each distinct address_id. If you then want to order the resulting records, make the DISTINCT ON query a subselect and order its results:
SELECT * FROM
(
SELECT DISTINCT ON (address_id) purchases.address_id, purchases.*
FROM "purchases"
WHERE "purchases"."product_id" = 2
ORDER BY purchases.address_id ASC, purchases.purchased_at DESC
) distinct_addrs
order by distinct_addrs.purchased_at DESC
This query is trickier to rephrase properly than it looks.
The currently accepted, join-based answer doesn't correctly handle the case where two candidate rows for the same address_id have the same purchased_at value: it will return both rows.
You can get the right behaviour this way:
SELECT * FROM purchases AS given
WHERE product_id = 2
AND NOT EXISTS (
SELECT NULL FROM purchases AS other
WHERE given.address_id = other.address_id
AND (given.purchased_at < other.purchased_at OR given.id < other.id)
)
ORDER BY purchased_at DESC
Note how it has a fallback of comparing id values to disambiguate the case in which the purchased_at values match. This ensures that the condition can only ever be true for a single row among those that have the same address_id value.
The original query using DISTINCT ON handles this case automatically!
Also note the way that you are forced to encode the fact that you want “the latest for each address_id” twice, both in the given.purchased_at < other.purchased_at condition and the ORDER BY purchased_at DESC clause, and you have to make sure they match. I had to spend a few extra minutes to convince myself that this query is actually correct.
It’s much easier to write this query correctly and understandably by using DISTINCT ON together with an outer subquery, as suggested by dbenhur.
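For comparison, the same result with an explicit tie-break can also be written with a window function instead of DISTINCT ON; a sketch:
SELECT id, product_id, purchased_at, address_id
FROM (
  SELECT *, row_number() OVER (PARTITION BY address_id
                               ORDER BY purchased_at DESC, id DESC) AS rn
  FROM purchases
  WHERE product_id = 2
) ranked
WHERE rn = 1                     -- keep only the latest row per address_id
ORDER BY purchased_at DESC;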
Try this!
SELECT DISTINCT ON (address_id) *
FROM purchases
WHERE product_id = 2
ORDER BY address_id, purchased_at DESC