In Oracle and MySQL I can select from a specific partition:
SELECT ... FROM ... PARTITION (...)
In SQL Server the syntax is a bit different and involves the partition function.
Is there a way to do it in PostgreSQL?
Thank you!
PostgreSQL provides partitioning through table inheritance.
Partitions are child tables with a unique name, like any other table, so you simply select from them directly by name. The only special case is the parent table: to select data from the parent table while ignoring the child tables, add the keyword ONLY, as in SELECT * FROM ONLY parent_table.
Example from the manual:
CREATE TABLE measurement_y2006m02 (
CHECK ( logdate >= DATE '2006-02-01' AND logdate < DATE '2006-03-01' )
) INHERITS (measurement);
So select * from measurement_y2006m02 would get data from only this partition.
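For clarity, the parent-table access patterns described above look like this (a minimal sketch, using the table names from the manual's example):
-- Parent plus all child partitions (the default behaviour):
SELECT * FROM measurement;
-- Parent table only, ignoring every child partition:
SELECT * FROM ONLY measurement;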
I am trying to use list partitioning in PostgreSQL:
https://www.postgresql.org/docs/current/ddl-partitioning.html
I have some questions about it.
Is there a limit on the number of values or partition tables in list partitioning?
When a partitioned table is created as shown below, can I check the list of partition values with SQL? (something like keys = [test, test_2])
CREATE TABLE part_table (id int, branch text, key_name text) PARTITION BY LIST (key_name);
CREATE TABLE part_default PARTITION OF part_table DEFAULT;
CREATE TABLE part_test PARTITION OF part_table FOR VALUES IN ('test');
CREATE TABLE part_test_2 PARTITION OF part_table FOR VALUES IN ('test_2');
When using the partitioned table created above, a row inserted with key_name = 'test_3' goes into the default partition. If rows with 'test_3' already exist in the default partition and I then try to create a partition for that value, the following error occurs:
CREATE TABLE part_test_3 PARTITION OF part_table FOR VALUES IN ('test_3');
ERROR: updated partition constraint for default partition "part_default" would be violated by some row
In this case, is there a good way to partition on the value 'test_3' without deleting the rows from the default partition?
Is it possible to change the name of a partition table, or the list of values it accepts?
Thank you!
Is there a limit on the number of values or partition tables in list partitioning?
Some tests: https://www.depesz.com/2021/01/17/are-there-limits-to-partition-counts/
To see which values are currently in the table and which partition each value resides in:
SELECT
tableoid::pg_catalog.regclass,
array_agg(DISTINCT key_name)
FROM
part_table
GROUP BY
1;
To get all current partitions and their configured value lists, use the following:
SELECT
    c.oid::pg_catalog.regclass,
    c.relkind,
    i.inhdetachpending AS is_detached,
    pg_catalog.pg_get_expr(c.relpartbound, c.oid)
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_inherits i ON c.oid = i.inhrelid
WHERE i.inhparent = 58281;
-- 58281 is the OID of part_table, returned by the following query:
SELECT c.oid
FROM pg_catalog.pg_class c
WHERE c.relname = 'part_table';
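As for creating a partition for 'test_3' while rows with that value already sit in the default partition: one commonly used approach (a sketch only, assuming the table definitions from the question) is to move those rows out of part_default and attach the new partition inside a single transaction:
BEGIN;
-- Create the future partition as a plain table with the same structure.
CREATE TABLE part_test_3 (LIKE part_table INCLUDING DEFAULTS);
-- Move the conflicting rows out of the default partition.
WITH moved AS (
    DELETE FROM part_default
    WHERE key_name = 'test_3'
    RETURNING *
)
INSERT INTO part_test_3 SELECT * FROM moved;
-- The default partition no longer holds 'test_3' rows, so attaching succeeds.
ALTER TABLE part_table ATTACH PARTITION part_test_3 FOR VALUES IN ('test_3');
COMMIT;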
I have a Kafka stream with user_id and want to produce another stream with user_id and the number of records for that user in a JDBC table.
Following is how I tried to achieve this (I'm new to Flink, so please correct me if that's not how things are supposed to be done). The issue is that Flink ignores all updates to the JDBC table after the job has started.
As far as I understand, the answer to this is to use lookup joins, but Flink complains that lookup joins are not supported on temporal views. I also tried versioned views, without much success.
What would be the correct approach to achieve what I want?
CREATE TABLE kafka_stream (
user_id STRING,
event_time TIMESTAMP(3) METADATA FROM 'timestamp',
WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
) WITH (
'connector' = 'kafka',
-- ...
)
-- NEXT SQL --
CREATE TABLE jdbc_table (
user_id STRING,
checked_at TIMESTAMP,
PRIMARY KEY(user_id) NOT ENFORCED
) WITH (
'connector' = 'jdbc',
-- ...
)
-- NEXT SQL --
CREATE TEMPORARY VIEW checks_counts AS
SELECT user_id, count(*) as num_checks
FROM jdbc_table
GROUP BY user_id
-- NEXT SQL --
INSERT INTO output_kafka_stream
SELECT
kafka_stream.user_id,
checks_counts.num_checks
FROM kafka_stream
LEFT JOIN checks_counts ON kafka_stream.user_id = checks_counts.user_id
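One direction that may work here (a hedged sketch, not verified against this exact setup) is a lookup join directly against the JDBC table rather than against an aggregating view, since a lookup join re-queries the database for each incoming record. The proc_time column below, and the assumption that jdbc_table's 'table-name' points at a database-side view exposing user_id and num_checks (so the aggregation happens in the database), are additions, not part of the original setup:
CREATE TABLE kafka_stream (
    user_id STRING,
    event_time TIMESTAMP(3) METADATA FROM 'timestamp',
    proc_time AS PROCTIME(),  -- processing-time attribute required for lookup joins
    WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
) WITH (
    'connector' = 'kafka',
    -- ...
)
-- NEXT SQL --
-- jdbc_table is assumed here to be backed by a DB-side view with
-- columns user_id and num_checks, so the count is fresh per lookup.
INSERT INTO output_kafka_stream
SELECT
    k.user_id,
    j.num_checks
FROM kafka_stream AS k
LEFT JOIN jdbc_table FOR SYSTEM_TIME AS OF k.proc_time AS j
    ON k.user_id = j.user_id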
I am trying to insert multiple records, obtained from a join, into another table, user_to_property. In the user_to_property table, user_to_property_id is the primary key, not null, and it is not auto-incrementing. So I am trying to set user_to_property_id manually, incrementing it by 1.
WITH selectedData AS
( -- selection of the data that needs to be inserted
SELECT t2.user_id as userId
FROM property_lines t1
INNER JOIN user t2 ON t1.account_id = t2.account_id
)
INSERT INTO user_to_property (user_to_property_id, user_id, property_id, created_date)
VALUES ((SELECT MAX( user_to_property_id )+1 FROM user_to_property),(SELECT
selectedData.userId
FROM selectedData),3,now());
The above query gives me the below error:
ERROR: more than one row returned by a subquery used as an expression
How do I insert multiple records into a table from a join of other tables, given that user_to_property should contain only one record per user_id and property_id combination?
Typically for INSERT you use either VALUES or SELECT. The structure VALUES (SELECT ...) often (generally?) causes more trouble than it is worth, and it is never necessary: you can always select a constant or an expression. In this case, convert to just a SELECT. To generate your ID, get the max value from your table and then add the row_number of each row you are inserting (see demo):
insert into user_to_property(user_to_property_id
, user_id
, property_id
, created_date
)
with start_with(current_max_id) as
( select max(user_to_property_id) from user_to_property )
select current_max_id + id_incr, user_id, 3, now()
from (
select t2.user_id, row_number() over() id_incr
from property_lines t1
join users t2 on t1.account_id = t2.account_id
) js
join start_with on true;
A couple of notes:
DO NOT use user as a table name, or as any other object name. It is a
documented reserved word in both Postgres and the SQL standard (and has
been since Postgres v7.1 and the SQL-92 standard, at least).
You really should create another column, or change the column
user_to_property_id to be auto-generated. Using max()+1, or anything
based on that idea, virtually guarantees that you will eventually
generate duplicate keys, much to the amusement of users and developers
alike. Think about what happens under MVCC when two users run the query
concurrently.
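A possible sketch of that change (assuming PostgreSQL 10 or later; names taken from the question):
-- Make the primary key auto-generated instead of relying on max()+1.
ALTER TABLE user_to_property
    ALTER COLUMN user_to_property_id ADD GENERATED ALWAYS AS IDENTITY;
-- The new sequence starts at 1, so sync it with the values already in the table.
SELECT setval(pg_get_serial_sequence('user_to_property', 'user_to_property_id')::regclass,
              (SELECT max(user_to_property_id) FROM user_to_property));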
I'm working on a table which has more than 10 columns. One of the columns is named ASAT and is of type DATE (format yyyy-mm-dd HH:MM:SS.mmm).
I'm looking for an SQL query which returns all records with the max date. I am trying to use that query in Java for a JDBC call.
I tried this:
Select * from tablename where ASAT in (select MAX(ASAT) from tablename).
But it is not returning any records.
Any help is really appreciated. Thanks.
How about:
SELECT MAX(Asat) FROM TableA;
SELECT MAX(Asat) FROM TableA GROUP BY Asat;
When you reference the same table twice (as in a self-join), I suggest aliasing each copy of the table. Personally I use the table letter with a number afterwards, in case I need to track it in larger queries.
Select *
from tablename t1
where t1.ASAT = (
select MAX(t2.ASAT)
from tablename t2
)
I believe you are looking for something like this, if I'm understanding you correctly. First build a CTE containing the primary key and the MAX(ASAT). Then join back to the table on that key and on the MAX(ASAT) value, so only the rows holding the max date are returned. Note your "ID" may have to be more than one column.
with tbl_max_asat(id, max_asat) as (
select id, max(asat) max_asat
from tablename
group by id
)
select *
from tablename t
join tbl_max_asat tma
  on t.id = tma.id
 and t.asat = tma.max_asat;
This old post just popped up because it was edited today. Maybe my answer will still help someone. :-)
I am trying to select all data out of the same specific table partition for 100+ tables using the DB2 EXPORT utility. The partition name is constant across all of my partitioned tables, which makes this approach more convenient than the alternatives.
I cannot detach the partitions as they are in a production environment.
In order to script this for semi-automation, I need to be able to run the query:
SELECT * FROM MYTABLE
WHERE PARTITION_NAME = MYPARTITION;
I am not able to find the correct syntax for utilizing this type of logic in my SELECT statement passed to the EXPORT utility.
You can do something like this by looking up the partition number first:
SELECT SEQNO
FROM SYSCAT.DATAPARTITIONS
WHERE TABNAME = 'YOURTABLE' AND DATAPARTITIONNAME = 'WHATEVER'
then using the SEQNO value in the query:
SELECT * FROM MYTABLE
WHERE DATAPARTITIONNUM(anycolumn) = <SEQNO value>
Edit:
Since it does not matter what column you reference in DATAPARTITIONNUM(), and since each table is guaranteed to have at least one column, you can automatically generate queries by joining SYSCAT.DATAPARTITIONS and SYSCAT.COLUMNS:
select
'select * from', p.tabname,
'where datapartitionnum(', colname, ') = ', seqno
from syscat.datapartitions p
inner join syscat.columns c
on p.tabschema = c.tabschema and p.tabname = c.tabname
where colno = 1
and datapartitionname = '<your partition name>'
and p.tabname in (<your table list>)
However, building a dependency on database metadata into your application is, in my view, not very reliable. You can simply specify the appropriate partitioning-key range to extract the data, which will be just as efficient.
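For example, something along these lines (a sketch only; the table name, output file, partitioning column, and date range are hypothetical):
-- Export one partition's worth of data by its partitioning-key range
-- instead of by partition name.
EXPORT TO /tmp/mytable_2020q1.del OF DEL
  SELECT *
  FROM MYTABLE
  WHERE SALE_DATE >= '2020-01-01' AND SALE_DATE < '2020-04-01';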