I'm trying to do an upsert on a table with two constraints: one says that column a is unique, the other that columns b, c, d and e are unique together. What I don't want is a, b, c, d and e being unique together as a whole, because that would allow two rows with the same value in column a.
The following fails if the second constraint (unique b, c, d, e) is violated:
INSERT INTO my_table (a, b, c, d, e, f, g)
SELECT a, b, c, d, e, f, g
FROM my_temp_table temp
ON CONFLICT (a) DO UPDATE SET
a=EXCLUDED.a,
b=EXCLUDED.b,
c=EXCLUDED.c,
d=EXCLUDED.d,
e=EXCLUDED.e,
f=EXCLUDED.f,
g=EXCLUDED.g;
The following fails if the first constraint (unique a) is violated:
INSERT INTO my_table (a, b, c, d, e, f, g)
SELECT a, b, c, d, e, f, g
FROM my_temp_table temp
ON CONFLICT ON CONSTRAINT my_table_unique_together_b_c_d_e DO UPDATE SET
a=EXCLUDED.a,
b=EXCLUDED.b,
c=EXCLUDED.c,
d=EXCLUDED.d,
e=EXCLUDED.e,
f=EXCLUDED.f,
g=EXCLUDED.g;
How can I bring those two together? I first tried to define a constraint that says "either a is unique or b, c, d and e are unique together" but it looks like that isn't possible. I then tried two INSERT statements with WHERE clauses making sure that the other constraint doesn't get violated, but there is a third case where a row might violate both constraints at the same time. To handle the last case I considered dropping one of the constraints and creating it after the INSERT, but isn't there a better way to do this?
I also tried the following, but according to the PostgreSQL documentation, ON CONFLICT without a conflict target can only DO NOTHING:
INSERT INTO my_table (a, b, c, d, e, f, g)
SELECT a, b, c, d, e, f, g
FROM my_temp_table temp
ON CONFLICT DO UPDATE SET
a=EXCLUDED.a,
b=EXCLUDED.b,
c=EXCLUDED.c,
d=EXCLUDED.d,
e=EXCLUDED.e,
f=EXCLUDED.f,
g=EXCLUDED.g;
I read in another question that this might work with MERGE in PostgreSQL 15, but sadly that's not available on AWS RDS yet. I need to find a way to do this in PostgreSQL 14.
I think what you need is a somewhat different design. I suppose "a" is a surrogate key and b, c, d, e, f and g make up the natural key, and that there are other columns that hold the data.
So force column "a" to be automatically generated, like this:
CREATE TABLE my_table (
a bigint GENERATED ALWAYS AS IDENTITY,
b bigint NOT NULL,
c bigint NOT NULL,
d bigint NOT NULL,
e bigint NOT NULL,
f bigint NOT NULL,
g bigint NOT NULL,
data text,
CONSTRAINT my_table_unique_together_b_c_d_e UNIQUE (b,c,d,e,f,g)
);
And then just skip the a column in your insert. Note that data has to be part of the INSERT as well, otherwise EXCLUDED.data would be NULL:
INSERT INTO my_table (b, c, d, e, f, g, data)
SELECT b, c, d, e, f, g, data
FROM my_temp_table temp
ON CONFLICT ON CONSTRAINT my_table_unique_together_b_c_d_e DO UPDATE SET
data=EXCLUDED.data;
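To illustrate the behavior, here is a hypothetical walk-through with made-up values for b through g (the values and the 'first'/'second' strings are invented for this sketch):

```sql
-- First load: both natural keys are new, so both rows are inserted
-- and "a" is generated automatically.
INSERT INTO my_table (b, c, d, e, f, g, data)
VALUES (1, 1, 1, 1, 1, 1, 'first'),
       (2, 2, 2, 2, 2, 2, 'second')
ON CONFLICT ON CONSTRAINT my_table_unique_together_b_c_d_e
DO UPDATE SET data = EXCLUDED.data;

-- Second load: (1,1,1,1,1,1) already exists, so only its data is updated;
-- its generated "a" never changes, so column a itself can never conflict.
INSERT INTO my_table (b, c, d, e, f, g, data)
VALUES (1, 1, 1, 1, 1, 1, 'first, updated')
ON CONFLICT ON CONSTRAINT my_table_unique_together_b_c_d_e
DO UPDATE SET data = EXCLUDED.data;
```

Since "a" is always generated, the only constraint that can ever fire is the natural key, so a single conflict target is enough.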
Related to Is there a way to return all non-null values even if one is null in PostgreSQL? - the solution there lets me skip the null values, but it only returns the bare values, without the key each value belongs to.
For this example in particular, I'd like the result as A=valueOfA,B=valueOfB instead of A=valueOfA,valueOfB.
select concat_ws(',', a, b, c) into d;
-- if c is null, this returns A=valueOfA,valueOfB
Thanks! :)
It might be easier to simply generate a JSON value:
jsonb_build_object('a', a, 'b', b, 'c', c)
This would e.g. include "a": null if column a is null. If you want to remove those keys, use jsonb_strip_nulls:
jsonb_strip_nulls(jsonb_build_object('a', a, 'b', b, 'c', c))
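For example, with made-up literal values in place of the columns:

```sql
SELECT jsonb_build_object('a', 1, 'b', NULL, 'c', 3);
-- {"a": 1, "b": null, "c": 3}

SELECT jsonb_strip_nulls(jsonb_build_object('a', 1, 'b', NULL, 'c', 3));
-- {"a": 1, "c": 3}
```

The keys are preserved for every non-null value, which is exactly what the plain string concatenation loses.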
The statement I've got is
select coalesce(A||','||B||','||C) into D
If C is null, then D will = < NULL >, but if I remove C and just have A and B, then D will = the values of A & B. There may be situations where any one of those is NULL and I still want to return any non-null values into D - is this possible? Thanks!
Use concat_ws() instead:
select concat_ws(',', a, b, c)
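concat_ws() skips NULL arguments entirely, so no dangling separators appear; with placeholder string literals standing in for the columns:

```sql
SELECT concat_ws(',', 'valueOfA', NULL, 'valueOfC');
-- valueOfA,valueOfC
```

This differs from plain || concatenation, where a single NULL operand makes the whole expression NULL.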
I have a collection in Azure CosmosDB where each document has 2 fields, say
{"a":1,"b":2}
{"a":1,"b":3}
{"a":2,"b":5}
and I want to return a set of possible values of B for each A, like this:
[{"a":1, "b":[2,3] },
{"a":2, "b":[5] }]
Trying SELECT c.a, ARRAY(SELECT DISTINCT c.b FROM c) FROM c GROUP BY c.a
I receive the error
Property reference 'c.b' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
Same for ARRAY(select value c.b from c); and ARRAY_CONCAT does not look like an aggregate function.
Are there ways to do it without aggregating client-side?
As far as I know, this is not supported in Cosmos DB so far, because ARRAY() is not an aggregate function and therefore can't be used with GROUP BY.
So I suppose you need to retrieve the data sorted by a, then loop over the result client-side to build the b array for each duplicated a.
For distinct values of b as a comma-separated string:
SELECT c.a, String.Join(",", ARRAY_AGG(DISTINCT c.b)) AS list FROM c GROUP BY c.a;
You can also use the following, though the output format is less readable:
SELECT c.a, ARRAY_AGG(DISTINCT c.b) AS list FROM c GROUP BY c.a;
I want to do this:
select
id,
sum(field1) as a,
sum(field2) as b,
a - b as result
group by id;
but Firebird shows the error "Column unknown: a" at the line "a - b as result". How do I make this work? Thanks.
Write out the sum() in the subtraction too, i.e.:
select
id,
sum(field1) as a,
sum(field2) as b,
sum(field1) - sum(field2) as result
group by id;
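If you'd rather not repeat the aggregates, a derived table also works, because the aliases are then real columns of the inner query; a sketch assuming a table named my_data, which the snippets above leave out:

```sql
select id, a, b, a - b as result
from (
    select
        id,
        sum(field1) as a,
        sum(field2) as b
    from my_data
    group by id
) t;
```

The restriction exists because the SELECT list is evaluated as a whole, so one output column's alias isn't visible to the other expressions in the same list.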