I'm currently using the following query to update my PSQL table:
INSERT INTO table(key, value, value_first_seen) VALUES ('test', 'value', '2022-04-14 20:50:23.02858+00')
ON CONFLICT (key) DO
UPDATE
SET key = excluded.key;
I would like to also update value_first_seen only if the value column has changed.
I want to achieve something like the query below, but was not able to find any solutions:
INSERT INTO table(key, value, value_first_seen) VALUES ('test', 'value', '2022-04-14 20:50:23.02858+00')
ON CONFLICT (key) DO
UPDATE
SET value = excluded.value
IF value != excluded.value SET value_first_seen = excluded.value_first_seen;
Thanks!
Use a CASE expression to choose. (If value can be NULL, compare with IS DISTINCT FROM rather than !=, so nulls are handled correctly.)
INSERT INTO table(key, value, value_first_seen) VALUES ('test', 'value', '2022-04-14 20:50:23.02858+00')
ON CONFLICT (key) DO
UPDATE
SET key = excluded.key,
value_first_seen = case
when value != excluded.value then
excluded.value_first_seen
else
value_first_seen
end,
value = excluded.value;
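As a quick sanity check, the same conditional upsert can be exercised from Python against SQLite (3.24+ supports the same ON CONFLICT ... excluded syntax); the table name t and the in-memory database are stand-ins for the real table:

```python
import sqlite3

# Sketch of the CASE-based conditional upsert, run against SQLite 3.24+,
# which supports the same ON CONFLICT / excluded syntax as PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE t (key TEXT PRIMARY KEY, value TEXT, value_first_seen TEXT)"
)

upsert = """
    INSERT INTO t (key, value, value_first_seen) VALUES (?, ?, ?)
    ON CONFLICT (key) DO UPDATE
    SET value_first_seen = CASE
            WHEN value != excluded.value THEN excluded.value_first_seen
            ELSE value_first_seen
        END,
        value = excluded.value
"""

conn.execute(upsert, ("test", "value", "2022-04-14"))
conn.execute(upsert, ("test", "value", "2022-04-15"))    # same value
first_seen_after_same = conn.execute(
    "SELECT value_first_seen FROM t WHERE key = 'test'").fetchone()[0]

conn.execute(upsert, ("test", "changed", "2022-04-16"))  # value changed
first_seen_after_change = conn.execute(
    "SELECT value_first_seen FROM t WHERE key = 'test'").fetchone()[0]
```

Both assignments in the SET clause see the pre-update row, so it does not matter that value_first_seen is assigned before value.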
As context, I am creating a bucket of key-value pairs with empty documents, so that I can check whether an ID exists just by checking key existence rather than inspecting values. In the cluster I have two buckets, source-bucket and new-bucket. The documents in source-bucket have the form:
ID: {
ID: ...,
type: ...
}
You can move the contents of source to the new bucket using the query
INSERT INTO `new-bucket` (KEY k, VALUE v) SELECT meta(v).id AS k FROM `source-bucket` as v
Is there a way to copy over just the key? Something along the lines of this (although this example doesn't work):
INSERT INTO `new-bucket` (KEY k, VALUE v) values (SELECT meta().id FROM `source-bucket`, NULL)
I guess I'm not familiar enough with the N1QL syntax to understand how to construct a query like this. Let me know if you have an answer. If this is a duplicate, feel free to point me to the answer.
If you need an empty object, use {}.
CREATE PRIMARY INDEX ON `source-bucket`;
INSERT INTO `new-bucket` (KEY k, VALUE {})
SELECT meta(b).id AS k FROM `source-bucket` as b
NOTE: The document value can be an empty object or any other data type. The following are all valid:
INSERT INTO default VALUES ("k01", {"a":1});
INSERT INTO default VALUES ("k02", {});
INSERT INTO default VALUES ("k03", 1);
INSERT INTO default VALUES ("k04", "aa");
INSERT INTO default VALUES ("k05", true);
INSERT INTO default VALUES ("k06", ["aa"]);
INSERT INTO default VALUES ("k07", NULL);
I am running an update query like
update datavalue
set categoryoptioncomboid = '21519'
where dataelementid = '577' and
categoryoptioncomboid = '471';
but it is giving an error
ERROR: duplicate key value violates unique constraint "datavalue_pkey"
DETAIL: Key (dataelementid, periodid, sourceid, categoryoptioncomboid, attributeoptioncomboid)=(577, 35538, 10299, 21519, 15) already exists.
Is there a way to make Postgres continue updating and skip the conflicting rows? Is there a way to do it without a procedural loop?
I'd try something like this:
update datavalue
set categoryoptioncomboid = '21519'
where
dataelementid = '577' and categoryoptioncomboid = '471'
and not exists (
select 1
from datavalue dv
where dv.dataelementid=datavalue.dataelementid
and dv.periodid=datavalue.periodid
and dv.sourceid=datavalue.sourceid
and dv.categoryoptioncomboid='21519'
and dv.attributeoptioncomboid=datavalue.attributeoptioncomboid
);
Another idea is to insert with ON CONFLICT and then delete the unneeded rows, but that requires knowing the full column definition of the datavalue table.
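The NOT EXISTS guard above can be sketched end-to-end with SQLite via Python as a stand-in for PostgreSQL; the table is reduced to just the key columns, and the sample rows are made up for illustration:

```python
import sqlite3

# Demonstrates the NOT EXISTS guard: rows whose update would collide with
# an existing primary key are skipped; all other matching rows are updated.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE datavalue (
        dataelementid TEXT, periodid TEXT, sourceid TEXT,
        categoryoptioncomboid TEXT, attributeoptioncomboid TEXT,
        PRIMARY KEY (dataelementid, periodid, sourceid,
                     categoryoptioncomboid, attributeoptioncomboid)
    )
""")
sample = [
    ("577", "p1", "s1", "471", "15"),    # updating this row would collide
    ("577", "p1", "s1", "21519", "15"),  # existing row with the target id
    ("577", "p2", "s1", "471", "15"),    # safe to update
]
conn.executemany("INSERT INTO datavalue VALUES (?, ?, ?, ?, ?)", sample)

conn.execute("""
    UPDATE datavalue
    SET categoryoptioncomboid = '21519'
    WHERE dataelementid = '577' AND categoryoptioncomboid = '471'
      AND NOT EXISTS (
          SELECT 1 FROM datavalue dv
          WHERE dv.dataelementid = datavalue.dataelementid
            AND dv.periodid = datavalue.periodid
            AND dv.sourceid = datavalue.sourceid
            AND dv.categoryoptioncomboid = '21519'
            AND dv.attributeoptioncomboid = datavalue.attributeoptioncomboid
      )
""")
result = conn.execute(
    "SELECT periodid, categoryoptioncomboid FROM datavalue").fetchall()
```

The p1 row keeps its old id because updating it would duplicate an existing key, while the p2 row is updated normally.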
I was looking for a way to dynamically update the default value of a select form.
My code is the following:
%spark2.pyspark
d_var = {}
d_var['one'] = ["0"]
d_var['two'] = ["1"]
keys = []
values = [('0', 'True'), ('1', 'False')]
for key in sorted(d_var.keys()):
    keys.append((key, key))
key = z.select('Keys', keys, keys[0][0])
default_value = '0' if key == 'one' else '1'
print default_value
value = z.select('Option', values, default_value)
When I change the selected value in the first select, I expect the second select to be updated, but nothing happens; it only works on the first execution of the paragraph.
Thanks in advance.
This is the correct behaviour. A dynamic form stores its value after initialization or after the last change, and the paragraph is expected to run with the values currently held by the forms. Zeppelin has no way to distinguish when a dependent form needs to be updated and when it does not.
I'm using PostgreSQL 9.3.
I have a nullable varchar column in a table, and I want to update it differently depending on whether its value is null.
I haven't managed to write a function that takes a string as an argument and updates the value like this: if the column is not null, it appends a comma and the given string to the current value; if it is null, it just sets the string (without a comma).
So how can I make a different Update depending of the column value to update?
You can use a CASE expression to update the column conditionally:
update the_table
set the_column = case
when the_column is null then 'foobar'
else the_column||', '||'foobar'
end
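The same CASE update can be checked with SQLite from Python (the table and column names follow the answer above; 'foobar' is the example string):

```python
import sqlite3

# Demonstrates the conditional concatenation: a NULL column is simply set,
# while a non-NULL column gets ", foobar" appended.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE the_table (the_column TEXT)")
conn.executemany("INSERT INTO the_table VALUES (?)", [(None,), ("existing",)])

conn.execute("""
    UPDATE the_table
    SET the_column = CASE
        WHEN the_column IS NULL THEN 'foobar'
        ELSE the_column || ', ' || 'foobar'
    END
""")
result = [r[0] for r in
          conn.execute("SELECT the_column FROM the_table ORDER BY rowid")]
```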
Another approach:
UPDATE foo
SET bar = COALESCE(NULLIF(concat_ws(', ', NULLIF(bar, ''), NULLIF('a_string', '')), ''), 'a_string')
When adding a new record like this:
ContentContacts c2 = new ContentContacts();
c2.updated_user = c2.created_user = loggedUserId;
c2.created_date = c2.updated_date = DateTime.UtcNow;
db.ContentContacts.AddObject(c2);
I'm getting
Cannot insert the value NULL into column 'main_email_support', table 'SQL2008R2.dbo.ContentContacts'; column does not allow nulls. INSERT fails. The statement has been terminated.
but the default value for that column in the database is an empty string.
Why am I getting this error? Shouldn't EF say something like:
"oh, it's a null value, so let's use the column's default value instead"?
I did a small test: I created a table with a column that has a default value but does not allow nulls.
Then this SQL Statement:
INSERT INTO [Test].[dbo].[Table_1]
([TestText])
VALUES
(null)
Gives this error:
Msg 515, Level 16, State 2, Line 1
Cannot insert the value NULL into column 'TestText', table
'Test.dbo.Table_1'; column does not allow nulls. INSERT fails.
The problem here is that the insert specifies all the columns, including those with default values, so the INSERT tries to set those columns to null.
You have two options:
Update the table through a view that does not contain the defaulted columns
Set the default values in your C# code
A default value is business logic, so there is a case for it being set in the business layer of your application.
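The database-level behaviour described above can be reproduced with a quick SQLite sketch (the table mirrors the Table_1 test): omitting the column lets the default apply, while explicitly passing NULL bypasses the default and violates the NOT NULL constraint:

```python
import sqlite3

# A NOT NULL column with a default only falls back to that default when the
# INSERT omits the column entirely.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table_1 (TestText TEXT NOT NULL DEFAULT '')")

conn.execute("INSERT INTO Table_1 DEFAULT VALUES")    # column omitted: default used

try:
    conn.execute("INSERT INTO Table_1 (TestText) VALUES (NULL)")
    null_insert_failed = False
except sqlite3.IntegrityError:                        # NOT NULL violation
    null_insert_failed = True

default_applied = conn.execute("SELECT TestText FROM Table_1").fetchone()[0]
```

This is exactly what Entity Framework runs into: it names every mapped column in the generated INSERT, so the database default never gets a chance to apply.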