I recently upgraded my postgres db from 9.5.4 to 10.7 and noticed some odd behavior with an existing query.
The trimmed down version looks like this:
UPDATE mytable
SET job_id = 6
WHERE id IN (
    SELECT * FROM (
        SELECT id
        FROM mytable
        WHERE job_id IS NULL
        LIMIT 2
    ) x
    FOR UPDATE
)
AND job_id IS NULL
I would expect 2 rows to be updated, but instead it updates all the records that match the subquery as if the LIMIT were not there. If I remove the FOR UPDATE clause, or the outer job_id IS NULL condition, the number of updated rows equals 2 as expected. Before we upgraded, this query updated the correct number of rows.
Did some behavior in 10.x change?
I have the following table in PostgreSQL DB:
[table screenshot omitted: check_ids, with columns id, check_id, time_launched, tbl, tbl_1]
I need a PostgreSQL command to get a specific value from the tbl column, based on the time_launched and id columns. More precisely, I need the value from the tbl column that corresponds to a specific id and the latest (time-wise) value in the time_launched column. Consequently, the request should return "x" as the output.
I've tried these queries (using the psycopg2 module), but they did not work:
db_object.execute("SELECT * FROM check_ids WHERE id = %s AND MIN(time_launched)", (id_variable,))
db_object.execute(SELECT DISTINCT on(id, check_id) id, check_id, time_launched, tbl, tbl_1 FROM check_ids order by id, check_id time_launched desc)
Looks like a simple ORDER BY with a LIMIT 1 should do the trick:
SELECT tbl
FROM check_ids
WHERE id = %s
ORDER BY time_launched DESC
LIMIT 1
The WHERE clause filters results by the provided id, the ORDER BY clause sorts them in reverse chronological order, and LIMIT 1 returns only the first (most recent) row.
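Since the question also attempted DISTINCT ON, here is a working version of that approach for reference (a sketch; it returns the latest tbl value per id, which is useful if you need this for every id at once rather than a single one):

SELECT DISTINCT ON (id) tbl
FROM check_ids
ORDER BY id, time_launched DESC;

DISTINCT ON keeps the first row for each id, and the ORDER BY ensures that first row is the one with the latest time_launched.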
I have a simple query that fetches some results from the User model.
Query 1:
SELECT users.id, users.name, company_id, updated_at
FROM "users"
WHERE (TRIM(telephone) = '8973847' AND company_id = 90)
LIMIT 20 OFFSET 0;
Result: (screenshot of the returned rows omitted)
Then I updated customer 341683 and ran the same query again; this time the results appeared in a different order, with the last-updated row shown first. Is Postgres ordering by last updated by default, or is something else happening here?
Without an order by clause, the database is free to return rows in any order, and will usually just return them in whichever way is fastest. It stands to reason the row you recently updated will be in some cache, and thus returned first.
If you need to rely on the order of the returned rows, you need to explicitly state it, e.g.:
SELECT users.id, users.name, company_id, updated_at
FROM "users"
WHERE (TRIM(telephone) = '8973847' AND company_id = 90)
ORDER BY id -- Here!
LIMIT 20 OFFSET 0
I have a table of data in Postgres. This table, call it 'table1', has a unique constraint on the field 'id'. The table also has 3 other fields: 'write_date', 'state', and 'state_detail'.
Until now, I had no problem accessing this table or joining it with other tables using the field 'id' as the relational field. But this time, I got a strange result when querying table1.
When I run this query (call it Query1):
SELECT id, write_date, state, state_detail
FROM table1
WHERE write_date = '2019-07-30 19:42:49.314' or write_date = '2019-07-30 14:29:06.945'
it gives me 2 rows with the same id but different values for the other fields:
id     | write_date              | state | state_detail
168972 | 2019-07-30 14:29:06.945 | 1     | 80
168972 | 2019-07-30 19:42:49.314 | 2     | 120
But when I run this query (call it Query2):
SELECT id, write_date, state, state_detail
FROM table1
WHERE id = 168972
it gives me just 1 row:
id     | write_date              | state | state_detail
168972 | 2019-07-30 19:42:49.314 | 2     | 120
How can it give different results? I mean, I checked table1, and it has the unique constraint on 'id' as the primary key. How could this happen?
I have restarted the Postgres service and run those 2 queries again, and it still gives me the same results as above.
This looks like a case of index corruption, specifically on the unique index on the id column. Could you run the following query:
SELECT ctid, id, write_date, state, state_detail FROM table1
WHERE write_date = '2019-07-30 19:42:49.314' or write_date = '2019-07-30 14:29:06.945'
You will likely receive 2 rows back for the id, with two different ctids. The ctid represents the physical location on disk for each row. Presuming you get two rows back, you will need to pick a row and delete the other one in order to "fix" the data. In addition, you'll want to recreate the unique index on the table, in order to prevent this from happening again.
Oh, and don't be surprised if you find other rows in the table like this. Depending on the source of the corruption (bad disks, bad memory, recent crash, upgrade to glibc?), this could be the sign of future trouble to come. I'd probably double-check all my tables for any issues, recreate any unique indexes, and look at the OS level for any signs of disk corruption or I/O errors. And if you aren't on the latest version of Postgres, you should upgrade.
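As a sketch of the repair (the ctid value here is hypothetical, and table1_pkey assumes the default primary-key index name; substitute the actual ctid and index name from your database):

-- delete the unwanted duplicate by its physical location
DELETE FROM table1 WHERE ctid = '(0,2)';
-- rebuild the unique index so the constraint is enforced again
REINDEX INDEX table1_pkey;

Deleting by ctid is the usual way to remove one specific row when two rows are otherwise indistinguishable by their key.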
I'm seeing a strange issue since upgrading to postgres 10 from 9.4. This parameterised update query used to work reliably, but now it regularly (but not always) fails to enforce the LIMIT clause
UPDATE tablename SET someValue = ? WHERE myKey IN (
    SELECT myKey
    FROM tablename
    WHERE status = 'good'
    ORDER BY timestamp ASC
    LIMIT ?
    FOR UPDATE
)
Try this:
SELECT ... FOR UPDATE SKIP LOCKED
LIMIT ? FOR UPDATE SKIP LOCKED is a feature added in 9.5.
With this approach, every thread sees only the records that are not locked for update by other threads, so no race occurs.
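Applied to the query from the question, that would look something like:

UPDATE tablename SET someValue = ? WHERE myKey IN (
    SELECT myKey
    FROM tablename
    WHERE status = 'good'
    ORDER BY timestamp ASC
    LIMIT ?
    FOR UPDATE SKIP LOCKED
)

Rows already locked by a concurrent transaction are simply skipped, so each worker claims a disjoint batch of rows.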
Hello, I have a simple table like this:
+------------+------------+----------------------+----------------+
|id (serial) | date(date) | customer_fk(integer) | value(integer) |
+------------+------------+----------------------+----------------+
I want to use each row as a daily accumulator: when a value arrives for a customer, create a new row for that customer and date if no record exists, but if one does exist, just increment the value.
I don't know how to implement something like that; I only know how to increment a value using SET, but more logic is required here. Thanks in advance.
I'm using version 9.4
It sounds like what you are wanting to do is an UPSERT.
http://www.postgresql.org/docs/devel/static/sql-insert.html
In this type of query, you update the record if it exists or you create a new one if it does not. The key in your table would consist of customer_fk and date.
This would be a normal insert, but with ON CONFLICT DO UPDATE SET value = value + 1.
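As a sketch for 9.5+, assuming a unique constraint exists on (customer_fk, date) and using sample values (customer 24, starting value 1):

INSERT INTO mytable (date, customer_fk, value)
VALUES (CURRENT_DATE, 24, 1)
ON CONFLICT (customer_fk, date)
DO UPDATE SET value = mytable.value + 1;

Note the ON CONFLICT target must match the unique constraint's columns, and the existing column is referenced via the table name (or the EXCLUDED pseudo-table for the incoming values).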
NOTE: This only works as of Postgres 9.5. It is not possible in previous versions. For versions prior to 9.1, the only solution is two steps. For 9.1 or later, a CTE may be used as well.
For earlier versions of Postgres, you will need to perform an UPDATE first with customer_fk and date in the WHERE clause. From there, check to see if the number of affected rows is 0. If it is, then do the INSERT. The only problem with this is there is a chance of a race condition if this operation happens twice at nearly the same time (common in a web environment) since the INSERT has a chance of failing for one of them and your count will always have a chance of being slightly off.
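For those earlier versions, the two-step approach might look like this (sample values; the affected-row check happens in application code via the driver's reported row count):

UPDATE mytable
SET value = value + 1
WHERE customer_fk = 24 AND date = CURRENT_DATE;

-- if the driver reports 0 affected rows, insert instead:
INSERT INTO mytable (date, customer_fk, value)
VALUES (CURRENT_DATE, 24, 1);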
If you are using Postgres 9.1 or above, you can use an updatable CTE as cleverly pointed out here: Insert, on duplicate update in PostgreSQL?
This solution is less likely to result in a race condition since it's executed in one step.
WITH new_values (date, customer_fk, value) AS (
    VALUES (CURRENT_DATE, 24, 1)
),
upsert AS (
    UPDATE mytable m
    SET value = m.value + 1
    FROM new_values nv
    WHERE m.date = nv.date AND m.customer_fk = nv.customer_fk
    RETURNING m.*
)
INSERT INTO mytable (date, customer_fk, value)
SELECT date, customer_fk, value
FROM new_values
WHERE NOT EXISTS (SELECT 1
                  FROM upsert up
                  WHERE up.date = new_values.date
                  AND up.customer_fk = new_values.customer_fk)
This contains two CTEs. One contains the data you are inserting (new_values) and the other contains the results of an UPDATE using those values (upsert). The last part uses these two to check whether any records in new_values are absent from upsert, which would mean the UPDATE matched nothing, and performs an INSERT to create those records instead.
As a side note, if you were doing this in another SQL engine that conforms to the standard, you would use a MERGE query instead. [ https://en.wikipedia.org/wiki/Merge_(SQL) ]