Let's start with:
CREATE TABLE "houses" (
    "id" serial NOT NULL PRIMARY KEY,
    "name" character varying NOT NULL
);
Imagine I try to concurrently (!) insert multiple records (maybe 10, maybe 1000) into the table in a single statement.
INSERT INTO houses (name) VALUES
('B6717'),
('HG120');
Is it guaranteed that when a single thread inserts X records in a single statement (while other threads simultaneously insert their own records into the same table), those records will get IDs numbered from A to A+X-1? Or is it possible that A+100 is taken by thread 1 and A+99 by thread 2?
Inserting 10,000 records at once from two pgAdmin connections turned out to be enough to show that the serial type does not guarantee continuity within a batch on my PostgreSQL 9.5:
DO
$do$
BEGIN
    FOR i IN 1..200 LOOP
        EXECUTE format('INSERT INTO houses (name) VALUES %s%s;',
                       repeat('(''a' || i || '''),', 9999),
                       '(''a' || i || ''')');
    END LOOP;
END
$do$;
The above results in quite frequent overlap between IDs belonging to two different batches:
SELECT * FROM houses WHERE id BETWEEN 34370435 AND 34370535 ORDER BY id;
34370435;"b29"
34370436;"b29"
34370437;"b29"
34370438;"a100"
34370439;"b29"
34370440;"b29"
34370441;"a100"
...
I thought this was going to be harder to prove, but it turns out it is not guaranteed.
I used a Ruby script to have 4 threads insert thousands of records simultaneously and checked whether the records created by a single statement had gaps in their IDs; they did.
4.times.map do |t|
  Thread.new do
    100.times do |u|
      House.import(1000.times.map do |i|
        {
          tenant: "#{t}-#{u}",
          name: i,
        }
      end)
    end
  end
end.each(&:join)
House.distinct.pluck(:tenant).all? do |t|
  recs = House.where(tenant: t).order(:id).to_a
  recs.first.id - recs.first.name.to_i == recs.last.id - recs.last.name.to_i
end
Example of the gaps:
[#<House:0x00007fd2341b5e00
id: 177002,
tenant: "0-43",
name: "0">,
#<House:0x00007fd2341b5c48
id: 177007,
tenant: "0-43",
name: "1">,
...
As you can see, the gap between the first and second rows inserted within the same single INSERT statement was 5.
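For reference, the same check can be run directly in SQL (a sketch; it assumes the houses table carries the tenant column used in the Ruby test above):
SELECT tenant,
       max(id) - min(id) + 1 AS id_span,
       count(*)              AS row_count
FROM houses
GROUP BY tenant
HAVING max(id) - min(id) + 1 <> count(*);
-- any row returned is a batch whose IDs are not contiguous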
Related
SUMMARY: I have two tables I want to derive info from, family_values (family_name, item_regex) and product_ids (product_id), in order to update the property family_name in a third table.
The plan here is to grab a JSON array from the small family_values table and use the column value item_regex to do a test match against the product_id for every row in product_ids.
MORE DETAILS: I am importing static data from CSV into a table of orders. But in evaluating cost of goods and market value, I continuously need to determine the family from a prefix regex match (item_regex from family_values) on the product_id.
On the client this looks like this:
const families = {
FOOBAR: 'Big Ogre',
FOOBA: 'Wood Elf',
FOO: 'Valkyrie'
};
// And to find family, and subsequently COGs and Market Value:
const findFamily = product_id => Object.keys(families).find(f => new RegExp('^' + f).test(product_id));
This is a huge hit for the client, so I made a family_values table in PG with the columns: family_name, item_regex, cogs, market_value.
Then, product_ids has a list of only the products the app cares about (out of millions). It is actually used with a BEFORE INSERT trigger to ignore any CSV entries that aren't in the product_ids view. So after that, the product_ids view could arguably be taken out of the equation, because the orders table, once the read-only data is inserted, has its own matching product_id. It does NOT have family_name, though, so I still have the issue of determining that client-side.
PSEUDO CODE: update the family column of orders with family_name from the family_values regex match against orders.product_id,
OR update the product_ids table with a new family column and use that with the existing on-insert trigger (currently used to left-pad zeros and normalize data). Now I'm thinking this may just be an update as suggested, but I'm not very good with regex in PG. I'm a PG novice.
PROBLEM: I'm having a hang-up doing what I thought would be like a JS Array.find operation. The family_values rows have been sorted on item_regex so that the strictest match is on top and is therefore found first.
For example, with sorting we have:
family_values_array = [
{"family_name": "Big Ogre", "item_regex": "FOOBAR"},
{"family_name": "Wood Elf", "item_regex": "FOOBA"},
{"family_name": "Valkyrie", "item_regex": "FOO"}]
So the comparison of a product_id starting with FOOBA would yield the family "Wood Elf".
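For illustration, that lookup reads as a plain query over the sample data above (a sketch; 'FOOBA123' is a made-up product_id, item_regex is treated as a simple prefix, and the longest prefix stands in for the strictest match):
SELECT fv.family_name
FROM family_values fv
WHERE 'FOOBA123' LIKE fv.item_regex || '%'
ORDER BY length(fv.item_regex) DESC
LIMIT 1;
-- returns 'Wood Elf'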
SOLUTION:
The solution I finally arrived at was simply using concat to build the front-anchored pattern. It was so simple in the end. The key line I was missing is:
select * into family_value_row from iol.family_values
where lvl3_id = product_row.lvl3_id
  and product_row.product_id like concat(item_regex, '%')
limit 1;
Whole function:
create or replace function iol.populate_families() returns void as $$
declare
    product_row      record;
    family_value_row record;
begin
    for product_row in
        select product_id, lvl3_id from iol.products
    loop
        -- family_name is what we want after finding the BEST match for a product_id against item_regex
        select * into family_value_row from iol.family_values
        where lvl3_id = product_row.lvl3_id
          and product_row.product_id like concat(item_regex, '%')
        limit 1;
        -- update family_name and value columns
        update iol.products set
            family_name        = family_value_row.family_name,
            cog_cents          = family_value_row.cog_cents,
            market_value_cents = family_value_row.market_value_cents
        where product_id = product_row.product_id;
    end loop;
end;
$$ LANGUAGE plpgsql;
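The function is then run once to backfill every product:
select iol.populate_families();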
I have a table ErrorCase in a postgres database. This table has one field case_id with data type text. Its value is generated in the format yymmdd_xxxx, where yymmdd is the date the record is inserted into the DB and xxxx is the sequence number of the record on that date.
For example, the 3rd error case on 2019/08/01 will have case_id = 190801_0003. If there is one more case on 08/04, its case_id will be 190804_0001, and so on.
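For illustration, that format corresponds to an expression like the following (the date and counter here are just sample values):
select to_char(date '2019-08-01', 'YYMMDD') || '_' || to_char(3, 'FM0000');
-- 190801_0003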
I am already using a trigger in the database to generate the value for this field:
DECLARE
    total integer;
BEGIN
    SELECT (COUNT(*) + 1) INTO total FROM public.ErrorCase WHERE create_at = current_date;
    IF (NEW.case_id IS NULL) THEN
        NEW.case_id = to_char(current_timestamp, 'YYMMDD_') || trim(to_char(total, '0000'));
    END IF;
    RETURN NEW;
END
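(For context, a body like this normally lives in a trigger function attached with a BEFORE INSERT trigger; a sketch of that wiring, with placeholder function and trigger names and the body above repeated verbatim:)
CREATE OR REPLACE FUNCTION public.set_error_case_id() RETURNS trigger AS $$
DECLARE
    total integer;
BEGIN
    SELECT (COUNT(*) + 1) INTO total FROM public.ErrorCase WHERE create_at = current_date;
    IF (NEW.case_id IS NULL) THEN
        NEW.case_id = to_char(current_timestamp, 'YYMMDD_') || trim(to_char(total, '0000'));
    END IF;
    RETURN NEW;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_set_error_case_id
    BEFORE INSERT ON public.ErrorCase
    FOR EACH ROW EXECUTE PROCEDURE public.set_error_case_id();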
And in the Spring project, I configure the application properties for JPA/Hibernate:
datasource:
  type: com.zaxxer.hikari.HikariDataSource
  url: jdbc:postgresql://localhost:5432/table_name
  username: postgres
  password: postgres
  hikari:
    poolName: Hikari
    auto-commit: false
jpa:
  database-platform: io.github.jhipster.domain.util.FixedPostgreSQL82Dialect
  database: POSTGRESQL
  show-sql: true
  properties:
    hibernate.id.new_generator_mappings: true
    hibernate.connection.provider_disables_autocommit: true
    hibernate.cache.use_second_level_cache: true
    hibernate.cache.use_query_cache: false
    hibernate.generate_statistics: true
Currently, it generates the case_id correctly.
However, when many records are inserted at nearly the same time, it generates the same case_id for two records. I guess the reason is the isolation level: while the first transaction is not yet committed, the second transaction runs the SELECT query to build its case_id, so the result of that SELECT does not include the record from the first transaction (because it has not committed yet). Therefore, the second case_id ends up the same as the first one.
Please suggest a solution for this problem: which isolation level is good for this case?
"yymmdd is the date when the record insert to DB, xxxx is the number of record in that date" - no offense but that is a horrible design.
You should have two separate columns: one date column and one integer column. If you want to increment the counter during an insert, make the date column the primary key and use INSERT ... ON CONFLICT. You can get rid of that horribly inefficient trigger, and more importantly this is safe for concurrent modifications even with READ COMMITTED.
Something like:
create table error_case
(
    error_date date not null primary key,
    counter    integer not null default 1
);
Then use the following to insert rows:
insert into error_case (error_date)
values (date '2019-08-01')
on conflict (error_date) do update
set counter = counter + 1;
No trigger needed and safe for concurrent inserts.
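If the application also needs the number that was just assigned, the same statement can hand it back (a sketch; the existing row's value is qualified explicitly here just for clarity):
insert into error_case (error_date)
values (current_date)
on conflict (error_date) do update
    set counter = error_case.counter + 1
returning counter;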
If you really need a text column as a "case ID", create a view that returns that format:
create view v_error_case
as
select concat(to_char(error_date, 'yymmdd'), '_', trim(to_char(counter, '0000'))) as case_id,
       ... other columns
from error_case;
I am running PostgreSQL 9.6 and am experimenting with the following table structure:
CREATE TABLE my_bit_varying_test (
id SERIAL PRIMARY KEY,
mr_bit_varying BIT VARYING
);
Just to understand how much performance I could expect if I were resetting bits on 100,000-bit data concurrently, I wrote a small PL/pgSQL block like this:
DO $$
DECLARE
    t   BIT VARYING(100000) := B'0';
    idd INT;
BEGIN
    FOR I IN 1..100000 LOOP
        IF I % 2 = 0 THEN
            t := t || B'1';
        ELSE
            t := t || B'0';
        END IF;
    END LOOP;

    INSERT INTO my_bit_varying_test (mr_bit_varying) VALUES (t) RETURNING id INTO idd;

    UPDATE my_bit_varying_test SET mr_bit_varying = set_bit(mr_bit_varying, 100, 1) WHERE id = idd;
    UPDATE my_bit_varying_test SET mr_bit_varying = set_bit(mr_bit_varying, 99, 1) WHERE id = idd;
    UPDATE my_bit_varying_test SET mr_bit_varying = set_bit(mr_bit_varying, 34587, 1) WHERE id = idd;
    UPDATE my_bit_varying_test SET mr_bit_varying = set_bit(mr_bit_varying, 1, 1) WHERE id = idd;

    FOR I IN 1..100000 LOOP
        IF I % 2 = 0 THEN
            UPDATE my_bit_varying_test
            SET mr_bit_varying = set_bit(mr_bit_varying, I, 1)
            WHERE id = idd;
        ELSE
            UPDATE my_bit_varying_test
            SET mr_bit_varying = set_bit(mr_bit_varying, I, 0)
            WHERE id = idd;
        END IF;
    END LOOP;
END
$$;
When I run the PL/pgSQL block, though, it takes several minutes to complete, and I've narrowed it down to the FOR loop that updates the table. Is it running slowly because of compression on the BIT VARYING column? Is there any way to improve the performance?
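For what it's worth, whether TOAST compression is actually kicking in can be checked by comparing the value's nominal size with its stored size (a sketch against the test table above):
SELECT octet_length(mr_bit_varying)   AS value_bytes,
       pg_column_size(mr_bit_varying) AS stored_bytes
FROM my_bit_varying_test;
-- stored_bytes being much smaller than value_bytes means the value is stored compressed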
Edit This is a simulated, simplified example. What this is actually for is that I have tens of thousands of jobs running that each need to report back their status, which updates every few seconds.
Now, I could normalize it and have a "run status" table that held all the workers and their statuses, but that would involve storing tens of thousands of rows. So, my thought is that I could use a bitmap to store the client and status, and the mask would tell me in order which ones had run and which ones had completed. The front bit would be used as an "error bit" since I don't need to know exactly which client failed, only that a failure exists.
So for example, you might have 5 workers for one job. If they all completed, the status would be "011111", indicating that every worker finished and none of them failed. If a worker fails, the status might be "111110", indicating that there was an error and that all workers completed except the last one.
So, you can see this as a contrived way of handling large numbers of job statuses. Of course I'm up for other ideas, but even if I go that route, for the future, I'd still like to know how to update a variable bit quickly, because well, I'm curious.
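Concretely, the bookkeeping sketched above would look something like this against the test table (the id value and the worker numbering are made up):
-- worker 3 of job 42 reports completion
UPDATE my_bit_varying_test
SET mr_bit_varying = set_bit(mr_bit_varying, 3, 1)
WHERE id = 42;
-- some worker failed: set the leading "error bit"
UPDATE my_bit_varying_test
SET mr_bit_varying = set_bit(mr_bit_varying, 0, 1)
WHERE id = 42;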
If it is really the TOAST compression that is your problem, you can simply disable it for that column:
ALTER TABLE my_bit_varying_test ALTER COLUMN mr_bit_varying SET STORAGE EXTERNAL;
You can try a set-based approach to replace the second loop; a set-based approach is usually faster than looping. Use generate_series() to get the indexes.
UPDATE my_bit_varying_test
SET mr_bit_varying = set_bit(mr_bit_varying, gs.i, abs(gs.i % 2 - 1))
FROM generate_series(1, 100000) gs(i)
WHERE id = idd;
Also consider creating an index on my_bit_varying_test (id), if you don't already have one.
In DB2, I need to do an insert and then, using results/data from that insert, update a related table. I need to do this on a million-plus records and would prefer not to lock the entire database. So: 1) how do I 'couple' the insert and update statements? 2) how can I ensure the integrity of the transaction (without locking the whole shebang)?
some pseudo-code should help clarify
STEP 1
insert into table1 (neededId, id) select DYNAMICVALUE, id from tableX where needed value is null
STEP 2
update table2 set neededId = (GET THE DYNAMIC VALUE JUST INSERTED) where id = (THE ID JUST INSERTED)
note: in table1, the ID col is not unique, so I can't just filter on that to find the new DYNAMICVALUE
This should make it clearer. (FTR, this works, but I don't like it, because I'd have to lock the tables to maintain integrity. It would be great if I could run these statements together and allow the update to refer to the newAddressNumber value.)
/**** RUNNING TOP INSERT FIRST ****/
--insert a new address for each order that does not have a address id
insert into addresses
(customerId, addressNumber, address)
select
cust.Id,
--get next available addressNumber
ifNull((select max(addy2.addressNumber) from addresses addy2 where addy2.customerId = cust.id),0) + 1 as newAddressNumber,
cust.address
from customers cust
where exists (
--find all customers with at least 1 order where addressNumber is null
select 1 from orders ord
where 1=1
and ord.customerId = cust.id
and ord.addressNumber is null
)
/*****RUNNING THIS UPDATE SECOND*****/
update orders ord1
set addressNumber = (
select max(addressNumber) from addresses addy3
where addy3.customerId = ord1.customerId
)
where 1=1
and ord1.addressNumber is null
The IDENTITY_VAL_LOCAL function is a non-deterministic function that returns the most recently assigned value for an identity column, where the assignment occurred as a result of a single INSERT statement using a VALUES clause.
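A minimal sketch of how that could tie the two statements together, assuming table1.neededId is an identity column and the rows are inserted one at a time with a VALUES clause (table and column names are taken from the pseudo-code above, and 123 is a sample id):
INSERT INTO table1 (id) VALUES (123);
UPDATE table2
   SET neededId = IDENTITY_VAL_LOCAL()
 WHERE id = 123;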
I have a large database in which I want to apply some logic to populate new fields.
The primary key of the table harvard_assignees is id.
The LOGIC GOES LIKE THIS
Select all of the records based on id
For each record (WHILE), if (state is NOT NULL && country is NULL), update country_out = "US" ELSE update country_out=country
I see step 1 as a PostgreSQL query and step 2 as a function. Just trying to figure out the easiest way to implement natively with the exact syntax.
====
The second function is a little more interesting, requiring (I believe) DISTINCT:
Find all DISTINCT foreign_keys (a bivariate key of pat_type,patent)
Count Records that contain that value (e.g., n=3 records have fkey "D","388585")
Update those 3 records to identify percent as 1/n (e.g., UPDATE 3 records, set percent = 1/3)
For the first one:
UPDATE
harvard_assignees
SET
country_out = (CASE
WHEN (state is NOT NULL AND country is NULL) THEN 'US'
ELSE country
END);
At first it had the condition "id = ...", but I removed that because I believe you actually want to update all records.
And for the second one:
UPDATE example_table
SET percent = (
    SELECT 1.0 / cnt
    FROM (
        SELECT count(*) AS cnt
        FROM example_table AS x
        WHERE x.fn_key_1 = example_table.fn_key_1
          AND x.fn_key_2 = example_table.fn_key_2
    ) AS tmp
    WHERE cnt > 0
);
That one will be kind of slow though.
I'm thinking of a solution based on window functions; you may want to explore those too.
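A rough sketch of that direction (untested; it reuses the illustrative example_table and key columns from above and joins back on the system column ctid):
UPDATE example_table e
SET percent = t.pct
FROM (
    SELECT ctid,
           1.0 / count(*) OVER (PARTITION BY fn_key_1, fn_key_2) AS pct
    FROM example_table
) t
WHERE e.ctid = t.ctid;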