Is the indexing process hung? - postgresql

I am loading a table with 41 million rows from an SQL dump.
PostgreSQL Server 14.3 is set up following best practices found on Google (work_mem, jobs, etc.).
The table has a lot of indexes. After loading the dump I saw the following in cmd:
...
INSERT 0 250
INSERT 0 250
INSERT 0 141
setval
----------
41349316
ALTER TABLE
ALTER TABLE
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
Judging by the output it is still doing something. The process in CMD has not finished; no new lines have appeared beyond what I showed above.
I checked the current activity with:
select * from pg_stat_activity where datname = 'dbname';
It shows idle in the state column; Google says this view shows the last command that was run in the session. I checked again after a few hours and nothing had changed.
pg_stat_progress_create_index shows nothing.
So I do not know what to do. Could the indexing process be hung? Or is everything fine and I should just wait? If so, what is it doing now? What can/should I do?
UPD from the next morning: Today I rechecked everything.
SELECT * FROM pg_stat_progress_create_index;
still shows nothing.
The console window printed two new lines:
CREATE INDEX
CREATE INDEX
I checked again:
select * from pg_stat_activity where datname = 'dbname';
It shows that the active process is:
CREATE INDEX index_insert_status ON public.xml_files USING btree (insert_status);
But why does pg_stat_progress_create_index show nothing??
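One way to tell whether the build is blocked or just slow is to look at the wait columns in pg_stat_activity. A minimal sketch, assuming PostgreSQL 14's standard columns:

SELECT pid, state, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE query ILIKE 'CREATE INDEX%';
-- wait_event_type = 'Lock' means the build is blocked by another session;
-- NULL or an IO event while state = 'active' usually just means it is still working

On 41 million rows with many indexes, a long-running CREATE INDEX is not unusual.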

Related

Adding an index in mysql workbench corrupted the table?

Step-by-step:
Right clicked on tbl > Table Inspector > Clicked "Columns" tab > Right click > Create Index >
In that section I left the following defaults:
Algo: Default
Locking: Default (allow as much concurrency as possible)
It gave a timeout error.
I then tried to run a simple "SELECT *", but it times out every time now.
I didn't think that adding an index could corrupt a table, so I didn't make a backup, and now I'm in a bit of a panic mode... Is there anything that can be done to reverse this?
When doing SHOW FULL PROCESSLIST I see the following:
State: 'Waiting for table metadata lock'
Info: 'CREATE INDEX idx_all_mls_2_Centris_No ON mcgillim_matrix.all_mls_2 (Centris_No) COMMENT '''' ALGORITHM DEFAULT LOCK DEFAULT'
In the processlist it's clearly visible that your index creation is waiting for a metadata lock, which means your table is already locked by another query, something like select distinct t1.broker_name, which has been running for 3460 seconds.
You have two options here.
Let that SQL complete first. Then the index will be created.
Alternatively, kill that SELECT, which will not harm your system and can be run again later.
To kill the query, find its ID in information_schema.processlist, then simply run the query below.
kill ID;
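For example, a minimal sketch of finding and killing the long-running blocker (the time threshold is based on the 3460 seconds above; the ID in the KILL is hypothetical):

SELECT id, time, state, info
FROM information_schema.processlist
WHERE command <> 'Sleep' AND time > 3000;   -- the blocker should show up here
KILL 12345;                                 -- replace 12345 with the id returned

Once the blocker is gone, the CREATE INDEX should acquire the metadata lock and proceed.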

Postgres pg_prewarm to keep table in buffer gives 0 after dropping table

I followed the steps as outlined in this tutorial to use the pg_prewarm extension for pre-warming the buffer cache in PostgreSQL:
https://ismailyenigul.medium.com/pg-prewarm-extention-to-pre-warming-the-buffer-cache-in-postgresql-7e033b9a386d
The first time I ran it, I got 1 as the result:
mydb=> SELECT pg_prewarm('useraccount');
pg_prewarm
------------
1
After that I had to drop the table and recreate it. Since then, when I run the same command, I always get 0 as the result. I am not sure if that's expected or if I am missing something.
mydb=> SELECT pg_prewarm('useraccount');
pg_prewarm
------------
0
The function comment in contrib/pg_prewarm/pg_prewarm.c says:
* [...] The
* return value is the number of blocks successfully prewarmed.
So the first time, there was 1 block in the table. After dropping and re-creating the table, it is empty, so 0 blocks are cached.
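You can see the same behavior with a quick experiment; a minimal sketch using a throwaway table (the table name demo is made up here):

CREATE TABLE demo (id int);
SELECT pg_prewarm('demo');                    -- 0: a freshly created table has no blocks
INSERT INTO demo SELECT generate_series(1, 1000);
SELECT pg_prewarm('demo');                    -- now > 0: the new blocks get cached

As soon as the re-created table contains data again, pg_prewarm will return the number of blocks it loaded.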

PostgreSQL - ALTER SEQUENCE query never completes

I'm using Postgres for the first time and have been experimenting a bit. I have a table that I created with this query:
CREATE TABLE test(
id SERIAL,
name int NOT NULL,
PRIMARY KEY (id)
);
and I'm trying to reset the sequence for the id column with this query (after emptying the table of all current data):
ALTER SEQUENCE test_id_seq RESTART WITH 1;
But when I run this query it runs indefinitely. All previous queries I've run completed within milliseconds, but this one ran for upwards of 3 minutes before I killed it. What should I be doing differently?
This is very likely caused by a lock.
See if there are sessions with status “idle in transaction” in pg_stat_activity.
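A minimal sketch of that check, using standard pg_stat_activity columns:

SELECT pid, state, query
FROM pg_stat_activity
WHERE state = 'idle in transaction';
-- any row here is a session holding its locks while doing nothing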
Check for locks using
SELECT pid, mode FROM pg_locks WHERE relation = 'test_id_seq'::regclass;
This could be a bug with Postgres https://www.postgresql-archive.org/Slow-alter-sequence-with-PG10-1-td6002088.html
You could try a hacky reset, such as incrementing the sequence by a negative amount to get what you want:
SELECT * FROM <sequence_name>;
look at the column "last_value" to determine how much to subtract from the current value. Let's say your sequence's last_value is 1000:
ALTER SEQUENCE <sequence_name> INCREMENT BY -999;
Execute the subtraction
SELECT nextval('<sequence_name>');
This should reset the sequence back to one.
Reset the sequence to its original increment value:
ALTER SEQUENCE <sequence_name> INCREMENT BY 1;
I'm not on this version of postgres, so I can't test it for you...but it's worth a shot.
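As a concrete worked example, assuming the test_id_seq sequence from the question is sitting at last_value = 1000:

SELECT * FROM test_id_seq;                   -- read last_value, here assumed 1000
ALTER SEQUENCE test_id_seq INCREMENT BY -999;
SELECT nextval('test_id_seq');               -- 1000 - 999 = 1
ALTER SEQUENCE test_id_seq INCREMENT BY 1;   -- restore the normal increment

Here 1000 is just the example value from the answer above; substitute whatever last_value your sequence reports.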

postgres SKIP LOCKED not working

Below are the steps I followed to test SKIP LOCKED:
Open an SQL console in some Postgres UI client
Connect to the Postgres DB
Execute the queries
CREATE TABLE t_demo AS
SELECT *
FROM generate_series(1, 4) AS id;
Check that rows were created in that table:
TABLE t_demo
Select rows using the query below:
SELECT *
FROM t_demo
WHERE id = 2
FOR UPDATE SKIP LOCKED;
It returns the row with id = 2.
Now execute the same query again:
SELECT *
FROM t_demo
WHERE id = 2
FOR UPDATE SKIP LOCKED;
This second query should not return any results, but it also returns the row with id = 2.
https://www.postgresql.org/docs/current/static/sql-select.html#SQL-FOR-UPDATE-SHARE
To prevent the operation from waiting for other transactions to
commit, use either the NOWAIT or SKIP LOCKED option
(emphasis mine)
If you run both queries in one window, you probably either ran both in one transaction (then your second statement is not "another transaction"), or you are autocommitting after each statement (the default); but then the first statement's transaction commits before the second starts, so the lock is released and you observe no effect.
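A minimal sketch of a test that actually exercises SKIP LOCKED, using two separate sessions and an explicit transaction in the first:

-- session 1
BEGIN;
SELECT * FROM t_demo WHERE id = 2 FOR UPDATE SKIP LOCKED;  -- returns 2 and holds the row lock

-- session 2, while session 1's transaction is still open
SELECT * FROM t_demo WHERE id = 2 FOR UPDATE SKIP LOCKED;  -- returns no rows: the locked row is skipped

-- session 1
COMMIT;  -- after this, session 2 would see the row again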

set enable_seqscan = off, through jdbc | Postgres

I have two queries, an insert and an update. I benchmarked them through the postgres console with a large dataset and found that postgres was not picking up the index. To solve this, I disabled seqscan for those two queries and got a huge performance boost; Postgres was able to pick up the indexes for scanning the table.
Problem:
I am doing the same thing through jdbc
statement.executeUpdate("set enable_seqscan = off");
statement.executeUpdate("My_Insert_Query");
statement.executeUpdate("My_Update_Query");
statement.executeUpdate("set enable_seqscan = on");
But it seems like Postgres is not turning seqscan off, and the queries are taking way too long to execute.
Master Table
Master_Id auto-generated
child_unique integer
Child Table
child_unique integer
Master_id integer
INSERT INTO Master (child_unique) SELECT child_unique FROM Child AS i WHERE NOT EXISTS (SELECT * FROM Master WHERE Master.child_unique = i.child_unique);
Update Child set Master_id = Master.Master_id from Master where Master.child_unique = Child.child_unique;
For every unique row in Child which is not present in Master, I insert it into the Master table, and the auto-generated Master_ID is then written back into the Child table.
Both tables have an index on child_unique.
The index is picked up on the Master table, whereas it is not in the case of the Child table.
How did I find out? Using Postgres's pg_stat_all_indexes view.
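For reference, a sketch of the kind of check this view supports (table names lower-cased as Postgres stores unquoted identifiers; idx_scan counts how often each index has been used):

SELECT relname, indexrelname, idx_scan
FROM pg_stat_all_indexes
WHERE relname IN ('master', 'child');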
Firstly, I agree with Frank above - fix the real problem.
However, if you really want to disable seq-scans, you haven't provided enough information for anyone to help you do so.
Are these statements all executed on the same connection? (turn your logging on/up in PostgreSQL's config file to find out)
Are there any other jdbc-generated bits being sent to the server? (logging again)
What does a "show enable_seqscan" return after the first statement?
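As a quick sanity check, one can verify both the setting and the resulting plan from the same session; a minimal sketch (the expected plan shape is illustrative):

SET enable_seqscan = off;
SHOW enable_seqscan;   -- should print 'off' in this session
EXPLAIN (COSTS OFF)
UPDATE Child SET Master_id = Master.Master_id
FROM Master
WHERE Master.child_unique = Child.child_unique;
-- the plan should now prefer an Index Scan on the child_unique indexes

Note that SET only affects the current session, so all four statements must go through the same connection for the trick in the question to work.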