Adding an index in MySQL Workbench corrupted the table?

Step-by-step:
Right-clicked the table > Table Inspector > clicked the "Columns" tab > right-click > Create Index.
In that dialog I left the following defaults:
Algorithm: Default
Locking: Default (allow as much concurrency as possible)
It gave a timeout error.
I then tried to run a simple "SELECT *", but it is timing out every time now.
I didn't think that adding an index could corrupt a table, so I didn't make a backup, and now I'm in a bit of a panic... Is there anything that can be done to reverse this?
When running SHOW FULL PROCESSLIST I see the following:
State: 'Waiting for table metadata lock'
Info: 'CREATE INDEX idx_all_mls_2_Centris_No ON mcgillim_matrix.all_mls_2 (Centris_No) COMMENT '''' ALGORITHM DEFAULT LOCK DEFAULT'

The processlist clearly shows that your index creation is waiting for a metadata lock, which means your table is already locked by another query (the one like SELECT DISTINCT t1.broker_name), which has been running for 3460 seconds.
You have two options here.
Let that SQL complete first; the index will then be created.
Alternatively, kill that SELECT; this will not harm your system, and the query can be run again later.
To kill the query, find its ID in information_schema.PROCESSLIST, then simply run the query below.
KILL ID;
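For illustration, a sketch of both steps (the LIKE filter on the table name and the ID 12345 are placeholders to adapt):

-- find long-running statements touching the table
SELECT ID, TIME, STATE, INFO
FROM information_schema.PROCESSLIST
WHERE INFO LIKE '%all_mls_2%'
ORDER BY TIME DESC;

-- kill the blocking SELECT by its ID
KILL 12345;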

Related

Is the indexing process hung up?

I am loading a table with 41 million rows from an SQL dump.
PostgreSQL Server 14.3 is set up with best practices from Google (work_mem, jobs, etc.).
The table has a lot of indexes. After loading the dump I saw the following in cmd:
...
INSERT 0 250
INSERT 0 250
INSERT 0 141
setval
----------
41349316
ALTER TABLE
ALTER TABLE
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
Judging by the output, it continues to do something. The process in cmd has not finished; there are no new lines beyond what I showed.
I checked the current activity with:
select * from pg_stat_activity where datname = 'dbname';
It shows idle in the state column; Google says the query column then shows the last command that was run in the session. I checked again after a few hours and nothing had changed.
pg_stat_progress_create_index shows nothing.
So I do not know what to do. Could the indexing process be hung? Or is everything fine and I should just wait? If so, what is it doing now? What can/should I do?
UPD from the next morning: today I rechecked everything.
SELECT * FROM pg_stat_progress_create_index;
still shows nothing.
The console window printed two new lines:
CREATE INDEX
CREATE INDEX
I checked again:
select * from pg_stat_activity where datname = 'dbname';
It shows that the active process is:
CREATE INDEX index_insert_status ON public.xml_files USING btree (insert_status);
But why does pg_stat_progress_create_index show nothing??
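For what it's worth, a sketch of cross-checking the two views in one query from a second session (pg_stat_progress_create_index is a standard view since PostgreSQL 12, so it exists on 14.3; the column names are from its documented layout):

-- one row per backend currently running CREATE INDEX / REINDEX
SELECT p.pid, p.datname, p.command, p.phase,
       p.blocks_done, p.blocks_total, a.query
FROM pg_stat_progress_create_index p
JOIN pg_stat_activity a ON a.pid = p.pid;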

Drop index locks

My Postgres version is 9.6
I tried today to drop an index from my DB with this command:
drop index index_name;
And it caused a lot of locks: the whole application was stuck until I killed all the sessions of the drop (why was it divided into several sessions?).
When I checked the locks, I saw that almost all the blocked sessions were executing this query:
SELECT a.attname, format_type(a.atttypid, a.atttypmod),
pg_get_expr(d.adbin, d.adrelid), a.attnotnull, a.atttypid, a.atttypmod
FROM
pg_attribute a LEFT JOIN pg_attrdef d
ON a.attrelid = d.adrelid AND a.attnum = d.adnum
WHERE a.attrelid = <index_able_name>::regclass
AND a.attnum > 0 AND NOT a.attisdropped
ORDER BY a.attnum;
Does it make sense that this would block system actions?
So I decided to drop the index with the CONCURRENTLY option to prevent locks:
drop index concurrently index_name;
I am executing it now from pgAdmin (because you can't run it inside a normal transaction).
It has been running for over 20 minutes and hasn't finished yet. The index size is about 20 MB.
And when I check the DB for locks, I see that there is a SELECT query on that table, and that is what blocks the drop command.
But when I took that SELECT and executed it in another session, it was very fast (2-3 seconds).
So why is it blocking my drop? Is there another option to do this? Maybe to disable the index instead?
DROP INDEX and DROP INDEX CONCURRENTLY are usually very fast commands, but both of them, like all DDL commands, require exclusive access to the table.
They differ only in how they try to achieve this exclusive access. A plain DROP INDEX simply requests an exclusive lock on the table. This blocks all queries (even SELECTs) that try to use the table after the start of the drop, and keeps blocking them until the exclusive lock is granted: that is, until all transactions touching the table in any way that started before the drop have finished, and the transaction containing the drop commits. This explains why your application stopped working.
The CONCURRENTLY version also needs brief exclusive access, but it works differently: it does not block the table; it waits until no other query is touching it and then does its (usually brief) work. If the table is constantly busy, it may never find such a moment and will wait indefinitely. I also suppose it simply retries the lock every few milliseconds until it succeeds, so a later parallel execution can be luckier and finish faster.
If you see multiple simultaneous sessions trying to drop an index, and you do not expect that, then you have a bug in your application. The database would never do this on its own.
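If this happens again, a quick way to see who is holding things up is a sketch like the one below, using pg_blocking_pids(), which is available from PostgreSQL 9.6 (your version) onward:

-- list every blocked session together with the PIDs blocking it
SELECT pid, pg_blocking_pids(pid) AS blocked_by, state, query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;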

In PostgreSQL, is there a CLI command to copy the speed of a SELECT statement as well as the SELECT statement into a text file (without the data)?

I am currently comparing performance of PostgreSQL with several other SQL systems. I am aware of the \timing option to turn on timing queries. However, I would very much like to automate the process of copying the statements executed and the query speed below it. I imagine there is a simple way to log this?
Let's say I run:
CREATE TABLE t1 AS
SELECT itemID, prodCategory
FROM products
WHERE prodCategory = 'footwear';
I want to automatically save into a text file:
CREATE TABLE t1 AS
SELECT itemID, prodCategory
FROM products
WHERE prodCategory = 'footwear';
SELECT 7790
Time: 10.884 ms
If OS specifications are needed, I am using macOS.
I just learned that you can use the
script filename
command to save everything that is printed to your screen. If timing is on, you will record the queries and the query-time output.
To stop recording, simply type exit.
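A sketch of such a terminal session, assuming a database named mydb (script is a standard Unix/macOS utility; everything echoed to the screen, including psql's Time: lines, ends up in session.log):

$ script session.log
$ psql -d mydb
mydb=# \timing on
mydb=# CREATE TABLE t1 AS SELECT itemID, prodCategory FROM products WHERE prodCategory = 'footwear';
mydb=# \q
$ exit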

PostgreSQL - REINDEX still working even after two hours

I started a REINDEX on my PostgreSQL database. In the GUI it is visible that it processed a number of tables and then stopped responding. It looks like it is still working, even after two hours. The GUI is unresponsive and its last row says: NOTICE: table "public.res_request_history" was reindexed.
Can I safely stop the REINDEX? What can I do to actually make REINDEX work?
Thanks.
Yes, you can use pg_cancel_backend(pid). You can find the PID by querying the pg_stat_activity view.
For example:
--Will display running queries and corresponding pid
SELECT query, pid FROM pg_stat_activity;
--You can then cancel one of them by calling this method with its pid
SELECT pg_cancel_backend(<pid>);
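If you first want to be sure which backend is the REINDEX, and want a harder fallback when cancelling has no effect, a sketch like this should work (the ILIKE filter is just illustrative):

--Find the REINDEX backend first
SELECT pid, state, query_start, query
FROM pg_stat_activity
WHERE query ILIKE '%reindex%';
--pg_terminate_backend ends the whole session if pg_cancel_backend does not stop it
SELECT pg_terminate_backend(<pid>);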

Postgresql, query results to new table

Windows / .NET / ODBC
I would like to get query results into a new table in some handy way that I can then see through a data adapter, but I can't find out how to do it.
There are not many examples around at a beginner's level on this.
I don't know whether it should be temporary or not, but after I have seen the results the table is no longer needed, so I can delete it 'by hand', or it can be deleted automatically.
This is what I try:
mCmd = New OdbcCommand("CREATE TEMP TABLE temp1 ON COMMIT DROP AS " & _
"SELECT dtbl_id, name, mystr, myint, myouble FROM " & myTable & " " & _
"WHERE myFlag='1' ORDER BY dtbl_id", mCon)
n = mCmd.ExecuteNonQuery
This runs without error, and in 'n' I get the correct number of matched rows!
But in pgAdmin I don't see that table anywhere, no matter whether I look inside the open transaction or after the transaction is closed.
Second, should I define the columns for the temp1 table first, or can they be created automatically from the query results (that would be nice!)?
Please give a minimal example, based on the code above, of how to get a new table filled with the query results.
A shorter way to do the same thing your current code does is with CREATE TEMPORARY TABLE AS SELECT ... . See the entry for CREATE TABLE AS in the manual.
Temporary tables are not visible outside the session ("connection") that created them, they're intended as a temporary location for data that the session will use in later queries. If you want a created table to be accessible from other sessions, don't use a TEMPORARY table.
Maybe you want UNLOGGED (9.1 or newer) for data that's generated and doesn't need to be durable, but must be visible to other sessions? (A sketch follows below.)
See related: Is there a way to access temporary tables of other sessions in PostgreSQL?
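For illustration, a sketch of that UNLOGGED variant built from the query in the question (the table names results and mytable are placeholders for your own names):

CREATE UNLOGGED TABLE results AS
SELECT dtbl_id, name, mystr, myint, mydouble
FROM mytable
WHERE myFlag = '1'
ORDER BY dtbl_id;

-- visible to other sessions; drop it by hand when you no longer need it
DROP TABLE results;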