PostgreSQL 9.6 deletes suddenly became slow - postgresql

I have a database table where debug log entries are recorded. There are no foreign keys - it is a single standalone table.
I wrote a utility to delete a number of entries starting with the oldest.
There are 65 million entries, so I deleted them 100,000 at a time to give some progress feedback to the user.
There is a primary key column called id.
All was going fine until there were about 5,000,000 records remaining. Then each delete started taking over 1 minute to execute.
What is more, if I use pgAdmin and type the query in myself, using an id that I know is less than the minimum id, it still takes over one minute to execute!
I.e.: delete from public.inettklog where id <= 56301001
And I know the min(id) is 56301002.
Here is the result of an EXPLAIN ANALYZE:

Your stats are way out of date. It thinks it will find 30 million rows, but instead finds zero. ANALYZE the table.
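For example, a minimal sketch of that fix using the table name from the question (VACUUM ANALYZE is optional, but useful after mass deletes to clean up dead rows as well as refresh the stats):
ANALYZE public.inettklog;
-- or, to also reclaim the dead rows left behind by the bulk deletes:
VACUUM ANALYZE public.inettklog;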

Related

PostgreSQL - Slow running query for a simple select statement attached to TimescaleDB

I have an append-only table with more than ~80M records, attached to TimescaleDB; records are inserted into the table every minute. There is also an index on a non-unique column and the start time: (ds_id, start_time).
When I try to run a simple select:
select * from observation where ds_id in (27525, 27567, 28787, 27099)
the query takes longer than 1 minute to give the output.
I also tried to ANALYZE the table; as it is append-only, there is no scope for VACUUM on this table.
So I am confused about why this simple select query is taking so much time. I am thinking it is because of the huge number of records.
Please help me understand the issue and how to fix it.
query plan: https://explain.depesz.com/s/M8H7
Thanks in advance.
Note: ds_id (a foreign key) and start_time (the insertion time) are the columns used for getting results. Also, I am sorry for not providing the table structure and details, as they are confidential. :(
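Since the table details cannot be shared, a hedged first step is to check whether the planner actually uses the existing (ds_id, start_time) index for that filter; a minimal sketch, reusing the names from the question:
explain (analyze, buffers)
select * from observation where ds_id in (27525, 27567, 28787, 27099);
If the plan shows sequential scans over the chunks rather than index scans, the index definition or statistics would be the place to look.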

How to select unique records from a table with a big number of records

I use PostgreSQL and I have a database table with more than 5 million records. The structure of the table is as follows:
A lot of records are inserted every day. There are many records with the same reference.
I want to select all records, but I do not want duplicates, i.e. records with the same reference.
I tried a query as follows:
SELECT DISTINCT ON (reference) reference_url, reference FROM daily_run_vehicle WHERE handled = False and retries < 5 ORDER BY reference DESC;
It executes and gives me the correct result, but it takes too long to execute.
Is there any better way to do this?
Create indexes on the columns you use in the WHERE condition.
After a large data movement into the table, run the VACUUM command to clean up the dead rows, and then ANALYZE the table with the ANALYZE command; that will rebuild the table's statistics.
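A minimal sketch of what such an index could look like for the query above (the index name is illustrative; a partial index matching the WHERE clause is one option):
CREATE INDEX idx_daily_run_vehicle_reference
    ON daily_run_vehicle (reference DESC)
    WHERE handled = false AND retries < 5;
VACUUM ANALYZE daily_run_vehicle;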

How can I speed up a Postgres query, in which I want to query all entries within a date range

So I'm doing this query:
select * from table where time>'2019-01-28 04:13:36.790000' and time<'2019-01-28 04:13:46.790000';
It used to be very fast, but as the table grew it's now taking several minutes to complete. I'm not exactly sure how many entries are in the table; I'm guessing tens of millions. I just want to be able to query entries in a given time interval. Is there anything I can do to the table to make this quicker?
It's hard to say for sure without more context, but if you don't already have an index on time, consider adding one.
CREATE INDEX idx_table_time ON table (time ASC)
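Once such an index exists, a quick sanity check (a sketch reusing the question's placeholders, where "table" and "time" stand for the real table and column names) is to confirm the planner picks an index scan:
EXPLAIN ANALYZE
SELECT * FROM table
WHERE time > '2019-01-28 04:13:36.790000'
  AND time < '2019-01-28 04:13:46.790000';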

Postgres select query with offset for large table taking too much time to process

To process a table having 3 million rows, I am using the following query in psql:
select id, trans_id, name
from omx.customer
where user_token is null
order by id, trans_id
limit 1000 offset 200000000
It's taking more than 3 minutes to fetch the data. How can I improve the performance?
The problem you have is that, to know which 1,000 records to fetch, the database actually has to read through all 200,000,000 rows before the offset just to count past them.
The main strategy to combat this problem is to use a where clause instead of the offset.
If you know the previous 1,000 rows (because this is some kind of iteratively used query), you can instead take the id and trans_id from the last row of that set and fetch the 1,000 rows following it, as sketched below.
If the figure of 200000000 doesn't need to be exact and you can make a good guess of where to start then that might be an avenue to attack the problem.
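A hedged sketch of that keyset approach, reusing the columns from the question (the two parameters stand for the id and trans_id of the last row already fetched):
select id, trans_id, name
from omx.customer
where user_token is null
  and (id, trans_id) > (:last_id, :last_trans_id)  -- last row of the previous batch
order by id, trans_id
limit 1000;
The row-wise comparison lets PostgreSQL use an index on (id, trans_id) if one exists, so each batch starts right after the previous one instead of re-counting from the beginning.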

Oracle order by query very slow

I am facing a problem when doing order by on a table.
My select query works fine, but when I do ORDER BY (even on the primary key) it just goes on and on with no results; finally I need to kill the session. The table has 20K records.
Any suggestions for this?
The query is:
SELECT * FROM Users ORDER BY ID;
I do not know about the query plan, as I am new to Oracle.
For the unordered query, is SQL Developer retrieving and displaying all 20K rows, or just the first 50? Your comparison might not be fair.
What is the size of those 20K rows? Check with: select bytes/1024/1024 MB from user_segments where segment_name = 'USERS'; I've seen many cases where a few megabytes of data use many gigabytes of storage. Maybe the data was very large before and somebody just deleted it (deleting does not release the space). Or maybe somebody inserted those rows one at a time with an APPEND hint, and each row is taking an entire block.
Your query might be waiting for more temp tablespace for sorting; look at DBA_RESUMABLE.
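If the session really is suspended waiting on temp space, a minimal check (assuming access to the DBA and V$ views) would be:
-- statements currently suspended in resumable space allocation, with the error text
select * from dba_resumable;
-- temporary segment usage per session (e.g. the sort spilling to temp)
select * from v$tempseg_usage;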