Postgres database takes time to get total record count

I am using a Postgres database and I have a daily_txns table which contains more than 20,000,000 records,
so I have created child table partitions inheriting from the master transaction table, named like daily_txns_child_2017_08_01, one per day of transactions.
When I search on the indexed columns **mid** and **created_date** it takes a long time to respond, and even getting the total record count takes a long time.
How can I speed up the data fetch? The first query takes me more than 5 minutes, and the second takes more than 20 minutes.
My queries are below.
select * from daily_txns where created_date <='2017-08-01' and created_date>='2017-07-01' and mid='0446721M0008690' order by created_date desc limit 10;
select count(mid) from daily_txns where created_date <='2017-08-01' and created_date>='2017-07-01' and mid='0446721M0008690';
Or is the time taken here normal?
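Two things worth checking here, offered as a sketch rather than a definitive fix: whether each child partition has a composite index covering both filter columns (with inheritance-based partitioning, indexes are not inherited and must be created on every child), and what plan the query actually gets. The index name below is made up:
CREATE INDEX daily_txns_child_2017_08_01_mid_created_idx  -- hypothetical name; repeat for each child partition
    ON daily_txns_child_2017_08_01 (mid, created_date);

EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM daily_txns
WHERE created_date >= '2017-07-01'
  AND created_date <= '2017-08-01'
  AND mid = '0446721M0008690'
ORDER BY created_date DESC
LIMIT 10;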

Related

How to get the count of hourly inserts on a table in PostgreSQL

I need a setup where rows older than 60 days get removed from the table in PostgreSQL.
I have created a function and a trigger:
CREATE OR REPLACE FUNCTION delete_old_rows() RETURNS trigger  -- function name assumed; the post only showed the body
LANGUAGE plpgsql
AS $$
BEGIN
    DELETE FROM table  -- placeholder table name from the post
    WHERE updateDate < NOW() - INTERVAL '60 days';
    RETURN NULL;
END;
$$;
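The trigger itself was not shown in the post; wired to the function above it might look roughly like this (trigger and table names are placeholders):
CREATE TRIGGER purge_old_rows       -- placeholder name
AFTER INSERT ON your_table          -- placeholder table
FOR EACH STATEMENT
EXECUTE PROCEDURE delete_old_rows();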
But I believe if the insert frequency is high, this will have to scan the entire table quite often, which will cause high DB load.
I could run this function through a cron job or Lambda function every hour or every day. To decide which, I need to know the number of inserts per hour on that table.
Is there a query or job that I can set up which will collect those details?
Just to count the number of records per hour, you could run this query:
SELECT CAST(updateDate AS date) AS day,
       EXTRACT(HOUR FROM updateDate) AS hour,
       COUNT(*)
FROM _your_table
WHERE updateDate BETWEEN ? AND ?
GROUP BY 1, 2
ORDER BY 1, 2;
We do about 40 million INSERTs a day on a single table that is partitioned by month, and after 3 months we simply drop the partition. That is far faster than a DELETE.
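As a rough sketch of that pattern with declarative range partitioning (PostgreSQL 10+); the table and partition names are made up, only the updateDate column comes from the question:
CREATE TABLE events (
    id         bigint      NOT NULL,
    updateDate timestamptz NOT NULL
) PARTITION BY RANGE (updateDate);

CREATE TABLE events_2017_07 PARTITION OF events
    FOR VALUES FROM ('2017-07-01') TO ('2017-08-01');

-- Once a month falls outside the retention window, dropping the whole
-- partition is far cheaper than deleting its rows one by one:
DROP TABLE events_2017_07;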

PostgreSQL 9.6 deletes suddenly became slow

I have a database table where debug log entries are recorded. There are no foreign keys - it is a single standalone table.
I wrote a utility to delete a number of entries starting with the oldest.
There are 65 million entries so I deleted them 100,000 at a time to give some progress feedback to the user.
There is a primary key column called id
All was going fine until it got down to about 5,000,000 records remaining. Then it started taking over 1 minute to execute.
What is more, if I use pgAdmin and type the query in myself, using an id that I know is less than the minimum id, it still takes over a minute to execute!
I.e.: delete from public.inettklog where id <= 56301001
And I know the min(id) is 56301002
Here is the result of an explain analyze
Your stats are way out of date. It thinks it will find 30 million rows, but instead finds zero. ANALYZE the table.
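A minimal example, using the table name from the question; the VACUUM variant is optional, but bulk deletes do leave dead tuples behind:
ANALYZE public.inettklog;
-- or, to also clean up dead tuples left by the batched deletes:
VACUUM ANALYZE public.inettklog;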

The last updated data shows first in the Postgres select query?

I have a simple query that fetches some results from the User model.
Query 1:
SELECT users.id, users.name, company_id, updated_at
FROM "users"
WHERE (TRIM(telephone) = '8973847' AND company_id = 90)
LIMIT 20 OFFSET 0;
Result:
Then I did some update on customer 341683 and ran the same query again; this time the result order was different, and the last-updated row showed first. So is Postgres returning the last-updated row first by default, or is something else happening here?
Without an order by clause, the database is free to return rows in any order, and will usually just return them in whichever way is fastest. It stands to reason the row you recently updated will be in some cache, and thus returned first.
If you need to rely on the order of the returned rows, you need to explicitly state it, e.g.:
SELECT users.id, users.name, company_id, updated_at
FROM "users"
WHERE (TRIM(telephone) = '8973847' AND company_id = 90)
ORDER BY id -- Here!
LIMIT 20 OFFSET 0

How to disable one query in Postgres for a specified (partitioned) table

SELECT COUNT(*) as count FROM "public"."views";
This query is executed by the TablePlus client every time I open a partitioned table with many partitions, and it takes far too long to run.
How can I disable the execution of this query on this table?
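If an approximate figure would be acceptable, one common workaround (not from the original post) is to sum the planner's row estimates for the table's partitions instead of counting the rows:
SELECT SUM(c.reltuples)::bigint AS estimated_rows
FROM pg_inherits i
JOIN pg_class c ON c.oid = i.inhrelid
WHERE i.inhparent = 'public.views'::regclass;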

Postgres select query with offset for large table taking too much time to process

To process a table having 3 million rows, I am using the following query in psql:
select id, trans_id, name
from omx.customer
where user_token is null
order by id, trans_id
limit 1000 offset 200000000
It's taking more than 3 minutes to fetch the data. How can I improve the performance?
The problem you have is that, to know which 1000 records to fetch, the database actually has to fetch all 200000000 records before them just to count them.
The main strategy to combat this problem is to use a where clause instead of the offset.
If you know the previous 1000 rows (because this is some kind of iteratively used query) you can instead take the id and trans_id from the last row of that set and fetch the 1000 rows following it.
If the figure of 200000000 doesn't need to be exact and you can make a good guess of where to start then that might be an avenue to attack the problem.
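A minimal sketch of that keyset-style approach, using the columns from the question; :last_id and :last_trans_id stand for the values from the final row of the previous page:
select id, trans_id, name
from omx.customer
where user_token is null
  and (id, trans_id) > (:last_id, :last_trans_id)  -- row comparison resumes right after the previous page
order by id, trans_id
limit 1000;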