PostgreSQL select count(*) takes too long

I have a table in my PostgreSQL database. The table has about 9,100,000 rows. When I execute the query select count(*) from table, the execution time is about 1.5 minutes. Is this normal? And what can I do to decrease this time?

If you want an estimate of the row count, you can use count_estimate. It is much faster.
https://wiki.postgresql.org/wiki/Count_estimate
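For example (a minimal sketch based on the approach that wiki page describes; 'mytable' is a placeholder name), the planner's row estimate can be read from pg_class:
-- approximate row count from planner statistics; kept current by autovacuum / ANALYZE
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE oid = 'mytable'::regclass;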
Another workaround is to maintain a statistics field and increment it every time a new row is added; a sketch of this approach follows below.
Also please read https://www.citusdata.com/blog/2016/10/12/count-performance/
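A minimal sketch of the statistics-field workaround mentioned above (the table, function and trigger names are made up for the example; AFTER INSERT ... EXECUTE FUNCTION assumes PostgreSQL 11+, older versions use EXECUTE PROCEDURE):
-- one row per counted table; seed it once with the current count
CREATE TABLE row_counts (table_name text PRIMARY KEY, n bigint NOT NULL);
INSERT INTO row_counts VALUES ('mytable', (SELECT count(*) FROM mytable));

-- bump the counter on every insert (a similar DELETE trigger would decrement it)
CREATE FUNCTION bump_row_count() RETURNS trigger AS $$
BEGIN
  UPDATE row_counts SET n = n + 1 WHERE table_name = 'mytable';
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_count AFTER INSERT ON mytable
FOR EACH ROW EXECUTE FUNCTION bump_row_count();

-- reading the count is now a single-row lookup
SELECT n FROM row_counts WHERE table_name = 'mytable';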

Related

PostgreSQL - Slow-running query for a simple select statement attached to TimescaleDB

I have an append-only table with more than ~80M records attached to TimescaleDB; records are inserted into the table once a minute. There is also an index on a non-unique column and the start time: (ds_id, start_time).
When I try to run the simple query
select * from observation where ds_id in (27525, 27567, 28787, 27099)
the query itself takes longer than 1 minute to return results.
I also tried analyzing the table; as it is append-only, there is no scope for vacuuming it.
So I am confused about why this simple select query takes so much time. I suspect it is slow because of the huge number of records.
Please help me understand the issue and how to fix it.
query plan: https://explain.depesz.com/s/M8H7
Thanks in advance.
Note: ds_id (a foreign key) and start_time (insertion time) are the columns used to fetch results. Also, I am sorry for not providing the table structure and details, as they are confidential. :(

Frequently used queries in Postgres

I have a simple update query, shown below, which is used frequently.
UPDATE table_name SET column_1 = ?, column_2 = ? WHERE id = ?;
The update query executes every time a customer has a chat with support.
I have tried adding indexes on the two columns used in the query to reduce the time it takes.
I don't see much difference. Is it correct to use indexes here, or is there another approach to reduce the cost of this frequent query?
There are a couple of things you can do to make this faster:
an index on id
no index on column_1 or column_2, and a fillfactor under 100 on the table
Then you can get HOT updates, which are a considerable performance gain, since the indexes don't have to be updated.
use prepared statements to save on planning time
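A rough sketch of those suggestions, reusing the names from the question (the parameter types are assumptions):
-- leave free space in each page so updated rows can stay on the same page (HOT);
-- this only affects pages written from now on, existing data is not rewritten
ALTER TABLE table_name SET (fillfactor = 90);

-- prepared statement: planned once per session, executed many times
PREPARE chat_update (text, text, bigint) AS
  UPDATE table_name SET column_1 = $1, column_2 = $2 WHERE id = $3;

EXECUTE chat_update('new value 1', 'new value 2', 42);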

Selecting 10000 records takes too long in PostgreSQL

My table contains 1 billion records. It is also partitioned by month. Id and datetime form the primary key of the table. When I run
select col1,col2,..col8
from mytable t
inner join cte on t.Id=cte.id and dtime>'2020-01-01' and dtime<'2020-10-01'
It uses an index scan, but takes more than 5 minutes to return the rows.
Please advise.
Note: I have set work_mem to 1GB. The cte results come back within 3 seconds.
Well, that's the nature of a join; it is generally known to be a time-consuming operation.
First of all, I recommend using IN rather than JOIN. Of course they have different semantics, but in some cases you can technically use them interchangeably. Check this question out.
Secondly, in terms of relational algebra, whenever you use a join each row of mytable is combined with each row of the second table, the DBMS has to build a huge intermediate result, and it finally ignores the unsuitable rows. Undoubtedly all these steps take a lot of time. Before using the join operation, it is better to filter your tables (for example, mytable by date) to make them smaller, and then join, as sketched below.
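A rough sketch of that suggestion with the names from the question, assuming cte is defined as in the original query (whether it actually beats the join depends on the plan):
-- filter mytable by the date range first, then keep only ids produced by the CTE
SELECT col1, col2, col8              -- plus the remaining columns from the question
FROM mytable t
WHERE t.dtime > '2020-01-01'
  AND t.dtime < '2020-10-01'
  AND t.id IN (SELECT id FROM cte);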

How can I speed up a Postgres query, in which I want to query all entries within a date range

So I'm doing this query
select * from table where time>'2019-01-28 04:13:36.790000' and time<'2019-01-28 04:13:46.790000';
It used to be very fast, but as the table grew it now takes several minutes to complete. I'm not exactly sure how many entries are in the table; I'm guessing tens of millions. I just want to be able to query entries in a given time interval. Is there anything I can do to the table to make this quicker?
It's hard to say for sure without more context, but if you don't already have an index on time, consider adding one.
CREATE INDEX idx_table_time ON table (time ASC)

Select * from table_name is running slow

The table contains around 700,000 rows. Is there any way to make the query run faster?
This table is stored on a server.
I have tried running the query selecting only specific columns.
If select * from table_name is unusually slow, check for these things:
Network speed. How large is the data and how fast is your network? For large queries you may want to think about your data in bytes instead of rows. Run select bytes/1024/1024/1024 gb from dba_segments where segment_name = 'TABLE_NAME'; and compare that with your network speed.
Row fetch size. If the application or IDE is fetching one-row-at-a-time, each row has a large overhead with network lag. You may need to increase that setting.
Empty segment. In a few weird cases the table's segment size can increase and never shrink. For example, if the table used to have billions of rows and they were deleted but not truncated, the space would not be released. Then a select * from table_name may need to read a lot of empty extents to get to the real data. If the GB size from the above query seems too large, run alter table table_name move; to rebuild the table and possibly save space.
Recursive query. A query that simple almost couldn't have a bad execution plan. It's possible, but rare, for a recursive query to have a bad execution plan. While the query is running, look at select * from gv$sql where users_executing > 0;. There might be a data dictionary query that's really slow and needs to be tuned.