Hello and thank you for reading my question!
Currently, we use PostgreSQL v.10 on 3 nodes through stolon (https://github.com/sorintlab/stolon)
We have 3 tables (I want to make my question simple):
Invoice (150 000 000 records)
User (35 000 000 records)
User_Address (20 000 000 records)
The main query looks like this (the original query is large, uses a temporary table, and has a lot of WHERE conditions, but this sample demonstrates my problem):
select
i.*
from invoice as i
inner join get_similar_name('Jon') as s on i.name ilike s.name
left join user_address as a on i.user_id = a.user_id
where
    a.state = 'OH'
    and i.last_name = 'Smith'
    and i.date between '2016-01-01'::date and '2018-12-31'::date;
The function get_similar_name returns similar names (for example, get_similar_name('Jon') returns John, Jonny, Jonathan, etc.), on average 200-1000 names. I must use the function. :\
The query takes a long time to execute, around 30-120 seconds, but if I exclude the function get_similar_name from the query, the execution time drops to under 1 second.
I have already tuned PostgreSQL and the server performs reasonably well. I have also created indexes, and my query does not use sequential scans.
Partitioning the tables is not an option for us: they have a lot of columns, and there is no single column we could sensibly partition by.
I am thinking about migrating my warehouse to MongoDB.
My questions are:
Am I right to consider moving to MongoDB?
Will performance increase if I move the warehouse from PostgreSQL to 20-40 nodes under MongoDB?
Is it possible to implement the function get_similar_name, or a similar solution, in MongoDB? If yes, how?
Do you have good experience with full-text search in MongoDB?
Is MongoDB a reasonable choice for this in production?
Can you please suggest a direction ("google vector") toward what you consider the right solution?
I don't know if moving to MongoDB will solve a text search problem, but Postgres has excellent features for this, such as full-text search (tsvector) and trigram matching. Have you tried any of these?
https://www.compose.com/articles/mastering-postgresql-tools-full-text-search-and-phrase-search/
https://www.postgresql.org/docs/9.6/pgtrgm.html
On my previous project, we used pg_trgm and were pretty happy with its performance.
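For illustration, here is a minimal sketch of the trigram approach, assuming invoice.name is a plain text column (the index name and the use of the % similarity operator are my suggestions, not from the original question):
-- enable the trigram extension and index the searched column
create extension if not exists pg_trgm;
create index invoice_name_trgm_idx on invoice using gin (name gin_trgm_ops);

-- fuzzy lookup on the indexed column; the % operator matches names whose
-- trigram similarity exceeds pg_trgm.similarity_threshold (default 0.3)
select i.*
from invoice as i
where i.name % 'Jon'
  and i.last_name = 'Smith'
  and i.date between '2016-01-01'::date and '2018-12-31'::date;
The same GIN index can also accelerate ILIKE lookups, so even the existing join against get_similar_name() may benefit from it.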
Related
I'm trying to locate the cause of a slow query that hits 3 tables, with record counts ranging from a few hundred thousand to several million:
tango - 6166101
kilo_golf - 822805
three_romeo - 535782
Version
PostgreSQL 11.10
Current query
select count(*) as aggregate
from "tango"
where "lima" = juliet
and not exists(select 1
from "three_romeo" quebec_seven_oscar
where quebec_seven_oscar.six_two = tango.six_two
and quebec_seven_oscar."romeo" >= six_seven
and quebec_seven_oscar."three_seven" in
('partial_survey', 'survey_completed', 'wrong_number', 'moved'))
and ("mike" <= '2021-02-03 13:26:22' or "mike" is null)
and not exists(select 1
from "kilo_golf" as "delta"
where "delta"."to" = tango.six_two
and "two" = november
and "delta"."romeo" >= '2021-02-05 13:49:15')
and not exists(select 1
from "three_romeo" as "four"
where "four".foxtrot = tango.quebec_seven_victor
and "four"."three_seven" in ('deceased', 'block_calls', 'block_all'))
and "tango"."yankee" is null;
This is the analysis of the query in its current state - https://explain.depesz.com/s/Di51
It feels like the problematic area is the tango table:
tango.lima equals 'juliet' in the majority of records (low cardinality); we don't currently have an index on this
The long filter makes me wonder: should I create some sort of composite index?
After reading another post (https://stackoverflow.com/a/50148594/682754), I tried removing the or "mike" is null condition, and this helped quite a lot:
https://explain.depesz.com/s/XgmB
Should I try to remove the not exists clauses in favour of joins?
Thanks
I don't think that using explicit joins will help you, since PostgreSQL converts NOT EXISTS into an anti-join anyway.
But you spotted the problem: it is the OR. I would recommend that you use a dynamic query: add the condition only if mike is not NULL, rather than having a static query with OR.
You are counting about 6 million rows, and that will take some time. The reason that removing or "mike" is null helps so much is that the query no longer needs to count the rows where mike is null, which is the vast majority of them.
But this is of no use to you if you actually do need to count those rows. So, do you? I'm having a hard time picturing a situation in which you need an exact count of 6 million rows often enough that waiting 4 seconds for it is a problem.
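To illustrate why the OR is expensive (this rewrite is my own sketch, not part of the original answer): the two branches of the OR are mutually exclusive, so the count can also be computed as the sum of two simpler counts, each of which the planner can optimise independently:
select
  ( select count(*)
    from "tango"
    where "lima" = juliet
      and "mike" <= '2021-02-03 13:26:22'
      and "tango"."yankee" is null
      -- ... plus the three not exists conditions from the original query
  )
  +
  ( select count(*)
    from "tango"
    where "lima" = juliet
      and "mike" is null
      and "tango"."yankee" is null
      -- ... plus the three not exists conditions from the original query
  ) as aggregate;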
Question
I would like to know: How can I rewrite/alter my search query/strategy to get an acceptable performance for my end users?
The search
I'm implementing a search for our users; they are given the ability to search for candidates on our system based on:
A professional group they fall into,
A location + radius,
A full text search.
The query
select v.id
from (
select
c.id,
c.ts_description,
c.latitude,
c.longitude,
g.group
from entities.candidates c
join entities.candidates_connections cc on cc.candidates_id = c.id
join system.groups g on cc.systems_id = g.id
) v
-- Group selection
where v.group = 'medical'
-- Location + radius
and earth_distance(ll_to_earth(v.latitude, v.longitude), ll_to_earth(50.87050439999999, -1.2191283)) < 48270
-- Full text search
and v.ts_description @@ to_tsquery('simple', 'nurse | doctor')
;
Data size & benchmarks
I am working with 1.7 million records
Here are the 3 conditions in order of impact, each benchmarked in isolation:
Group clause: 3s & reduces to 700k records
Location clause: 8s & reduces to 54k records
Full text clause: 60s+ & reduces to 10k records
When combined they seem to take 71s, which is the full cost of the 3 clauses in isolation. My expectation was that when all 3 clauses were put together they would work sequentially, i.e. each on the subset of data left by the previous clause, so the timing should drop dramatically - but this has not happened.
What I've tried
All join conditions & where clauses are indexed
Notably the ts_description index (GIN) is 2GB
lat/lng is indexed with ll_to_earth() to reduce the impact inline (a sketch of these indexes is shown after this list)
I nested each where clause into a different subquery in order
Changed the order of all clauses & subqueries
Increased the shared_buffers size to increase the potential cache hits
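For reference, a minimal sketch of the indexes described above, assuming ts_description is a tsvector column and the cube/earthdistance extensions are installed (the index names are my own):
-- gin index backing the full text clause
create index candidates_ts_description_idx
  on entities.candidates using gin (ts_description);

-- functional gist index backing the radius clause
create index candidates_ll_to_earth_idx
  on entities.candidates using gist (ll_to_earth(latitude, longitude));
Note that a bare earth_distance(...) < 48270 comparison cannot use the GiST index by itself; the usual pattern is to add an indexable earth_box(ll_to_earth(50.87050439999999, -1.2191283), 48270) @> ll_to_earth(latitude, longitude) condition and keep earth_distance() as the exact filter.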
It seems you do not need the subquery, and it is also good practice to filter on numeric fields. So, instead of filtering with where v.group = 'medical', for example, create a dictionary table and filter with where v.group = 1:
select distinct c.id
from entities.candidates c
join entities.candidates_connections cc on cc.candidates_id = c.id
join system.groups g on cc.systems_id = g.id
where g."group" = 1
and earth_distance(ll_to_earth(c.latitude, c.longitude), ll_to_earth(50.87050439999999, -1.2191283)) < 48270
and c.ts_description @@ to_tsquery('simple', 'nurse | doctor')
Also, use EXPLAIN ANALYZE to inspect and check your execution plan. These quick tips should help you improve it.
There were some best practices that I had not considered; I have subsequently implemented them and gained a substantial performance increase:
tsvector Index Size Reduction
I was storing up to 25,000 characters in the tsvector, which meant that more complicated full text search queries simply had an immense amount of work to do. I reduced this to 10,000 characters, which has made a big difference, and for my use case this is an acceptable trade-off.
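As a rough illustration of the truncation step (the source column name description and the one-off update are my assumptions, not from the original post):
-- rebuild the search vector from only the first 10,000 characters of the source text
update entities.candidates
set ts_description = to_tsvector('simple', left(description, 10000));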
Create a Materialised View
I created a materialised view that contains the join; this offloads a little of the work. Additionally, I built my indexes on it and run a concurrent refresh on a 2-hour interval. This gives me a pretty stable table to work with.
Even though my search yields 10k records, I paginate on the front-end and only ever bring back up to 100 results per screen, which allows me to join back onto the original table for only the 100 records I'm going to send back.
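A minimal sketch of this setup (the view and index names are my assumptions, not from the original post); note that refresh materialized view concurrently requires a unique index on the view:
create materialized view entities.candidates_search as
select c.id, c.ts_description, c.latitude, c.longitude, g."group"
from entities.candidates c
join entities.candidates_connections cc on cc.candidates_id = c.id
join system.groups g on cc.systems_id = g.id;

-- assumes each candidate appears at most once per group
create unique index candidates_search_uidx on entities.candidates_search (id, "group");
create index candidates_search_ts_idx on entities.candidates_search using gin (ts_description);

-- run every 2 hours, e.g. from cron
refresh materialized view concurrently entities.candidates_search;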
Increase RAM & utilise pg_prewarm
I increased the server RAM to give me enough space to hold my materialised view in memory, then ran pg_prewarm on it. Keeping the view in memory yielded the biggest performance increase for me, bringing a 2-minute query down to 3s.
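For completeness, a sketch of the prewarm step (the view name follows the sketch above and is my assumption):
-- pg_prewarm ships with postgresql as a contrib extension
create extension if not exists pg_prewarm;

-- load the materialised view into shared buffers
select pg_prewarm('entities.candidates_search');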
We have around 90 million rows in a new PostgreSQL table in an RDS instance. It contains 2 numbers, start_num and end_num (bigint, mostly finance related), and details related to those numbers. The PK is on (start_num, end_num) and the table is CLUSTERed on it. The query will always be a range query: the input is a number, and the output is the range in which this number falls, along with the details. For example, there is a row with start_num = 112233443322 and end_num = 112233543322. The input comes in as 112233443645, so the row containing (112233443322, 112233543322) needs to be returned.
select start_num, end_num from ipinfo.ipv4 where input_value between start_num and end_num;
This always results in a seq scan and the PK is not used. I have tried creating separate indexes on start_num and end_num desc, but that did not change the time much. We are looking for a response time of less than 300 ms. Now I am wondering whether that is even possible in PostgreSQL for range queries on large data sets, or whether this is due to PostgreSQL being on AWS RDS.
Looking forward to some advice on steps to improve the performance.
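One commonly suggested approach for this kind of containment lookup (my own sketch, not from the original post) is to store the interval as a range type and index it with GiST, so the @> containment operator can use the index instead of a sequential scan:
-- hypothetical range column derived from start_num / end_num
alter table ipinfo.ipv4 add column num_range int8range;
update ipinfo.ipv4 set num_range = int8range(start_num, end_num, '[]');
create index ipv4_num_range_idx on ipinfo.ipv4 using gist (num_range);

-- containment query that can use the gist index
select start_num, end_num
from ipinfo.ipv4
where num_range @> 112233443645::bigint;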
Note that the PostgreSQL website mentions a limit on the number of columns of between 250 and 1600, depending on column types.
Scenario:
Say I have data in 17 tables, each table having around 100 columns, all joinable through primary keys. Would it be okay to select all these columns in a single select statement? The query would be pretty complex but can be programmatically generated. The reason for doing this is to get denormalised data to populate a web page. Please do not ask why, though :)
Quite obviously, if I do create table table1 as (<the complex select statement>), I will hit the limit mentioned on the website. But do plain select queries also face the same restriction?
I could probably find this out by doing the exercise myself. In the next few days I probably will. However, if someone has an idea about this and the problems I might face by doing a single query, please share the knowledge.
I can't find definitive documentation to back this up, but I have received the following error using JDBC on PostgreSQL 9.1 before:
org.postgresql.util.PSQLException: ERROR: target lists can have at most 1664 entries
As I say though, I can't find the documentation for that, so it may vary by release.
I've found the confirmation. The maximum is 1664.
This is one of the metrics available for confirmation in the INFORMATION_SCHEMA.SQL_SIZING table:
SELECT * FROM INFORMATION_SCHEMA.SQL_SIZING
WHERE SIZING_NAME = 'MAXIMUM COLUMNS IN SELECT';
For example, I want to retrieve all data from the citizen table, which contains about 18K rows.
String sqlResult = "SELECT * FROM CITIZEN";
Query query = getEntityManager().createNativeQuery(sqlResult);
query.setFirstResult(searchFrom);
query.setMaxResults(searchCount); // searchCount is 20
List<Object[]> listStayCit = query.getResultList();
Everything was fine until the "searchFrom" offset became large (17K or so). For example, it took 3-4 minutes to get 20 rows (17,000 to 17,020). So is there any better way to make it faster, without tuning the DB?
P.S.: Sorry for my bad English.
You could use batch queries.
A good article explaining a solution to your problem is available here:
http://java-persistence-performance.blogspot.in/2010/08/batch-fetching-optimizing-object-graph.html
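Separately, a common way to avoid the cost of a large OFFSET at the SQL level is keyset pagination (my own sketch, not from the linked article; it assumes CITIZEN has an indexed id column): remember the last id of the previous page and continue from there, so the database never has to skip 17,000 rows.
-- first page
select * from citizen order by id limit 20;

-- next page: :last_seen_id is the id of the last row from the previous page
select * from citizen where id > :last_seen_id order by id limit 20;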