Use index to speed up query using values from different tables - postgresql

I have a table products, a table orders and a table orderProducts.
Products have a name as a PK (apple, banana, mango) and a price.
Orders have a created_at date and an id as a PK.
orderProducts connects orders and products, so it has a product and an order column. Now I would like to show all orders for a given product that happened in the last 24 hours.
I use the following query:
SELECT
    orders.id,
    orders.created_at,
    products.name,
    products.price
FROM
    orderProducts
    JOIN products ON products.name = orderProducts.product
    JOIN orders ON orders.id = orderProducts."order"
WHERE
    products.name = 'banana'
    AND orders.created_at BETWEEN NOW() - INTERVAL '24 HOURS' AND NOW()
ORDER BY
    orders.created_at;
This works, but I would like to optimize this query with an index. This index would need to:
- first be ordered by the product name, so it can be filtered,
- then by the created_at of the order in descending order, so it can select only the ones from the last 24 hours.
The problem is that, from what I have seen, indexes can only be created on a single table, without the possibility of joining another table's values into them. Since two individual indexes do not solve this problem either, I was wondering whether there is an alternative way to optimize this particular query.
Here are the table scripts:
CREATE TABLE products
(
    name text PRIMARY KEY,
    price integer
);
CREATE TABLE orders
(
    id SERIAL PRIMARY KEY,
    created_at TIMESTAMP DEFAULT NOW()
);
CREATE TABLE orderProducts
(
    product text REFERENCES products(name),
    "order" integer REFERENCES orders(id)
);

First of all, please do not put indexes everywhere - that leads to slower data-changing operations...
As proposed by @Laurenz Albe - do not guess - check.
Other than that: note that you already know the product name, and the price is repeated in every result row - so you could query it once separately. Whether in your case two queries are going to be faster than a single one is something to check.
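One way to check rather than guess is to run the actual query under EXPLAIN and compare the plans before and after adding an index (a sketch - the output depends on your data):
EXPLAIN (ANALYZE, BUFFERS)
SELECT orders.id, orders.created_at, products.name, products.price
FROM orderProducts
JOIN products ON products.name = orderProducts.product
JOIN orders ON orders.id = orderProducts."order"
WHERE products.name = 'banana'
  AND orders.created_at BETWEEN NOW() - INTERVAL '24 HOURS' AND NOW()
ORDER BY orders.created_at;
-- Compare Seq Scan vs. Index Scan / Index Only Scan nodes and the
-- actual row counts to see whether an index actually helps.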
Please read the docs. I would try this index:
create index orders_id_created_at on orders (created_at desc, id);
Normally id would go first, since it is unique, but here the system should be able to filter on both predicates - the WHERE and the JOIN. Just guessing here.
On orderProducts I would like to see indexes on both columns, though for this query only one should be needed. In practice you go from products to orders, or the other way around - both paths are possible, which is why I wrote about indexing both columns. I would use two separate indexes:
create index orderproducts_product on orderproducts (product) include ("order");
create index orderproducts_order on orderproducts ("order") include (product);
Probably that does not change much, but... the idea is to read only the index, not the table itself (an index-only scan).

These rules are important in terms of performance:
An integer index is faster than a string index, so try to make primary keys integers - joins between tables use the primary keys too.
If your WHERE clauses always use the same two fields together, create a composite index on both fields.
Foreign keys are not indexed automatically; you must create indexes on foreign-key columns yourself.
So the recommended table scripts are:
CREATE TABLE products
(
    id serial PRIMARY KEY,
    name text,
    price integer
);
CREATE UNIQUE INDEX products_name_idx ON products USING btree (name);
CREATE TABLE orders
(
    id SERIAL PRIMARY KEY,
    created_at TIMESTAMP DEFAULT NOW()
);
CREATE INDEX orders_created_at_idx ON orders USING btree (created_at);
CREATE TABLE orderProducts
(
    product_id integer REFERENCES products(id),
    order_id integer REFERENCES orders(id)
);
CREATE INDEX orderproducts_product_id_idx ON orderproducts USING btree (product_id, order_id);
---- OR ----
CREATE INDEX orderproducts_product_id ON orderproducts (product_id);
CREATE INDEX orderproducts_order_id ON orderproducts (order_id);
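With this schema, the query from the question would be rewritten along these lines (a sketch using the renamed integer-key columns):
SELECT o.id, o.created_at, p.name, p.price
FROM orderproducts op
JOIN products p ON p.id = op.product_id   -- integer join instead of text
JOIN orders o ON o.id = op.order_id
WHERE p.name = 'banana'
  AND o.created_at BETWEEN NOW() - INTERVAL '24 HOURS' AND NOW()
ORDER BY o.created_at;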

Related

Why order by multiple column caused table scan

I have a question regarding PostgreSQL ORDER BY.
In my case, I have one table with two columns (id, first_name); the PK is id, and I also set an index on column first_name.
CREATE TABLE STUDENT (
    ID UUID NOT NULL,
    FIRST_NAME VARCHAR(255) NULL,
    CONSTRAINT Student_PK PRIMARY KEY (ID)
);
CREATE INDEX INDEX_NAME ON STUDENT (FIRST_NAME);
When I execute the query below, it triggers an index scan:
explain SELECT id, first_name
FROM public.student
order by first_name asc
limit 1
offset 0
//Index Scan using index_name on student (cost=0.14..50.25 rows=140 width=532)
Also, when I order by id, it still triggers an index scan:
explain SELECT id, first_name
FROM public.student
order by id asc
limit 1
offset 0
//Index Scan using student_pk on student (cost=0.14..50.25 rows=140 width=532)
My question is: when I use ORDER BY id, first_name, why does it trigger a seq scan?
explain SELECT id, first_name
FROM public.student
order by id asc, first_name asc
limit 1
offset 0
//Seq Scan on student (cost=0.00..11.40 rows=140 width=532)
I've looked at a lot of PostgreSQL documentation, but I can't find an explanation for this. Can someone explain this phenomenon?
Thank you~
This is because you are not using PostgreSQL v13 or later.
If the planner uses the index on (id) alone, it needs to "read ahead" in the index to find any ties on id, and if it finds some, break them by re-sorting the tied rows on first_name. Doing this was only implemented in v13 (it is called "Incremental Sort"), not before.
Of course there can't be any ties on id in the first place, as it is a primary key, but the planner does not make use of this knowledge.
You explicitly created a secondary index on FIRST_NAME, and Postgres also created an index on id when the table was defined, as this column is the primary key. (Unlike SQL Server, PostgreSQL has no clustered index; the PK is backed by an ordinary unique B-tree index.)
When you use:
order by id, first_name
You are requesting a two-tier sort of your data. Neither the id index nor the FIRST_NAME index alone can satisfy this sort. In fact, using either index would require seeks back to the main table to fetch the values of the other column. As a result, Postgres chooses not to use any index at all and simply scans the table. Note that the result might change depending on the size of the table and the type of the data.
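If upgrading to v13 is not an option, one possible workaround is a composite index matching the full ORDER BY, which the planner can walk directly without a sort (a sketch - the index name is arbitrary, and whether it gets picked still depends on table size and statistics):
CREATE INDEX student_id_first_name_idx ON student (id, first_name);
-- Matches ORDER BY id, first_name exactly, so the first row can be
-- returned straight from the index without any sort step.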

Index method for a column used only for ordering

I have a table product_images with a foreign key product_id and an integer field order, to manually set the order of a product's images. Knowing that the table will be used only like this:
SELECT * FROM product_images
WHERE product_id = ?
ORDER BY "order"
-- what is the optimal index method for product_id and order?
Is that enough?:
CREATE INDEX product_images_unique_order
ON "product_images"("product_id", "order");
Yes, that should do it.
PostgreSQL might decide not to use that index, depending on how many rows you have, how many images any given product_id has, how scattered around the table the rows with the same product_id are, and how wide the rows of the product_images table are - plus many other things.
But by having that index you provide PostgreSQL with the opportunity to use it.
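You can verify with EXPLAIN, using a concrete value (42 is just a placeholder):
EXPLAIN ANALYZE
SELECT * FROM product_images
WHERE product_id = 42
ORDER BY "order";
-- An Index Scan using product_images_unique_order means the rows come
-- back pre-sorted from the index; a Seq Scan plus a Sort node means
-- PostgreSQL judged reading the table directly to be cheaper.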

Index on foreign keys

I'm just trying to best understand indexes.
On page 106 of 70-461 - Querying Microsoft SQL Server 2012,
it says that when a primary key or unique constraint is defined, SQL Server will automatically create a unique index.
But no indexes are created for foreign keys.
Therefore, to make joins more efficient, is it best to just create a non-clustered index on the foreign keys?
Not sure what part is the question.
An index is used to enforce a unique constraint.
A FK by nature does not require an index.
But if the FK has an index the query optimizer will often use it in the join.
In this query docMVEnum1.valueID is a FK with an index.
The query optimizer used that index.
Even with the index it was still the most expensive part of the query.
select docMVEnum1.sID, docEnum1.value
from docMVEnum1
join docEnum1
on docEnum1.valueID = docMVEnum1.valueID
Also by nature a FK is often used in a where clause.
Indexes are not free.
They improve SELECTs but slow down INSERTs and UPDATEs.
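For the join above, the FK index could be created like this (a sketch - the index name is arbitrary; adjust schema and table names to your own):
CREATE NONCLUSTERED INDEX IX_docMVEnum1_valueID
ON docMVEnum1 (valueID);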
No, you don't need to create an index for the foreign keys, and there is no promise that it will make joins more efficient.
The indexes behind unique and PK constraints exist to enforce uniqueness (they also help INSERT and UPDATE statements locate the rows to check).
While you are querying with a JOIN, zero or one index will be used to seek/scan each table.
Let's say that you have a couple of tables like these:
CREATE TABLE LookupTable
(
    ID int PRIMARY KEY,
    Description varchar(max)
);
CREATE TABLE MyTable
(
    ID int PRIMARY KEY,
    Description varchar(max),
    ColumnFK int REFERENCES LookupTable (ID)
);
SELECT MyTable.ID, MyTable.Description, MyTable.ColumnFK, LookupTable.Description
FROM MyTable
INNER JOIN LookupTable
    ON LookupTable.ID = MyTable.ColumnFK
WHERE MyTable.ID BETWEEN 5 AND 10000;
Most probably the optimizer will use an index scan to find all the relevant IDs in MyTable, and will then pick the ColumnFK and Description columns from those rows.
If you were thinking of adding the FK column to the unique or PK index, just evaluate what happens once you have many FKs in the same table.
Note that I intentionally included MyTable.Description in the select list and made it varchar(max), to show that such a query still has to reach the table data.
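If you do want to avoid touching the table data for this particular query, one option (a sketch - names as above) is a covering index: key on the FK, with Description carried as an included column. varchar(max) is allowed in INCLUDE, though it makes the index considerably larger:
CREATE NONCLUSTERED INDEX IX_MyTable_ColumnFK
ON MyTable (ColumnFK)
INCLUDE (Description);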

postgreSQL table design

I need to create a table (postgresql 9.1) and I am stuck. Could you possibly help?
The incoming data can assume either of the two formats:
client id(int), shop id(int), asof(date), quantity
client id(int), , asof(date), quantity
The given incoming CSV template is: {client id, shop id, shop type, shop genre, asof, quantity}
In the first case, the key is -- client id, shop id, asof
In the second case, the key is -- client id, shop type, shop genre, asof
I tried something like:
create table(
    client_id int references...,
    shop_id int references...,
    shop_type int references...,
    shop_genre varchar(30),
    asof date,
    quantity real,
    primary key( client_id, shop_id, shop_type, shop_genre, asof )
);
But then I ran into a problem. When data is of format 1, the inserts fail because of nulls in pk.
The queries within a client can be either by shop id, or by a combination of shop type and genre. There are no use cases of partial or regex matches on genre.
What would be a suitable design? Must I split this into 2 tables and then take a union of search results? Or, is it customary to put 0's and blanks for missing values and move along?
If it matters, the table is expected to be 100-500 million rows once all historic data is loaded.
Thanks.
You could try partial unique indexes, aka filtered unique indexes, aka conditional unique indexes:
http://www.postgresql.org/docs/9.2/static/indexes-partial.html
Basically, what it comes down to is that uniqueness is enforced only on rows matching a WHERE clause.
For example (of course, test for correctness and impact on performance):
CREATE TABLE client(
    pk_id SERIAL,
    client_id int,
    shop_id int,
    shop_type int,
    shop_genre varchar(30),
    asof date,
    quantity real,
    PRIMARY KEY (pk_id)
);
CREATE UNIQUE INDEX uidx1_client
ON client
USING btree
(client_id, shop_id, asof)
WHERE shop_id IS NOT NULL;
CREATE UNIQUE INDEX uidx2_client
ON client
USING btree
(client_id, shop_type, shop_genre, asof)
WHERE shop_id IS NULL;
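With these two indexes, each incoming format gets its own key (a sketch with invented values):
-- Format 1 rows (shop id present) are covered by uidx1_client:
INSERT INTO client (client_id, shop_id, asof, quantity)
VALUES (1, 10, '2013-01-01', 5.0);           -- OK
INSERT INTO client (client_id, shop_id, asof, quantity)
VALUES (1, 10, '2013-01-01', 7.0);           -- fails: duplicate key
-- Format 2 rows (no shop id) fall under uidx2_client instead:
INSERT INTO client (client_id, shop_type, shop_genre, asof, quantity)
VALUES (1, 2, 'grocery', '2013-01-01', 3.0); -- OK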
A simple solution would be to create a surrogate primary key field whose value is generated by one of two algorithms, depending on which format is passed in.
If you wanted a fully normalised solution, you would probably need to split the shop information into two separate tables and reference it from this table using outer joins.
You may also be able to use the table inheritance available in Postgres.

Do I need a primary key for my table, which has a UNIQUE (composite 4-columns), one of which can be NULL?

I have the following table (PostgreSQL 8.3) which stores prices of some products. The prices are synchronised with another database; basically most of the fields below (apart from one) are not updated by our client, but instead dropped and refreshed every once in a while to sync with another stock database:
CREATE TABLE product_pricebands (
    template_sku varchar(20) NOT NULL,
    colourid integer REFERENCES colour (colourid) ON DELETE CASCADE,
    currencyid integer NOT NULL REFERENCES currency (currencyid) ON DELETE CASCADE,
    siteid integer NOT NULL REFERENCES site (siteid) ON DELETE CASCADE,
    master_price numeric(10,2),
    my_custom_field boolean,
    UNIQUE (template_sku, siteid, currencyid, colourid)
);
On the synchronisation, I basically DELETE most of the data above except for data WHERE my_custom_field is TRUE (if it's TRUE, it means the client updated this field via their CMS and therefore this record should not be dropped). I then INSERT 100s to 1000s of rows into the table, and UPDATE where the INSERT fails (i.e. where the combination of (template_sku, siteid, currencyid, colourid) already exists).
My question is - what best practice should be applied here to create a primary key? Is a primary key even needed? I wanted to make the primary key = (template_sku, siteid, currencyid, colourid) - but the colourid field can be NULL, and using it in a composite primary key is not possible.
From what I read on other forum posts, I think I have done the above correctly, and just need to clarify:
1) Should I use a "serial" primary key just in case I ever need one? At the moment I don't, and don't think I ever will, because the important data in the table is the price and my custom field, only identified by the (template_sku, siteid, currencyid, colourid) combination.
2) Since (template_sku, siteid, currencyid, colourid) is the combination that I will use to query a product's price, should I add any further indexing to my columns, such as the "template_sku" which is a varchar? Or is the UNIQUE constraint a good index already for my SELECTs?
Should I use a "serial" primary key just in case I ever need one?
You can easily add a serial column later if you need one:
ALTER TABLE product_pricebands ADD COLUMN id serial;
The column will be filled with unique values automatically. You can even make it the primary key in the same statement (if no primary key is defined yet):
ALTER TABLE product_pricebands ADD COLUMN id serial PRIMARY KEY;
If you reference the table from other tables I would advise to use such a surrogate primary key, because it is rather unwieldy to link by four columns. It is also slower in SELECTs with JOINs.
Either way, you should define a primary key. The UNIQUE index including a nullable column is not a full replacement. It allows duplicates for combinations including a NULL value, because two NULL values are never considered the same. This can lead to trouble.
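For example (invented values - both inserts succeed even though the four columns look identical, assuming the referenced currency and site rows exist):
INSERT INTO product_pricebands (template_sku, colourid, currencyid, siteid)
VALUES ('SKU-1', NULL, 1, 1);
INSERT INTO product_pricebands (template_sku, colourid, currencyid, siteid)
VALUES ('SKU-1', NULL, 1, 1);  -- no duplicate-key error: NULL is never equal to NULL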
As the colourid field can be NULL, you might want to create two unique indexes. The combination (template_sku, siteid, currencyid, colourid) cannot be a PRIMARY KEY because of the nullable colourid, but you can create a UNIQUE constraint like you already have (implementing an index automatically):
ALTER TABLE product_pricebands ADD CONSTRAINT product_pricebands_uni_idx
UNIQUE (template_sku, siteid, currencyid, colourid);
This index perfectly covers the queries you mention in 2).
Create a partial unique index in addition if you want to avoid "duplicates" with (colourid IS NULL):
CREATE UNIQUE INDEX product_pricebands_uni_null_idx
ON product_pricebands (template_sku, siteid, currencyid)
WHERE colourid IS NULL;
To cover all bases: I wrote more about that technique in a related answer on dba.SE.
The simple alternative to the above is to make colourid NOT NULL and create a primary key instead of the above product_pricebands_uni_idx.
Also, as you basically DELETE most of the data for your refill operation, it will be faster to drop the indexes that are not needed during the refill and recreate them afterwards. It is faster by an order of magnitude to build an index from scratch than to add all rows incrementally.
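A sketch of that pattern, using the partial index from above (the UNIQUE constraint itself stays in place, because your INSERT-else-UPDATE logic depends on it):
DROP INDEX product_pricebands_uni_null_idx;
-- ... DELETE obsolete rows, then bulk INSERT / UPDATE the refreshed data ...
CREATE UNIQUE INDEX product_pricebands_uni_null_idx
ON product_pricebands (template_sku, siteid, currencyid)
WHERE colourid IS NULL;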
How do you know which indexes are used (needed)?
Test your queries with EXPLAIN ANALYZE.
Or use the built-in statistics. pgAdmin displays statistics in a separate tab for the selected object.
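The same numbers are available from the built-in statistics views directly, for example:
SELECT indexrelname, idx_scan, idx_tup_read
FROM pg_stat_user_indexes
WHERE relname = 'product_pricebands';
-- idx_scan staying at 0 under a representative workload suggests
-- the index is not being used.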
It may also be faster to select the few rows with my_custom_field = TRUE into a temporary table, TRUNCATE the base table, and re-INSERT the survivors. That depends on whether you have foreign keys defined. It would look like this:
CREATE TEMP TABLE pr_tmp AS
SELECT * FROM product_pricebands WHERE my_custom_field;
TRUNCATE product_pricebands;
INSERT INTO product_pricebands SELECT * FROM pr_tmp;
This avoids a lot of vacuuming.