Why:
A colleague and I were discussing composite keys, and the fact that MySQL requires the indexed columns to appear sequentially, without gaps, in the WHERE clause for the planner to use the index. I wanted to show how Postgres uses the second column of a composite key for a scan. And I failed! It was using the first column, but not the second! Totally confused, I played around and found that it starts using the second column when the index is roughly 6.5 times smaller than the table:
populate:
create table so2 (a int not null,b int not null, c text, d int not null);
with l as (select generate_series(999,999+76,1) r)
insert into so2
select l.r,l.r+1,concat('l',lpad('o',l.r,'o'),'ng'),1 from l;
alter table so2 ADD CONSTRAINT so2pk PRIMARY KEY (a,b);
analyze so2;
the plan that confused me:
t=# explain analyze select 42 from so2 where a=1004;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------
Index Only Scan using so2pk on so2 (cost=0.14..8.16 rows=1 width=0) (actual time=0.013..0.013 rows=1 loops=1)
Index Cond: (a = 1004)
Heap Fetches: 1
Planning time: 0.090 ms
Execution time: 0.026 ms
(5 rows)
t=# explain analyze select 42 from so2 where b=1004;
QUERY PLAN
----------------------------------------------------------------------------------------------
Seq Scan on so2 (cost=0.00..11.96 rows=1 width=0) (actual time=0.006..0.028 rows=1 loops=1)
Filter: (b = 1004)
Rows Removed by Filter: 76
Planning time: 0.045 ms
Execution time: 0.036 ms
(5 rows)
Then I drop so2 and rerun the prepare part with 999+77 instead of 999+76, and the plan for column b changes:
t=# explain analyze select 42 from so2 where b=1004;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------
Index Only Scan using so2pk on so2 (cost=0.14..12.74 rows=1 width=0) (actual time=0.004..0.004 rows=1 loops=1)
Index Cond: (b = 1004)
Heap Fetches: 1
Planning time: 0.038 ms
Execution time: 0.013 ms
(5 rows)
The only difference I noticed is the number of pages the relation takes:
size with the confusing plan:
t=# \dt+ so2
List of relations
Schema | Name | Type | Owner | Size | Description
--------+------+-------+-------+--------+-------------
public | so2 | table | vao | 120 kB |
(1 row)
size with the expected plan:
t=# \dt+ so2
List of relations
Schema | Name | Type | Owner | Size | Description
--------+------+-------+-------+--------+-------------
public | so2 | table | vao | 128 kB |
(1 row)
The index is the same in both cases:
t=# \di+ so2pk
List of relations
Schema | Name | Type | Owner | Table | Size | Description
--------+-------+-------+-------+-------+-------+-------------
public | so2pk | index | vao | so2 | 16 kB |
(1 row)
Settings that could affect the plan are at their defaults:
select name,setting
from pg_settings
where source != 'default' and name in (
'enable_bitmapscan',
'enable_hashagg',
'enable_hashjoin',
'enable_indexscan',
'enable_indexonlyscan',
'enable_material',
'enable_mergejoin',
'enable_nestloop',
'enable_seqscan',
'enable_sort',
'enable_tidscan',
'seq_page_cost',
'random_page_cost',
'cpu_tuple_cost',
'cpu_index_tuple_cost',
'cpu_operator_cost',
'effective_cache_size',
'geqo',
'geqo_threshold',
'geqo_effort',
'geqo_pool_size',
'geqo_generations',
'geqo_selection_bias',
'geqo_seed',
'join_collapse_limit',
'from_collapse_limit',
'cursor_tuple_fraction',
'constraint_exclusion',
'default_statistics_target'
) order by name
;
name | setting
------+---------
(0 rows)
Tried in several versions (9.3.10, 9.5.4) with the same behaviour.
Now, excuse me for such a long post! And the question:
16 kB is smaller than 120 kB, so why would the planner choose a Seq Scan?
UPDATED to reflect e4c5's remarks
Also:
For a second I thought it could be because the text column is kept in extended storage, so the table itself (all columns but the text one) takes the same number of pages as the index. So I altered the column's storage to main and to plain, but there was no effect...
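For reference, that storage change would look roughly like this (c is the long text column; note that SET STORAGE only affects newly stored rows):
ALTER TABLE so2 ALTER COLUMN c SET STORAGE PLAIN;  -- and likewise SET STORAGE MAIN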
I think the key difference here is the fact that MySQL can only use one index per table, whereas PostgreSQL does not have that limitation; more than one index can be used. That is perhaps why the documentation says:
Multicolumn indexes should be used sparingly. In most situations, an
index on a single column is sufficient and saves space and time.
Indexes with more than three columns are unlikely to be helpful unless
the usage of the table is extremely stylized.
A few other pointers
1) You have far too little data to reach any conclusion. Yes, the query plan does depend a great deal on size, that is, the number of rows in the table. The first stored function creates only 62 rows, and for that you don't need an index.
2) The value you are searching for, a=4, is not in the table.
3) The value for b is always one, so an index on that column would be useless. Even a composite index wouldn't give it very high cardinality; i.e. a composite index on (a,b) is exactly the same as an index on a alone.
Update
There is no line-in-the-sand threshold, in number of rows or in kB, at which an index becomes effective. The query planner decides whether or not to use an index, and which index to use, based on several configuration factors described at https://www.postgresql.org/docs/9.2/static/runtime-config-query.html
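To test whether the planner considers the composite index competitive at all, a common debugging trick (a diagnostic sketch for the current session only, not a setting to keep) is to discourage sequential scans and re-run EXPLAIN:
SET enable_seqscan = off;   -- session-local; makes seq scans look very expensive to the planner
EXPLAIN ANALYZE SELECT 42 FROM so2 WHERE b = 1004;
RESET enable_seqscan;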
Related
my_table includes:
user_id | character varying | | not null |
epic_id | text | | not null |
"IDX_user" UNIQUE, btree ("user_id") WHERE status < 101
"IDX_epic" UNIQUE, btree ("epic_id") WHERE status < 101
Problematic Query
EXPLAIN ANALYZE SELECT * FROM my_table WHERE "epic_id" = 'asdf' and "status" < 101 LIMIT 1;
Limit (cost=0.28..8.29 rows=1 width=276) (actual time=0.230..0.231 rows=0 loops=1)
-> Index Scan using "IDX_user" on my_table (cost=0.28..8.29 rows=1 width=276) (actual time=0.229..0.230 rows=0 loops=1)
Filter: ("epic_id" = 'asdf'::text)
Rows Removed by Filter: 273
Planning Time: 0.122 ms
Execution Time: 0.248 ms
There's a perfectly good IDX_epic. Why are we using IDX_user, traversing 100s of rows, and potentially causing annoying locks when this is used within a transaction?
Fun Tidbits
SET random_page_cost=1, which other Stack Overflow posts have recommended, does not help
On local, it uses the correct index! There are only 90 rows on local with status < 101
When doing an inner join of "epic_id" = table.random_column, the plan does use IDX_epic.
"user_id" and "epic_id" are different types, but from what I've read the difference between text and character varying is near 0.
According to pg_stat_all_indexes, IDX_epic has an idx_scan of 9, which confirms it's not being used except for my tests.
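For reference, that idx_scan figure comes from a query along these lines (a sketch using the table name from above):
SELECT indexrelname, idx_scan
FROM pg_stat_all_indexes
WHERE relname = 'my_table';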
Why is epicId inside quotes but status is not?
Also you should use LIKE as in epicId LIKE 'asdf' to compare column values with strings.
VACUUM ANALYZE solved the issue; ANALYZE alone likely would have been enough.
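That fix amounts to nothing more than refreshing the table's statistics (and, via VACUUM, its visibility map):
VACUUM ANALYZE my_table;
-- or, if only stale planner statistics were the problem:
ANALYZE my_table;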
This is a follow-up question I posted a few days ago.
In PostgreSQL what does hashed subplan mean?
Below was my question.
I want to know how the optimizer rewrote the query and how to read the execution plan in PostgreSQL. Here is the sample code.
DROP TABLE ords;
CREATE TABLE ords (
ORD_ID INT NOT NULL,
ORD_PROD_ID VARCHAR(2) NOT NULL,
ETC_CONTENT VARCHAR(100));
ALTER TABLE ords ADD CONSTRAINT ords_PK PRIMARY KEY(ORD_ID);
CREATE INDEX ords_X01 ON ords(ORD_PROD_ID);
INSERT INTO ords
SELECT i
,chr(64+case when i <= 10 then i else 26 end)
,rpad('x',100,'x')
FROM generate_series(1,10000) a(i);
SELECT COUNT(*) FROM ords WHERE ORD_PROD_ID IN ('A','B','C');
DROP TABLE delivery;
CREATE TABLE delivery (
ORD_ID INT NOT NULL,
VEHICLE_ID VARCHAR(2) NOT NULL,
ETC_REMARKS VARCHAR(100));
ALTER TABLE delivery ADD CONSTRAINT delivery_PK primary key (ORD_ID, VEHICLE_ID);
CREATE INDEX delivery_X01 ON delivery(VEHICLE_ID);
INSERT INTO delivery
SELECT i
, chr(88 + case when i <= 10 then mod(i,2) else 2 end)
, rpad('x',100,'x')
FROM generate_series(1,10000) a(i);
analyze ords;
analyze delivery;
This is the SQL I am interested in.
SELECT *
FROM ords a
WHERE ( EXISTS (SELECT 1
FROM delivery b
WHERE a.ORD_ID = b.ORD_ID
AND b.VEHICLE_ID IN ('X','Y')
)
OR a.ORD_PROD_ID IN ('A','B','C')
);
Here is the execution plan
| Seq Scan on portal.ords a (actual time=0.038..2.027 rows=10 loops=1) |
| Output: a.ord_id, a.ord_prod_id, a.etc_content |
| Filter: ((alternatives: SubPlan 1 or hashed SubPlan 2) OR ((a.ord_prod_id)::text = ANY ('{A,B,C}'::text[]))) |
| Rows Removed by Filter: 9990 |
| Buffers: shared hit=181 |
| SubPlan 1 |
| -> Index Only Scan using delivery_pk on portal.delivery b (never executed) |
| Index Cond: (b.ord_id = a.ord_id) |
| Filter: ((b.vehicle_id)::text = ANY ('{X,Y}'::text[])) |
| Heap Fetches: 0 |
| SubPlan 2 |
| -> Index Scan using delivery_x01 on portal.delivery b_1 (actual time=0.023..0.025 rows=10 loops=1) |
| Output: b_1.ord_id |
| Index Cond: ((b_1.vehicle_id)::text = ANY ('{X,Y}'::text[])) |
| Buffers: shared hit=8 |
| Planning: |
| Buffers: shared hit=78 |
| Planning Time: 0.302 ms |
| Execution Time: 2.121 ms
I don't know how the optimizer transformed the SQL. What is the final SQL the optimizer rewrote it to? I have only one EXISTS sub-query in the SQL above, so why are there two sub-plans? What does "hashed SubPlan 2" mean? I would appreciate it if anyone could share a little knowledge with me.
Below is Laurenz Albe's answer.
You have the misconception that the optimizer rewrites the SQL statement. That is not the case. Rewriting the query is the job of the query rewriter, which for example replaces views with their definition. The optimizer comes up with a sequence of execution steps to compute the result. It produces a plan, not an SQL statement.
The optimizer plans two alternatives: either execute subplan 1 for each row found, or execute subplan 2 once (note that it is independent of a), build a hash table from the result and probe that hash for each row found in a.
At execution time, PostgreSQL decides to use the latter strategy, that is why subplan 1 is never executed.
Laurenz's answer enlightened me.
But I wondered what the final query produced by the query rewriter would be.
Here is the rewritten query I thought the query rewriter would produce.
Am I right?
What do you, readers of this question, think that the final rewritten query would be?
(
SELECT *
FROM ords a
WHERE EXISTS (SELECT 1
FROM delivery b
WHERE a.ORD_ID = B.ORD_ID
AND b.VEHICLE_ID IN ('X','Y')
OFFSET 0 --> the optimizer prevented subquery collapse.
)
*alternative OR*
SELECT a.*
FROM ords a *(Semi Hash Join)* delivery b --> the optimizer used b as a build input
WHERE a.ORD_ID = b.ORD_ID
AND b.VEHICLE_ID IN ('X','Y') --> the optimizer used the delivery_x01 index.
)
*filtered OR*
SELECT *
FROM ords a
WHERE a.ORD_PROD_ID IN ('A','B','C') --> the optimizer cannot use the ords_x01 index due to the query transformation
No. The subplans are not generated by the rewriter, but by the optimizer. As soon as the optimizer takes over, you leave the realm of SQL for good. The procedural execution steps it generates cannot be represented in the declarative SQL language.
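Purely as an aid to intuition (not SQL that the planner produces, as the answer above stresses), the effect of the hashed SubPlan 2 strategy is close to evaluating the EXISTS branch as a membership test against a precomputed set of ord_id values:
SELECT *
FROM ords a
WHERE a.ord_id IN (SELECT b.ord_id
                   FROM delivery b
                   WHERE b.vehicle_id IN ('X','Y'))
   OR a.ord_prod_id IN ('A','B','C');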
I have a table with a foreign key and a timestamp for when the row was most recently updated. Rows with the same foreign key value are updated at roughly the same time, plus or minus an hour. I have an index on (foreign_key, timestamp). This is on PostgreSQL 11.
When I make a query like:
select * from table where foreign_key = $1 and timestamp > $2 order by primary_key;
It will use my index in cases where the timestamp predicate is selective across the entire table. But if the timestamp is far enough in the past that the majority of rows match, it will scan the primary_key index, assuming that will be faster. This problem goes away if I remove the ORDER BY.
I've looked at PostgreSQL's CREATE STATISTICS, but it doesn't seem to help in cases where the correlation is over a range of values, like a timestamp plus or minus five minutes, rather than a specific value.
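For reference, that kind of extended-statistics object would be declared roughly like this (the statistics name is made up; the table and columns are the ones shown under Specifics below):
CREATE STATISTICS pha_mls_updated_stats (dependencies)
    ON mls_id, updated_at
    FROM property_home_attributes;
ANALYZE property_home_attributes;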
What are the best ways to work around this? I can remove the ORDER BY, but that complicates the business logic. I can partition the table on the foreign key id, but that is also a pretty expensive change.
Specifics:
Table "public.property_home_attributes"
Column | Type | Collation | Nullable | Default
----------------------+-----------------------------+-----------+----------+------------------------------------------------------
id | integer | | not null | nextval('property_home_attributes_id_seq'::regclass)
mls_id | integer | | not null |
property_id | integer | | not null |
formatted_attributes | jsonb | | not null |
created_at | timestamp without time zone | | |
updated_at | timestamp without time zone | | |
Indexes:
"property_home_attributes_pkey" PRIMARY KEY, btree (id)
"index_property_home_attributes_on_property_id" UNIQUE, btree (property_id)
"index_property_home_attributes_on_updated_at" btree (updated_at)
"property_home_attributes_mls_id_updated_at_idx" btree (mls_id, updated_at)
The table has about 16 million rows.
psql=# EXPLAIN ANALYZE SELECT * FROM property_home_attributes WHERE mls_id = 46 AND (property_home_attributes.updated_at < '2019-10-30 16:52:06.326774') ORDER BY id ASC LIMIT 1000;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.56..10147.83 rows=1000 width=880) (actual time=1519.718..22310.674 rows=1000 loops=1)
-> Index Scan using property_home_attributes_pkey on property_home_attributes (cost=0.56..6094202.57 rows=600576 width=880) (actual time=1519.716..22310.398 rows=1000 loops=1)
Filter: ((updated_at < '2019-10-30 16:52:06.326774'::timestamp without time zone) AND (mls_id = 46))
Rows Removed by Filter: 358834
Planning Time: 0.110 ms
Execution Time: 22310.842 ms
(6 rows)
and then without the order by:
psql=# EXPLAIN ANALYZE SELECT * FROM property_home_attributes WHERE mls_id = 46 AND (property_home_attributes.updated_at < '2019-10-30 16:52:06.326774') LIMIT 1000;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.56..1049.38 rows=1000 width=880) (actual time=0.053..162.081 rows=1000 loops=1)
-> Index Scan using foo on property_home_attributes (cost=0.56..629893.60 rows=600576 width=880) (actual time=0.053..161.992 rows=1000 loops=1)
Index Cond: ((mls_id = 46) AND (updated_at < '2019-10-30 16:52:06.326774'::timestamp without time zone))
Planning Time: 0.100 ms
Execution Time: 162.140 ms
(5 rows)
If you want to keep PostgreSQL from using an index scan on property_home_attributes_pkey to support the ORDER BY, you can simply use
ORDER BY primary_key + 0
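Applied to the query from the question, that would look roughly like this (a sketch; id is the primary key column here):
EXPLAIN ANALYZE
SELECT *
FROM property_home_attributes
WHERE mls_id = 46
  AND updated_at < '2019-10-30 16:52:06.326774'
ORDER BY id + 0 ASC
LIMIT 1000;
Because the sort key is now an expression rather than the bare primary key column, the planner can no longer satisfy the ORDER BY directly with a scan of property_home_attributes_pkey, so it will typically pick the (mls_id, updated_at) index and sort the matching rows instead.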
I am using postgres 9.5 on linux7. Here is the environment:
create table t1(c1 int primary key, c2 varchar(100));
Insert some rows into the just-created table:
do $$
begin
for i in 1..12000000 loop
insert into t1 values(i,to_char(i,'9999999'));
end loop;
end $$;
Now I want to update the c2 column where c1 equals a random value (EXPLAIN shows that the index is not used).
explain update t1 set c2=to_char(4,'9999999') where c1=cast(floor(random()*100000) as int);
QUERY PLAN
----------------------------------------------------------------------------------
Update on t1 (cost=10000000000.00..10000000017.20 rows=1 width=10)
-> Seq Scan on t1 (cost=10000000000.00..10000000017.20 rows=1 width=10)
Filter: (c1 = (floor((random() * '100000'::double precision)))::integer)
(3 rows)
Now, if I replace "cast(floor(random()*100000) as int)" with a number (any number), the index is used:
explain update t1 set c2=to_char(4,'9999999') where c1=12345;
QUERY PLAN
-------------------------------------------------------------------------
Update on t1 (cost=0.15..8.17 rows=1 width=10)
-> Index Scan using t1_pkey on t1 (cost=0.15..8.17 rows=1 width=10)
Index Cond: (c1 = 12345)
(3 rows)
The questions are:
Why, in the first case (when random() is used), doesn't Postgres use the index?
How can I force Postgres to use the index?
This is because random() is a volatile function (see PostgreSQL CREATE FUNCTION), which means it must be (re)evaluated for each row.
So you actually aren't updating one random row each time (as I understand you wanted), but a random number of rows: the number of rows whose id happens to match the random number generated for that row, which, given the probabilities, will tend to 0.
You can see this by using a lower range for the generated random number:
test=# select * from t1 where c1=cast(floor(random()*10) as int);
c1 | c2
----+----
(0 rows)
test=# select * from t1 where c1=cast(floor(random()*10) as int);
c1 | c2
----+----------
3 | 3
(1 row)
test=# select * from t1 where c1=cast(floor(random()*10) as int);
c1 | c2
----+----------
4 | 4
9 | 9
(2 rows)
test=# select * from t1 where c1=cast(floor(random()*10) as int);
c1 | c2
----+----------
5 | 5
8 | 8
(2 rows)
If you want to retrieve only one random row, you first need to generate a single random id to compare against the row id.
HINT: You can think of the database planner as dumb: it always performs a sequential scan over all rows and evaluates the condition expression once per row.
In reality, under the hood, the planner is much smarter: if it knows that every evaluation of the expression (within the same statement) will give the same result, it evaluates it once and can perform an index scan.
A tricky (but dirty) solution could be creating your own random_stable() function, declaring it STABLE even though it returns a randomly generated number.
This will keep your query as simple as it is now. But I think it is a dirty solution because it lies about the fact that the function is, in fact, volatile.
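Such a function might look like this (a sketch; the STABLE label is deliberately wrong about its volatility, which is exactly why this is dirty):
create function random_stable() returns double precision
as $$ select random() $$
language sql stable;
-- then, for example:
-- explain update t1 set c2=to_char(4,'9999999')
--   where c1=cast(floor(random_stable()*100000) as int);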
A better solution (the right one, for me) is to write the query in a form that really generates the number a single time.
For example:
test=# with foo as (select floor(random()*1000000)::int as bar) select * from t1 join foo on (t1.c1 = foo.bar);
c1 | c2 | bar
-----+----------+-----
929 | 929 | 929
(1 row)
...or a subquery solution like the one @a_horse_with_no_name provides.
NOTE: I used SELECT queries instead of UPDATE ones for simplicity and readability, but the case is the same: simply use the same WHERE clause (with the WITH/subquery approach the UPDATE is, of course, a little more tricky; see the sketch below). Then, to check that the index is used, you only need to prepend EXPLAIN, as you know.
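For completeness, the same WITH approach applied to the original UPDATE might look like this (a sketch, not tested here):
with foo as (select floor(random()*100000)::int as bar)
update t1
   set c2 = to_char(4,'9999999')
  from foo
 where t1.c1 = foo.bar;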
Not sure why the index isn't used, maybe because of the definition of the random() function. If you use a sub-select for calling the function, then (at least for me with 9.5.3) Postgres uses the index:
explain
update t1
set c2=to_char(4,'9999999')
where c1= (select cast(floor(random()*100000) as int));
returns:
Update on t1 (cost=0.44..3.45 rows=1 width=10)
InitPlan 1 (returns $0)
-> Result (cost=0.00..0.01 rows=1 width=0)
-> Index Scan using t1_pkey on t1 (cost=0.43..3.44 rows=1 width=10)
Index Cond: (c1 = $0)
I have this query which takes 86 sec to execute.
select cust_id customer_id,
cust_first_name customer_first_name,
cust_last_name customer_last_name,
cust_prf customer_prf,
cust_birth_country customer_birth_country,
cust_login customer_login,
cust_email_address customer_email_address,
date_year ddyear,
sum(((stock_ls_price-stock_ws_price-stock_ds_price)+stock_es_price)/2) total_yr,
's' stock_type
from customer, stock, date
where customer_k = stock_customer_k
and stock_soldate_k = date_k
group by cust_id, cust_first_name, cust_last_name, cust_prf, cust_birth_country, cust_login, cust_email_address, date_year;
EXPLAIN ANALYZE RESULT:
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
GroupAggregate (cost=639753.55..764040.06 rows=2616558 width=213) (actual time=81192.575..86536.398 rows=190581 loops=1)
Group Key: customer.cust_id, customer.cust_first_name, customer.cust_last_name, customer.cust_prf, customer.cust_birth_country, customer.cust_login, customer.cust_email_address, date.date_year
-> Sort (cost=639753.55..646294.95 rows=2616558 width=213) (actual time=81192.468..83977.960 rows=2685453 loops=1)
Sort Key: customer.cust_id, customer.cust_first_name, customer.cust_last_name, customer.cust_prf, customer.cust_birth_country, customer.cust_login, customer.cust_email_address, date.date_year
Sort Method: external merge Disk: 460920kB
-> Hash Join (cost=6527.66..203691.58 rows=2616558 width=213) (actual time=60.500..2306.082 rows=2685453 loops=1)
Hash Cond: (stock.stock_customer_k = customer.customer_k)
-> Merge Join (cost=1423.66..144975.59 rows=2744641 width=30) (actual time=8.820..1412.109 rows=2750311 loops=1)
Merge Cond: (date.date_k = stock.stock_soldate_k)
-> Index Scan using date_key_idx on date (cost=0.29..2723.33 rows=73049 width=8) (actual time=0.013..7.164 rows=37622 loops=1)
-> Index Scan using stock_soldate_k_index on stock (cost=0.43..108829.12 rows=2880404 width=30) (actual time=0.004..735.043 rows=2750312 loops=1)
-> Hash (cost=3854.00..3854.00 rows=100000 width=191) (actual time=51.650..51.650rows=100000 loops=1)
Buckets: 16384 Batches: 1 Memory Usage: 16139kB
-> Seq Scan on customer (cost=0.00..3854.00 rows=100000 width=191) (actual time=0.004..30.341 rows=100000 loops=1)
Planning time: 1.761 ms
Execution time: 86621.807 ms
I have work_mem=512MB. I have indexes created on
cust_id, customer_k, stock_customer_k, stock_soldate_k and date_k.
There are about 100,000 rows in customer, 3,000,000 rows in stock and 80,000 rows in date.
How can I make this query run faster?
I would appreciate any help!
TABLE DEFINITIONS
date
Column | Type | Modifiers
---------------------+---------------+-----------
date_k | integer | not null
date_id | character(16) | not null
date_date | date |
date_year | integer |
stock
Column | Type | Modifiers
-----------------------+--------------+-----------
stock_soldate_k | integer |
stock_soltime_k | integer |
stock_customer_k | integer |
stock_ds_price | numeric(7,2) |
stock_es_price | numeric(7,2) |
stock_ls_price | numeric(7,2) |
stock_ws_price | numeric(7,2) |
customer:
Column | Type | Modifiers
---------------------------+-----------------------+-----------
customer_k | integer | not null
customer_id | character(16) | not null
cust_first_name | character(20) |
cust_last_name | character(30) |
cust_prf | character(1) |
cust_birth_country | character varying(20) |
cust_login | character(13) |
cust_email_address | character(50) |
TABLE "stock" CONSTRAINT "st1" FOREIGN KEY (stock_soldate_k) REFERENCES date(date_k)
"st2" FOREIGN KEY (stock_customer_k) REFERENCES customer(customer_k)
Try this:
with stock_grouped as
(select stock_customer_k, date_year, sum(((stock_ls_price-stock_ws_price-stock_ds_price)+stock_es_price)/2) total_yr
from stock, date
where stock_soldate_k = date_k
group by stock_customer_k, date_year)
select cust_id customer_id,
cust_first_name customer_first_name,
cust_last_name customer_last_name,
cust_prf customer_prf,
cust_birth_country customer_birth_country,
cust_login customer_login,
cust_email_address customer_email_address,
date_year ddyear,
total_yr,
's' stock_type
from customer, stock_grouped
where customer_k = stock_customer_k
This query anticipates the grouping over the join.
A big performance penalty is that about 450 MB of intermediate data is stored externally: Sort Method: external merge Disk: 460920kB. This happens because the planner first needs to satisfy the join conditions between the three tables, including the possibly inefficient join against table customer, before the aggregation sum() can take place, even though the aggregation could perfectly well be performed on table stock alone.
Query
Because your tables are fairly large, you are better off reducing the number of eligible rows as soon as possible and preferably before any joining. In this case that means doing the aggregation on table stock in a sub-query and join that result to the other two tables:
SELECT c.cust_id AS customer_id,
c.cust_first_name AS customer_first_name,
c.cust_last_name AS customer_last_name,
c.cust_prf AS customer_prf,
c.cust_birth_country AS customer_birth_country,
c.cust_login AS customer_login,
c.cust_email_address AS customer_email_address,
d.date_year AS ddyear,
ss.total_yr,
's' stock_type
FROM (
SELECT
stock_customer_k AS ck,
stock_soldate_k AS sdk,
sum((stock_ls_price-stock_ws_price-stock_ds_price+stock_es_price)*0.5) AS total_yr
FROM stock
GROUP BY 1, 2) ss
JOIN customer c ON c.customer_k = ss.ck
JOIN date d ON d.date_k = ss.sdk;
The sub-query on stock will result in far fewer rows, depending on the average number of orders per customer per date. Also, in the sum() function, multiplying by 0.5 is far cheaper than dividing by 2 (although in the grand scheme of things it will be relatively marginal).
Data model
You should also look seriously at your data model.
In table customer you use data types like char(30), which will always take up 30 bytes in your row, even when you store just 'X'. Using a varchar(30) data type is much more efficient when many strings are shorter than the declared maximum width, because it takes up less space and thus requires fewer page reads (and writes on the intermediate data).
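If you decide to change those column types, the conversion itself would be along these lines (a sketch; note that such an ALTER rewrites the table):
ALTER TABLE customer
    ALTER COLUMN cust_first_name TYPE varchar(20),
    ALTER COLUMN cust_last_name  TYPE varchar(30);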
Table stock uses numeric(7,2) for prices. Use of the numeric data type may give accurate results when subjecting data to many, many repeated operations, but they are also very slow. The double precision data type will be much faster and equally accurate in your scenario. For presentation purposes you can round the value off to the desired precision.
As a suggestion, create a table stock_f with double precision data types instead of numeric, copy all data over from stock to stock_f and run the query on that table.
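One way to build that copy for the comparison might be (a sketch using the column names from the posted definition; stock_f is the name suggested above):
CREATE TABLE stock_f AS
SELECT stock_soldate_k,
       stock_soltime_k,
       stock_customer_k,
       stock_ds_price::double precision AS stock_ds_price,
       stock_es_price::double precision AS stock_es_price,
       stock_ls_price::double precision AS stock_ls_price,
       stock_ws_price::double precision AS stock_ws_price
FROM stock;
ANALYZE stock_f;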