Postgres JSONB timestamp query very slow compared to timestamp column query - postgresql

I've got a Postgres 9.4.4 database with 1.7 million records with the following information stored in a JSONB column called data in a table called accounts:
data: {
    "lastUpdated": "2016-12-26T12:09:43.901Z",
    "lastUpdatedTimestamp": "1482754183"
}
The actual JSONB column stores much more information, but I've omitted the irrelevant data. The data format cannot be changed since this is legacy information.
I'm trying to efficiently obtain a count of all records where the lastUpdated value is greater or equal to some reference time (I'll use 2015-12-01T10:10:10Z in the following examples):
explain analyze SELECT count(*) FROM "accounts"
WHERE data->>'lastUpdated' >= '2015-12-01T10:10:10Z';
This takes over 22 seconds:
Aggregate (cost=843795.05..843795.06 rows=1 width=0) (actual time=22292.584..22292.584 rows=1 loops=1)
-> Seq Scan on accounts (cost=0.00..842317.05 rows=591201 width=0)
(actual time=1.410..22142.046 rows=1773603 loops=1)
Filter: ((data ->> 'lastUpdated'::text) >= '2015-12-01T10:10:10Z'::text)
Planning time: 1.234 ms
Execution time: 22292.671 ms
I've tried adding the following text index:
CREATE INDEX accounts_last_updated ON accounts ((data->>'lastUpdated'));
But the query is still rather slow, at over 17 seconds:
Aggregate (cost=815548.64..815548.65 rows=1 width=0) (actual time=17172.844..17172.845 rows=1 loops=1)
-> Bitmap Heap Scan on accounts (cost=18942.24..814070.64 rows=591201 width=0)
(actual time=1605.454..17036.081 rows=1773603 loops=1)
Recheck Cond: ((data ->> 'lastUpdated'::text) >= '2015-12-01T10:10:10Z'::text)
Heap Blocks: exact=28955 lossy=397518
-> Bitmap Index Scan on accounts_last_updated (cost=0.00..18794.44 rows=591201 width=0)
(actual time=1596.645..1596.645 rows=1773603 loops=1)
Index Cond: ((data ->> 'lastUpdated'::text) >= '2015-12-01T10:10:10Z'::text)
Planning time: 1.373 ms
Execution time: 17172.974 ms
I've also tried following the directions in Create timestamp index from JSON on PostgreSQL, creating the following function and index:
CREATE OR REPLACE FUNCTION text_to_timestamp(text)
RETURNS timestamp AS
$$SELECT to_timestamp($1, 'YYYY-MM-DD HH24:MI:SS.MS')::timestamp; $$
LANGUAGE sql IMMUTABLE;
CREATE INDEX accounts_last_updated ON accounts
(text_to_timestamp(data->>'lastUpdated'));
But this doesn't give me any improvement; in fact it was slower, taking over 24 seconds for the query versus 22 seconds for the unindexed version:
explain analyze SELECT count(*) FROM "accounts"
WHERE text_to_timestamp(data->>'lastUpdated') >= '2015-12-01T10:10:10Z';
Aggregate (cost=1287195.80..1287195.81 rows=1 width=0) (actual time=24143.150..24143.150 rows=1 loops=1)
-> Seq Scan on accounts (cost=0.00..1285717.79 rows=591201 width=0)
(actual time=4.044..23971.723 rows=1773603 loops=1)
Filter: (text_to_timestamp((data ->> 'lastUpdated'::text)) >= '2015-12-01 10:10:10'::timestamp without time zone)
Planning time: 1.107 ms
Execution time: 24143.183 ms
In one last act of desperation, I decided to add another timestamp column and update it to contain the same values as data->>'lastUpdated':
alter table accounts add column updated_at timestamp;
update accounts set updated_at = text_to_timestamp(data->>'lastUpdated');
create index accounts_updated_at on accounts(updated_at);
This has given me by far the best performance:
explain analyze SELECT count(*) FROM "accounts" where updated_at >= '2015-12-01T10:10:10Z';
Aggregate (cost=54936.49..54936.50 rows=1 width=0) (actual time=676.955..676.955 rows=1 loops=1)
-> Index Only Scan using accounts_updated_at on accounts
(cost=0.43..50502.48 rows=1773603 width=0) (actual time=0.026..552.442 rows=1773603 loops=1)
Index Cond: (updated_at >= '2015-12-01 10:10:10'::timestamp without time zone)
Heap Fetches: 0
Planning time: 4.643 ms
Execution time: 678.962 ms
However, I'd very much like to avoid adding another column just to improve the speed of this one query.
This leaves me with the following question: is there any way to improve the performance of my JSONB query so it can be as efficient as the individual column query (the last query, where I used updated_at instead of data->>'lastUpdated')? As it stands, it takes 17 to 24 seconds to query the JSONB data using data->>'lastUpdated', while it takes only 678 ms to query the updated_at column. It doesn't make sense that the JSONB query would be so much slower. I was hoping that using the text_to_timestamp function would improve the performance, but that hasn't been the case (or I'm doing something wrong).

In your first and second attempts, most of the execution time is spent on the index recheck or filter step, which has to read the JSON field for every row the index hits, and reading JSON is expensive. If the index hits a couple of hundred rows the query will be fast, but if it hits thousands or hundreds of thousands of rows, filtering/rechecking the JSON field takes serious time. In the second attempt, running the value through an additional function makes it even worse.
JSONB is good for storing data, but it is not intended for analytic queries like summaries and statistics, and it's bad practice to use JSON values in WHERE conditions, at least as the main filtering condition like in your case.
That last act of desperation of yours is the right way to go :)
To improve query performance, add one or several columns holding the key values that are used most often in WHERE conditions.
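If you go that route, one way to keep the duplicated column in sync with the JSONB is a small trigger. This is only a sketch of my own (the trigger and function names are made up), reusing the IMMUTABLE text_to_timestamp() helper defined in the question:
-- keep updated_at in sync with data->>'lastUpdated' automatically
CREATE OR REPLACE FUNCTION accounts_set_updated_at()
RETURNS trigger AS $$
BEGIN
    NEW.updated_at := text_to_timestamp(NEW.data->>'lastUpdated');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_accounts_set_updated_at
BEFORE INSERT OR UPDATE ON accounts
FOR EACH ROW EXECUTE PROCEDURE accounts_set_updated_at();
After the one-time backfill UPDATE from the question, inserts and updates keep updated_at consistent without any application changes.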

Related

postgresql improve the query scan/filtering

I have the following table for attributes of different objects
create table attributes(id serial primary key,
object_id int,
attribute_id text,
text_data text,
int_data int,
timestamp_data timestamp,
state text default 'active');
An object can have different types of attributes, and each attribute's value is stored in exactly one of text_data, int_data, or timestamp_data, depending on the attribute's data type.
sample data is here
I want to retrieve the records; my query is:
select * from attributes
where attribute_id = 55 and state='active'
order by text_data
which is very slow.
I increased work_mem to 1 GB for the current session using SET work_mem TO '1 GB'; to change the sort method from an external merge on disk to an in-memory quicksort, but there was no improvement in query execution time. The query plan is:
Gather Merge (cost=750930.58..1047136.19 rows=2538728 width=128) (actual time=18272.405..27347.556 rows=3462116 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=235635 read=201793
-> Sort (cost=749930.56..753103.97 rows=1269364 width=128) (actual time=14299.222..15494.774 rows=1154039 loops=3)
Sort Key: text_data
Sort Method: quicksort Memory: 184527kB
Worker 0: Sort Method: quicksort Memory: 266849kB
Worker 1: Sort Method: quicksort Memory: 217050kB
Buffers: shared hit=235635 read=201793
-> Parallel Seq Scan on attributes (cost=0.00..621244.50 rows=1269364 width=128) (actual time=0.083..3410.570 rows=1154039 loops=3)
Filter: ((attribute_id = 185) AND (state = 'active'))
Rows Removed by Filter: 8652494
Buffers: shared hit=235579 read=201793
Planning Time: 0.453 ms
Execution Time: 29135.285 ms
The query's total runtime was about 45 seconds:
Successfully run. Total query runtime: 45 secs 237 msec.
3462116 rows affected.
To improve filtering and query execution time, I created an index on attribute_id and state:
create index attribute_id_state on attributes(attribute_id,state);
Sort (cost=875797.49..883413.68 rows=3046474 width=128) (actual time=47189.534..49035.361 rows=3462116 loops=1)
Sort Key: text_data
Sort Method: quicksort Memory: 643849kB
Buffers: shared read=406048
-> Bitmap Heap Scan on attributes (cost=64642.80..547711.91 rows=3046474 width=128) (actual time=981.857..10348.441 rows=3462116 loops=1)
Recheck Cond: ((attribute_id = 185) AND (state = 'active'))
Heap Blocks: exact=396586
Buffers: shared read=406048
-> Bitmap Index Scan on attribute_id_state (cost=0.00..63881.18 rows=3046474 width=0) (actual time=751.909..751.909 rows=3462116 loops=1)
Index Cond: ((attribute_id = 185) AND (state = 'active'))
Buffers: shared read=9462
Planning Time: 0.358 ms
Execution Time: 50388.619 ms
But the query became even slower after creating the index.
The table has 29.5 million rows; text_data is NULL in 9 million of them.
The query returns almost 3.5 million records, which is more than 10% of the table.
Is there any other index, or another approach such as changing a parameter, that would improve this query?
Some suggestions:
ORDER BY clauses can be accelerated by indexes. So if you put your ordering column in your compound index you may get things to go much faster.
CREATE INDEX attribute_id_state_data
ON attributes(attribute_id, state, text_data);
This index is redundant with the one in your question, so drop that one when you create this one.
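For reference, dropping the now-redundant index from the question (name taken from its CREATE INDEX statement) is just:
DROP INDEX attribute_id_state;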
You use SELECT *, a notorious performance and maintainability antipattern. You're much better off naming the columns you want. This is especially important when your result sets are large: why waste CPU and network resources on data you don't need in your application? So, let's assume you really do need all of these columns. If you don't, remove the ones you don't need from this SELECT.
SELECT id, object_id, attribute_id, text_data, int_data,
timestamp_data, state ...
You can use the INCLUDE clause on your index so it covers your query, that is, so the query can be satisfied entirely from the index (key columns such as state shouldn't be repeated in INCLUDE).
CREATE INDEX attribute_id_state_data
ON attributes(attribute_id, state, text_data)
INCLUDE (id, object_id, int_data, timestamp_data);
When you use this BTREE index, your query is satisfied by random-accessing the index to the first eligible row and then scanning the index sequentially. There's no need for PostgreSQL to bounce back to the table's data. It doesn't get much faster than that for a big result set.
If you remove some columns from your SELECT clause, you can also remove them from the index's INCLUDE clause.
You ORDER BY a TEXT column. That's potentially a lot of data to sort for each record, whether during index creation or a query, and large values are stored out-of-line (TOASTed), which slows things down further. Can you rework your application to use a limited-length VARCHAR column for this instead? Sorting shorter values is more efficient.
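If such a length cap is acceptable for your data, the type change itself is small. A sketch, with 255 as a purely illustrative limit (the ALTER fails if any existing value is longer, and it may scan or rewrite the whole table):
ALTER TABLE attributes ALTER COLUMN text_data TYPE varchar(255);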

Postgres: how do you optimize queries on date column with low selectivity?

I have a table with 143 million rows (and growing); its current size is 107GB. One of the columns in the table is of type date and it has low selectivity. For any given date, it's reasonable to assume that there are somewhere between 0.5 and 4 million records with the same date value.
Now, if someone tries to do something like this:
select * from large_table where date_column > '2020-01-01' limit 100
It will execute "forever", and if you EXPLAIN ANALYZE it, you can see that its doing a table scan. So the first (and only so far) idea is to try and make this into an index scan. If Postgres can scan a subsection of an index and return the "limit" number of records, it sounds fast to me:
create index our_index_on_the_date_column ON large_table (date_column DESC);
VACUUM ANALYZE large_table;
EXPLAIN ANALYZE select * from large_table where date_column > '2020-01-01' limit 100;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.00..37.88 rows=100 width=893) (actual time=0.034..13.520 rows=100 loops=1)
-> Seq Scan on large_table (cost=0.00..13649986.80 rows=36034774 width=893) (actual time=0.033..13.506 rows=100 loops=1)
Filter: (date_column > '2020-01-01'::date)
Rows Removed by Filter: 7542
Planning Time: 0.168 ms
Execution Time: 18.412 ms
(6 rows)
It still reverts to a sequential scan. Please disregard the execution time as this took 11 minutes before caching came into action. We can force it to use the index, by reducing the number of returned columns to what's being covered by the index:
select date_column from large_table where date_column > '2019-01-15' limit 100
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.57..3.42 rows=100 width=4) (actual time=0.051..0.064 rows=100 loops=1)
-> Index Only Scan using our_index_on_the_date_column on large_table (cost=0.57..907355.11 rows=31874888 width=4) (actual time=0.050..0.056 rows=100 loops=1)
Index Cond: (date_column > '2019-01-15'::date)
Heap Fetches: 0
Planning Time: 0.082 ms
Execution Time: 0.083 ms
(6 rows)
But this is of course a contrived example, since the table is very wide and covering all parts of the table in the index is not feasible.
So, can anyone share some guidance on how to get reasonable performance when using columns with low selectivity as predicates?

How to use ts_headline() in PostgreSQL while doing efficient full-text search? Comparing two query plans

I am experimenting with a full-text search system over my PostgreSQL database, where I am using tsvectors with ts_rank() to pull out relevant items to a user search query. In general this works really fantastic as a simple solution (i.e. no major overhead infrastructure). However, I am finding that the ts_headline() component (which gives users context for the relevance of the search results) is slowing down my queries significantly, by an order of about 10x. I wanted to inquire what is the best way to use ts_headline() without incurring computational expense.
To give an example, here is a very fast tsvector search that does not use ts_headline(). For context, my table has two relevant fields: search_text, which holds the natural-language text being searched, and search_text_tsv, which is a tsvector that is queried directly (and also used to rank the item). When I use ts_headline(), it references the main search_text field in order to produce a user-readable headline. Further, the column search_text_tsv is indexed using GIN, which provides very fast lookups for @@ websearch_to_tsquery('my query here').
Again, here is query #1:
SELECT
item_id,
title,
author,
search_text,
ts_rank(search_text_tsv, websearch_to_tsquery(unaccent('my query text here')), 1) as rank
FROM search_index
WHERE search_text_tsv @@ websearch_to_tsquery(unaccent('my query text here'))
ORDER BY rank DESC
LIMIT 20 OFFSET 20
This gives me 20 top results very fast, running on my laptop about 50ms.
Now, query #2 uses ts_headline() to produce a user-readable headline. I found that this was very slow when it ran against all possible search results, so I used a sub-query to produce the top 20 results and then calculated the ts_headline() only for those top results (as opposed to, say, 1000 possible results).
SELECT *,
ts_headline(search_text,websearch_to_tsquery(unaccent('my query text here')),'StartSel=<b>,StopSel=</b>,MaxFragments=2,' || 'FragmentDelimiter=...,MaxWords=10,MinWords=1') AS headline
FROM (
SELECT
item_id,
title,
author,
search_text,
ts_rank(search_text_tsv, websearch_to_tsquery(unaccent('my query text here')), 1) as rank
FROM search_index
WHERE search_text_tsv @@ websearch_to_tsquery(unaccent('my query text here'))
ORDER BY rank DESC
LIMIT 20 OFFSET 20) as foo
Basically, this limits the number of results (as in the first query), uses that as a sub-query, and returns all of the sub-query's columns (i.e. *) plus the ts_headline() calculation. However, this is very slow, by roughly an order of magnitude, coming in at around 800ms on my laptop.
Is there anything I can do to speed up ts_headline()? It seems pretty clear that this is what is slowing down the second query.
For reference, here are the query plans being produced by Postgresql (from EXPLAIN ANALYZE):
Query plan 1: (straight full-text search)
Limit (cost=56.79..56.79 rows=1 width=270) (actual time=66.118..66.125 rows=20 loops=1)
-> Sort (cost=56.78..56.79 rows=1 width=270) (actual time=66.113..66.120 rows=40 loops=1)
Sort Key: (ts_rank(search_text_tsv, websearch_to_tsquery(unaccent('my search query here'::text)), 1)) DESC
Sort Method: top-N heapsort Memory: 34kB
-> Bitmap Heap Scan on search_index (cost=52.25..56.77 rows=1 width=270) (actual time=1.070..65.641 rows=462 loops=1)
Recheck Cond: (search_text_tsv @@ websearch_to_tsquery(unaccent('my search query here'::text)))
Heap Blocks: exact=424
-> Bitmap Index Scan on idx_fts_search (cost=0.00..52.25 rows=1 width=0) (actual time=0.966..0.966 rows=462 loops=1)
Index Cond: (search_text_tsv @@ websearch_to_tsquery(unaccent('my search query here'::text)))
Planning Time: 0.182 ms
Execution Time: 66.154 ms
Query plan 2: (full text search w/ subquery & ts_headline())
Subquery Scan on foo (cost=56.79..57.31 rows=1 width=302) (actual time=116.424..881.617 rows=20 loops=1)
-> Limit (cost=56.79..56.79 rows=1 width=270) (actual time=62.470..62.497 rows=20 loops=1)
-> Sort (cost=56.78..56.79 rows=1 width=270) (actual time=62.466..62.484 rows=40 loops=1)
Sort Key: (ts_rank(search_index.search_text_tsv, websearch_to_tsquery(unaccent('my search query here'::text)), 1)) DESC
Sort Method: top-N heapsort Memory: 34kB
-> Bitmap Heap Scan on search_index (cost=52.25..56.77 rows=1 width=270) (actual time=2.378..62.151 rows=462 loops=1)
Recheck Cond: (search_text_tsv @@ websearch_to_tsquery(unaccent('my search query here'::text)))
Heap Blocks: exact=424
-> Bitmap Index Scan on idx_fts_search (cost=0.00..52.25 rows=1 width=0) (actual time=2.154..2.154 rows=462 loops=1)
Index Cond: (search_text_tsv @@ websearch_to_tsquery(unaccent('my search query here'::text)))
Planning Time: 0.350 ms
Execution Time: 881.702 ms
Just encountered exactly the same issue. When collecting a list of search results (20-30 documents) and also getting their ts_headline highlights in the same query, the execution time was at least 10x.
To be fair, Postgres documentation is warning about that [1]:
ts_headline uses the original document, not a tsvector summary, so it can be slow and should be used with care.
I ended up getting the list of documents first and then loading the highlights with ts_headline asynchronously, one by one. The single queries are still slow (>150ms), but it's a better user experience than waiting multiple seconds for the initial load.
[1] https://www.postgresql.org/docs/15/textsearch-controls.html#TEXTSEARCH-HEADLINE
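A minimal sketch of such a follow-up highlight query, one document at a time, reusing the column names and options from the question ($1 is just a placeholder for the already-fetched item id):
SELECT ts_headline(search_text,
                   websearch_to_tsquery(unaccent('my query text here')),
                   'StartSel=<b>,StopSel=</b>,MaxFragments=2,MaxWords=10,MinWords=1') AS headline
FROM search_index
WHERE item_id = $1;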
I think I can buy you a few more milliseconds. In your query, you're returning SELECT *, ts_headline, which includes the full original document (search_text) in the result. When I limited my SELECT to everything but search_text from the subquery (plus ts_headline as headline), my queries dropped from 500-800ms to 100-400ms. I'm also using AWS RDS, so that might play a role on my end.
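Concretely, that suggestion amounts to keeping search_text inside the sub-query (ts_headline still needs it) but leaving it out of the outer SELECT list. A sketch based on query #2 above:
SELECT item_id, title, author, rank,
       ts_headline(search_text,
                   websearch_to_tsquery(unaccent('my query text here')),
                   'StartSel=<b>,StopSel=</b>,MaxFragments=2,' || 'FragmentDelimiter=...,MaxWords=10,MinWords=1') AS headline
FROM (
  SELECT item_id, title, author, search_text,
         ts_rank(search_text_tsv, websearch_to_tsquery(unaccent('my query text here')), 1) AS rank
  FROM search_index
  WHERE search_text_tsv @@ websearch_to_tsquery(unaccent('my query text here'))
  ORDER BY rank DESC
  LIMIT 20 OFFSET 20
) AS foo;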

PostgreSQL: latest row in DISTINCT ON less performant than max row in GROUP BY

I have a situation that I would like to better understand:
I have a table t with two columns and one index:
CREATE TABLE t (
refid BIGINT NOT NULL,
created TIMESTAMPTZ NOT NULL
);
CREATE INDEX t_refid_created ON t (refid, created);
In order to get the latest (with the highest created value) row for each distinct refid, I composed two queries:
-- index only scan t_refid_created_desc_idx
SELECT DISTINCT ON (refid) * FROM t
ORDER BY refid, created DESC;
-- index scan t_refid_created_idx
SELECT refid, max(created) FROM t GROUP BY refid;
When t has about 16M rows and there are about 500 distinct refid values, the second query returns substantially faster than the first one.
At first I figured that because I'm ordering by created DESC, it needs to do a backwards index scan, and that this might be the problem. So I added the following index:
CREATE index t_refid_created_desc_idx ON t (refid, created DESC);
It was indeed used (instead of the backwards scan on the previous index) but there was no improvement.
If I understand correctly, the second query would aggregate by refid and then scan each aggregate to find the max created value. That sounds like a lot of work.
The first query, to the best of my understanding, should have simply iterated over the first part of the index, then for each refid it should have used the second part of the index, taking the first value.
Obviously that is not the case, and the SELECT DISTINCT ON query takes several times as long as the GROUP BY.
What am I missing here?
Here are EXPLAIN ANALYZE outputs for the first and second queries:
Unique (cost=0.56..850119.78 rows=291 width=16) (actual time=0.103..13414.913 rows=469 loops=1)
-> Index Only Scan using t_refid_created_desc_idx on t (cost=0.56..808518.47 rows=16640527 width=16) (actual time=0.102..12113.454 rows=16640527 loops=1)
Heap Fetches: 16640527
Planning time: 0.157 ms
Execution time: 13415.047 ms
Finalize GroupAggregate (cost=599925.13..599932.41 rows=291 width=16) (actual time=3454.350..3454.884 rows=469 loops=1)
Group Key: refid
-> Sort (cost=599925.13..599926.59 rows=582 width=16) (actual time=3454.344..3454.509 rows=1372 loops=1)
Sort Key: refid
Sort Method: quicksort Memory: 113kB
-> Gather (cost=599837.29..599898.40 rows=582 width=16) (actual time=3453.194..3560.602 rows=1372 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Partial HashAggregate (cost=598837.29..598840.20 rows=291 width=16) (actual time=3448.225..3448.357 rows=457 loops=3)
Group Key: refid
-> Parallel Seq Scan on t (cost=0.00..564169.53 rows=6933553 width=16) (actual time=0.047..2164.459 rows=5546842 loops=3)
Planning time: 0.157 ms
Execution time: 3561.727 ms
The first query runs in about 10 seconds, while the second one achieves the same results in 2 seconds! And without even using the index!
I'm using PostgreSQL 10.5.
I cannot solve the riddle of why the DISTINCT ON query does not consider the second plan. From the cost estimates we see that PostgreSQL considers it cheaper.
I guess that nobody has implemented pushing down DISTINCT into parallel plans. You could ask the mailing list.
However, the problem with the first query is the 16 million heap fetches. This means it is effectively a normal index scan. It looks like a bad misestimate on the planner's side.
If I am right, a VACUUM on the table that cleans the visibility map should improve the first query considerably.
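In concrete terms (table name from the question), that would be:
VACUUM ANALYZE t;
-- then re-run the first query and check that "Heap Fetches" has dropped to (near) zero
EXPLAIN (ANALYZE)
SELECT DISTINCT ON (refid) * FROM t
ORDER BY refid, created DESC;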

Moving from Influx to Postgres, need tips

I used Influx to store our time series data. It's cool when it worked; then, after about one month, it stopped working and I couldn't figure out why. (Similar to this issue: https://github.com/influxdb/influxdb/issues/1386)
Maybe Influx will be great one day, but for now I need to use something that's more stable. I'm thinking about Postgres. Our data comes from many sensors, each sensor has a sensor id. So I'm thinking about structuring our data as this:
(pk), sensorId(string), time(timestamp), value(float)
Influx is built for time series data so it probably has some built in optimizations. Do I need to do optimizations myself to make Postgres efficient? More specifically, I have these questions:
Influx has this notion of 'series' and it's cheap to create new series, so I had a separate series for each sensor. Should I create a separate Postgres table for each sensor?
How should I set up indexes to make queries fast? A typical query is: select all data for sensor123 for the last 3 days.
Should I use timestamp or integer for the time column?
How do I set a retention policy? E.g. delete data that's older than one week automatically.
Will Postgres scale horizontally? Can I setup ec2 clusters for data replication and load balancing?
Can I downsample in Postgres? I have read in some articles that I can use date_trunc. But it seems that I can't date_trunc it to a specific interval e.g. 25 seconds.
Any other caveats I missed?
Thanks in advance!
Updates
Storing the time column as big integer is faster than storing it as timestamp. Am I doing something wrong?
storing it as timestamp:
postgres=# explain analyze select * from test where sensorid='sensor_0';
Bitmap Heap Scan on test (cost=3180.54..42349.98 rows=75352 width=25) (actual time=10.864..19.604 rows=51840 loops=1)
Recheck Cond: ((sensorid)::text = 'sensor_0'::text)
Heap Blocks: exact=382
-> Bitmap Index Scan on sensorindex (cost=0.00..3161.70 rows=75352 width=0) (actual time=10.794..10.794 rows=51840 loops=1)
Index Cond: ((sensorid)::text = 'sensor_0'::text)
Planning time: 0.118 ms
Execution time: 22.984 ms
postgres=# explain analyze select * from test where sensorid='sensor_0' and addedtime > to_timestamp(1430939804);
Bitmap Heap Scan on test (cost=2258.04..43170.41 rows=50486 width=25) (actual time=22.375..27.412 rows=34833 loops=1)
Recheck Cond: (((sensorid)::text = 'sensor_0'::text) AND (addedtime > '2015-05-06 15:16:44-04'::timestamp with time zone))
Heap Blocks: exact=257
-> Bitmap Index Scan on sensorindex (cost=0.00..2245.42 rows=50486 width=0) (actual time=22.313..22.313 rows=34833 loops=1)
Index Cond: (((sensorid)::text = 'sensor_0'::text) AND (addedtime > '2015-05-06 15:16:44-04'::timestamp with time zone))
Planning time: 0.362 ms
Execution time: 29.290 ms
storing it as big integer:
postgres=# explain analyze select * from test where sensorid='sensor_0';
Bitmap Heap Scan on test (cost=3620.92..42810.47 rows=85724 width=25) (actual time=12.450..19.615 rows=51840 loops=1)
Recheck Cond: ((sensorid)::text = 'sensor_0'::text)
Heap Blocks: exact=382
-> Bitmap Index Scan on sensorindex (cost=0.00..3599.49 rows=85724 width=0) (actual time=12.359..12.359 rows=51840 loops=1)
Index Cond: ((sensorid)::text = 'sensor_0'::text)
Planning time: 0.130 ms
Execution time: 22.331 ms
postgres=# explain analyze select * from test where sensorid='sensor_0' and addedtime > 1430939804472;
Bitmap Heap Scan on test (cost=2346.57..43260.12 rows=52489 width=25) (actual time=10.113..14.780 rows=31839 loops=1)
Recheck Cond: (((sensorid)::text = 'sensor_0'::text) AND (addedtime > 1430939804472::bigint))
Heap Blocks: exact=235
-> Bitmap Index Scan on sensorindex (cost=0.00..2333.45 rows=52489 width=0) (actual time=10.059..10.059 rows=31839 loops=1)
Index Cond: (((sensorid)::text = 'sensor_0'::text) AND (addedtime > 1430939804472::bigint))
Planning time: 0.154 ms
Execution time: 16.589 ms
You shouldn't create a table for each sensor. Instead you could add a field to your table that identifies what series it is in. You could also have another table that describes additional attributes about the series. If data points could belong to multiple series, then you'd need a different structure altogether.
For the query you described in question 2, an index on your recorded_at column should work (time is an SQL reserved word, so it's best to avoid it as a column name).
You should use TIMESTAMP WITH TIME ZONE as your time data type.
Retention is up to you.
Postgres has various options for sharding/replication. That's a big topic.
Not sure I understand your objective for #6, but I'm sure you can figure something out.
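To make the first few of these points concrete, here is a minimal sketch. The table and column names are illustrative only, loosely following the layout proposed in the question and the recorded_at naming above, and the DELETE is just one way to implement retention (e.g. run periodically from cron):
-- one table for all sensors; the series is identified by a column
CREATE TABLE sensor_readings (
    id          bigserial PRIMARY KEY,
    sensorid    text NOT NULL,
    recorded_at timestamptz NOT NULL,
    value       double precision NOT NULL
);

-- compound index so "all data for sensor123 for the last 3 days" is a single index range scan
CREATE INDEX sensor_readings_sensorid_recorded_at
    ON sensor_readings (sensorid, recorded_at);

-- example retention job: delete data older than one week
DELETE FROM sensor_readings
WHERE recorded_at < now() - interval '7 days';

-- the typical query from question 2
SELECT sensorid, recorded_at, value
FROM sensor_readings
WHERE sensorid = 'sensor123'
  AND recorded_at > now() - interval '3 days';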