Question
I have a system with multiple LineStringZM geometries, where the data is structured the following way: [x, y, speed:int, time:int]. The data is structured this way to be able to use ST_SimplifyVW on the x, y and z dimensions, but I still want to be able to query the linestrings based on the m dimension, e.g. get all linestrings within a given time interval.
Is this possible with PostGIS or am I structuring the data incorrectly for my use case?
Example
z = speed e.g. km/h
m = Unix epoch time
CREATE TABLE t (id int NOT NULL, geom geometry(LineStringZM,4326), CONSTRAINT t_pkey PRIMARY KEY (id));
INSERT INTO t VALUES (1, 'SRID=4326;LINESTRING ZM(30 10 5 1620980688, 30 15 10 1618388688, 30 20 15 1615710288, 30 25 20 1620980688)'::geometry);
INSERT INTO t VALUES (2, 'SRID=4326;LINESTRING ZM(50 10 5 1620980688, 50 15 10 1618388688, 50 20 15 1615710288, 50 25 20 1620980688)'::geometry);
INSERT INTO t VALUES (3, 'SRID=4326;LINESTRING ZM(20 10 5 1620980688, 20 15 10 1618388688, 20 20 15 1615710288, 20 25 20 1620980688)'::geometry);
Use case A: Simplify the geometry based on x, y and z
This can be accomplished with e.g. ST_SimplifyVW, which keeps the m dimension after simplification.
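For illustration, a minimal sketch of use case A against the example table above; the tolerance value 0.5 is only a placeholder:
-- simplify on x, y (and z); the m values of the vertices that survive are kept
SELECT id, ST_AsText(ST_SimplifyVW(geom, 0.5)) AS simplified
FROM t;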
Use case B: Query geometry based on the m dimension
I have a set of LineStringZM geometries which I want to query based on my time dimension (m). The result is either the full geometry, if all m values are between e.g. 1618388000 and 1618388700, or the part of the geometry which satisfies the predicate. What is the most efficient way to query the data?
If you want to check every single point of your LineString you could dump them with ST_DumpPoints and get the M dimension with ST_M. After that, extract the subset containing the overlapping M values and rebuild the LineString with ST_MakeLine and a GROUP BY:
WITH j AS (
SELECT id,geom,(ST_DumpPoints(geom)).geom AS p
FROM t
)
SELECT id,ST_AsText(ST_MakeLine(p))
FROM j
WHERE ST_M(p) BETWEEN 1618388000 AND 1618388700
GROUP BY id;
Demo: db<>fiddle
Note: Depending on your table and LineString sizes this query may become pretty slow, as values are being parsed at query time and therefore aren't indexed. Imho a more elegant alternative would be ..
.. 1) to create a tstzrange column
ALTER TABLE t ADD COLUMN line_interval tstzrange;
.. 2) to properly index it
CREATE INDEX idx_t_line_interval ON t USING gist (line_interval);
.. and 3) to populate it with the time of geom's first and last points:
UPDATE t SET line_interval =
tstzrange(
to_timestamp(ST_M(ST_PointN(geom,1))),
to_timestamp(ST_M(ST_PointN(geom,ST_NPoints(geom)))));
After that you can speed things up by checking whether the indexed column overlaps with a given interval. This will significantly improve query time:
SELECT * FROM t
WHERE line_interval && tstzrange(
to_timestamp(1618138148),
to_timestamp(1618388700));
Demo: db<>fiddle
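In practice you could also combine both steps: use the indexed range column to narrow down the candidate rows first, and only then extract the matching points. A sketch of that, reusing the interval from the first query:
WITH j AS (
  SELECT id, (ST_DumpPoints(geom)).geom AS p
  FROM t
  -- indexed pre-filter on the whole line's time range
  WHERE line_interval && tstzrange(to_timestamp(1618388000), to_timestamp(1618388700))
)
SELECT id, ST_AsText(ST_MakeLine(p))
FROM j
WHERE ST_M(p) BETWEEN 1618388000 AND 1618388700 -- exact per-point filter
GROUP BY id;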
Further reading:
ST_M
ST_PointN
ST_NPoints
PostgreSQL Built-in Range Types
Related
I have a query with which I want to know, relative to all distances, how many locations are up to 100 meters away:
select person_tbl.tdm, sum((st_distance (person_tbl.geo, location_tbl.geo) < 100)::INT)::FLOAT / count(*)
from persons as person_tbl, locations as location_tbl
where person_tbl.geo is not null
group by person_tbl.tdm
The two tables have geometry indexes:
create index idx on persons using gist(geo)
create index idx on locations using gist(geo)
In the first table (persons) the geo field contains POLYGON values.
In the second table (locations) the geo field contains POINT Z, POLYGON Z or MULTIPOLYGON Z values.
The first table (persons) contains ~2M rows and the second table (locations) contains ~500 rows.
The query takes too long (~2 hours).
The values of max_parallel_processes and max_parallel_workers are 8.
Is there something I can do to optimize the query calculation time (2 hours seems too long)?
Is there a better way to write the query, or do I need to define the indexes differently?
Assume a table like this one:
a | b | total
--|---|------
1 | 2 | 3
4 | 7 | 11
…
CREATE TEMPORARY TABLE summedup (
a double precision DEFAULT 0
, b double precision DEFAULT 0
--, total double precision
);
INSERT INTO summedup (a, b) VALUES (1, 2);
INSERT INTO summedup (a, b) VALUES (4, 7);
SELECT a, b, a + b as total FROM summedup;
It's easy to sum up the first two columns on SELECT.
But does Postgres (9.6) also support the ability to define total as the sum of the other two columns? If so:
What is the syntax?
What is this type of operation called? (Aggregates typically sum up cells over multiple rows, not columns.)
What you are looking for is typically called a "computed column".
Postgres 9.6 does not support that (Postgres 12 - to be released in Q4 2019 - will).
But for such a simple sum, I wouldn't bother storing redundant information.
If you don't want to repeat the expression, create a view.
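For completeness, a sketch of the Postgres 12 syntax (generated columns) mentioned above, reusing the column names from the example; the table name summedup_v12 is made up:
-- Postgres 12+: total is computed from a + b and stored automatically
CREATE TABLE summedup_v12 (
  a double precision DEFAULT 0
, b double precision DEFAULT 0
, total double precision GENERATED ALWAYS AS (a + b) STORED
);
INSERT INTO summedup_v12 (a, b) VALUES (1, 2); -- total becomes 3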
I think what you want is a View.
CREATE VIEW table_with_sum AS
SELECT id, a, b, a + b as total FROM summedup;
then you can query the view for the sum.
SELECT total FROM table_with_sum where id=5;
The View does not store the sum for each row; the total column is computed every time you query the View. If your goal is to make your query more efficient, this will not help.
There is an other way: add the column to the table and create triggers for update and insert that update the total column every time a row is modified.
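A minimal sketch of that trigger approach, assuming the total column is added to summedup; the function and trigger names are made up:
ALTER TABLE summedup ADD COLUMN total double precision;
CREATE FUNCTION summedup_set_total() RETURNS trigger AS $$
BEGIN
  NEW.total := NEW.a + NEW.b; -- recompute the sum for the modified row
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_summedup_total
BEFORE INSERT OR UPDATE ON summedup
FOR EACH ROW EXECUTE PROCEDURE summedup_set_total();
Unlike the view, this stores the sum, so you pay the cost at write time instead of at query time.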
id datetime new_column datetime_rankx
1 12.01.2015 18:10:10 12.01.2015 18:10:10 1
2 03.12.2014 14:44:57 03.12.2014 14:44:57 1
2 21.11.2015 11:11:11 03.12.2014 14:44:57 2
3 01.01.2011 12:12:12 01.01.2011 12:12:12 1
3 02.02.2012 13:13:13 01.01.2011 12:12:12 2
3 03.03.2013 14:14:14 01.01.2011 12:12:12 3
I want to create a new column which will have the minimum datetime value for each group of rows with the same id.
How can I do this in Power BI Desktop using DAX?
Use this expression:
NewColumn =
CALCULATE(
    MIN( Table[datetime] ),
    FILTER( Table, Table[id] = EARLIER( Table[id] ) )
)
In Power BI, using a table with your data, this expression produces the new_column values shown above.
UPDATE: Explanation and EARLIER function usage.
Basically, the EARLIER function gives you access to values from a different row context.
When you use the CALCULATE function, it creates a row context over the whole table; theoretically it iterates over every table row. The same happens when you use the FILTER function: it iterates over the whole table and evaluates every row against the filter condition.
So far we have two row contexts: the row context created by CALCULATE and the row context created by FILTER. Note that FILTER uses EARLIER to get access to CALCULATE's row context. Having said that, in our case, for every row in the outer (CALCULATE's) row context, FILTER returns the set of rows that correspond to the current id in the outer context.
If you have a programming background this may make more sense: it is similar to a nested loop.
Hopefully this Python code illustrates the main idea:
outer_context = ['row1','row2','row3','row4']
inner_context = ['row1','row2','row3','row4']
for outer_row in outer_context:
    for inner_row in inner_context:
        if inner_row == outer_row:  # this line is what FILTER and EARLIER do
            # Calculate the min datetime using the filtered rows
            ...
UPDATE 2: Adding a ranking column.
To get the desired rank you can use this expression:
RankColumn =
RANKX(
    CALCULATETABLE( Table, ALLEXCEPT( Table, Table[id] ) ),
    Table[datetime],
    ,
    1
)
This adds the datetime_rankx column shown in your example table.
Let me know if this helps.
Given a table of the following format in MATLAB:
userid | itemid | keywords
A = [ 3 10 'book'
3 10 'briefcase'
3 10 'boat'
12 20 'windows'
12 20 'picture'
12 35 'love'
4 10 'day'
12 10 'working day'
... ... ... ];
where A is a table of size 58000×3, I want to write the data to a csv file with the following format:
csv.file
itemid keywords
10 book, briefcase, boat, day, working day, ...
20 windows, picture, ...
35 love, ...
where the list of itemids is stored in Iids = [10,20,35,...]
I would like to avoid using loops for this because, as you can imagine, the matrix is large. Any idea is appreciated.
I wasn't able to think of a solution without loops. But you can optimize your loop by:
using logical indexing
running such a loop only M times (where M is the number of unique itemid values) instead of N times (where N is the number of rows in your table).
The solution I came up with is this.
First of all, create your table:
A=table([3;3;3;12;12;12;4;12], [10;10;10;20;20;35;10;10],{'book','briefcase','boat','windows','picture','love','day','working day'}','VariableNames',{'userid','itemid','keywords'});
Select the unique values for column itemid (your Iids):
Iids=unique(A.itemid);
Create a new, empty table which will contain the results:
NewTable=table();
And now the minimal loop I've come up with:
for id = Iids'
    % select rows with the given itemid value
    RowsWithGivenId = A(A.itemid==id,:);
    % create a new row in NewTable with the id and the (joined together) keywords from the selected rows
    NewTable = [NewTable; table(id,{strjoin(RowsWithGivenId.keywords,', ')})];
end
Also, set the new column names in NewTable:
NewTable.Properties.VariableNames = {'itemid','keywords'};
And now NewTable contains one row per itemid with the joined keywords.
Please note: since the keywords in the new table are themselves separated by commas, a csv file is not the format I recommend, because writing it with writetable(NewTable,'myfile.csv'); mixes the keyword separator with the column separator. By using ';' as the separating character in strjoin() instead of a comma, you'll get a nicer format.
Using an UPDATE command, I need to divide the rows of a PostgreSQL table (selected by a subselect) into groups; the groups will be identified by an integer value in one of the columns, and they should all be the same size. The source table contains billions of records.
For example, I need to divide 213 selected rows into groups where every group contains 50 records. The result would be:
1 - 50. row => 1
51 - 100. row => 2
101 - 150. row => 3
151 - 200. row => 4
201 - 213. row => 5
It would be no problem to do this with a loop (or with PostgreSQL window functions), but I need it to be very efficient and fast. I can't use a sequence on id because there can be gaps in these ids.
One idea is to use a random integer generator and set it as the default value for the column, but that is not usable when I need to adjust the group size.
The query below should display 213 rows with a group number from 0-4. Just add 1 if you want 1-5:
SELECT i, (row_number() OVER () - 1) / 50 AS grp
FROM generate_series(1001,1213) i
ORDER BY i;
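Since the question asks for an UPDATE, here is a hedged sketch of applying the same row_number() trick to an actual table; the names mytable, id and grp are assumptions:
UPDATE mytable t
SET grp = s.grp
FROM (
  -- number the rows and put every 50 consecutive rows into one group
  SELECT id, ((row_number() OVER (ORDER BY id) - 1) / 50) + 1 AS grp
  FROM mytable
) s
WHERE t.id = s.id;
The subquery is needed because a window function can't be used directly in the SET clause of an UPDATE.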
Alternatively, with a temporary sequence:
create temporary sequence s minvalue 0 start with 0;
select *, nextval('s') / 50 grp
from t;
drop sequence s;
I think it has the potential to be faster than the row_number version by @Richard, but the difference may not be relevant, depending on the specifics.