PostgreSQL BETWEEN selects a record when the condition is not fulfilled

Why does this query return a record?
db2=> select * FROM series WHERE start <= '882001010000' AND "end" >= '882001010000' ORDER BY timestamp DESC LIMIT 1;
  id   |      timestamp      |  start   |   end
-------+---------------------+----------+----------
 23443 | 2016-12-23 17:10:05 | 88160000 | 88209999
or with BETWEEN:
db2=> select * FROM series WHERE '882001010000' BETWEEN start AND "end" ORDER BY timestamp DESC LIMIT 1;
  id   |      timestamp      |  start   |   end
-------+---------------------+----------+----------
 23443 | 2016-12-23 17:10:05 | 88160000 | 88209999
start and end are TEXT columns.

They are returning records because you are doing the comparisons as strings, not as numbers.
Hence '8' is between '7000000' and '9000', because the comparison is done one character at a time.
If you want numeric comparisons, you can cast the values to numbers. Or, better yet, store the values as numerics; Postgres's numeric type supports very large precision.
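For example, a quick sketch of both options against the table from the question (the casts assume the columns only ever contain digit strings, otherwise they will fail):

-- Compare as numbers by casting the text columns:
select * FROM series
WHERE start::numeric <= 882001010000
  AND "end"::numeric >= 882001010000
ORDER BY timestamp DESC LIMIT 1;

-- Or, better, convert the columns themselves:
ALTER TABLE series
  ALTER COLUMN start TYPE numeric USING start::numeric,
  ALTER COLUMN "end" TYPE numeric USING "end"::numeric;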

Related

Power Query: some values are counted from zero when count should start at a higher value

I have a column with values counting occurrences.
I am trying to continue the series in Power Query.
I am thus trying to add 1 to the max of the given column.
The ID column has rows with letter tags: AB or BE. Each tag is followed by a number from specific ranges. For both AB and BE, the numbers run first from 0000 to 3000 and then from 3001 to 6000.
I thus have the following possibilities: from AB0000 to AB3000, from AB3001 to AB6000, from BE0000 to BE3000, and from BE3001 to BE6000.
Each category maps to a specific item in my geography column, from the other workbook: from AB0000 to AB3000 it is ItalyZ, from AB3001 to AB6000 it is ItalyB, from BE0000 to BE3000 it is UKY, and from BE3001 to BE6000 it is UKM.
I am thus trying to find the highest number associated with the first AB category, the second AB category, the first BE category, and the second BE category.
My issue is that for some values, there is simply "nothing" yet in the source file.
This means that there is no occurrence of UKM yet, for example.
Here is an example with no UKM or UKY:
| Max  | Geography |
|------|-----------|
| 0562 | ItalyZ    |
| 0563 | ItalyZ    |
Hence, I have the following result:
| Increment | Place  |
|-----------|--------|
| 0564      | ItalyZ |
| 0565      | ItalyZ |
| 0565      | ItalyZ |
| null      | UKM    |
Here is the Power Query code used:
let
    Source = #table({"Prefix", "Seq_Start", "Seq_End", "GeoLocation"},
        {{"AB", 0, 2999, "ItalyZ"}, {"AB", 3000, 6000, "ItalyB"}, {"BC", 0, 299, "UKY"}, {"BC", 3000, 6000, "UKM"}}),
    #"Changed Type" = Table.TransformColumnTypes(Source, {{"Seq_Start", Int64.Type}, {"Seq_End", Int64.Type}}),
    #"Merged Queries" = Table.NestedJoin(#"Changed Type", {"Prefix"}, HighestID, {"Prefix"}, "HighestID", JoinKind.LeftOuter),
    #"Expanded HighestID" = Table.ExpandTableColumn(#"Merged Queries", "HighestID", {"Number"}, {"Number"}),
    #"Filtered Rows" = Table.SelectRows(#"Expanded HighestID", each [Number] >= [Seq_Start] and [Number] <= [Seq_End]),
    #"Grouped Rows" = Table.Group(#"Filtered Rows", {"Prefix", "Seq_Start", "Seq_End", "GeoLocation"}, {{"NextSeq", each List.Max([Number]) + 1, type number}})
in
    #"Grouped Rows"
I would like to know how I can ensure that when I have the first occurrence of a value, I do not get "null" but "0000" (or 0), and so on for the next occurrences.
For example, if I have 0 occurrences of UKY before, I do not know why, but the end result is as follows:
| Increment | Place |
|-----------|-------|
| 1         | UKM   |
| 2         | UKM   |
Which is not ideal, because UKM should start at 3000. And because I had no values recorded before, it starts with "null" only and then 1, 2... rather than 3001 and 3002.
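One possible way to get Seq_Start instead of null for ranges with no occurrences yet is to group the expanded (unfiltered) table and take the in-range maximum per range, falling back to Seq_Start when the range is still empty. This is only a sketch against the query from the question: HighestID is assumed to be the existing query holding the used numbers, and the UKY range is written as 0 to 2999 here to match the ranges described in the text.

let
    Source = #table({"Prefix", "Seq_Start", "Seq_End", "GeoLocation"},
        {{"AB", 0, 2999, "ItalyZ"}, {"AB", 3000, 6000, "ItalyB"}, {"BC", 0, 2999, "UKY"}, {"BC", 3000, 6000, "UKM"}}),
    #"Changed Type" = Table.TransformColumnTypes(Source, {{"Seq_Start", Int64.Type}, {"Seq_End", Int64.Type}}),
    #"Merged Queries" = Table.NestedJoin(#"Changed Type", {"Prefix"}, HighestID, {"Prefix"}, "HighestID", JoinKind.LeftOuter),
    #"Expanded HighestID" = Table.ExpandTableColumn(#"Merged Queries", "HighestID", {"Number"}, {"Number"}),
    // Group every range (even ones with no matching numbers) and compute the next sequence value
    #"Grouped Rows" = Table.Group(
        #"Expanded HighestID",
        {"Prefix", "Seq_Start", "Seq_End", "GeoLocation"},
        {{"NextSeq", each
            let
                start   = List.First([Seq_Start]),
                finish  = List.First([Seq_End]),
                inRange = List.Select([Number], (n) => n <> null and n >= start and n <= finish)
            in
                // Fall back to the range's starting number when nothing exists yet
                if List.IsEmpty(inRange) then start else List.Max(inRange) + 1,
          type number}})
in
    #"Grouped Rows"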

PostgreSQL timestamp with negative

I have a set of date and time rows of varchar type which look like this:
| TimeLocal |
| 2017-11-06 12:13:55 -21:18 |
| 2017-11-06 12:18:50 -21:18 |
| 2017-11-06 12:13:09 -21:18 |
I want to convert this column into a timestamp:
Select TimeLocal::timestamp as New_Time_Local From tb1
However, I am getting this error:
ERROR: time zone displacement out of range
This happens only for the datetimes with -21; the other dates were converted successfully.
Any help would be much appreciated. Thanks!
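The error occurs because -21:18 is larger than the UTC offset PostgreSQL will accept (roughly anything beyond ±15:59 is rejected as "time zone displacement out of range"), which is why only the -21 rows fail. If that out-of-range offset can simply be discarded, a minimal sketch using the table and column names from the question would be:

-- Parse only the date/time part and ignore the trailing offset
SELECT to_timestamp(left(TimeLocal, 19), 'YYYY-MM-DD HH24:MI:SS') AS New_Time_Local
FROM tb1;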

How to create a Postgres trigger that calculates values

How would you create a trigger that uses the values of the row being inserted in a calculation, so that the value actually stored is the transformed one?
Let's say I have this table labor_rates,
+---------------+-----------------+--------------+------------+
| labor_rate_id | rate_per_minute | unit_minutes | created_at |
+---------------+-----------------+--------------+------------+
| bigint | numeric | numeric | timestamp |
+---------------+-----------------+--------------+------------+
Each time a new record is created, I need the rate to be calculated as rate/unit (the smallest unit here is a minute).
So, for example, when inserting a new record:
INSERT INTO labor_rates(rate, unit)
VALUES (60, 480);
It would create a new record with these values:
+---------------+-----------------+--------------+----------------------------+
| labor_rate_id | rate_per_minute | unit_minutes | created_at |
+---------------+-----------------+--------------+----------------------------+
| 1000000 | 1.1979 | 60 | 2017-03-16 01:59:47.208111 |
+---------------+-----------------+--------------+----------------------------+
One could argue that this should be left as a calculated field instead of storing the calculated value. But in this case, it would be best if the calculated value is stored.
I am fairly new to triggers so any help would be much appreciated.
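A minimal sketch of a BEFORE INSERT trigger, assuming the raw rate and the unit arrive in the rate_per_minute and unit_minutes columns and that rate_per_minute should be rewritten as rate divided by unit; the function and trigger names here are made up, so adjust them and the column names to the real schema:

CREATE OR REPLACE FUNCTION labor_rates_calc_rate()  -- hypothetical function name
RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    -- Rewrite the rate as a per-minute value before the row is stored
    NEW.rate_per_minute := NEW.rate_per_minute / NEW.unit_minutes;
    RETURN NEW;
END;
$$;

CREATE TRIGGER labor_rates_before_insert
BEFORE INSERT ON labor_rates
FOR EACH ROW
EXECUTE FUNCTION labor_rates_calc_rate();  -- PostgreSQL 11+; use EXECUTE PROCEDURE on older versions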

Column-wise autocomplete

I have a table in a PostgreSQL database with four columns that contain increasingly more detailed information (think state->city->street->number), along with a column where everything is concatenated according to some simple formatting rules. Example:
| kommun | trakt | block | enhet | beteckning |
| Mora | Gislövs Läge | 9 | 16 | Mora Gislövs Läge 9:16 |
| Mora | Gisslaved | * | 8 | Mora Gisslaved 8 |
| Mora | Gisslaved | * | 9 | Mora Gisslaved 9 |
| Lilla Edet | Sanda | GA | 1 | Lilla Edet Sanda GA:1 |
A web service uses this table to implement a word-wise autocomplete, where the user gets input suggestions as they drill down. An input of mora gis will result in
["Mora Gislövs", "Mora Gisslaved"]
Currently, this is done by splitting the concatenated column by word in this query:
select distinct trim(substring(beteckning from '(^(\S+\s?){NUMPARTS})')) as bet
from beteckning_ac
where upper(beteckning) like upper('mora gis%')
order by bet
Where NUMPARTS is the number of words in the input (2 in this case).
Now I want the autocomplete to be done column-wise rather than word-wise, so mora gis would now result in this instead:
["Mora Gislövs Läge", "Mora Gisslaved"]
Since the first two columns can contain an arbitrary number of words, I can no longer use the input to determine how many columns to include in the response. Is there a way to do this, or have I maybe gone about this autocomplete business all wrong?
CREATE OR REPLACE FUNCTION get_auto(text)
-- $1 is here your input
RETURNS setof text
LANGUAGE plpgsql
AS $function$
declare
    NUMPARTS int := array_length(regexp_split_to_array($1, ' '), 1);
begin
    return query
    select
        case
            when (NUMPARTS = 1) then kommun
            when (NUMPARTS = 2) then kommun||' '||trakt
            when (NUMPARTS = 3) then kommun||' '||trakt||' '||block
            when (NUMPARTS = 4) then kommun||' '||trakt||' '||block||' '||enhet
            -- alter if you want to
        end
    from
        auto_complete -- your tablename here
    where
        beteckning like $1||'%';
end;
$function$;
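For reference, a call with the example input would look like this (assuming the answer's auto_complete table holds the data from the question):

SELECT * FROM get_auto('Mora Gis');
-- With the sample data this yields 'Mora Gislövs Läge' once and 'Mora Gisslaved' twice;
-- add DISTINCT inside the function if duplicates should be collapsed.

Note that unlike the original query, the function's LIKE is case-sensitive, so an input of 'mora gis' would not match; using ILIKE, or upper() on both sides as in the question, would restore the original behaviour.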

Calculate time range in org-mode table

Given a table that has a column of time ranges e.g.:
| <2015-10-02>--<2015-10-24> |
| <2015-10-05>--<2015-10-20> |
....
how can I create a column showing the results of org-evaluate-time-range?
If I attempt something like:
#+TBLFM: $2='(org-evaluate-time-range $1)
the 2nd column is populated with
Time difference inserted
in every row.
It would also be nice to generate the same result from two different columns with, say, start date and end date instead of creating one column of time ranges out of those two.
If you have your date range split into two columns, a simple subtraction works and returns the number of days:
| <2015-10-05> | <2015-10-20> | 15 |
| <2013-10-02 08:30> | <2015-10-24> | 751.64583 |
#+TBLFM: $3=$2-$1
Using org-evaluate-time-range is also possible, and you get a nice formatted output:
| <2015-10-02>--<2015-10-24> | 22 days |
| <2015-10-05>--<2015-10-20> | 15 days |
| <2015-10-22 Thu 21:08>--<2015-08-01> | 82 days 21 hours 8 minutes |
#+TBLFM: $2='(org-evaluate-time-range)
Note that the only optional argument that org-evaluate-time-range accepts is a flag to indicate insertion of the result in the current buffer, which you don't want.
Now, how this function (without arguments) gets the correct time range when evaluated is a complete mystery to me; pure magic(!)