The common function to get the minimum value of a column is min(column), but what I want is the minimum value of a row, based on the values of 3 columns. For example, using the following base table:
+------+------+------+
| col1 | col2 | col3 |
+------+------+------+
|    2 |    1 |    3 |
|   10 |    0 |    1 |
|   13 |   12 |    2 |
+------+------+------+
I want to query it as:
+-----------+
| min_value |
+-----------+
|         1 |
|         0 |
|         2 |
+-----------+
I found the following solution, but it is written for another SQL dialect, and I am not getting it to work in PostgreSQL:
select
    (
        select min(minCol)
        from (values (t.col1), (t.col2), (t.col3)) as minCol(minCol)
    ) as minCol
from t
I could write something using a CASE statement, but I would like to write a query like the one above for PostgreSQL. Is this possible?
You can use least() (and greatest() for the maximum):
select least(col1, col2, col3) as min_value
from the_table
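For example, against the sample rows from the question (the table name t and integer columns are assumed here), a quick check might look like this. Note that in PostgreSQL least() and greatest() ignore NULL arguments and only return NULL if all arguments are NULL:
-- Assumed setup mirroring the question's sample data.
create table t (col1 int, col2 int, col3 int);
insert into t values (2, 1, 3), (10, 0, 1), (13, 12, 2);

select least(col1, col2, col3)    as min_value,
       greatest(col1, col2, col3) as max_value
from t;
-- min_value: 1, 0, 2   max_value: 3, 10, 13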
I've got a PostgreSQL database with one table that has 2 text columns, storing data like this:
id | col1                      | col2
---+---------------------------+------------------------------------------------
 1 | value_1, value_2, value_3 | name_1(date_1), name_2(date_2), name_3(date_3)
 2 | value_4, value_5, value_6 | name_4(date_4), name_5(date_5), name_6(date_6)
I need to parse the rows into a new table like this:
id | col1    | col2   | col3
---+---------+--------+--------
 1 | value_1 | name_1 | date_1
 1 | value_2 | name_2 | date_2
...| ...     | ...    | ...
 2 | value_6 | name_6 | date_6
How might I do this?
step-by-step demo: db<>fiddle
SELECT
id,
u_col1 as col1,
col2_matches[1] as col2, -- 5
col2_matches[2] as col3
FROM
mytable,
unnest( -- 3
regexp_split_to_array(col1, ', '), -- 1
regexp_split_to_array(col2, ', ') -- 2
) as u (u_col1, u_col2),
regexp_matches(u_col2, '(.+)\((.+)\)') as col2_matches -- 4
1. Split the data of your first column into an array
2. Split the data of your second column into an array of the form {a(b), c(d), ...}
3. Unnest both arrays, turning each array element into its own record
4. Split each element of the form a(b) into an array of the form {a,b}
5. Show the required columns; for col2 and col3, show the first and second array elements from step 4
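To try the query above locally (outside the fiddle), a hypothetical setup for mytable could look like this; the table and column names are taken from the question:
-- Hypothetical setup so the query above runs as-is.
CREATE TABLE mytable (id int, col1 text, col2 text);

INSERT INTO mytable VALUES
    (1, 'value_1, value_2, value_3', 'name_1(date_1), name_2(date_2), name_3(date_3)'),
    (2, 'value_4, value_5, value_6', 'name_4(date_4), name_5(date_5), name_6(date_6)');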
Take the following data and query:
create table if not exists my_example(a_group varchar(1)
,the_date date
,metric numeric(4,3)
);
INSERT INTO my_example
VALUES ('1','2018-12-14',0.514)
,('1','2018-12-15',0.532)
,('2','2018-12-15',0.252)
,('3','2018-12-14',0.562)
,('3','2018-12-15',0.361);
select
t1.the_date
,t1.a_group
,t1.metric AS current_metric
,lag(t1.metric, 1) OVER (ORDER BY t1.a_group, t1.the_date) AS previous_metric
from
my_example t1;
Which yields the following results:
+------------+---------+----------------+-----------------+
| the_date   | a_group | current_metric | previous_metric |
+------------+---------+----------------+-----------------+
| 2018-12-14 | 1       | 0.514          | NULL            |
| 2018-12-15 | 1       | 0.532          | 0.514           |
| 2018-12-15 | 2       | 0.252          | 0.532           |
| 2018-12-14 | 3       | 0.562          | 0.252           |
| 2018-12-15 | 3       | 0.361          | 0.562           |
+------------+---------+----------------+-----------------+
I expected the value of previous_metric for the lone a_group==2 row to be NULL. However, as you can see, the value is showing as 0.532, which is being picked up from the previous row. How can I modify this query to yield a value of NULL as I expected?
You need to use LAG with PARTITION BY a_group, since you want the lag values within each group rather than across the whole result:
SELECT
t1.the_date,
t1.a_group,
t1.metric AS current_metric,
LAG(t1.metric, 1) OVER (PARTITION BY t1.a_group ORDER BY t1.the_date)
AS previous_metric
FROM my_example t1;
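With the sample data above, partitioning by a_group should make previous_metric NULL on the first row of every group, including the single-row group 2 (row order may differ without an outer ORDER BY):
+------------+---------+----------------+-----------------+
| the_date   | a_group | current_metric | previous_metric |
+------------+---------+----------------+-----------------+
| 2018-12-14 | 1       | 0.514          | NULL            |
| 2018-12-15 | 1       | 0.532          | 0.514           |
| 2018-12-15 | 2       | 0.252          | NULL            |
| 2018-12-14 | 3       | 0.562          | NULL            |
| 2018-12-15 | 3       | 0.361          | 0.562           |
+------------+---------+----------------+-----------------+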
I have multiple tables with the same structure in a postgres database. I want to unify these tables into one table.
Table 1
| a | b |
----|----
| 1 | 2 |
Table 2
| a | b |
----|----
| 3 | 4 |
| 5 | 6 |
Table 3
| a | b |
----|----
| 7 | 8 |
Expected Output Table:
| a | b |
----|----
| 1 | 2 |
| 3 | 4 |
| 5 | 6 |
| 7 | 8 |
Can anyone help me with this issue?
This should be pretty simple.
Check out the UNION command: Sql Union
Basically you want to append all the data into one table.
Assuming the number of tables is limited and you don't want any automation around this, the easiest approach would be to build SELECT ... UNION queries like below:
select a,b from table1
union
select a,b from table2
union
select a,b from table3
You can use the "insert into select" to create a new table with the outout of your result.
I need the Postgres DISTINCT ON equivalent in HQL. For example, consider the following.
SELECT DISTINCT ON (Col2) Col1, Col4 FROM tablename;
on the table
Col1 | Col2 | Col3 | Col4
---------------------------------
AA1 | A | 2 | 1
AA2 | A | 4 | 2
BB1 | B | 2 | 3
BB2 | B | 5 | 4
Col2 will not be shown in the result, as below:
Col1 | Col4
------------
AA1 | 1
BB1 | 3
Can anyone give a solution in HQL? I need to use DISTINCT as it is part of a bigger query.
Sorry but I misread your question:
No, Hibernate does not support a DISTINCT ON query.
Here is a possible duplicate of your question: Postgresql 'select distinct on' in hibernate
I have a table that looks like
+-------+-----------+
| value | timestamp |
+-------+-----------+
and I'm trying to build a query that gives a result like
+-------+-----------+------------+------------------------+
| value | timestamp | MAX(value) | timestamp of max value |
+-------+-----------+------------+------------------------+
so that the result looks like
+---+----------+---+----------+
| 1 | 1.2.1001 | 3 | 1.1.1000 |
| 2 | 5.5.1021 | 3 | 1.1.1000 |
| 3 | 1.1.1000 | 3 | 1.1.1000 |
+---+----------+---+----------+
but I got stuck on joining the max values with their corresponding timestamps.
Any hints or suggestions?
Thanks in advance!
For further information (if that helps):
In the real project the max values are grouped by month and day (with a GROUP BY clause, which works, btw), but somehow I got stuck on joining the timestamps to the max values.
EDIT
Cross joins are a good idea, but I want to have them grouped by month, e.g.:
+---+----------+---+----------+
| 1 | 1.1.1101 | 6 | 1.1.1300 |
| 2 | 2.6.1021 | 5 | 5.6.1000 |
| 3 | 1.1.1200 | 6 | 1.1.1300 |
| 4 | 1.1.1040 | 6 | 1.1.1300 |
| 5 | 5.6.1000 | 5 | 5.6.1000 |
| 6 | 1.1.1300 | 6 | 1.1.1300 |
+---+----------+---+----------+
EDIT 2
I've added a fiddle with some sample data and an example of the current query.
http://sqlfiddle.com/#!1/efa42/1
How to add the corresponding timestamp to the maximum?
Try a cross join with two subqueries: the first one selects all records, the second one gets the single row that represents the timestamp of the max value, <3;"1000-01-01"> for example.
SELECT col_value, col_timestamp, max_col_value, col_timestamp_of_max_value
FROM table1
cross join
(
    select max(col_value) max_col_value, col_timestamp col_timestamp_of_max_value
    from table1
    group by col_timestamp
    order by max_col_value desc
    limit 1
) A -- one row that represents the time_stamp of the max value, ie: <3;"1000-01-01">
Use a window function, since you are using PostgreSQL:
select *, max(value) over (), max("timestamp") over () from the_table
That gives you the max value and the max timestamp across all rows, repeated on every row.
http://www.postgresql.org/docs/9.1/static/tutorial-window.html
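If what is needed is the timestamp belonging to the max value (rather than the max timestamp), a first_value() window ordered by value should also work. A minimal sketch (not part of the original answer), assuming a table the_table with columns value and timestamp:
-- Sketch only: first_value() over (order by value desc) returns, on every row,
-- the timestamp of the row that holds the maximum value.
select
    value,
    "timestamp",
    max(value) over ()                                  as max_value,
    first_value("timestamp") over (order by value desc) as timestamp_of_max_value
from the_table;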