Combine different sorting results in one query - PostgreSQL

tasktime
id | name | start_date | end_date ...
1 | a | 2016-12-22 | 2017-01-01
2 | b | 2016-05-01 | 2016-05-31
3 | c | 2016-06-01 | 2016-12-25
Should I use GROUP BY, or ..
I tried the query below and got the result 1 2 3.
Even if I change start_date to asc or end_date to desc, nothing happens:
SELECT
tt.*
FROM tasktime tt
ORDER BY tt.name asc NULLS LAST
, tt.start_date desc NULLS LAST
, tt.end_date asc NULLS LAST
UPDATE
I want to combine different sorting results:
SELECT
tt.*
FROM tasktime tt
ORDER BY tt.end_date asc NULLS LAST
then take the above result and apply
ORDER BY tt.start_date desc NULLS LAST
then take that result and apply
ORDER BY tt.name asc NULLS LAST
Please close this question ... I realised what I want, and this question is totally wrong.

Like here?
t=# create table tasktime (id int, name text, start_date date, end_date date);
CREATE TABLE
t=# insert into tasktime values (1,'a','2016-12-22', '2017-01-01'), (2, 'b', '2016-05-01', '2016-05-31'), (3, 'c', '2016-06-01','2016-12-25');
INSERT 0 3
t=# SELECT
t-# tt.*
t-# FROM tasktime tt
t-# order by tt.end_date asc NULLS LAST
t-# , tt.start_date desc NULLS LAST
t-# , tt.name asc NULLS LAST;
 id | name | start_date |  end_date
----+------+------------+------------
  2 | b    | 2016-05-01 | 2016-05-31
  3 | c    | 2016-06-01 | 2016-12-25
  1 | a    | 2016-12-22 | 2017-01-01
(3 rows)
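A side note on the query from the question: ORDER BY only consults later keys to break ties on the earlier ones. Since name is unique in this sample, ordering by tt.name first already fully determines the order (ids 1, 2, 3, exactly what the question reports), so flipping the direction of start_date or end_date can never change the result. For illustration only, against the same table:
SELECT tt.*
FROM tasktime tt
-- name is unique here, so the trailing key never acts as a tiebreaker
ORDER BY tt.name asc NULLS LAST
       , tt.start_date desc NULLS LAST;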

Related

How to drop rows if a variale is less than x, in SQL

I have the following query:
query = """
with double_entry_book as (
SELECT to_address as address, value as value
FROM `bigquery-public-data.crypto_ethereum.traces`
WHERE to_address is not null
AND block_timestamp < '2022-01-01 00:00:00'
AND status = 1
AND (call_type not in ('delegatecall', 'callcode', 'staticcall') or call_type is null)
union all
-- credits
SELECT from_address as address, -value as value
FROM `bigquery-public-data.crypto_ethereum.traces`
WHERE from_address is not null
AND block_timestamp < '2022-01-01 00:00:00'
AND status = 1
AND (call_type not in ('delegatecall', 'callcode', 'staticcall') or call_type is null)
)
SELECT address,
sum(value) / 1000000000000000000 as balance
from double_entry_book
group by address
order by balance desc
LIMIT 15000000
"""
In the last part, I want to drop rows where "balance" is less than, let's say, 0.02, and then group, order, etc. I imagine this should be simple code. Any help will be appreciated!
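One straightforward way to do that filtering is a HAVING clause on the aggregate, which drops groups below the threshold before the ORDER BY and LIMIT are applied. A sketch of just the final SELECT, reusing the double_entry_book CTE from the query above (0.02 is the example threshold from the question):
SELECT address,
       SUM(value) / 1000000000000000000 AS balance
FROM double_entry_book
GROUP BY address
-- keep only addresses whose aggregated balance is at least 0.02
HAVING SUM(value) / 1000000000000000000 >= 0.02
ORDER BY balance DESC
LIMIT 15000000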
We can run the DELETE inside a CTE and use RETURNING to get the IDs of the rows being deleted, but they still exist until the transaction is committed.
CREATE TABLE t (
id serial,
variale int);
insert into t (variale) values
(1),(2),(3),(4),(5);
✓
5 rows affected
with del as
(delete from t
where variale < 3
returning id)
select
t.id,
t.variale,
del.id ids_being_deleted
from t
left join del
on t.id = del.id;
id | variale | ids_being_deleted
-: | ------: | ----------------:
1 | 1 | 1
2 | 2 | 2
3 | 3 | null
4 | 4 | null
5 | 5 | null
select * from t;
id | variale
-: | ------:
3 | 3
4 | 4
5 | 5
db<>fiddle here

MySQL group by timestamp difference

I need to write a MySQL query which will group results by the difference between timestamps.
Is it possible?
I have a table with locations; every row has created_at (timestamp), and I want to group results by a difference > 1 min.
Example:
id | lat | lng | created_at
1. | ... | ... | 2020-05-03 06:11:35
2. | ... | ... | 2020-05-03 06:11:37
3. | ... | ... | 2020-05-03 06:11:46
4. | ... | ... | 2020-05-03 06:12:48
5. | ... | ... | 2020-05-03 06:12:52
Result of this data should be 2 groups (1,2,3) and (4,5)
It depends on what you actually want. If you want to group together records that belong to the same minute, regardless of the difference with the previous record, then simple aggregation is enough:
select
date_format(created_at, '%Y-%m-%d %H:%i:00') date_minute,
min(id) min_id,
max(id) max_id,
min(created_at) min_created_at,
max(created_at) max_created_at,
count(*) no_records
from mytable
group by date_minute
On the other hand, if you want to build groups of consecutive records that have less than a 1 minute gap in between, this is a gaps-and-islands problem. Here is one way to solve it using window functions (available in MySQL 8.0):
select
min(id) min_id,
max(id) max_id,
min(created_at) min_created_at,
max(created_at) max_created_at,
count(*) no_records
from (
select
t.*,
sum(case when created_at < lag_created_at + interval 1 minute then 0 else 1 end)
over(order by created_at) grp
from (
select
t.*,
lag(created_at) over(order by created_at) lag_created_at
from mytable t
) t
) t
group by grp

How to force query to return only first row from window?

I have data:
id | price | date
1 | 25 | 2019-01-01
2 | 35 | 2019-01-01
1 | 27 | 2019-02-01
2 | 37 | 2019-02-01
Is it possible to write a query which will return only the first row from a window? Something like LIMIT 1, but for the window OVER( date )?
I expect the following result:
id | price | date
1 | 25 | 2019-01-01
1 | 27 | 2019-02-01
Or ignore the whole window if the first window row has NULL:
id | price | date
1 | NULL | 2019-01-01
2 | 35 | 2019-01-01
1 | 27 | 2019-02-01
2 | 37 | 2019-02-01
result:
1 | 27 | 2019-02-01
Order the rows by date and id, and take only the first row per date.
Then remove those where the price is NULL.
SELECT *
FROM (SELECT DISTINCT ON (date)
id, price, date
FROM mytable
ORDER BY date, id
) AS q
WHERE price IS NOT NULL;
@Laurenz, let me provide a bit more explanation.
select distinct on (<fldlist>) * from <table> order by <fldlist+>;
is equivalent to the much more complex query:
select * from (
  select row_number() over (partition by <fldlist> order by <fldlist+>) as rn, *
  from <table>
) t
where rn = 1;
And here <fldlist> should be the beginning part of (or equal to) <fldlist+>.
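To make the equivalence concrete with the sample data from this question (so <fldlist> is date and <fldlist+> is date, id), the row_number() form would look roughly like this; a sketch only, assuming the table is named mytable as in the answer above:
select id, price, date
from (
  select *,
         -- date is constant within each partition, so ordering by id
         -- picks the same row as DISTINCT ON (date) ... ORDER BY date, id
         row_number() over (partition by date order by id) as rn
  from mytable
) t
where rn = 1
  and price is not null;
The outer WHERE then drops the rows whose first price is NULL, just like the accepted query does.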
As Myon on IRC said:
if you want to use a window function in WHERE, you need to put it into a subselect first
So the target query is:
select * from (
select
*,
agg_function( my_field ) OVER( PARTITION BY other_field ) as agg_field
from sometable
) x
WHERE agg_field <condition>
In my case I have the following query:
SELECT * FROM (
SELECT *,
FIRST_VALUE( p.price ) over( PARTITION BY crate.app_period ORDER BY st.DEPTH ) AS first_price,
ROW_NUMBER() over( PARTITION BY crate.app_period ORDER BY st.DEPTH ) AS row_number
FROM st
LEFT JOIN price p ON <COND>
LEFT JOIN currency_rate crate ON <COND>
) p
WHERE p.row_number = 1 AND p.first_price IS NOT null
Here I select only the first row from each group, and only where the price IS NOT NULL.

Iterate through rows, compare them against each other and store results in another table

I have a table that contains the following rows:
product_id | order_date
A | 12/04/12
A | 01/11/13
A | 01/21/13
A | 03/05/13
B | 02/14/13
B | 03/09/13
What I now need is an overview for each month: how many products have been bought for the first time (= have not been bought the month before), how many are existing products (= have been bought the month before), and how many have not been purchased within a given month. Taking the sample above as input, the script should deliver the following result, regardless of what period of time is in the data:
month | new | existing | nopurchase
12/2012 | 1 | 0 | 0
01/2013 | 0 | 1 | 0
02/2013 | 1 | 0 | 1
03/2013 | 1 | 1 | 0
It would be great to get a first hint at how this could be solved so I'm able to continue.
Thanks!
SQL Fiddle
with t as (
select product_id pid, date_trunc('month', order_date)::date od
from t
group by 1, 2
)
select od,
sum(is_new::integer) "new",
sum(is_existing::integer) existing,
sum(not_purchased::integer) nopurchase
from (
select od,
lag(t_pid) over(partition by s_pid order by od) is null and t_pid is not null is_new,
lag(t_pid) over(partition by s_pid order by od) is not null and t_pid is not null is_existing,
lag(t_pid) over(partition by s_pid order by od) is not null and t_pid is null not_purchased
from (
select t.pid t_pid, s.pid s_pid, s.od
from
t
right join
(
select pid, s.od
from
t
cross join
(
select date_trunc('month', d)::date od
from
generate_series(
(select min(od) from t),
(select max(od) from t),
'1 month'
) s(d)
) s
group by pid, s.od
) s on t.od = s.od and t.pid = s.pid
) s
) s
group by 1
order by 1

T-SQL: How to use GROUP BY and getting the value which excesses 60%?

Sorry for the bad title; I don't know how to describe my problem.
I have the following table:
| ItemID | Date |
-------------------------
| 1 | 01.01.10 |
| 1 | 03.01.10 |
| 1 | 05.01.10 |
| 1 | 06.01.10 |
| 1 | 10.01.10 |
| 2 | 05.01.10 |
| 2 | 10.01.10 |
| 2 | 20.01.10 |
Now I want to GROUP BY ItemID, and for the date I want to get the value which exceeds 60%. What I mean is that for item 1 I have five rows, so each has a share of 20%, and for item 2 I have three rows, so each has a share of 33.33%. So for item 1 I need the 3rd and for item 2 the 2nd value, so that the result looks like this:
| ItemID | Date |
-------------------------
| 1 | 06.01.10 |
| 2 | 10.01.10 |
Is there an easy way to get this data? Maybe using OVER?
Thank you
Torben
with NumItems as
( select itemID, count(*) as numOfItems from table group by itemID)
),
rowNums as
(
select itemID,Date, row_number() over (partition by ItemID order by date asc) as rowNum
from table
)
select itemID, min(Date) from
rowNums a inner join NumItems b on a.itemID = b.ItemID
where cast(b.rowNum as float) / cast(numOfItems as float) >= 0.6
group by itemID
That should do it, although I am certain it can be written with only one table scan. It should work nicely though.
The provided script contained a few errors. Below is a working one:
with NumItems as
(
select itemID, count(*) as numOfItems from table group by itemID
),
rowNums as
(
select itemID, Date, row_number() over (partition by ItemID order by date asc) as rowNum
from table
)
select a.itemID, min(a.Date) from
rowNums a inner join NumItems b on a.itemID = b.ItemID
where cast(a.rowNum as float) / cast(numOfItems as float) >= 0.6
group by a.itemID
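As noted above, the same result can be produced with a single scan of the table by computing the per-item row count as a window function instead of joining a separate CTE. A sketch under the same assumptions (table is the placeholder table name used throughout):
select itemID, min(Date) as Date
from
(
  select itemID, Date,
         -- position of the row within its item, ordered by date
         row_number() over (partition by itemID order by Date asc) as rowNum,
         -- total rows for the item, computed in the same pass
         count(*) over (partition by itemID) as numOfItems
  from table
) a
where cast(a.rowNum as float) / cast(a.numOfItems as float) >= 0.6
group by itemID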