Calculate the first actually-bought item and populate the first_actual_item column in tr2_invoice.
SELECT cust_id, total_amount, items, MIN(time_in)
FROM tr_invoice WHERE total_amount <> 0
GROUP BY cust_id;
ERROR: column "tr_invoice.total_amount" must appear in the GROUP BY clause or be used in an aggregate function
LINE 1: SELECT cust_id, total_amount, items, MIN...
If I use AVG(), MIN(), MAX(), or ARRAY_AGG() as the aggregate for total_amount and items, the output differs from what the same query returns on MySQL. Is there a better way to solve this?
In PostgreSQL, every selected field must either appear in the GROUP BY clause or be wrapped in an aggregate function. There are two ways around this:
1. Use a window function instead of GROUP BY:
SELECT cust_id, total_amount, items,
       MIN(time_in) OVER (PARTITION BY cust_id) AS min_time_in
FROM tr_invoice
WHERE total_amount <> 0;
2. Join the table back to a grouped subquery:
SELECT
    b.cust_id,
    b.total_amount,
    b.items,
    a.min_time_in
FROM (
    SELECT cust_id, MIN(time_in) AS min_time_in
    FROM tr_invoice
    WHERE total_amount <> 0
    GROUP BY cust_id
) a
JOIN tr_invoice b ON a.cust_id = b.cust_id;
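Note that both variants return every invoice row per customer; neither writes anything into tr2_invoice yet. A minimal sketch of that final step, assuming tr2_invoice has a cust_id column to match on (PostgreSQL's DISTINCT ON keeps each customer's earliest row):

UPDATE tr2_invoice t2
SET first_actual_item = f.items
FROM (
    -- one row per customer: the item from the earliest non-zero invoice
    SELECT DISTINCT ON (cust_id) cust_id, items
    FROM tr_invoice
    WHERE total_amount <> 0
    ORDER BY cust_id, time_in
) f
WHERE t2.cust_id = f.cust_id;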
I have a table with an optional fields column of type jsonb[]. I am using a lateral unnest to break those fields out into rows, then an aggregate to combine them again in the order I want.
SELECT id, name, ARRAY_AGG(v ORDER BY v->'priority' DESC) as fields
FROM results, LATERAL UNNEST(fields) AS f(v)
GROUP BY 1, 2
But because fields is optional, not every row has values to unnest in the first place. Is there a way to make the lateral unnest emit at least one row even when the array is empty? Or is there a better way to apply an order to a jsonb[] column on the way out, so I can avoid the lateral unnest altogether?
Use a LEFT JOIN LATERAL:
SELECT
id
, name
, ARRAY_AGG(v ORDER BY v->'priority' DESC) as fields
FROM results
LEFT JOIN LATERAL UNNEST(fields) AS f(v) ON TRUE
GROUP BY 1, 2
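One caveat: for rows where fields was NULL or empty, the left join supplies v = NULL, so ARRAY_AGG(v) yields a one-element {NULL} array. A small variant, assuming you would rather get NULL for those rows (FILTER needs PostgreSQL 9.4+):

SELECT
  id
, name
, ARRAY_AGG(v ORDER BY v->'priority' DESC) FILTER (WHERE v IS NOT NULL) as fields
FROM results
LEFT JOIN LATERAL UNNEST(fields) AS f(v) ON TRUE
GROUP BY 1, 2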
I have a query that calculates a percentage for every product. In this query I created a computed column named 'yüzde' ('percent' in Turkish). Now I want to transfer the yüzde values into a column of another table with an UPDATE, wherever the product ids match.
I think I need to write a stored procedure. How can I do that?
SELECT [ProductVariantId] ,
count([ProductVariantId]) as bedensayısı,
count([ProductVariantId]) * 100.0 / (SELECT Top 1 Count(*) as Total
FROM [Live_ADL].[dbo].[_INV_ProductCombinationAttributes]
Where Size LIKE '%[^0-9]%' and [StockQuantity]>0
Group by [ProductVariantId]
order by Total Desc) as yüzde
FROM [Live_ADL].[dbo].[_INV_ProductCombinationAttributes]
Where Size LIKE '%[^0-9]%' and [StockQuantity]>0
group by [ProductVariantId]
order by yüzde desc
You don't really need a stored procedure; you can do it inline, using a CTE for instance, something along these lines:
; with tabyuzde as
(
SELECT [ProductVariantId] ,
count([ProductVariantId]) as bedensayısı,
count([ProductVariantId]) * 100.0 / (SELECT Top 1 Count(*) as Total
FROM [Live_ADL].[dbo].[_INV_ProductCombinationAttributes]
Where Size LIKE '%[^0-9]%' and [StockQuantity]>0
Group by [ProductVariantId]
order by Total Desc) as yüzde
FROM [Live_ADL].[dbo].[_INV_ProductCombinationAttributes]
Where Size LIKE '%[^0-9]%' and [StockQuantity]>0
group by [ProductVariantId]
)
update x
set x.othertablevalue = t.yüzde
from othertable x
join tabyuzde t on x.ProductVariantId = t.ProductVariantId
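If a stored procedure is still preferred (the question did ask how), the same statement can simply be wrapped. A minimal sketch; the procedure name and the othertable / othertablevalue placeholders are hypothetical stand-ins for the real target table:

CREATE PROCEDURE dbo.Update_Yuzde
AS
BEGIN
    SET NOCOUNT ON;

    -- same CTE + UPDATE as above, runnable on demand via EXEC dbo.Update_Yuzde
    with tabyuzde as
    (
        SELECT [ProductVariantId],
               count([ProductVariantId]) * 100.0 / (SELECT Top 1 Count(*) as Total
                   FROM [Live_ADL].[dbo].[_INV_ProductCombinationAttributes]
                   Where Size LIKE '%[^0-9]%' and [StockQuantity] > 0
                   Group by [ProductVariantId]
                   order by Total Desc) as yüzde
        FROM [Live_ADL].[dbo].[_INV_ProductCombinationAttributes]
        Where Size LIKE '%[^0-9]%' and [StockQuantity] > 0
        group by [ProductVariantId]
    )
    update x
    set x.othertablevalue = t.yüzde
    from othertable x
    join tabyuzde t on x.ProductVariantId = t.ProductVariantId;
END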
Using the example from this post: https://blogs.oracle.com/datawarehousing/entry/managing_overflows_in_listagg
The following statement:
SELECT
deptno,
LISTAGG(ename, ';') WITHIN GROUP (ORDER BY empno) AS namelist
FROM emp
GROUP BY deptno;
will generate the following output:
DEPTNO NAMELIST
---------- ----------------------------------------
10 CLARK;KING;MILLER
20 SMITH;JONES;SCOTT;ADAMS;FORD
30 ALLEN;WARD;MARTIN;BLAKE;TURNER;JAMES
Let’s assume that the above statement does not run because each row returned by our LISTAGG function is limited to 15 characters. (On Amazon Redshift the actual limit is 65535.)
We would want the following to be returned in this case:
DEPTNO NAMELIST
---------- ----------------------------------------
10 CLARK;KING
10 MILLER
20 SMITH;JONES
20 SCOTT;ADAMS
20 FORD
30 ALLEN;WARD
30 MARTIN;BLAKE
30 TURNER;JAMES
What would be the best way to recreate this result in Amazon Redshift to avoid any data loss and taking speed into consideration?
It's possible to achieve this with two subqueries:
First:
SELECT id, field,
       SUM(LENGTH(field) + 1) OVER
           (PARTITION BY id ORDER BY RANDOM() ROWS UNBOUNDED PRECEDING) AS total_length_now
FROM my_schema.my_table
First we calculate the running total of characters for each id. The window function computes it incrementally, row by row. In the ORDER BY you can use any unique field you have; if there is none, RANDOM() or a hash function will do, but the field must be unique, otherwise the running sum will not behave as we want.
The '+1' added to each length accounts for the semicolon delimiter that the LISTAGG call will insert.
Second:
SELECT id, field, total_length_now / 65535 as sub_id
FROM (sub_query_1)
Now we derive a sub_id from the length we calculated before: integer division by the limit (65535 in this case), so each time total_length_now crosses a multiple of the limit, the quotient, and hence sub_id, increases by one.
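For instance, with the example limit of 10 used below, running lengths 5, 11, and 18 give sub_ids 5/10 = 0, 11/10 = 1, and 18/10 = 1: the first row lands in one group and the next two rows in another.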
Last Step
SELECT id, sub_id, listagg(field, ';') as namelist
FROM (sub_query_2)
GROUP BY id, sub_id
ORDER BY id, sub_id
Now we can simply call the listagg function grouping by id and sub_id, since each group cannot exceed the size limit.
Complete query
SELECT id, sub_id, LISTAGG(field, ';') AS namelist
FROM (
    SELECT id, field, total_length_now / 65535 AS sub_id
    FROM (
        SELECT id, field,
               SUM(LENGTH(field) + 1) OVER
                   (PARTITION BY id ORDER BY field ROWS UNBOUNDED PRECEDING) AS total_length_now
        FROM support.test
    ) q1
) q2
GROUP BY id, sub_id
ORDER BY id, sub_id
Example with your data (with size limit = 10)
First and second query output:
id, field, total_length_now, sub_id
10,KING,5,0
10,CLARK,11,1
10,MILLER,18,1
20,ADAMS,6,0
20,SMITH,12,1
20,JONES,18,1
20,FORD,23,2
20,SCOTT,29,2
30,JAMES,6,0
30,BLAKE,12,1
30,WARD,17,1
30,MARTIN,24,2
30,TURNER,31,3
30,ALLEN,37,3
Final query output:
id,sub_id,namelist
10,0,KING
10,1,CLARK;MILLER
20,0,ADAMS
20,1,SMITH;JONES
20,2,FORD;SCOTT
30,0,JAMES
30,1,BLAKE;WARD
30,2,MARTIN
30,3,TURNER;ALLEN
It is possible to create a partial list, and then the rest of the values as separate rows, in one go; but if the number of rows is unconstrained you really need a loop to turn the remainder into a list, then the rows remaining after that, and so on. So this is really a task for Apache Spark (or any other map-reduce technology).
With a query like this (simplified for clarity):
SELECT 'East' AS name, *
FROM events
WHERE event_timestamp BETWEEN '2015-06-14 06:15:00' AND '2015-06-21 06:15:00'
UNION
SELECT 'West' AS name, *
FROM events
WHERE event_timestamp BETWEEN '2015-06-14 06:15:00' AND '2015-06-21 06:15:00'
UNION
SELECT 'Both' AS name, *
FROM events
WHERE event_timestamp BETWEEN '2015-06-14 06:15:00' AND '2015-06-21 06:15:00'
I want to customise the order of the resulting rows. Something like:
ORDER BY name='East', name='West', name='Both'
Or
ORDER BY
CASE
WHEN name='East' THEN 1
WHEN name='West' THEN 2
WHEN name='Both' THEN 3
ELSE 4
END;
However, Postgres complains with:
ERROR: invalid UNION/INTERSECT/EXCEPT ORDER BY clause
DETAIL: Only result column names can be used, not expressions or functions.
HINT: Add the expression/function to every SELECT, or move the UNION into a FROM clause.
Do I have any alternative?
Wrap it in a derived table (which is what "HINT: .... or move the UNION into a FROM clause" is suggesting)
select *
from (
... your union goes here ...
) t
order by
CASE
WHEN name='East' THEN 1
WHEN name='West' THEN 2
WHEN name='Both' THEN 3
ELSE 4
END;
I'd add an extra column showing the desired ordering, then use ordinal column positions in the ORDER BY, e.g.
SELECT 1, 'East' AS name, *
...
UNION ALL
SELECT 2, 'West' AS name, *
...
ORDER BY 1
Note that you probably also want UNION ALL since your added columns ensure that every set in the union must be distinct anyway.
Note that adding an extra column for ordering purposes makes the UNION behave exactly like UNION ALL: the added values differ between the branches, so rows from different branches are never duplicates of each other and nothing is eliminated from the result.
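Putting the second suggestion together into one complete statement, using the tables and time window from the question:

SELECT 1 AS sort_order, 'East' AS name, *
FROM events
WHERE event_timestamp BETWEEN '2015-06-14 06:15:00' AND '2015-06-21 06:15:00'
UNION ALL
SELECT 2, 'West', *
FROM events
WHERE event_timestamp BETWEEN '2015-06-14 06:15:00' AND '2015-06-21 06:15:00'
UNION ALL
SELECT 3, 'Both', *
FROM events
WHERE event_timestamp BETWEEN '2015-06-14 06:15:00' AND '2015-06-21 06:15:00'
ORDER BY 1;  -- ordinal position of sort_order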