I am trying to update multiple rows with a single query using Postgres. Here is what I am trying to do: if the sku is 0001, then I want to update field_1 to foo, and likewise for all the other skus.
When I run the code below, it correctly updates the targeted rows and field, BUT it turns every other record's field_1 into null. What should be added to prevent that?
UPDATE table
SET field_1 = (CASE WHEN sku = '0001' THEN 'foo'
                    WHEN sku = '0002' THEN 'bar'
                    WHEN sku = '0003' THEN 'baz'
               END)
BEFORE running the query:
sku  | field_1
0001 | dummy_1
0002 | dummy_2
0003 | dummy_3
0004 | dummy_4
0005 | dummy_5
0006 | dummy_6
AFTER running the query:
sku  | field_1
0001 | foo
0002 | bar
0003 | baz
0004 | null
0005 | null
0006 | null
The CASE expression has no ELSE branch, so every row whose sku matches none of the WHEN arms gets NULL assigned to field_1. Add a WHERE clause which restricts the SKUs targeted for update:
UPDATE table
SET field_1 = CASE sku WHEN '0001' THEN 'foo'
WHEN '0002' THEN 'bar'
WHEN '0003' THEN 'baz' END
WHERE sku IN ('0001', '0002', '0003');
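Alternatively, if you would rather not repeat the SKU list in a WHERE clause, an ELSE branch that keeps the current value also avoids the NULLs. A minimal sketch, reusing the placeholder table and column names from the question:
UPDATE table
SET field_1 = CASE sku WHEN '0001' THEN 'foo'
                       WHEN '0002' THEN 'bar'
                       WHEN '0003' THEN 'baz'
                       ELSE field_1   -- keep the existing value for every other sku
              END;
Note that this variant still rewrites every row in the table, so the WHERE-clause version above is usually the better choice.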
I'm trying to get all order ids that used a specific promo code (ABC123). However, I want to see all subsequent orders for those accounts, rather than just those ids. For example, if we have the following table:
Account_id | order_id | promo_code
1          | 123      | NULL (no promo code used)
2          | 124      | ABC123
3          | 125      | HelloWorld!
2          | 125      | NULL
1          | 126      | ABC123
2          | 127      | HelloWorld!
3          | 128      | ABC123
Ideally, what I want to get is this (ordered by account_id):
Account_id | order_id | promo_code
1          | 126      | ABC123
2          | 124      | ABC123
2          | 125      | NULL
2          | 127      | HelloWorld!
3          | 128      | ABC123
As you can see, promo_code = ABC123 acts like a marker: once an account has an order with that code, I want that order and all subsequent order_ids for the account.
So far, to filter the account_ids that used this promo_code, I have:
SELECT account_id, order_id, promo_code
FROM orders
WHERE account_id IN (SELECT account_id FROM orders WHERE promo_code = 'ABC123');
This allows me to get the account_ids that have an order where the desired promo_code was used.
Thanks in advance!
Extract all account_ids that used 'ABC123' and the smallest corresponding order_id (the t CTE), then join these with the table and filter/order the result set.
with t as
(
select distinct on (account_id) account_id, order_id
from the_table where promo_code = 'ABC123'
order by account_id, order_id
)
select the_table.*
from the_table
inner join t on the_table.account_id = t.account_id
where the_table.order_id >= t.order_id -- the subsequent orders
order by the_table.account_id, the_table.order_id;
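If you prefer to avoid the self-join, the same result can be obtained with a window function; a sketch assuming the same the_table columns as above:
select account_id, order_id, promo_code
from (
    select t.*,
           -- first order per account that used the promo code
           min(case when promo_code = 'ABC123' then order_id end)
               over (partition by account_id) as first_promo_order
    from the_table t
) x
where order_id >= first_promo_order   -- accounts that never used it get NULL here and drop out
order by account_id, order_id;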
I have a table that has data like:
Name  | Item_1   | Qty_1 | Price_1 | Item_2 | Qty_2 | Price_2 | ... | Item_50 | Qty_50 | Price_50
Bob   | Apples   | 10    | 0.50    | Pears  | 5     | 0.65    | ... | Lemons  | 12     | 0.25
Alice | Cherries | 20    | 1.00    | NULL   | NULL  | NULL    | ... | NULL    | NULL   | NULL
I need to process the data per-item, so the ideal form of the data would be:
Name  | ItemNo | Item     | Qty | Price
Bob   | 1      | Apples   | 10  | 0.50
Bob   | 2      | Pears    | 5   | 0.65
...   | ...    | ...      | ... | ...
Bob   | 50     | Lemons   | 12  | 0.25
Alice | 1      | Cherries | 20  | 1.00
How can I convert between the two forms?
I have looked at the pivot command, but it seems to convert column names into data in a field, not split groups of columns into separate rows. It doesn't look like it will work for this application.
The current code looks something like:
( SELECT t1.Name, 1 AS ItemNo, t1.Item_1 AS Item, t1.Qty_1 AS Qty, t1.Price_1 AS Price FROM table t1
UNION ALL
SELECT t2.Name, 2 AS ItemNo, t2.Item_2 AS Item, t2.Qty_2 AS Qty, t2.Price_2 AS Price FROM table t2
UNION ALL
...
SELECT t50.Name, 50 AS ItemNo, t50.Item_50 AS Item, t50.Qty_50 AS Qty, t50.Price_50 AS Price FROM table t50
)
It works, but it seems hard to maintain. Is there a better way?
Hopefully the reason you want to do this is to fix your design. If not, then make fixing the design the reason you're asking.
Anyway, one method is to use a VALUES table construct to unpivot the data:
SELECT YT.Name,
       V.ItemNo,
       V.Item,
       V.Qty,
       V.Price
FROM dbo.YourTable YT
     CROSS APPLY (VALUES (1, YT.Item_1, YT.Qty_1, YT.Price_1),
                         (2, YT.Item_2, YT.Qty_2, YT.Price_2),
                         (3, YT.Item_3, YT.Qty_3, YT.Price_3),
                         ... --You get the idea
                         (49, YT.Item_49, YT.Qty_49, YT.Price_49),
                         (50, YT.Item_50, YT.Qty_50, YT.Price_50)) V(ItemNo, Item, Qty, Price)
WHERE V.Item IS NOT NULL;
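The question and answer read as SQL Server (PIVOT, CROSS APPLY, dbo.). If the same table lived in PostgreSQL like the other questions here, a LATERAL join over a VALUES list is the rough equivalent; a sketch assuming the same column names:
SELECT yt.Name, v.ItemNo, v.Item, v.Qty, v.Price
FROM YourTable yt
CROSS JOIN LATERAL (VALUES
        (1,  yt.Item_1,  yt.Qty_1,  yt.Price_1),
        (2,  yt.Item_2,  yt.Qty_2,  yt.Price_2),
        -- ... one row per item slot, up to 50
        (50, yt.Item_50, yt.Qty_50, yt.Price_50)
    ) AS v(ItemNo, Item, Qty, Price)
WHERE v.Item IS NOT NULL;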
I have a SQL table that looks like this:
date id value type
2020-01-01 1 1.03 a
2020-01-01 1 1.02 a
2020-01-02 2 1.06 a
2020-01-02 2 1.2 a
2020-01-03 3 1.09 b
I need to build a query that groups by date, id, and type, multiplying the value column wherever type = 'a'.
What the new table should look like:
date id value type
2020-01-01 1 1.0506 a
2020-01-02 2 1.272 a
2020-01-03 3 1.09 b
Currently I am building this query:
select
date, id, value, type
from my_table
where date between 'some date' and 'some date'
and trying to fit in EXP(SUM(LOG(value))).
But how do I do the multiplication only where type = 'a' in a GROUP BY?
Edit:
There are more than 2 values in the type column.
I am using Redshift, not PostgreSQL.
select date
     , id
     -- use CASE to check whether the group is type 'a'
     , case when type = 'a' then EXP(SUM(LN(value::float))) -- multiply via logs; LN, not LOG, since LOG is base 10 while EXP is base e
            else max(value) -- use min or max to pick only one value
       end as value
     , type
from my_table
where date between 'some date' and 'some date'
group by date, id, type
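The CASE/aggregate trick works because a product can be rebuilt from a sum of natural logarithms: exp(ln(a) + ln(b)) = a * b. A quick sanity check against the 2020-01-01 rows from the sample data (note that ln() requires strictly positive values, which holds here):
select exp(ln(1.03) + ln(1.02));  -- about 1.0506, i.e. 1.03 * 1.02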
I have a database that has product names in column 1 and product release dates in column 2. I want to find 'old' products by their release date; here, an 'old' product is one whose most recent release was at least 1 year ago. I cannot make any edits to the original database infrastructure.
The table looks like this:
Product| Release_Day
A | 2018-08-23
A | 2017-08-23
A | 2019-08-21
B | 2018-08-22
B | 2016-08-22
B | 2017-08-22
C | 2018-10-25
C | 2016-10-25
C | 2019-08-19
I have already tried multiple versions of DISTINCT, MAX, BETWEEN, >, <, etc.
SELECT DISTINCT product,MAX(release_day) as most_recent_release
FROM Product_Release
WHERE
release_day between '2015-08-22' and '2018-08-22'
and release_day not between '2018-08-23' and '2019-08-22'
GROUP BY 1
ORDER BY MAX(release_day) DESC
The expected results should not contain any products found by this query:
SELECT DISTINCT product,MAX(release_day) as most_recent_release
FROM Product_Release
WHERE
release_day between '2018-08-23' and '2019-08-22'
AND product = 'A'
GROUP BY 1
However, every check I complete returns a product from this date range.
This is the output of the initial query:
Product|Most_Recent_Release
A | 2018-08-23
B | 2018-08-22
C | 2015-10-25
And, for example, if I run the check query on Product A, I get this:
Product|Most_Recent_Release
A | 2019-08-21
Use HAVING to filter on most_recent_release
SELECT product, MAX(release_day) AS most_recent_release
FROM Product_Release
GROUP BY product
HAVING MAX(release_day) < '2018-08-23'
ORDER BY most_recent_release DESC
There's no need to use DISTINCT when you use GROUP BY -- you can't get duplicates if there's only one row per product.
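Since the question asks for products released "a minimum of 1 year ago" rather than before a fixed date, the cutoff can also be computed at query time. A sketch using Postgres-style date arithmetic (the question does not name its database, so the date expression may need adjusting):
SELECT product, MAX(release_day) AS most_recent_release
FROM Product_Release
GROUP BY product
HAVING MAX(release_day) < CURRENT_DATE - INTERVAL '1 year'
ORDER BY most_recent_release DESC;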
I have a table with tour details, i.e. EmpCode, TourId, tour place, and tour date. I want to check who is on tour on a given date.
Employee X is on tour from 01-01-2016 to 10-01-2016
Employee Y is on tour from 07-01-2016 to 12-01-2016
Employee X is on tour from 12-01-2016 to 15-01-2016
Table is having following details
EmpCode TourId PlaceFrom PlaceTo TourDate
100 1 delhi jaipur 01-01-2016
100 1 jaipur mumbai 05-01-2016
100 1 mumbai delhi 10-01-2016
101 2 delhi pune 07-01-2016
101 2 pune delhi 12-01-2016
100 3 alwar jaipur 12-01-2016
100 3 jaipur udaipur 13-01-2016
100 3 udaipur alwar 15-01-2016
TourId denotes a unique tour
If I search who is on tour on 01-01-2016 it should return 100
If I search who is on tour on 08-01-2016 it should return 100 and 101
If I search who is on tour on 12-01-2016 it should return 100 and 101
If I search who is on tour on 14-01-2016 it should return 100
How can I get the desired output in a single query in PostgreSQL?
Thanks in advance.
You can use aggregation functions:
select EmpCode, TourId
from t
group by EmpCode, TourId
having '2016-01-01' between min(TourDate) and max(TourDate);
TourId is optional in the select.
SELECT EmpCode
FROM TourDetails
GROUP BY EmpCode
HAVING '2016-01-14' BETWEEN MIN(TourDate) AND MAX(TourDate);
The EmpCode most likely should not be present in that table but in the upper-level Tours table.
Also, please consider using ISO 8601 for dates, or a TIMESTAMP field.
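For reference, a minimal sketch of the normalized layout that note suggests; the Tours header table and its StartDate/EndDate columns are assumptions for illustration, not part of the question's schema:
-- Hypothetical tour header; one row per tour, leg details stay in the existing table.
CREATE TABLE Tours (
    TourId    integer PRIMARY KEY,
    EmpCode   integer NOT NULL,
    StartDate date    NOT NULL,
    EndDate   date    NOT NULL
);

-- "Who is on tour on a given date" then becomes a plain range check.
SELECT EmpCode
FROM Tours
WHERE DATE '2016-01-14' BETWEEN StartDate AND EndDate;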