Is it possible to return records matching the first part of a where clause; however, if no results are found, then move to the second part of the where clause?
Example:
create table #Fruits (
Fruit varchar(20),
Color varchar(20),
Size varchar(20)
)
insert into #Fruits
values ('Apple', 'Red', 'Medium'),
('Pear', 'Green', 'Medium'),
('Banana', 'Yellow', 'Medium'),
('Grapes', 'Purple', 'Small')
select * from #Fruits
where Fruit in ('Apple', 'Grapes') or Color = 'Green'
This will obviously return Apple, Grapes and Pear. My goal is to only find Apple and Grapes if they exist, otherwise return the Fruits that are Green.
I've tried to follow this similar question: SQL: IF clause within WHERE clause, but am having trouble incorporating the WHERE.
I've also tried using @@rowcount:
select * from #Fruits where Fruit in ('Apple', 'Grapes')

if @@rowcount = 0
    select * from #Fruits where Color = 'Green'
But if the first select returns nothing, it still returns an empty table as a result.
Thanks.
We can express your logic using a union:
select * from #Fruits where Fruit in ('Apple', 'Grapes')
union all
select * from #Fruits where Color = 'Green' and
not exists (select 1 from #Fruits
where Fruit in ('Apple', 'Grapes'));
We might also be able to combine the logic into a single query:
select *
from #Fruits
where Fruit in ('Apple', 'Grapes') or
      (Color = 'Green' and
       not exists (select 1 from #Fruits where Fruit in ('Apple', 'Grapes')));
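Since the #temp syntax suggests SQL Server, another pattern that should work here (a sketch, not tested against your real data) is TOP (1) WITH TIES with a priority ORDER BY:

-- Preferred rows get priority 1, green rows priority 2;
-- TOP (1) WITH TIES keeps every row tied with the best priority found.
select top (1) with ties *
from #Fruits
where Fruit in ('Apple', 'Grapes')
   or Color = 'Green'
order by case when Fruit in ('Apple', 'Grapes') then 1 else 2 end;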
I'd like to create a lookup table within my query using a cte e.g.
food, category
apple, fruit
carrot, vegetable
grape, fruit
How can I just type values directly in this way? Tried:
with
lookup_table as (
select
'apple', 'carrot', 'grape' as fruits,
'fruit', 'vegetable', 'fruit' as category
)
select * from lookup_table;
Gives:
"?column?" "?column?" fruits "?column?" "?column?" category
apple carrot grape fruit vegetable fruit
How can I create a manual lookup directly in this way with 2 fields, fruit and category, as well as the 3 corresponding values for each?
One elegant way in Postgres is to use a VALUES clause.
WITH lookup AS (
    SELECT *
    FROM (VALUES ('apple', 'fruit'),
                 ...
                 ('grape', 'fruit')) AS v (fruit, category)
)
SELECT *
FROM lookup;
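As a small follow-up: VALUES is a complete query in its own right in Postgres, so the wrapping subselect is optional and the column names can go on the CTE itself. Using the rows from the question:

WITH lookup (fruit, category) AS (
    VALUES ('apple',  'fruit'),
           ('carrot', 'vegetable'),
           ('grape',  'fruit')
)
SELECT *
FROM lookup;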
I'm taking two JSONB arrays, unpacking them, and combining the results. I'm trying to add WITH ORDINALITY to the JSON array unpacking, but I've been unable to figure out how. For some reason, I can't find WITH ORDINALITY in the documentation for Postgres 11's JSON tools:
https://www.postgresql.org/docs/11/functions-json.html
I've seen examples using jsonb_array_elements ... WITH ORDINALITY, but haven't been able to get it to work. First, a functional example based on Postgres arrays:
WITH
first AS (
SELECT * FROM
UNNEST (ARRAY['Charles','Jane','George','Percy']) WITH ORDINALITY AS x(name_, index)
),
last AS (
SELECT * FROM
UNNEST (ARRAY['Dickens','Austen','Eliot']) WITH ORDINALITY AS y(name_, index)
)
SELECT first.name_ AS first_name,
last.name_ AS last_name
FROM first
JOIN last ON (last.index = first.index)
This gives the desired output:
first_name last_name
Charles Dickens
Jane Austen
George Eliot
I'm using the ORDINALITY index to make the JOIN, as I'm combining two lists for pair-wise comparison. I can assume my lists are equally sized.
However, my input is going to be a JSON array, not a Postgres array. I've got the unpacking working with jsonb_to_recordset, but have not got the ordinality generation working. Here's a sample that does the unpacking part correctly:
DROP FUNCTION IF EXISTS tools.try_ordinality (jsonb, jsonb);
CREATE OR REPLACE FUNCTION tools.try_ordinality (
base_jsonb_in jsonb,
comparison_jsonb_in jsonb)
RETURNS TABLE (
base_text citext,
base_id citext,
comparison_text citext,
comparison_id citext)
AS $BODY$
BEGIN
RETURN QUERY
WITH
base_expanded AS (
select *
from jsonb_to_recordset (
base_jsonb_in)
AS base_unpacked (text citext, id citext)
),
comparison_expanded AS (
select *
from jsonb_to_recordset (
comparison_jsonb_in)
AS comparison_unpacked (text citext, id citext)
),
combined_lists AS (
select base_expanded.text AS base_text,
base_expanded.id AS base_id,
comparison_expanded.text AS comparison_text,
comparison_expanded.id AS comparison_id
from base_expanded,
comparison_expanded
)
select *
from combined_lists;
END
$BODY$
LANGUAGE plpgsql;
select * from try_ordinality (
'[
{"text":"Fuzzy Green Bunny","id":"1"},
{"text":"Small Gray Turtle","id":"2"}
]',
'[
{"text":"Red Large Special","id":"3"},
{"text":"Blue Small","id":"4"},
{"text":"Green Medium Special","id":"5"}
]'
);
But that's a CROSS JOIN:
base_text base_id comparison_text comparison_id
Fuzzy Green Bunny 1 Red Large Special 3
Fuzzy Green Bunny 1 Blue Small 4
Fuzzy Green Bunny 1 Green Medium Special 5
Small Gray Turtle 2 Red Large Special 3
Small Gray Turtle 2 Blue Small 4
Small Gray Turtle 2 Green Medium Special 5
I'm after a pair-wise result with only two rows:
Fuzzy Green Bunny 1 Red Large Special 3
Small Gray Turtle 2 Blue Small 4
I've tried switching to jsonb_array_elements, as in this snippet:
WITH
base_expanded AS (
select *
from jsonb_array_elements (
base_jsonb_in)
AS base_unpacked (text citext, id citext)
),
I get back
ERROR: a column definition list is only allowed for functions returning "record"
Is there a straightforward way to get ordinality on an unpacked JSON array? It's very easy with UNNEST on a Postgres array.
I'm happy to learn I've screwed up the syntax.
I can CREATE TYPE, if it's of any help.
I can convert to a Postgres array, if that's straightforward to do.
Thanks for any suggestions.
You do it exactly the same way: jsonb_array_elements() is a set-returning function just like unnest(), so WITH ORDINALITY applies to it as well.
with first as (
select *
from jsonb_array_elements('[
{"text":"Fuzzy Green Bunny","id":"1"},
{"text":"Small Gray Turtle","id":"2"}
]'::jsonb) with ordinality as f(element, idx)
), last as (
select *
from jsonb_array_elements('[
{"text":"Red Large Special","id":"3"},
{"text":"Blue Small","id":"4"},
{"text":"Green Medium Special","id":"5"}
]'::jsonb) with ordinality as f(element, idx)
)
SELECT first.element ->> 'text' AS first_name,
last.element ->> 'text' AS last_name
FROM first
JOIN last ON last.idx = first.idx
I want to return data that shows rows as columns, for example:
ref, description
123, bananas, apples, oranges etc
There could be more than one description item.
The data in the table is stored as rows:
ref, description
123, bananas
123, apples
123, oranges
Any ideas/code appreciated.
Here's the CTE I've created, but I'm happy to explore other robust solutions.
with PivotV as (
SELECT [CO-PERSON-VULNERABILITY].person_ref [Ref],
(case
when [CO_VULNERABLE-CODES].DESCRIPTION = 'Restricted Mobility' then 'RM'
when [CO_VULNERABLE-CODES].DESCRIPTION = 'Progressive or Long Term Illness' then 'PLTI'
when [CO_VULNERABLE-CODES].DESCRIPTION = 'ASB / Injunction Order' then 'AIO'
when [CO_VULNERABLE-CODES].DESCRIPTION = 'Beware possible Drug Paraphernalia' then 'BPDP'
when [CO_VULNERABLE-CODES].DESCRIPTION = '[Can''t] Manage Stairs' then 'CMS'
else NULL end) as [VunDesc]
--,[CO-PERSON-VULNERABILITY].vulnerable_code, [CO_VULNERABLE-CODES].[VULNERABLE-IND],
FROM [CO-PERSON-VULNERABILITY] INNER JOIN
[CO_VULNERABLE-CODES] ON [CO-PERSON-VULNERABILITY].vulnerable_code = [CO_VULNERABLE-CODES].[VULNERABLE-IND])
Unfortunately, before SQL Server 2017 (where you can use STRING_AGG()) it's not that easy to do what you want; the usual workaround is STUFF() with FOR XML PATH, where #t below stands in for your joined data:
SELECT t1.ref
, STUFF((
SELECT ',' + t2.description
FROM #t as t2
WHERE t2.ref = t1.ref
ORDER BY t2.description
FOR XML PATH('')), 1, LEN(','), '') as description
FROM #t as t1
GROUP BY t1.ref
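For completeness, on SQL Server 2017 or later the same result should be possible with STRING_AGG (same placeholder table #t; sketch only):

SELECT t.ref,
       STRING_AGG(t.description, ',') WITHIN GROUP (ORDER BY t.description) as description
FROM #t as t
GROUP BY t.ref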
For DB2, how do I get the column names that contain a 'Y' for each row and concatenate them into a comma-delimited list?
For example, when the base table looks like this:
person | apple | orange | grapes
--------------------------------
1      | Y     | Y      |
2      |       | Y      |
3      |       |        | Y
The query result needs to look like this:
person | fruits
---------------------------
1 apple,orange
2 orange
3 grapes
I tried COALESCE, but that didn't work, as it just coalesces to 'Y' rather than giving me the column name.
I tried CASE WHEN f.apple ='Y' THEN 'apple'
WHEN f.orange = 'Y' THEN 'orange'
WHEN f.grapes = 'Y' THEN 'grapes'
END AS fruits
but the above would only return one of the WHEN branches.
I tried CASE WHEN f.apple ='Y' THEN concat('apple,')
WHEN f.orange = 'Y' THEN concat('orange,')
when f.grapes = 'Y' THEN concat('grapes,')
END AS fruits
but that doesn't work either: it's a syntax error (I'm rather new to SQL), and even if it weren't, only one of the WHEN branches would apply.
The LISTAGG aggregate function will make quick work of this task if the values for each person can be represented as a multi-row expression:
WITH fruitCTE (person, fruitname) AS (
SELECT person, 'apple' FROM originalTable WHERE apple = 'Y'
UNION ALL
SELECT person, 'orange' FROM originalTable WHERE orange = 'Y'
UNION ALL
SELECT person, 'grapes' FROM originalTable WHERE grapes = 'Y'
)
SELECT person,
LISTAGG( fruitname, ',' )
WITHIN GROUP ( ORDER BY fruitname ASC )
AS fruits
FROM fruitCTE
GROUP BY person
;
If your query also needs to show people who have no Y flags at all, you can modify the final/outer SELECT to perform a LEFT OUTER JOIN from originalTable to fruitCTE on the person column.
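That LEFT OUTER JOIN variant might look something like this (a sketch; a person with no 'Y' flags would come back with a NULL fruits value):

WITH fruitCTE (person, fruitname) AS (
    SELECT person, 'apple' FROM originalTable WHERE apple = 'Y'
    UNION ALL
    SELECT person, 'orange' FROM originalTable WHERE orange = 'Y'
    UNION ALL
    SELECT person, 'grapes' FROM originalTable WHERE grapes = 'Y'
)
SELECT o.person,
       LISTAGG( f.fruitname, ',' )
       WITHIN GROUP ( ORDER BY f.fruitname ASC )
       AS fruits
FROM originalTable AS o
LEFT OUTER JOIN fruitCTE AS f
       ON f.person = o.person
GROUP BY o.person
;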
In SQL Server 2008 I have a table, Fruits:
Items Orders
Bananas 6
Bananas 2
Bananas 1
Mangos 4
Mangos 3
Apples 7
Apples 1
Apples 3
Apples 3
Using variables, how can I get the output below? I am asking for variables because I would like to perform several mathematical operations not described in this example.
Items Number of Orders Total Order Quantity Average Order Quantity
Bananas 3 9 3
Mangos 2 7 3.5
Apples 4 14 3.5
'Total Order Quantity' shows the sum of all orders for a given item
'Average Order Quantity' = 'Total Order Quantity'/'Number of Orders'
Many thanks.
Create table Fruits (Items varchar(10), Orders int)
insert into Fruits values ('Bananas',6)
insert into Fruits values ('Bananas',2)
insert into Fruits values ('Bananas',1)
insert into Fruits values ('Mangos',4)
insert into Fruits values ('Mangos',3)
insert into Fruits values ('Apples',7)
insert into Fruits values ('Apples',1)
insert into Fruits values ('Apples',3)
insert into Fruits values ('Apples',3)
select Items, count(Orders) as NumberOfOrders, sum(Orders) as TotalOrderQuantity, avg(Orders + 0.0) as AverageOrderQuantity
from Fruits
group by Items
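If you literally need the results in local variables (one item at a time) so you can do further maths on them, a sketch along these lines should work; the 'Bananas' filter is just an example:

declare @NumberOfOrders int
declare @TotalOrderQuantity int
declare @AverageOrderQuantity numeric(9, 2)

select @NumberOfOrders       = count(Orders),
       @TotalOrderQuantity   = sum(Orders),
       @AverageOrderQuantity = avg(Orders + 0.0)
from Fruits
where Items = 'Bananas'

-- further calculations on the variables, e.g.
select @TotalOrderQuantity * 1.0 / @NumberOfOrders as AverageCheck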
Yes, you really should steer clear of cursors when possible. Something like this approach would probably be best: store the results of the query in a table variable, then run update statements to get your calculations:
declare @Table table
(
    Item varchar(10),
    OrderCount int,
    QuantityTotal int,
    AvgQuantity numeric(9, 2),
    Calc1 numeric(9, 2),
    Calc2 numeric(9, 2)
)

insert into @Table (Item, OrderCount, QuantityTotal, AvgQuantity)
select Items, count(Orders) as NumberOfOrders, sum(Orders) as TotalOrderQuantity, avg(Orders + 0.0) as AverageOrderQuantity
from Fruits
group by Items
order by 1

update @Table set Calc1 = OrderCount / AvgQuantity,
                  Calc2 = ...

select * from @Table
Or, if you can get all your calculations onto a single row (or with joins to another table), you can do it in a single statement, like:
select *, (OrderCount / AvgQuantity) as Calc1, ... as Calc2
from
(
select Items, count(Orders) as OrderCount, sum(Orders) as TotalQuantity, avg(Orders + 0.0) as AvgQuantity
from Fruits
group by Items
) t
For reference, the cursor version would look like this:

declare csrCursor cursor for
    select Items, count(Orders) as NumberOfOrders, sum(Orders) as TotalOrderQuantity, avg(Orders + 0.0) as AverageOrderQuantity
    from Fruits
    group by Items
    order by 1

declare @Item varchar(10)
declare @OrderCount int
declare @QuantityTotal int
declare @AvgQuantity numeric(9, 2)

open csrCursor
fetch next from csrCursor into @Item, @OrderCount, @QuantityTotal, @AvgQuantity
while (@@fetch_status = 0)
begin
    -- Do stuff with variables @Item, @OrderCount, @QuantityTotal, @AvgQuantity
    -- Insert results in Temp Table
    fetch next from csrCursor into @Item, @OrderCount, @QuantityTotal, @AvgQuantity
end
close csrCursor
deallocate csrCursor