I have a SQL Server 2008 database with a table that has an nvarchar field 'Category', with values such as:
A1
A10
A2
B1/1
B1/2
B1/3
D1(1)
D1(2)
etc.
I know this is not a good structure for the data, but we now need to narrow down searches on this table so that the user enters a start and end category (i.e. A1 to A15, or B1/1 to B1/3) and we only return records with category values in that range.
Is there a way this can be done? The other thing is that we would also like to have the results sorted correctly (i.e. A10 after A2).
You can narrow down the return result using WHERE clauses. For example:
SELECT Col1
,Col2
...
,Coln
FROM SourceTable
WHERE Col1>'A1' AND Col1<'A15'
You can use these values ('A1' and 'A15') as input parameters of a user-defined function. Then it is very easy to pass 'B1' and 'B15', or any other pair, instead.
For more information about user-defined functions, see the documentation on user-defined functions.
Note that this is a plain string comparison in T-SQL. You should also spend some time on collations - decide whether you are comparing strings case-sensitively or case-insensitively.
As to "The other thing is that we would also like to have the results sorted correctly (ie. A10 after A2)" you can add some logic in your function like this:
DECLARE @ParamOne NVARCHAR(2) = 'A2'
DECLARE @ParamTwo NVARCHAR(3) = 'A10'
SELECT CASE WHEN LEN(@ParamOne) = 2
            THEN CASE WHEN @ParamTwo < STUFF(@ParamOne, 2, 0, '0')
                      THEN 1
                      ELSE 2
                 END
            ELSE CASE WHEN @ParamTwo < @ParamOne
                      THEN 1
                      ELSE 2
                 END
       END
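If sorting ends up in the application layer instead, the same idea (treat digit runs as numbers so 'A2' sorts before 'A10') can be expressed as a "natural sort" key. A Python sketch with made-up sample values in the style of the question:

```python
import re

def category_key(cat):
    # Split "B1/12" into alternating text/number runs: ['B', 1, '/', 12, '']
    # so numeric parts compare as integers (A2 < A10) and text parts as strings.
    return [int(part) if part.isdigit() else part
            for part in re.split(r'(\d+)', cat)]

categories = ["A10", "A2", "A1", "B1/3", "B1/12", "B1/2"]
print(sorted(categories, key=category_key))
# ['A1', 'A2', 'A10', 'B1/2', 'B1/3', 'B1/12']
```

The same key function can be used to filter a range client-side: keep a row when category_key(start) <= category_key(cat) <= category_key(end).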
I'm trying to convert each row in a jsonb column to a type that I've defined, and I can't quite seem to get there.
I have an app that scrapes articles from The Guardian Open Platform and dumps the responses (as jsonb) in an ingestion table, into a column called 'body'. Other columns are a sequential ID, and a timestamp extracted from the response payload that helps my app only scrape new data.
I'd like to move the response dump data into a properly-defined table, and as I know the schema of the response, I've defined a type (my_type).
I've been referring to section 9.16, JSON Functions and Operators, in the Postgres docs. I can get a single record as my type:
select * from jsonb_populate_record(null::my_type, (select body from data_ingestion limit 1));
produces
 id         | type         | sectionId          | ...
------------+--------------+--------------------+-----
 example_id | example_type | example_section_id | ...

(abbreviated for concision)
If I remove the limit, I get an error, which makes sense: the subquery would be providing multiple rows to jsonb_populate_record which only expects one.
I can get it to do multiple rows, but the result isn't broken into columns:
select jsonb_populate_record(null::my_type, body) from reviews_ingestion limit 3;
produces:
jsonb_populate_record
(example_id_1,example_type_1,example_section_id_1,...)
(example_id_2,example_type_2,example_section_id_2,...)
(example_id_3,example_type_3,example_section_id_3,...)
This is a bit odd; I would have expected to see column names - this, after all, is the point of providing the type.
I'm aware I can do this by using Postgres JSON querying functionality, e.g.
select
body -> 'id' as id,
body -> 'type' as type,
body -> 'sectionId' as section_id,
...
from reviews_ingestion;
This works but it seems quite inelegant. Plus I lose datatypes.
I've also considered aggregating all rows in the body column into a JSON array, so as to be able to supply it to jsonb_populate_recordset, but this seems a silly approach and is unlikely to be performant.
Is there a way to achieve what I want, using Postgres functions?
Maybe you need this - to break my_type record into columns:
select (jsonb_populate_record(null::my_type, body)).*
from reviews_ingestion
limit 3;
-- or whatever other query clauses here
i.e. select all fields from these my_type records. All column names and types are in place.
Here is an illustration. My custom type is delmet, and the CTE t loosely mimics data_ingestion.
create type delmet as (x integer, y text, z boolean);
with t(i, j, k) as
(
values
(1, '{"x":10, "y":"Nope", "z":true}'::jsonb, 'cats'),
(2, '{"x":11, "y":"Yep", "z":false}', 'dogs'),
(3, '{"x":12, "y":null, "z":true}', 'parrots')
)
select i, (jsonb_populate_record(null::delmet, j)).*, k
from t;
Result:
 i | x  | y    | z     | k
---+----+------+-------+---------
 1 | 10 | Nope | true  | cats
 2 | 11 | Yep  | false | dogs
 3 | 12 |      | true  | parrots
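For comparison, here is a rough client-side analogue of what jsonb_populate_record does - mapping each JSON document onto a named, typed record by matching keys to field names. A Python sketch reusing the delmet fields from the illustration:

```python
import json
from dataclasses import dataclass

@dataclass
class DelMet:
    x: int
    y: str
    z: bool

# One JSON document per ingested row, as in the CTE above.
rows = [
    '{"x":10, "y":"Nope", "z":true}',
    '{"x":11, "y":"Yep", "z":false}',
    '{"x":12, "y":null, "z":true}',
]

# Like jsonb_populate_record(null::delmet, j): keys matched to fields by name.
records = [DelMet(**json.loads(r)) for r in rows]
print(records[0])  # DelMet(x=10, y='Nope', z=True)
```

As in Postgres, a JSON null (row 3's "y") simply becomes a null/None field value.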
What is the argument type for the order by clause in Postgresql?
I came across a very strange behaviour (using Postgresql 9.5). Namely, the query
select * from unnest(array[1,4,3,2]) as x order by 1;
produces 1,2,3,4 as expected. However the query
select * from unnest(array[1,4,3,2]) as x order by 1::int;
produces 1,4,3,2, which seems strange. Similarly, whenever I replace 1::int with some function (e.g. greatest(0,1)) or even a case expression, the results are unordered (contrary to what I would expect).
So which type should an argument of order by have, and how do I get the expected behaviour?
This is expected (and documented) behaviour:
A sort_expression can also be the column label or number of an output column
So the expression:
order by 1
sorts by the first column of the result set (as defined by the SQL standard).
However the expression:
order by 1::int
sorts by the constant value 1; it's essentially the same as:
order by 'foo'
Because the order by uses a constant value, all rows have the same sort key and thus aren't really sorted.
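The difference is easy to reproduce with any SQL engine. A Python/sqlite3 sketch (sqlite has no ::int cast, so the constant expression 1+0 stands in for 1::int):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("create table t(x int);"
                   "insert into t values (1),(4),(3),(2);")

# ORDER BY 1 -> positional: sort by the first output column.
by_position = [r[0] for r in conn.execute("select x from t order by 1")]
print(by_position)  # [1, 2, 3, 4]

# ORDER BY 1+0 -> a constant expression: every row gets the same sort key,
# so the result order is effectively unspecified (all rows still come back).
by_constant = [r[0] for r in conn.execute("select x from t order by 1+0")]
```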
To sort by an expression, just use that:
order by
case
when some_column = 'foo' then 1
when some_column = 'bar' then 2
else 3
end
The above sorts the result based on the result of the case expression.
Actually I have a function with an integer argument which indicates the column to be used in the order by clause.
When all columns are of the same type, this can work:
SELECT ....
ORDER BY
CASE function_to_get_a_column_number()
WHEN 1 THEN column1
WHEN 2 THEN column2
.....
WHEN 1235 THEN column1235
END
If columns are of different types, you can try:
SELECT ....
ORDER BY
CASE function_to_get_a_column_number()
WHEN 1 THEN column1::varchar
WHEN 2 THEN column2::varchar
.....
WHEN 1235 THEN column1235::varchar
END
But these "workarounds" are horrible. You need some other approach than the function returning a column number.
Maybe dynamic SQL?
I would say that dynamic SQL (thanks @kordirko and the others for the hints) is the best solution to the problem I originally had in mind:
create temp table my_data (
id serial,
val text
);
insert into my_data(id, val)
values (default, 'a'), (default, 'c'), (default, 'd'), (default, 'b');
create function fetch_my_data(col text)
returns setof my_data as
$f$
begin
return query execute $$
select * from my_data
order by $$|| quote_ident(col);
end
$f$ language plpgsql;
select * from fetch_my_data('val'); -- order by val
select * from fetch_my_data('id'); -- order by id
In the beginning I thought this could be achieved using a case expression as the argument of the order by clause - the sort_expression. And here comes the tricky part which confused me: when the sort_expression is a kind of identifier (the name or number of a column), the corresponding column is used when ordering the results. But when the sort_expression is some value, we actually order the results by that value itself (computed for each row). This is @a_horse_with_no_name's answer rephrased.
So when I queried ... order by 1::int, I effectively assigned the value 1 to each row and then tried to sort an array of ones, which is clearly useless.
There are some workarounds without dynamic queries, but they require writing more code and do not seem to have any significant advantages.
I have the following CASE statement that I need to make dynamic, as shown below:
Example:
I have the case statement:
case
  when cola between '2001-01-01' and '2001-01-05' then 'G1'
  when cola between '2001-01-10' and '2001-01-15' then 'G2'
  when cola between '2001-01-20' and '2001-01-25' then 'G3'
  when cola between '2001-02-01' and '2001-02-05' then 'G4'
  when cola between '2001-02-10' and '2001-02-15' then 'G5'
  else ''
end
Note: I want to build the CASE statement dynamically, because the dates and names are passed in as parameters and may change.
Declare
dates varchar = '2001-01-01to2001-01-05,2001-01-10to2001-01-15,
2001-01-20to2001-01-25,2001-02-01to2001-02-05,
2001-02-10to2001-02-15';
names varchar = 'G1,G2,G3,G4,G5';
The values in the variables may change as per the requirements, so the case statement should be dynamic, without using a loop.
You may not need any function for this, just join to a mapping data-set:
with cola_map(low, high, value) as (
values(date '2001-01-01', date '2001-01-05', 'G1'),
('2001-01-10', '2001-01-15', 'G2'),
('2001-01-20', '2001-01-25', 'G3'),
('2001-02-01', '2001-02-05', 'G4'),
('2001-02-10', '2001-02-15', 'G5')
-- you can include as many rows, as you want
)
select table_name.*,
coalesce(cola_map.value, '') -- else branch from case expression
from table_name
left join cola_map on table_name.cola between cola_map.low and cola_map.high
If your date ranges can overlap, you can use DISTINCT ON or GROUP BY to avoid duplicated rows.
Note: you could use a simple sub-select too; I used a CTE because it's more readable.
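The join-to-a-mapping-set approach is easy to test locally. A Python/sqlite3 sketch with made-up rows (sqlite compares ISO date strings lexically, which works here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table table_name(cola text);
insert into table_name values ('2001-01-03'), ('2001-01-12'), ('2001-03-01');

create table cola_map(low text, high text, value text);
insert into cola_map values
  ('2001-01-01', '2001-01-05', 'G1'),
  ('2001-01-10', '2001-01-15', 'G2');
""")

# coalesce(..., '') plays the role of the CASE expression's ELSE '' branch.
rows = conn.execute("""
  select t.cola, coalesce(m.value, '')
  from table_name t
  left join cola_map m on t.cola between m.low and m.high
  order by t.cola
""").fetchall()
print(rows)  # [('2001-01-03', 'G1'), ('2001-01-12', 'G2'), ('2001-03-01', '')]
```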
Edit: passing these data (as a single parameter) can be achieved by passing a multi-dimensional array (or an array of row-values, but that requires you to have a distinct, predefined composite type).
Passing arrays as parameters can depend on the actual client (& driver) you use, but in general, you can use the array's input representation:
-- sql
with cola_map(low, high, value) as (
select d[1]::date, d[2]::date, d[3]
from unnest(?::text[][]) d
)
select table_name.*,
coalesce(cola_map.value, '') -- else branch from case expression
from table_name
left join cola_map on table_name.cola between cola_map.low and cola_map.high
// client pseudo code
query = db.prepare(sql);
query.bind(1, "{{2001-01-10,2001-01-15,G2},{2001-01-20,2001-01-25,G3}}");
query.execute();
Passing each chunk of data separately is also possible with some clients (or with some abstractions), but this highly depends on the driver/ORM/etc. you use.
It is very common for a web page to offer multiple ordering options for a table. Right now I have a case with 12 options (orderable columns). The easiest way I know of is to build the SQL query by concatenating strings, but I wonder if that is the best approach. The string concatenation looks something like this (Python code):
order = {
1: "c1 desc, c2",
2: "c2, c3",
...
12: "c10, c9 desc"
}
...
query = """
select c1, c2
from the_table
order by %(order)s
"""
...
cursor.execute(query, {'order': AsIs(order[order_option])})
...
My alternative solution until now is to place a series of cases in the order by clause:
select c1, c2
from the_table
order by
case %(order_option)s
when 1 then array[c1 * -1, c2]
when 2 then array[c2, c3]
else array[0.0, 0.0]
end
,
case %(order_option)s
when 3 then c4
else ''
end
,
...
,
case when %(order_option)s < 1 or %(order_option)s > 12 then c5 end
;
What is the best practice concerning multiple ordering choices? What happens with index utilization in my alternative code?
First of all, @order is not valid PostgreSQL syntax. You probably borrowed the syntax style from MS SQL Server or MySQL. You cannot use variables in a plain SQL query like that.
In PostgreSQL you would probably create a function. You can use variables there, just drop the @.
Sorting by ARRAY is generally rather slow - and not necessary in your case. You could simplify to:
ORDER BY
CASE _order
WHEN 1 THEN c2
WHEN 2 THEN c3 * -1
ELSE NULL -- undefined!
END
, c1
However, a CASE expression like this cannot use plain indexes. So, if you are looking for performance, one way (of several) would be a plpgsql function like this:
CREATE OR REPLACE FUNCTION foo(int)
RETURNS TABLE(c1 int, c2 int) AS
$BODY$
BEGIN
CASE $1
WHEN 1 THEN
RETURN QUERY
SELECT t.c1, t.c2
FROM tbl t
ORDER BY t.c2, t.c1;
WHEN 2 THEN
RETURN QUERY
SELECT t.c1, t.c2
FROM tbl t
ORDER BY t.c3 DESC, t.c1;
ELSE
RAISE WARNING 'Unexpected parameter: "%"', $1;
END CASE;
END;
$BODY$
LANGUAGE plpgsql STABLE;
This way, even plain indexes can be used.
If you actually only have two alternatives for ORDER BY, you could also just write two
functions.
Create multi-column indexes on (c2, c1) and (c3 DESC, c1) for maximum performance. But be aware that maintaining indexes carries a cost, too, especially if your table sees a lot of write operations.
Additional answer for rephrased question
As I said, the CASE construct will not use plain indexes. Indexes on expressions would be an option, but what you have in your example is outside their scope.
So, if you want performance, build the query in your app (your first approach) or write a server-side function (possibly with dynamic SQL and EXECUTE) that does something similar inside PostgreSQL. The ORDER BY with a complex CASE expression works, but is slower.
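Building the query in the app is safe as long as the ORDER BY fragment comes from a fixed whitelist, never from user input directly. A Python sketch using the option numbers from the question (column names are placeholders):

```python
# Map option numbers to vetted ORDER BY fragments. Anything else is rejected,
# so no user-supplied string is ever concatenated into the SQL text.
ORDER_CLAUSES = {
    1: "c1 desc, c2",
    2: "c2, c3",
    12: "c10, c9 desc",
}

def build_query(order_option):
    try:
        clause = ORDER_CLAUSES[order_option]
    except KeyError:
        raise ValueError(f"unknown order option: {order_option!r}")
    return f"select c1, c2 from the_table order by {clause}"

print(build_query(1))  # select c1, c2 from the_table order by c1 desc, c2
```

Each resulting query has a plain ORDER BY on real columns, so the planner can use matching indexes, unlike the single-query CASE approach.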
I have a lengthy query here, and wondering whether it could be refactor?
Declare @A1 as int
Declare @A2 as int
...
Declare @A50 as int
SET @A1 = (Select id from table where code='ABC1')
SET @A2 = (Select id from table where code='ABC2')
...
SET @A50 = (Select id from table where code='ABC50')
Insert into tableB
Select
Case when @A1='somevalue' Then 'x' else 'y' End,
Case when @A2='somevalue' Then 'x' else 'y' End,
...
Case when @A50='somevalue' Then 'x' else 'y' End
From tableC inner join ......
So as you can see from above, there is quite some redundant code. But I can not think of a way to make it simpler.
Any help is appreciated.
If you need the variables assigned, you could pivot your table...
SELECT *
FROM
(
SELECT Code, Id
FROM Table
) t
PIVOT
(MAX(Id) FOR Code IN ([ABC1],[ABC2],[ABC3],[ABC50])) p /* List them all here */
;
...and then assign them accordingly.
SELECT @A1 = [ABC1], @A2 = [ABC2]
FROM
(
SELECT Code, Id
FROM Table
) t
PIVOT
(MAX(Id) FOR Code IN ([ABC1],[ABC2],[ABC3],[ABC50])) p /* List them all here */
;
But I doubt you actually need to assign them at all. I just can't really picture what you're trying to achieve.
Pivoting may help you, as you can still use the CASE statements.
Rob
Without taking the time to develop a full answer, I would start by trying:
select id from table where code in ('ABC1', ... ,'ABC50')
then pivot that, to get a one-row result set of columns ABC1 through ABC50 with ID values.
Join that row in the FROM.
If 'somevalue', 'x' and 'y' are constant for all fifty expressions, then start from:
select case id when 'somevalue' then 'x' else 'y' end as XY
from table
where code in ('ABC1', ... ,'ABC50')
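The "one round trip" idea - fetch all fifty ids with a single IN query and look them up client-side - can be sketched like this in Python/sqlite3 (the table name `lookup` and the values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table lookup(code text, id int);
insert into lookup values ('ABC1', 101), ('ABC2', 102), ('ABC50', 150);
""")

codes = ['ABC1', 'ABC2', 'ABC50']
placeholders = ','.join('?' * len(codes))
# One query instead of fifty scalar subqueries; the dict then gives
# O(1) code-to-id lookups when building the final statement.
ids = dict(conn.execute(
    f"select code, id from lookup where code in ({placeholders})", codes))
print(ids)  # {'ABC1': 101, 'ABC2': 102, 'ABC50': 150}
```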
I am not entirely sure from your example, but it looks like you should be able to do one of a few things.
1. Create a look-up table that tells you, for a given value, what should be placed there. This would be much shorter and should be insanely fast.
2. Create a simple loop in your code and generate a list of 50 small queries.
3. Use sub-selects, or generate a list of selects with one round trip, to retrieve your @A1-@A50 values, and then generate the query with them already in place.
Jacob