Below is my query
SELECT
/*+ ORDERED */
F.*,
SDO_NN_DISTANCE(1) dist
FROM NEW_TABLE F
WHERE SDO_NN(F.LOC_GEOM,
             SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-12.1254, 22.1545, NULL), NULL, NULL),
             'SDO_BATCH_SIZE=0 DISTANCE=60 UNIT=MILE',
             1) = 'TRUE'
ORDER BY dist;
In the above query the value of DISTANCE will change.
'SDO_BATCH_SIZE=0 DISTANCE=60 UNIT=MILE'
So can I construct the request parameter dynamically, adding the value (e.g. 60) to the parameter string, using MyBatis/iBATIS?
Using the plain Oracle string concatenation operator "||" answered my question.
I replaced 'SDO_BATCH_SIZE=0 DISTANCE=60 UNIT=MILE'
with the following in the MyBatis query:
'SDO_BATCH_SIZE=0 DISTANCE=' || #{input_distance} || ' UNIT=MILE'
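Put together, the statement in the MyBatis mapper then looks roughly like this (a sketch, with #{input_distance} being the mapped parameter from above):
SELECT /*+ ORDERED */
       F.*,
       SDO_NN_DISTANCE(1) dist
FROM NEW_TABLE F
WHERE SDO_NN(F.LOC_GEOM,
             SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-12.1254, 22.1545, NULL), NULL, NULL),
             'SDO_BATCH_SIZE=0 DISTANCE=' || #{input_distance} || ' UNIT=MILE',
             1) = 'TRUE'
ORDER BY dist;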
I have a jsonb column with many keys. If any key has the particular value I am looking for, the check should return true.
Since the keys don't matter for my requirement, my idea is to check against an array of the extracted values. So is there a way to get all the values of a jsonb into an array?
if json_array_length(tms_hlpr_usr_has_authority_fr_srvc_requests(usr_id_,org_id_)) > 0 then
_extra_where = _extra_where ||
' and ' || quote_literal(usr_id_) || ' = any(srvc_req.form_data->>[how to check all keys here]) and srvc_req.is_deleted is not true ';
end if;
Assuming you are building an SQL command to execute dynamically with EXECUTE in a Postgres function:
To check whether any top-level key has the given value, the most elegant way is an SQL/JSON path expression. It can be supported with an index and should be decently fast. There is no need to expensively extract all values into an array first.
Basic example to check for the value 2:
SELECT jsonb_path_exists(jsonb '{"a":1, "b":2}', '$.* ? (# == 2)');
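As for the index mentioned above: a plain jsonb_path_exists() call is not index-assisted by itself, but the equivalent @? operator can be served by a GIN index (whether the planner actually uses it depends on the concrete jsonpath). A sketch, assuming Postgres 12+ and the table and column names from your snippet:
-- GIN index supporting the jsonpath operators @? and @@
CREATE INDEX ON srvc_req USING gin (form_data jsonb_path_ops);

-- the same check as above, written with the index-supported operator
SELECT * FROM srvc_req WHERE form_data @? '$.* ? (# == 2)';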
To pass in your variable usr_id_ of unknown type (in place of the numeric value 2 of the basic example):
SELECT jsonb_path_exists(jsonb '{"a":1, "b":2}'
, '$.* ? (# == $usr_id)'
, jsonb_build_object('usr_id', 1));
Concatenating the WHERE clause in your PL/pgSQL code block can look like this:
IF json_array_length(tms_hlpr_usr_has_authority_fr_srvc_requests(usr_id_,org_id_)) > 0 THEN
_extra_where = _extra_where
|| $j$ AND jsonb_path_exists(srvc_req.form_data, '$.* ? (# == $usr_id)', $1) AND srvc_req.is_deleted IS NOT TRUE $j$;
END IF;
Using dollar-quoting to allow for nested single quotes. See:
Insert text with single quotes in PostgreSQL
Supply the value for the parameter symbol $1 in a USING clause to the EXECUTE command. Like:
EXECUTE 'SELECT * FROM srvc_req ' || _extra_where
USING jsonb_build_object('usr_id', usr_id_); -- $1 !
Related:
Returning JSON array with particular property using Postgres
(This would be much easier to demonstrate if you had provided your actual minimal function.)
I'm trying to parameterize my PostgreSQL query in order to prevent SQL injection in my Ruby on Rails application. The SQL query will sum a different value in my table depending on the input.
Here is a simplified version of my function:
def self.calculate_value(value)
  calculated_value = ""
  if value == "quantity"
    calculated_value = "COALESCE(sum(amount), 0)"
  elsif value == "retail"
    calculated_value = "COALESCE(sum(amount * price), 0)"
  elsif value == "wholesale"
    calculated_value = "COALESCE(sum(amount * cost), 0)"
  end

  query = <<-SQL
    select location, CAST(? AS DOUBLE PRECISION) as ? from table1 group by location
  SQL

  return Table1.find_by_sql([query, calculated_value, value])
end
If I call calculate_value("retail"), it will execute the query like this:
select location, CAST('COALESCE(sum(amount * price), 0)' AS DOUBLE PRECISION) as 'retail' from table1 group by location
This results in an error. I want it to execute without the quotes like this:
select location, CAST(COALESCE(sum(amount * price), 0) AS DOUBLE PRECISION) as retail from table1 group by location
I understand that the added quoting is what prevents SQL injection, but how would I prevent injection in this case? What is the best way to handle this scenario?
NOTE: This is a simplified version of the queries I'll be writing and I'll want to use find_by_sql.
A prepared statement cannot change the query structure: table or column names, the ORDER BY clause, function names, and so on. Only literal values can be passed as parameters.
And where is the SQL injection here? You are not putting a user-supplied value into the query text. Instead, you check the given value against an allowed list and use only SQL fragments you wrote yourself. In that case there is no danger of SQL injection.
I also want to link to this article. It is safe to build query text dynamically if you control all parts of that query, and it is much better for the RDBMS than putting some clever branching logic inside the query.
The original (MySQL) query looks like this:
UPDATE reponse_question_finale t1, reponse_question_finale t2
SET t1.nb_question_repondu = (9 - (ISNULL(t1.valeur_question_4)
                                 + ISNULL(t1.valeur_question_6)
                                 + ISNULL(t1.valeur_question_7)
                                 + ISNULL(t1.valeur_question_9)))
WHERE t1.APPLICATION = t2.APPLICATION;
I know you cannot update two tables in a single query, so I tried this:
UPDATE reponse_question_finale t1
SET nb_question_repondu = (9-(COALESCE(t1.valeur_question_4,'')::int+COALESCE(t1.valeur_question_6,'')::int+COALESCE(t1.valeur_question_7)::int+COALESCE(t1.valeur_question_9,'')::int))
WHERE t1.APPLICATION = t1.APPLICATION;
But this query gave me an error: invalid input syntax for integer: ""
I saw that the Postgres equivalent of MySQL's ISNULL() is COALESCE(), so I think I'm on the right track here.
I also know you cannot add varchar to varchar, so I tried casting to integer. I'm not sure whether I put the casts and parentheses in the right places, and given the error, maybe I cannot cast the result of COALESCE to int.
Lastly, I could certainly do a correlated sub-select to update my two tables, but I'm a little lost at this point.
The output must be an integer matching the number of questions answered in a backup survey.
Any thoughts?
Thanks.
coalesce() returns the first non-null value from the list supplied. So, if the column value is null the expression COALESCE(t1.valeur_question_4,'') returns an empty string and that's why you get the error.
But it seems you want something completely different: you want to check whether a column is null (or empty) and subtract 1 from the total for each one that is, in order to count the number of non-null columns.
To return 1 if a value is null (or empty) and 0 if it isn't, you can use:
(nullif(valeur_question_4, '') is null)::int
nullif returns null if the first value equals the second. The IS NULL condition returns a boolean (something that MySQL doesn't have), and that can be cast to an integer (false is cast to 0 and true to 1).
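A quick illustration of how that expression evaluates (the sample values are made up):
SELECT (nullif('',   '') IS NULL)::int AS empty_answer,   -- 1 (counts as unanswered)
       (nullif('42', '') IS NULL)::int AS filled_answer;  -- 0 (counts as answered)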
So the whole expression should be:
nb_question_repondu = 9 - (
(nullif(t1.valeur_question_4,'') is null)::int
+ (nullif(t1.valeur_question_6,'') is null)::int
+ (nullif(t1.valeur_question_7,'') is null)::int
+ (nullif(t1.valeur_question_9,'') is null)::int
)
Another option is to unpivot the columns and do a select on them in a sub-select:
update reponse_question_finale
set nb_question_repondu = (select count(*)
from (
values
(valeur_question_4),
(valeur_question_6),
(valeur_question_7),
(valeur_question_9)
) as t(q)
where nullif(trim(q),'') is not null);
Adding more columns to be considered is then quite easy, as you just need to add a single line to the values() clause.
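To see what the sub-select counts, here is a standalone version of it with made-up answers:
SELECT count(*)
FROM  (VALUES ('yes'), (NULL), (''), ('no')) AS t(q)
WHERE  nullif(trim(q), '') IS NOT NULL;
-- returns 2: only 'yes' and 'no' count as answered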
I use this PL/pgSQL code to build and execute SQL:
v_sql4 :='
INSERT INTO public.rebatesys(head,contract_no,history_no,f_sin,line_no,s_line_no,departmentcd,catagorycd,jan,seriescd,f_exclude, f_del,ins_date,ins_time,ins_user_id,ins_func_id,ins_ope_id,upd_date,upd_time,upd_user_id,upd_func_id,upd_ope_id)
VALUES (0, '''||v_contract_no||''', '||v_history_no||',1, '||v_line_no||', '||v_down_s_line_no||', '||coalesce(v_deptCD,null)||', '||0||', '''||v_singleJan||''','''||0||''','||v_fExclude||',
0, current_date, current_time, '||v_ins_user_id||', 0, 0,
current_date,current_time,'||v_upd_user_id||',0, 0);';
RAISE NOTICE 'v_sql4 IS : %', v_sql4;
EXECUTE v_sql4;
But when the variable v_deptCD is null, the whole SQL string is null. Even though I use coalesce, I still can't fix it; the output is:
NOTICE: v_sql4 IS : <NULL>
How to fix it?
When v_deptCD is null, you want to replace it with the string 'null', not the keyword.
', '||coalesce(v_deptCD,'null')||', '
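To see the difference, a minimal sketch:
SELECT 'VALUES (' || NULL || ')';                          -- NULL: the whole string is nullified
SELECT 'VALUES (' || coalesce(NULL::text, 'null') || ')';  -- VALUES (null)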
You can use this:
case when v_deptCD is not null then v_deptCD else null end
or use this for string concatenation inside SQL:
concat(field1, ', ', field2)
An alternative approach to JGH's solution is to use the function format(formatstr, ...). With %s it simply ignores NULL values (they come out as empty strings), but with %L it displays them as the unquoted keyword NULL. It will, however, single-quote numbers when you use %L, which requires casting in some cases.
Format arguments according to a format string. This function is similar to the C function sprintf. See Section 9.4.1.
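A minimal illustration of the %L (and %s) behaviour with a NULL in the middle (the values are made up):
SELECT format('VALUES (%L, %L, %s)', 'abc', NULL, 42);
-- VALUES ('abc', NULL, 42)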
But in my opinion the best solution is to use a USING clause and pass the values there. It looks a bit like a prepared statement and protects you from SQL injection, but it does not cache plans the way prepared statements do. There are simple examples of how to do this in the documentation on executing dynamic commands.
EXECUTE 'SELECT count(*) FROM mytable WHERE inserted_by = $1 AND inserted <= $2'
INTO c
USING checked_user, checked_date;
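Applied to the INSERT from the question it could look like this (a sketch: the column list is shortened, variable names are taken from your code):
EXECUTE 'INSERT INTO public.rebatesys(head, contract_no, history_no, departmentcd, jan)
         VALUES (0, $1, $2, $3, $4)'
USING v_contract_no, v_history_no, v_deptCD, v_singleJan;
-- a NULL in v_deptCD is passed as a real NULL; no quoting or coalesce needed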
What is the argument type for the order by clause in Postgresql?
I came across a very strange behaviour (using PostgreSQL 9.5). Namely, the query
select * from unnest(array[1,4,3,2]) as x order by 1;
produces 1,2,3,4 as expected. However the query
select * from unnest(array[1,4,3,2]) as x order by 1::int;
produces 1,4,3,2, which seems strange. Similarly, whenever I replace 1::int with some function call (e.g. greatest(0,1)) or even a CASE expression, the results come back unordered (contrary to what I would expect).
So which type should an argument of order by have, and how do I get the expected behaviour?
This is expected (and documented) behaviour:
A sort_expression can also be the column label or number of an output column
So the expression:
order by 1
sorts by the first column of the result set (as defined by the SQL standard)
However the expression:
order by 1::int
sorts by the constant value 1; it's essentially the same as:
order by 'foo'
With a constant value in the ORDER BY, all rows have the same sort key and thus aren't really sorted.
To sort by an expression, just use that:
order by
case
when some_column = 'foo' then 1
when some_column = 'bar' then 2
else 3
end
The above sorts the result based on the result of the case expression.
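For instance, applied to the unnest example from the question (the branch values here are made up just to show the effect):
SELECT * FROM unnest(array[1,4,3,2]) AS t(x)
ORDER BY CASE WHEN x = 4 THEN 0 ELSE x END;
-- returns 4, 1, 2, 3: the row with value 4 gets sort key 0 and comes first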
Actually I have a function with an integer argument which indicates the column to be used in the order by clause.
In the case where all columns are of the same type, this can work:
SELECT ....
ORDER BY
CASE function_to_get_a_column_number()
WHEN 1 THEN column1
WHEN 2 THEN column2
.....
WHEN 1235 THEN column1235
END
If columns are of different types, you can try:
SELECT ....
ORDER BY
CASE function_to_get_a_column_number()
WHEN 1 THEN column1::varchar
WHEN 2 THEN column2::varchar
.....
WHEN 1235 THEN column1235::varchar
END
But these "workarounds" are horrible. You need some other approach than the function returning a column number.
Maybe dynamic SQL?
I would say that dynamic SQL (thanks #kordirko and the others for the hints) is the best solution to the problem I originally had in mind:
create temp table my_data (
id serial,
val text
);
insert into my_data(id, val)
values (default, 'a'), (default, 'c'), (default, 'd'), (default, 'b');
create function fetch_my_data(col text)
returns setof my_data as
$f$
begin
return query execute $$
select * from my_data
order by $$|| quote_ident(col);
end
$f$ language plpgsql;
select * from fetch_my_data('val'); -- order by val
select * from fetch_my_data('id'); -- order by id
In the beginning I thought this could be achieved with a CASE expression as the argument of the ORDER BY clause - the sort_expression. And here comes the tricky part that confused me: when sort_expression is a kind of identifier (the name or the number of an output column), the corresponding column is used for ordering the results. But when sort_expression is some other value, the results are ordered by that value itself (computed for each row). This is @a_horse_with_no_name's answer rephrased.
So when I wrote ... order by 1::int, I effectively assigned the value 1 to every row and then tried to sort a set of ones, which is clearly useless.
There are some workarounds without dynamic queries, but they require writing more code and do not seem to have any significant advantages.