PostgreSQL: if a field is null, the whole SQL string is null - postgresql

I use this PL/pgSQL code to build and execute a dynamic SQL statement:
v_sql4 :='
INSERT INTO public.rebatesys(head,contract_no,history_no,f_sin,line_no,s_line_no,departmentcd,catagorycd,jan,seriescd,f_exclude, f_del,ins_date,ins_time,ins_user_id,ins_func_id,ins_ope_id,upd_date,upd_time,upd_user_id,upd_func_id,upd_ope_id)
VALUES (0, '''||v_contract_no||''', '||v_history_no||',1, '||v_line_no||', '||v_down_s_line_no||', '||coalesce(v_deptCD,null)||', '||0||', '''||v_singleJan||''','''||0||''','||v_fExclude||',
0, current_date, current_time, '||v_ins_user_id||', 0, 0,
current_date,current_time,'||v_upd_user_id||',0, 0);';
RAISE NOTICE 'v_sql4 IS : %', v_sql4;
EXECUTE v_sql4;
But when the field "v_deptCD" is null, the whole SQL string is null. Even though I use coalesce, I still can't fix it; the output is:
NOTICE: v_sql4 IS : <NULL>
How to fix it?

When v_deptCD is null, you want to replace it with the string 'null', not the keyword:
', '||coalesce(v_deptCD,'null')||', '
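A quick check of the behaviour (a standalone sketch, not part of the original answer): NULL propagates through ||, which is why v_sql4 ends up NULL, while coalescing to the text 'null' keeps the statement string intact.
SELECT ', ' || NULL || ', ';                     -- returns NULL
SELECT ', ' || coalesce(NULL, 'null') || ', ';   -- returns ', null, '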

You can use this:
case when v_deptCD is not null then v_deptCD else 'null' end
or use concat() for string concatenation inside the SQL, since it treats NULL arguments as empty strings:
concat(field1, ', ', field2)
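For example (a minimal illustration, not part of the original answer):
SELECT concat('a', NULL, 'b');   -- returns 'ab'; the NULL argument does not null out the result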

An alternative to JGH's solution is to use the function format(your_string, list, of, values). It can ignore NULL values (with %s a NULL argument becomes an empty string), but has the option to display them as the keyword NULL if you use %L in your format string. It will, however, single-quote numbers if you use that format specifier, requiring casts in some cases.
Format arguments according to a format string. This function is similar to the C function sprintf. See Section 9.4.1.
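A minimal sketch of how the question's INSERT could be built with format(), using a reduced column list for brevity; %L quotes each argument as a literal and renders a NULL argument as the unquoted keyword NULL, so the statement string itself never becomes NULL:
v_sql4 := format(
    'INSERT INTO public.rebatesys(contract_no, departmentcd, jan) VALUES (%L, %L, %L)',
    v_contract_no, v_deptCD, v_singleJan);
EXECUTE v_sql4;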
But in my opinion the best solution is to use a USING clause and pass the values there. It looks somewhat like a prepared statement and protects you from SQL injection, but it does not cache plans the way prepared statements do. There are simple examples of how to do this in the documentation on executing dynamic commands.
EXECUTE 'SELECT count(*) FROM mytable WHERE inserted_by = $1 AND inserted <= $2'
INTO c
USING checked_user, checked_date;
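Applied to the question (again with a reduced column list, as a sketch only), the INSERT can pass its variables through USING, so a NULL v_deptCD simply becomes a NULL parameter instead of nulling out the statement text:
EXECUTE 'INSERT INTO public.rebatesys(contract_no, departmentcd, jan) VALUES ($1, $2, $3)'
USING v_contract_no, v_deptCD, v_singleJan;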

Related

How to properly parameterize my postgresql query

I'm trying to parameterize my postgresql query in order to prevent SQL injection in my ruby on rails application. The SQL query will sum a different value in my table depending on the input.
Here is a simplified version of my function:
def self.calculate_value(value)
  calculated_value = ""
  if value == "quantity"
    calculated_value = "COALESCE(sum(amount), 0)"
  elsif value == "retail"
    calculated_value = "COALESCE(sum(amount * price), 0)"
  elsif value == "wholesale"
    calculated_value = "COALESCE(sum(amount * cost), 0)"
  end
  query = <<-SQL
    select CAST(? AS DOUBLE PRECISION) as ? from table1
  SQL
  return Table1.find_by_sql([query, calculated_value, value])
end
If I call calculate_value("retail"), it will execute the query like this:
select location, CAST('COALESCE(sum(amount * price), 0)' AS DOUBLE PRECISION) as 'retail' from table1 group by location
This results in an error. I want it to execute without the quotes like this:
select location, CAST(COALESCE(sum(amount * price), 0) AS DOUBLE PRECISION) as retail from table1 group by location
I understand that the addition of quotations is what prevents the sql injection but how would I prevent it in this case? What is the best way to handle this scenario?
NOTE: This is a simplified version of the queries I'll be writing and I'll want to use find_by_sql.
A prepared statement cannot change the query structure: table or column names, the ORDER BY clause, function names and so on. Only literal values can be passed that way.
Where is the SQL injection? You are not putting a user-supplied value into the query text. Instead, you check the given value against an allowed list and use only SQL fragments you wrote yourself. In this case there is no danger of SQL injection.
I also want to link to this article. It is safe to build query text dynamically if you control all parts of that query, and it's much better for the RDBMS than some clever logic inside the query.

invalid input syntax for integer: "9Na_(2)SO_(4)"

I'm trying to insert an alphanumeric value into a table:
INSERT INTO solution (solution, nextsolution) VALUES
('9Na_(2)SO_(4)', NULL), ('2Ni(OH)_(3)', (SELECT id FROM solution WHERE solution='9Na_(2)SO_(4)' & nextsolution=null));
solution is of type text and nextsolution is an integer. Unfortunately PostgreSQL doesn't accept the WHERE clause. It gives me the error:
ERROR: invalid input syntax for integer: "9Na_(2)SO_(4)"
LINE 9: ...OH)_(3)', (SELECT id FROM solution WHERE solution='9Na_(2)SO...
How can I solve this?
The issue is that the expression in the WHERE clause, '9Na_(2)SO_(4)' & nextsolution=null, tries to perform a bitwise AND (&) on the string, and this won't work (and probably isn't what you want anyway).
Looking at your query I think what you want is to first insert the value '9Na_(2)SO_(4)' and then the value '2Ni(OH)_(3)' with the id of the previous inserted row.
You need to do this as two statements and use a different syntax. This should do what you want:
INSERT INTO solution (solution, nextsolution) VALUES (
'9Na_(2)SO_(4)',
NULL
);
INSERT INTO solution (solution, nextsolution) VALUES (
'2Ni(OH)_(3)',
(SELECT id FROM solution WHERE solution='9Na_(2)SO_(4)' and nextsolution is null)
);
You need to use AND instead of & to combine the conditions in your WHERE clause - an ampersand (&) is used for bitwise operations.

Escaping formula for Grails derived properties

Grails offers derived properties to generate a field from a SQL expression using the formula mapping parameter:
static mapping = {
    myfield formula: "field1 + field2"
}
I'm trying to use the formula parameter with a PostgreSQL database to make a concatenated field. The syntax is a little strange since PostgreSQL 8.4 doesn't yet support concat_ws:
static mapping = {
    myfield formula: "array_to_string(array[field1, field2],' ')"
}
The produced SQL shown with loggingSql = true in the DataSource config has the table prefix inserted into some strange places:
select table0_.field1 as field1_19_0_,
table0_.field2 as field2_19_0_,
array_to_string(table0_.array[field1, table0_.field2], ' ') as formula0_0_
from test_table table0_ where table0_.id=?
The table prefix errantly appears before array but not before field1 in the derived formula. Is there a way to escape the prefix or correct this behavior more explicitly?
This is just an issue with parsing the formula syntax. GORM tries to insert the table prefix for unquoted expressions not followed by parens, so the ARRAY[] notation trips it up.
My solution was to define the concat_ws function:
CREATE OR REPLACE FUNCTION concat_ws(separator text, variadic str text[])
RETURNS text as $$
SELECT array_to_string($2, $1);
$$ LANGUAGE sql;
The GORM formula parameter can now avoid the ARRAY[] syntax, and works as expected.
myfield formula: "concat_ws(' ', field1, field2)"
I had a very similar problem and solved it by adding single-quotes around the things that GORM was trying to prefix:
static mapping = {
    dayOfYear formula: " EXTRACT('DOY' FROM observed) "
}
GORM then produced this, which worked:
select
EXTRACT('DOY' FROM observed) as y1_
This may not work in all cases, but I hope it helps somebody.

UDTF returning a Table on DB2 V5R4 with Dynamic SQL

I need to write a UDF returning a table. I've done it with static SQL.
I've created procedures that prepare a dynamic, complex SQL statement and return a cursor.
But now I need to create a UDF with dynamic SQL that returns a table to be used in an IN clause inside another SELECT.
Is it possible on DB2 V5R4? Do you have an example?
Thanks in advance...
I don't have V5R4, but I have i 6.1 and V5R3. I have a 6.1 example, and I poked around in V5R3 to find how to make the same example work there. I can't guarantee V5R4, but this ought to be extremely close. Generating the working V5R3 code into 'Run SQL Scripts' gives this:
DROP SPECIFIC FUNCTION SQLEXAMPLE.DYNTABLE ;
SET PATH "QSYS","QSYS2","SYSPROC","SYSIBMADM","SQLEXAMPLE" ;
CREATE FUNCTION SQLEXAMPLE.DYNTABLE (
SELECTBY VARCHAR( 64 ) )
RETURNS TABLE (
CUSTNBR DECIMAL( 6, 0 ) ,
CUSTFULLNAME VARCHAR( 12 ) ,
CUSTBALDUE DECIMAL( 6, 0 ) )
LANGUAGE SQL
NO EXTERNAL ACTION
MODIFIES SQL DATA
NOT FENCED
DISALLOW PARALLEL
CARDINALITY 100
BEGIN
DECLARE DYNSTMT VARCHAR ( 512 ) ;
DECLARE GLOBAL TEMPORARY TABLE SESSION.TCUSTCDT
( CUSTNBR DECIMAL ( 6 , 0 ) NOT NULL ,
CUSTNAME VARCHAR ( 12 ) ,
CUSTBALDUE DECIMAL ( 6 , 2 ) )
WITH REPLACE ;
SET DYNSTMT = 'INSERT INTO Session.TCustCDt SELECT t2.CUSNUM , (t2.INIT CONCAT '' '' CONCAT t2.LSTNAM) as FullName , t2.BALDUE FROM QIWS.QCUSTCDT t2 ' CONCAT CASE WHEN SELECTBY = '' THEN '' ELSE SELECTBY END ;
EXECUTE IMMEDIATE DYNSTMT ;
RETURN SELECT * FROM SESSION . TCUSTCDT ;
END ;
COMMENT ON SPECIFIC FUNCTION SQLEXAMPLE.DYNTABLE
IS 'UDTF returning dynamic table' ;
And in 'Run SQL Scripts', the function can be called like this:
SELECT t1.* FROM TABLE(sqlexample.dyntable('WHERE STATE = ''TX''')) t1
The example is intended to work over IBM's sample QCUSTCDT table in library QIWS. Most systems will have that table available. The table function returns values from two QCUSTCDT columns, CUSNUM and BALDUE, directly through two of the table function's columns, CUSTNBR and CUSTBALDUE. The third table function column, CUSTFULLNAME, gets its value by concatenating INIT and LSTNAM from QCUSTCDT.
However, the part that apparently relates to the question is the SELECTBY parameter of the function. The usage example shows that a WHERE clause is passed in and used to help build a dynamic INSERT INTO ... SELECT ... statement. The example shows that rows containing STATE='TX' will be returned. A more complex clause could be passed in, or the needed condition(s) could be retrieved from somewhere else, e.g., from another table.
The dynamic statement inserts rows into a GLOBAL TEMPORARY TABLE named SESSION.TCUSTCDT. The temporary table is defined in the function. The temporary column definitions are guaranteed (by the developer) to match the RETURNS TABLE columns of the table function, because no dynamic changes can be made to any of those elements. This allows SQL to reliably handle the columns returned from the function, and that lets it compile the function.
The RETURN statement simply returns whatever rows are in the temporary table after the dynamic statement completes.
The various field definitions take into account the somewhat unusual definitions in the QCUSTCDT file. Those don't make great sense, but they're useful enough.
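Because of the CASE WHEN SELECTBY = '' branch in the function body, it can also be called with an empty filter to return every QCUSTCDT row, for example:
SELECT t1.* FROM TABLE(sqlexample.dyntable('')) t1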

SQL invalid conversion return null instead of throwing error

I have a table with a varchar column, and I want to find values that match a certain number. So let's say the column contains the following entries (except with millions of rows in real life):
123456789012
2345678
3456
23 45
713?2
00123456789012
So I decide I want all the rows which are numerically 123456789012, and I write a statement that looks something like this:
SELECT * FROM MyTable WHERE CAST(MyColumn as bigint) = 123456789012
It should return the first and last row, but instead the whole query blows up because it can't convert the "23 45" and "713?2" to bigint.
Is there another way to do the conversion that will return NULL for values that can't convert?
SQL Server does NOT guarantee boolean operator short-circuit; see On SQL Server boolean operator short-circuit. So all solutions using ISNUMERIC(...) AND CAST(...) are fundamentally flawed (they may work, but they can arbitrarily fail later depending on the generated plan). A better solution is using CASE, as Thomas suggests: CASE ISNUMERIC(...) WHEN 1 THEN CAST(...) ELSE NULL END. But, as gbn pointed out, ISNUMERIC is notoriously finicky about what 'numeric' means, and in many cases where one would expect it to return 0 it returns 1. So mix the CASE with LIKE:
CASE WHEN MyRow NOT LIKE '%[^0-9]%' THEN CAST(MyRow as bigint) ELSE NULL END
But the real problem is that if you have millions of rows and you have to search them like this, you'll always end up scanning end to end, since the expression is not SARG-able (no matter how we rewrite it). The real issue here is data purity, and it should be addressed at the appropriate level, where the data is populated. Another thing to consider is whether it is possible to create a persisted computed column with this expression and create a filtered index on it which eliminates NULL (i.e. non-numeric values). That would speed things up a little.
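A sketch of that computed-column idea (hypothetical column and index names, shown here with a plain nonclustered index rather than a filtered one):
-- persist the LIKE-guarded cast so queries can seek on it instead of casting every row
ALTER TABLE MyTable ADD MyColumnAsBigint AS
    (CASE WHEN MyColumn NOT LIKE '%[^0-9]%' THEN CAST(MyColumn AS bigint) ELSE NULL END) PERSISTED;
CREATE NONCLUSTERED INDEX IX_MyTable_MyColumnAsBigint ON MyTable (MyColumnAsBigint);
SELECT * FROM MyTable WHERE MyColumnAsBigint = 123456789012;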
If you are using SQL Server 2012 you can use the two new functions:
TRY_CAST()
TRY_CONVERT()
Both methods are equivalent. They return a value cast to the specified data type if the cast succeeds; otherwise they return NULL. The only difference is that CONVERT is SQL Server specific while CAST is ANSI, so using CAST will make your code more portable (although it's not clear whether any other database provider implements TRY_CAST).
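For example (a minimal sketch against the question's table):
SELECT * FROM MyTable
WHERE TRY_CAST(MyColumn AS bigint) = 123456789012;   -- rows that fail the cast yield NULL and drop out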
ISNUMERIC will accept an empty string and values like 1.23 or 5E-04, so it can be unreliable.
And you don't know what order things will be evaluated in, so it could still fail (SQL is declarative, not procedural, so the WHERE clause probably won't be evaluated left to right).
So:
you want to accept values that consist only of the characters 0-9
you need to materialise the "number" filter so it's applied before the CAST
Something like:
SELECT
*
FROM
(
SELECT TOP 2000000000 *
FROM MyTable
WHERE MyColumn NOT LIKE '%[^0-9]%' --double negative rejects anything except 0-9
ORDER BY MyColumn
) foo
WHERE
CAST(MyColumn as bigint) = 123456789012 --applied after number check
Edit: quick example that fails.
CREATE TABLE #foo (bigintstring varchar(100))
INSERT #foo (bigintstring) VALUES ('1.23')
INSERT #foo (bigintstring) VALUES ('1 23')
INSERT #foo (bigintstring) VALUES ('123')
SELECT * FROM #foo
WHERE
ISNUMERIC(bigintstring) = 1
AND
CAST(bigintstring AS bigint) = 123
SELECT *
FROM MyTable
WHERE ISNUMERIC(MyRow) = 1
AND CAST(MyRow as float) = 123456789012
The ISNUMERIC() function should give you what you need.
SELECT * FROM MyTable
WHERE ISNUMERIC(MyRow) = 1
AND CAST(MyRow as bigint) = 123456789012
And to add a case statement like Thomas suggested:
SELECT * FROM MyTable
WHERE CASE ISNUMERIC(MyRow)
WHEN 1 THEN CAST(MyRow as bigint)
ELSE NULL
END = 123456789012
http://msdn.microsoft.com/en-us/library/ms186272.aspx
SELECT *
FROM MyTable
WHERE (ISNUMERIC(MyColumn) = 1) AND (CAST(MyColumn as bigint) = 123456789012)
Additionally you can use a CASE statement in order to get null values.
SELECT
CASE
WHEN (ISNUMERIC(MyColumn) = 1) THEN CAST(MyColumn as bigint)
ELSE NULL
END AS 'MyColumnAsBigInt'
FROM tableName
If you require additional filtering, for values which are not valid to cast to bigint, you can use the following instead of ISNUMERIC:
PATINDEX('%[^0-9]%', MyColumn) = 0
If you need decimal values instead of integers, cast to float instead and change the pattern to '%[^0-9.]%'.
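Combining that pattern with the CASE approach above (a sketch, not from the original answer):
SELECT
    CASE
        WHEN PATINDEX('%[^0-9]%', MyColumn) = 0 THEN CAST(MyColumn AS bigint)
        ELSE NULL
    END AS 'MyColumnAsBigInt'
FROM tableName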