Postgres function passing array of strings

I have the Postgres function below to return some info from my DB. I need the p_ic parameter to be able to take an array of
strings.
CREATE OR REPLACE FUNCTION eddie.getinv(
IN p_ic character varying[],
IN p_id character varying)
RETURNS TABLE(cnt bigint, actualid text, actualcompany text, part text, daysinstock double precision, condition text,
ic text, price numeric, stock text, quantity bigint, location text, comments text) AS
$$
BEGIN
RETURN QUERY
WITH cte AS (
SELECT
CASE WHEN partnerslist IS NULL OR partnerslist = '' THEN
'XX99'
ELSE
partnerslist
END AS a
FROM support.members WHERE id = p_id
), ctegroup AS
(
SELECT
u.id AS actualid,
(SELECT m.company || ' (' || m.id ||')' FROM support.members m WHERE m.id = u.id) AS actualcompany,
u.itemname AS part,
DATE_PART('day', CURRENT_TIMESTAMP - u.datein::timestamp) AS daysinstock,
TRIM(u.grade)::character varying AS condition,
u.vstockno::text AS stock,
u.holl::text AS ic,
CASE WHEN u.rprice > 0 THEN
u.rprice
ELSE
NULL
END AS price,
u.quantity,
u.location,
u.comments::text
FROM public.net u
WHERE u.holl in (p_ic)
AND visibledate <= now()
AND u.id = ANY(REGEXP_SPLIT_TO_ARRAY(p_id ||','|| (SELECT a FROM cte), ','))
ORDER BY u.itemname, u.id
)
SELECT
COUNT(ctegroup.ic) OVER(PARTITION BY ctegroup.ic ORDER BY ctegroup.ic) AS cnt,
actualid,
MAX(actualcompany) AS actualcompany,
MAX(part) AS part,
MAX(daysinstock) AS daysinstock,
STRING_AGG(condition,',') AS condition,
MAX(ic) AS ic,
MAX(price) AS price,
STRING_AGG(stock,',') AS stock,
SUM(quantity) AS qty,
STRING_AGG(location,',') AS location,
STRING_AGG(comments,';') AS comments
FROM ctegroup
GROUP BY part, actualid, ic
ORDER BY actualid;
END; $$
LANGUAGE plpgsql;
I am calling it from the pgAdminIII Query window like this:
SELECT * FROM eddie.getinv(array['536-01036','536-01033L','536-01037'], 'N40')
But it is returning this error:
ERROR: operator does not exist: text = character varying[]
LINE 28: WHERE u.holl in (p_ic)
How do I fix this, or am I calling it incorrectly? I will be calling it from a PHP API function similar to this:
$id = 'N40';
$ic = array('536-01036','536-01033L','536-01037');
$sql = "SELECT * FROM eddie.getinv(array['". implode("','",$ic)."'], '".$id."');";
try
{
$results = pg_query($sql);
if(pg_num_rows($results) == 0) {
$rows = [];
}
else
{
$data = pg_fetch_all($results);
foreach($data as $item)
{
$rows[$item["ic"]][] = $item;
}
}
pg_free_result($results);
}
catch (Exception $e)
{
$err = array("message"=>$e->getMessage(), "code"=> $e->getCode(), "error"=>$e->__toString().",\n".print_r($_REQUEST, true));
echo json_encode($err);
}
echo json_encode($rows);

It looks like your array is being passed to the function just fine. The problem is in your query.
IN () clauses expect a comma-separated list of values. When you put an array in there, it's interpreted as a one-element list whose single value is the whole array. In other words, u.holl in (p_ic) checks whether u.holl is equal to p_ic as a whole, and that comparison fails due to the type mismatch (text vs. character varying[]).
If you want to test the value against the contents of the array, use u.holl = ANY(p_ic).
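So the fix is a one-line change inside the function body; a minimal sketch, with everything else exactly as in the question:
WHERE u.holl = ANY(p_ic)
With that change, the pgAdmin call above (and the PHP call, which builds the same array literal) should work as-is.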

Related

Is it possible in Snowflake to automate a merge?

Currently I have a script that merges between my source and target table by updating and inserting. Both of these tables update daily through a task created on Snowflake. I would like to perform this merge daily too. Is it possible to automate this merge through either a task or something else on Snowflake?
Thanks
If your script contains only SQL commands (or commands that can be written in JS), you can create a stored procedure to call them, and then create a task to run this procedure every day.
https://docs.snowflake.com/en/sql-reference/stored-procedures-usage.html
https://docs.snowflake.com/en/user-guide/tasks-intro.html
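A minimal sketch of that setup, assuming the merge fits in a single statement (all object names below are placeholders):
-- a task can run the MERGE directly; for a multi-statement script,
-- wrap it in a stored procedure and CALL it from the task instead
CREATE OR REPLACE TASK DAILY_MERGE_TASK
WAREHOUSE = MY_WH
SCHEDULE = 'USING CRON 0 6 * * * UTC' -- every day at 06:00 UTC
AS
MERGE INTO MY_TARGET t
USING MY_SOURCE s ON t.id = s.id
WHEN MATCHED THEN UPDATE SET t.val = s.val
WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val);
-- tasks are created suspended; resume to start the schedule
ALTER TASK DAILY_MERGE_TASK RESUME;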
-- Here are the prerequisites for running the auto-merge procedure pasted below ---
1 --Create Log Table:
--EDWH_DEV.WS_EA_DNATA_DEV.GEN_LOG definition
create or replace TABLE GEN_LOG (
LOG_ID NUMBER(38,0) autoincrement,
"number of rows inserted" NUMBER(38,0),
"number of rows updated" NUMBER(38,0),
PROC_NAME VARCHAR(100),
FINISHED TIMESTAMP_NTZ(9),
USER_NAME VARCHAR(100),
USER_ROLE VARCHAR(100),
STATUS VARCHAR(50),
MESSAGE VARCHAR(2000)
);
2 --Data is loaded based on an existing table structure, which must match the source file's column count.
--Example:
--EDWH_DEV.WS_EA_DNATA_DEV.AIRLINES definition
create or replace TABLE AIRLINES (
CONSOLIDATED_AIRLINE_CODE VARCHAR(80),
POSSIBLE_CUSTOMER_NAME VARCHAR(100),
CUSTOMER_TYPE VARCHAR(70),
CONSOLIDATED_AIRLINE_NAME VARCHAR(90),
constraint CONSOLIDATED_AIRLINE_CODE unique (CONSOLIDATED_AIRLINE_CODE),
constraint CUSTOMER_TYPE unique (CUSTOMER_TYPE)
);
3 --The file in the stage is AIRLINES.CSV. It has the same number of columns in the same order; it does not need the same headers, as they will be aliased automatically to the created table's column names as above.
4 --Make sure you have the required file format set, or use the default ones (refer to the SF documentation).
--ALTER FILE FORMAT "EDWH_DEV"."WS_EA_DNATA_DEV".CSV SET COMPRESSION = 'AUTO' FIELD_DELIMITER = ',' RECORD_DELIMITER = '\n' SKIP_HEADER = 1 FIELD_OPTIONALLY_ENCLOSED_BY = '\042' TRIM_SPACE = FALSE ERROR_ON_COLUMN_COUNT_MISMATCH = TRUE ESCAPE = 'NONE' ESCAPE_UNENCLOSED_FIELD = '\134' DATE_FORMAT = 'AUTO' TIMESTAMP_FORMAT = 'AUTO' NULL_IF = ('\\N');
5 --Tables must have constraints added, which will then be used for the ON clause in the merge statement. The constraint name must match the column name.
ALTER TABLE AIRLINES ADD CONSTRAINT CONSOLIDATED_AIRLINE_CODE UNIQUE (CONSOLIDATED_AIRLINE_CODE);
ALTER TABLE AIRLINES ADD CONSTRAINT CUSTOMER_TYPE UNIQUE (CUSTOMER_TYPE);
6 --You have a stage set up and can view the files in it.
list @my_stage;
7 -- this view is used to pull unique fields for the ON clause in the merge
CREATE OR REPLACE VIEW CONSTRAINS_VW AS
SELECT
tbl.table_schema,
tbl.table_name,
con.constraint_name,
col.data_type
FROM EDWH_DEV.information_schema.table_constraints con
INNER JOIN EDWH_DEV.information_schema.tables tbl
ON con.table_name = tbl.table_name
AND con.constraint_schema = tbl.table_schema
INNER JOIN EDWH_DEV.information_schema.columns col
ON tbl.table_name = col.table_name
AND con.constraint_name = col.column_name
AND con.constraint_schema = col.table_schema
WHERE con.constraint_type in ('PRIMARY KEY', 'UNIQUE');
------ the general procedure code: compile once, use many times :) ---
CREATE OR REPLACE PROCEDURE "MERGER_BUILDER_GEN"("TABLE_NAME" VARCHAR(200), "SCHEMA_NAME" VARCHAR(200), "STAGE_NAME" VARCHAR(200))
RETURNS VARCHAR(32000)
LANGUAGE JAVASCRIPT
EXECUTE AS CALLER
AS $$
var result;
snowflake.execute( {sqlText: "begin transaction;"});
var my_sql_command = `SELECT
0 AS "number of rows inserted"
, 0 as "number of rows updated"
,'` + TABLE_NAME + `' AS proc_name
,CURRENT_TIMESTAMP() AS FINISHED
,CURRENT_USER() AS USER_NAME
,CURRENT_ROLE() USER_ROLE
,'Failed' as status`;
var statement1 = snowflake.createStatement( {sqlText: my_sql_command} );
var result_set1 = statement1.execute();
result_set1.next();
var column1 = result_set1.getColumnValue(1);
var column2 = result_set1.getColumnValue(2);
var column3 = result_set1.getColumnValue(3);
var column4 = result_set1.getColumnValue(4);
var column5 = result_set1.getColumnValue(5);
var column6 = result_set1.getColumnValue(6);
var column7 = result_set1.getColumnValue(7);
try {
var v_sql_stmt = `CREATE OR REPLACE temporary TABLE vars_of_merger_dyn00 AS
SELECT
COL_NAMES_SELECT
,REPLACE(listagg (distinct' nvl(tgt."'||cons.constraint_name||'",'
||CASE WHEN cons.data_type ='FLOAT' THEN '0'
WHEN cons.data_type ='NUMBER' THEN '0'
WHEN cons.data_type ='DATE' THEN '''1900-12-01'''
WHEN cons.data_type ='TIMESTAMP_NTZ' THEN '''1900-12-01 00:00:00'''
ELSE '-999999' END||') = nvl(src."'
||cons.constraint_name ||'",'
||CASE WHEN cons.data_type ='FLOAT' THEN '0'
WHEN cons.data_type ='NUMBER' THEN '0'
WHEN cons.data_type ='DATE' THEN '''1900-12-01'''
WHEN cons.data_type ='TIMESTAMP_NTZ' THEN '''1900-12-01 00:00:00'''
ELSE '-999999' END ,') and \n') ||')','-999999','''''') AS dd
,REPLACE(COL_NAMES_WHEN,'-999999','''''') AS COL_NAMES_WHEN
,COL_NAMES_SET
,COL_NAMES_INS
,COL_NAMES_INS1
FROM (
SELECT
InTab.TABLE_NAME
,listagg (' cast($' ||InTab.ORDINAL_POSITION || ' as ' || intab.DATA_TYPE || ') as "' ||InTab.COLUMN_NAME,'", \n') WITHIN GROUP ( ORDER BY ORDINAL_POSITION asc ) ||'"' AS Col_Names_select
,listagg (' nvl(tgt."' || CASE WHEN intab.CM IS NULL THEN InTab.COLUMN_NAME ELSE NULL end || '", '
||CASE WHEN intab.data_type ='FLOAT' THEN '0'
WHEN intab.data_type ='NUMBER' THEN '0'
WHEN intab.data_type ='DATE' THEN '''1900-12-01'''
WHEN intab.data_type ='TIMESTAMP_NTZ' THEN '''1900-12-01 00:00:00''' ELSE '-999999' END
||') != nvl(src."' ||InTab.COLUMN_NAME||'",'||
CASE WHEN intab.data_type ='FLOAT' THEN '0'
WHEN intab.data_type ='NUMBER' THEN '0'
WHEN intab.data_type ='DATE' THEN '''1900-12-01'''
WHEN intab.data_type ='TIMESTAMP_NTZ' THEN '''1900-12-01 00:00:00''' ELSE '-999999' END
,') OR\n') WITHIN GROUP ( ORDER BY ORDINAL_POSITION asc ) ||')' AS Col_Names_when
,listagg (' tgt."' ||CASE WHEN intab.CM IS NULL THEN InTab.COLUMN_NAME ELSE NULL end || '"= src."' ||InTab.COLUMN_NAME , '",\n') WITHIN GROUP ( ORDER BY ORDINAL_POSITION asc ) ||'"' AS Col_Names_set
,listagg ( '"'||InTab.COLUMN_NAME,'",\n') WITHIN GROUP ( ORDER BY ORDINAL_POSITION asc ) ||'"' AS Col_Names_ins
,listagg ( ' src."' ||InTab.COLUMN_NAME,'",\n') WITHIN GROUP ( ORDER BY InTab.ORDINAL_POSITION asc ) ||'"' AS Col_Names_ins1
,listagg (ORDINAL_POSITION,',') WITHIN GROUP ( ORDER BY ORDINAL_POSITION asc ) ORDINAL_POSITION
FROM (
SELECT
InTab.TABLE_NAME
,InTab.COLUMN_NAME
,InTab.ORDINAL_POSITION
,intab.DATA_TYPE
,cons.CONSTRAINT_NAME AS CM
FROM INFORMATION_SCHEMA.COLUMNS InTab
LEFT JOIN constrains_vw cons ON cons.table_name = intab.table_name AND InTab.COLUMN_NAME = cons.CONSTRAINT_NAME
where intab.TABLE_SCHEMA = '`+ SCHEMA_NAME +`'
AND intab.TABLE_NAME = '`+ TABLE_NAME +`'
GROUP BY
InTab.TABLE_NAME
,InTab.COLUMN_NAME
,InTab.COLUMN_NAME
,InTab.ORDINAL_POSITION
,intab.DATA_TYPE
,CONSTRAINT_NAME
ORDER BY InTab.TABLE_NAME,InTab.ORDINAL_POSITION ) InTab
GROUP BY TABLE_NAME
ORDER BY TABLE_NAME,ORDINAL_POSITION
) tt
LEFT JOIN constrains_vw cons ON cons.table_name = tt.table_name
GROUP BY
COL_NAMES_SELECT
,COL_NAMES_WHEN
,COL_NAMES_SET
,COL_NAMES_INS
,COL_NAMES_INS1;` ;
var rs_clip_name = snowflake.execute ({sqlText: v_sql_stmt});
var my_sql_command1 = `SELECT Col_Names_select,dd,Col_Names_when,Col_Names_set,Col_Names_ins,Col_Names_ins1 FROM vars_of_merger_dyn00;`;
var statement2 = snowflake.createStatement( {sqlText: my_sql_command1} );
var result_set = statement2.execute();
result_set.next();
var Col_Names_select = result_set.getColumnValue(1);
var dd = result_set.getColumnValue(2);
var Col_Names_when = result_set.getColumnValue(3);
var Col_Names_set = result_set.getColumnValue(4);
var Col_Names_ins = result_set.getColumnValue(5);
var Col_Names_ins1 = result_set.getColumnValue(6);
if (Col_Names_set == '"')
{
var my_sql_command2 = `MERGE INTO EDWH_DEV.`+ SCHEMA_NAME +`.`+ TABLE_NAME +` AS tgt
USING
( select
`+ Col_Names_select +`
from
@` + STAGE_NAME + `/` + TABLE_NAME + `.csv (file_format => 'CSV') )
AS src
ON ( `+ dd +`
)
WHEN NOT MATCHED
THEN INSERT ( `+ Col_Names_ins +`)
VALUES
(`+ Col_Names_ins1 +`); `;
var rs_clip_name2 = snowflake.execute ({sqlText: my_sql_command2});
snowflake.createStatement( { sqlText: `INSERT INTO GEN_LOG
("number of rows inserted", "number of rows updated", proc_name , FINISHED, USER_NAME, USER_ROLE, STATUS, MESSAGE)
SELECT "number of rows inserted", 0 as "number of rows updated", '` + TABLE_NAME + `' AS proc_name , sysdate(), CURRENT_USER() ,CURRENT_ROLE(),'done' as status ,'' AS message
FROM TABLE (RESULT_SCAN(LAST_QUERY_ID()));`} ).execute();
}
else
{
var my_sql_command2 = `MERGE INTO EDWH_DEV.`+ SCHEMA_NAME +`.`+ TABLE_NAME +` AS tgt
USING
( select
`+ Col_Names_select +`
from
@` + STAGE_NAME + `/` + TABLE_NAME + `.csv (file_format => 'CSV') )
AS src
ON ( `+ dd +`
)
WHEN MATCHED
AND `+ Col_Names_when +`
THEN UPDATE SET
`+ Col_Names_set +`
WHEN NOT MATCHED
THEN INSERT ( `+ Col_Names_ins +`)
VALUES
(`+ Col_Names_ins1 +`); `;
var rs_clip_name2 = snowflake.execute ({sqlText: my_sql_command2});
snowflake.createStatement( { sqlText: `INSERT INTO GEN_LOG
("number of rows inserted", "number of rows updated", proc_name , FINISHED, USER_NAME, USER_ROLE, STATUS, MESSAGE)
SELECT "number of rows inserted","number of rows updated", '` + TABLE_NAME + `' AS proc_name , sysdate(), CURRENT_USER() ,CURRENT_ROLE(),'done' as status ,'' AS message
FROM TABLE (RESULT_SCAN(LAST_QUERY_ID()));`} ).execute();
}
snowflake.execute( {sqlText: "commit;"} );
result = "Succeeded" + my_sql_command2 ;
} catch (err) {
snowflake.execute({
sqlText: `insert into GEN_LOG VALUES (DEFAULT,?,?,?,?,?,?,?,?)`
,binds: [column1, column2, column3 ,column4 , column5 , column6 ,column7 , err.code + " | State: " + err.state + "\n Message: " + err.message + "\nStack Trace:\n" + err.stackTraceTxt ]
});
snowflake.execute( {sqlText: "commit;"} );
return 'Failed.' + my_sql_command2 ;
}
return result;
$$;
Now you can stop here and use the proc as: CALL MERGER_BUILDER_GEN('MY_TABLE','MY_SCHEMA','MY_STAGE'); --- all arguments are case sensitive.
So, in a nutshell: it writes a proper merge statement for any table DDL that you created in the schema and fed to the proc. It looks up the file in the stage and dynamically builds the SELECT for the merge, then the other little bits: the ON clause, the "WHEN MATCHED AND NVL(everything)" condition, and the "WHEN NOT MATCHED THEN INSERT" branch. It also casts to the different data types on the fly, kind of like what COPY INTO does, but in my humble opinion MERGE is better for non-perfect deltas. So if you don't want a data lake with files partitioned over dates, stitched together via external tables or (god forbid) a union view, give this a shot.
Also, you can use a little set-up to run auto-merge on as many tables as you like, one by one:
create or replace TABLE PROC_LIST (
PROC_PRIORIT_ID NUMBER(38,0) autoincrement,
PROC_NAME VARCHAR(150)
);
INSERT INTO PROC_LIST (PROC_NAME) VALUES ('TABLE1'); -- with 50 columns
INSERT INTO PROC_LIST (PROC_NAME) VALUES ('TABLE2');
INSERT INTO PROC_LIST (PROC_NAME) VALUES ('TABLE3');
INSERT INTO PROC_LIST (PROC_NAME) VALUES ('TABLE4');
INSERT INTO PROC_LIST (PROC_NAME) VALUES ('TABLE5'); -- with 500 columns
INSERT INTO PROC_LIST (PROC_NAME) VALUES ('TABLE6');
INSERT INTO PROC_LIST (PROC_NAME) VALUES ('TABLE7');
INSERT INTO PROC_LIST (PROC_NAME) VALUES ('TABLE8'); -- the dynamic SQL limit is 32,000 chars, go crazy
INSERT INTO PROC_LIST (PROC_NAME) VALUES ('TABLE9');
--Create a nice list of tables to be loaded 1 by 1 using auto merge!
CREATE OR REPLACE VIEW PROC_LOAD_CONTROL AS
select
metadata$filename
,REPLACE(REPLACE(metadata$filename,'.csv',''),'path/to/your_table_ifnot_inmain_stage_location/','') AS file_name
,pl.PROC_NAME AS table_name
,'MY_SCHEMA' as schema_name
,'MY_STAGE' AS stage_name
from @MY_STAGE
inner JOIN PROC_LIST pl ON pl.PROC_NAME = REPLACE(REPLACE(metadata$filename,'.csv',''),'path/to/your_table_ifnot_inmain_stage_location/','')
GROUP BY metadata$filename,pl.proc_name
ORDER BY REPLACE(REPLACE(metadata$filename,'.csv',''),'path/to/your_table_ifnot_inmain_stage_location/','') asc;
--this will make sure that your table names match the actual files in your stage; please see the prerequisites above to make this thing work smoothly
CREATE OR REPLACE PROCEDURE "PROJECT_REFRESH_MRG"()
RETURNS VARCHAR(1000)
LANGUAGE JAVASCRIPT
EXECUTE AS OWNER
AS $$
try {
var v_sql_stmt = `SELECT
table_name
,schema_name
,stage_name
FROM PROC_LOAD_CONTROL;`;
var rs_proc_name = snowflake.execute ({sqlText: v_sql_stmt});
var v_table_name = '';
var v_schema_name = '';
var v_stage_name = '';
//loop through all the tables in the control view and run the merge for each
while (rs_proc_name.next()) {
v_table_name = rs_proc_name.getColumnValue(1);
v_schema_name = rs_proc_name.getColumnValue(2);
v_stage_name = rs_proc_name.getColumnValue(3);
//run the auto-merge for this table
v_sql_stmt = `call MERGER_BUILDER_GEN('`+v_table_name+`','`+v_schema_name+`','`+v_stage_name+`')`;
snowflake.execute ({sqlText: v_sql_stmt});
}
return "Success: " + v_sql_stmt;
}
catch (err)
{
//error log here
return "Failed" + err; // Return a success/error indicator
}
$$;
--- So this will create a list of tables with stage and schema vars and pass them in a while loop to the generic merger builder.
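With everything in place, refreshing the whole list is then a single call (which you could itself schedule with a task, as described at the top):
CALL PROJECT_REFRESH_MRG();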

Error while trying to run SQL queries with one variable having values with comma as a string

I am getting the error below in the following SQL queries. For the osdsId variable, I will get the values as a list from the UI; that's why I hardcoded the value for testing. But it displays the error 'Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.' However, it works if I assign just one value. Thank you.
declare @osdsId VARCHAR(max) = '4292, 4293',
@pqrId VARCHAR(max) = NULL,
@queryOrderBy VARCHAR(max) = 'DATE_INSERTED ASC',
@rowLimit INT = 0,
@startRow INT = 0,
@endRow INT = 0
SELECT * FROM (
SELECT
ROW_NUMBER() OVER (ORDER BY @queryOrderBy ) AS ROWNUM,
S.OSDS_ID,
S.PQR_ID,
S.DATE_INSERTED,
S.BUY_CURRENCY,
S.SELL_CURRENCY,
S.BUY_EXCHANGE_RATE,
S.SELL_EXCHANGE_RATE,
S.BUY_PERCENT,
S.SELL_PERCENT
FROM
table1 S
WHERE
1=1
AND S.OSDS_ID IN (COALESCE((SELECT TXT_VALUE FROM
DBO.FN_PARSETEXT2TABLE_TEXTONLY(@osdsId, ',') ), S.OSDS_ID))
AND S.PQR_ID IN (COALESCE((SELECT TXT_VALUE FROM
DBO.FN_PARSETEXT2TABLE_TEXTONLY(@pqrId, ',') ), S.PQR_ID))
)x
WHERE ROWNUM BETWEEN
CASE WHEN (@rowLimit > 0) THEN @startRow ELSE ROWNUM END
AND CASE WHEN (@rowLimit > 0) THEN @endRow ELSE ROWNUM END
I believe it's because your FN_PARSETEXT2TABLE_TEXTONLY returns a table, but COALESCE expects each of its arguments to be a single value. So when the table contains two values, that's where the error comes from. You could add a TOP 1, but that would defeat the purpose.
What I would do: declare a table variable and populate it with the UDF outside of your query. It's inefficient to run the UDF within a sub-query anyway, since its result is constant throughout execution.
I'm not clear on why you are using the COALESCE, though. Is it so that, if the parsing function fails and returns NULL, the query still returns something? Because it will return everything in that case.
So, assuming that your FN_PARSETEXT2TABLE_TEXTONLY returns a table with a single integer column:
declare @osdsId VARCHAR(max) = '4292, 4293',
@pqrId VARCHAR(max) = NULL,
@queryOrderBy VARCHAR(max) = 'DATE_INSERTED ASC',
@rowLimit INT = 0,
@startRow INT = 0,
@endRow INT = 0;
declare @osdsTbl TABLE (oid INT);
declare @pqrTbl TABLE (pid INT);
-- table variables cannot be assigned with SET; populate them with INSERT ... SELECT
INSERT INTO @osdsTbl (oid) SELECT TXT_VALUE FROM DBO.FN_PARSETEXT2TABLE_TEXTONLY(@osdsId, ',');
INSERT INTO @pqrTbl (pid) SELECT TXT_VALUE FROM DBO.FN_PARSETEXT2TABLE_TEXTONLY(@pqrId, ',');
SELECT * FROM (
SELECT
ROW_NUMBER() OVER (ORDER BY #queryOrderBy ) AS ROWNUM,
S.OSDS_ID,
S.PQR_ID,
S.DATE_INSERTED,
S.BUY_CURRENCY,
S.SELL_CURRENCY,
S.BUY_EXCHANGE_RATE,
S.SELL_EXCHANGE_RATE,
S.BUY_PERCENT,
S.SELL_PERCENT
FROM
table1 S
WHERE
1=1
AND (@osdsId IS NULL OR S.OSDS_ID IN (SELECT oid FROM @osdsTbl))
AND (@pqrId IS NULL OR S.PQR_ID IN (SELECT pid FROM @pqrTbl))
)x
WHERE ROWNUM BETWEEN
CASE WHEN (@rowLimit > 0) THEN @startRow ELSE ROWNUM END
AND CASE WHEN (@rowLimit > 0) THEN @endRow ELSE ROWNUM END
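As an aside: on SQL Server 2016 or later, the built-in STRING_SPLIT function could populate the table variables without the custom UDF. A sketch, assuming the ids are integers and the list is comma-separated as above:
INSERT INTO @osdsTbl (oid)
SELECT CAST(LTRIM(RTRIM(value)) AS INT)
FROM STRING_SPLIT(@osdsId, ',');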

How can I set a value after Select query in PostgreSQL?

I use a simple select query:
SELECT
transactiontype
FROM
posfeed
LIMIT 100;
In the result I have some rows with empty values.
I need to set some default value in the SELECT query result for some field.
Something like:
if (transactiontype = '') {
//SET SOME VALUE HERE
} else {
//LEAVE
}
And can I somehow use it in the 'where' clause?
To use a column alias in the where clause you must wrap the select into a derived table:
select *
from (
SELECT CASE transactiontype
WHEN '' THEN 'some_default_value'
ELSE transactiontype
END AS transactiontype
FROM the_table
) t
where transactiontype = '...';
But I don't see the reason to do that; if you want to find the rows where some_default_value is returned, just run:
SELECT CASE transactiontype
WHEN '' THEN 'some_default_value'
ELSE transactiontype
END AS transaction_type
FROM the_table
where transactiontype = '';
But that then doesn't really make sense, because the above is equivalent to:
SELECT 'some_default_value'
FROM the_table
where transactiontype = '';
So I guess there is something you did not include in the question or your comments.
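As an aside, the same empty-string-to-default mapping can be written more compactly with NULLIF inside COALESCE; this is equivalent to the CASE expression above:
SELECT COALESCE(NULLIF(transactiontype, ''), 'some_default_value') AS transactiontype
FROM the_table;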

Return a value if select returned null

I need to return a value if the select returned null. I found a solution by putting the query in a sub-query:
SELECT COALESCE((SELECT id FROM tbl WHERE id = 9823474), 4) AS id FROM RDB$DATABASE;
The plain query would return null, because the value 9823474 does not exist in the table, but I want to return a value in that case (for example 4). So the only solution I found was to use a select inside a sub-query; then COALESCE works. If I did not do that, COALESCE would also return null.
Is this the only solution?
No, that is not the only way. For example:
SELECT FIRST 1 id FROM (
SELECT id FROM tbl WHERE id = 9823474
UNION ALL
SELECT 4 FROM rdb$database)
Or you can use an anonymous procedure (EXECUTE BLOCK): http://firebirdsql.su/doku.php?id=execute_block
EXECUTE BLOCK RETURNS ( id integer )
AS
BEGIN
IF ( EXISTS (SELECT * FROM tbl WHERE id = 9823474) )
THEN id = 9823474;
ELSE id = 4;
SUSPEND;
END
... there are always many methods.
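For instance, an aggregate query always returns exactly one row, so MAX can stand in for the sub-select (a sketch of yet another variant):
SELECT COALESCE(MAX(id), 4) AS id FROM tbl WHERE id = 9823474;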

Test for null in function with varying parameters

I have a Postgres function:
create function myfunction(integer, text, text, text, text, text, text) RETURNS
table(id int, match text, score int, nr int, nr_extra character varying, info character varying, postcode character varying,
street character varying, place character varying, country character varying, the_geom geometry)
AS $$
BEGIN
return query (select a.id, 'address' as match, 1 as score, a.ad_nr, a.ad_nr_extra,a.ad_info,a.ad_postcode, s.name as street, p.name place , c.name country, a.wkb_geometry as wkb_geometry from "Addresses" a
left join "Streets" s on a.street_id = s.id
left join "Places" p on s.place_id = p.id
left join "Countries" c on p.country_id = c.id
where c.name = $7
and p.name = $6
and s.name = $5
and a.ad_nr = $1
and a.ad_nr_extra = $2
and a.ad_info = $3
and ad_postcode = $4);
END;
$$
LANGUAGE plpgsql;
This function fails to give the right result when one or more of the variables entered are NULL, because ad_postcode = NULL never evaluates to TRUE.
What can I do to test for NULL inside the query?
I disagree with some of the advice in other answers. This can be done with PL/pgSQL, and I think it is mostly far superior to assembling queries in a client application. It is faster and cleaner, and the app only sends the bare minimum across the wire in requests. SQL statements are saved inside the database, which makes them easier to maintain, unless you want to collect all business logic in the client application; that depends on the general architecture.
PL/pgSQL function with dynamic SQL
CREATE OR REPLACE FUNCTION func(
_ad_nr int = NULL
, _ad_nr_extra text = NULL
, _ad_info text = NULL
, _ad_postcode text = NULL
, _sname text = NULL
, _pname text = NULL
, _cname text = NULL)
RETURNS TABLE(id int, match text, score int, nr int, nr_extra text
, info text, postcode text, street text, place text
, country text, the_geom geometry)
LANGUAGE plpgsql AS
$func$
BEGIN
-- RAISE NOTICE '%', -- for debugging
RETURN QUERY EXECUTE concat(
$$SELECT a.id, 'address'::text, 1 AS score, a.ad_nr, a.ad_nr_extra
, a.ad_info, a.ad_postcode$$
, CASE WHEN (_sname, _pname, _cname) IS NULL THEN ', NULL::text' ELSE ', s.name' END -- street
, CASE WHEN (_pname, _cname) IS NULL THEN ', NULL::text' ELSE ', p.name' END -- place
, CASE WHEN _cname IS NULL THEN ', NULL::text' ELSE ', c.name' END -- country
, ', a.wkb_geometry'
, concat_ws('
JOIN '
, '
FROM "Addresses" a'
, CASE WHEN NOT (_sname, _pname, _cname) IS NULL THEN '"Streets" s ON s.id = a.street_id' END
, CASE WHEN NOT (_pname, _cname) IS NULL THEN '"Places" p ON p.id = s.place_id' END
, CASE WHEN _cname IS NOT NULL THEN '"Countries" c ON c.id = p.country_id' END
)
, concat_ws('
AND '
, '
WHERE TRUE'
, CASE WHEN $1 IS NOT NULL THEN 'a.ad_nr = $1' END
, CASE WHEN $2 IS NOT NULL THEN 'a.ad_nr_extra = $2' END
, CASE WHEN $3 IS NOT NULL THEN 'a.ad_info = $3' END
, CASE WHEN $4 IS NOT NULL THEN 'a.ad_postcode = $4' END
, CASE WHEN $5 IS NOT NULL THEN 's.name = $5' END
, CASE WHEN $6 IS NOT NULL THEN 'p.name = $6' END
, CASE WHEN $7 IS NOT NULL THEN 'c.name = $7' END
)
)
USING $1, $2, $3, $4, $5, $6, $7;
END
$func$;
Call:
SELECT * FROM func(1, '_ad_nr_extra', '_ad_info', '_ad_postcode', '_sname');
SELECT * FROM func(1, _pname := 'foo');
Since all function parameters have default values, you can use positional notation, named notation or mixed notation at your choosing in the function call. See:
Functions with variable number of input parameters
More explanation for basics of dynamic SQL:
Refactor a PL/pgSQL function to return the output of various SELECT queries
The concat() function is instrumental for building the string. It was introduced with Postgres 9.1.
The ELSE branch of a CASE statement defaults to NULL when not present, which simplifies the code.
The USING clause for EXECUTE makes SQL injection impossible, as values are passed as values; it lets you use parameter values directly, exactly like in prepared statements.
NULL values are used to ignore parameters here. They are not actually used to search.
You don't need parentheses around the SELECT with RETURN QUERY.
Simple SQL function
You could do it with a plain SQL function and avoid dynamic SQL. For some cases this may be faster, but I wouldn't expect it here. Planning the query without unnecessary joins and predicates typically produces the best results, and the planning cost for a simple query like this is almost negligible.
CREATE OR REPLACE FUNCTION func_sql(
_ad_nr int = NULL
, _ad_nr_extra text = NULL
, _ad_info text = NULL
, _ad_postcode text = NULL
, _sname text = NULL
, _pname text = NULL
, _cname text = NULL)
RETURNS TABLE(id int, match text, score int, nr int, nr_extra text
, info text, postcode text, street text, place text
, country text, the_geom geometry)
LANGUAGE sql AS
$func$
SELECT a.id, 'address' AS match, 1 AS score, a.ad_nr, a.ad_nr_extra
, a.ad_info, a.ad_postcode
, s.name AS street, p.name AS place
, c.name AS country, a.wkb_geometry
FROM "Addresses" a
LEFT JOIN "Streets" s ON s.id = a.street_id
LEFT JOIN "Places" p ON p.id = s.place_id
LEFT JOIN "Countries" c ON c.id = p.country_id
WHERE ($1 IS NULL OR a.ad_nr = $1)
AND ($2 IS NULL OR a.ad_nr_extra = $2)
AND ($3 IS NULL OR a.ad_info = $3)
AND ($4 IS NULL OR a.ad_postcode = $4)
AND ($5 IS NULL OR s.name = $5)
AND ($6 IS NULL OR p.name = $6)
AND ($7 IS NULL OR c.name = $7)
$func$;
Identical call.
To effectively ignore parameters with NULL values:
($1 IS NULL OR a.ad_nr = $1)
To actually use NULL values as parameters, use this construct instead:
($1 IS NULL AND a.ad_nr IS NULL OR a.ad_nr = $1) -- AND binds before OR
This also allows for indexes to be used.
For the case at hand, replace all instances of LEFT JOIN with JOIN.
db<>fiddle here - with simple demo for all variants.
Old sqlfiddle
Asides
Don't use name and id as column names. They are not descriptive and when you join a bunch of tables (like you do to a lot in a relational database), you end up with several columns all named name or id, and have to attach aliases to sort the mess.
Please format your SQL properly, at least when asking public questions. But do it privately as well, for your own good.
If you can modify the query, you could do something like
and (ad_postcode = $4 OR $4 IS NULL)
You can use
c.name IS NOT DISTINCT FROM $7
It will return true if c.name and $7 are equal or both are null.
Or you can use
(c.name = $7 or $7 is null )
It will return true if c.name and $7 are equal or $7 is null.
Several things...
First, as a side note: the semantics of your query might need a revisit. Some of the stuff in your where clauses might actually belong in your join clauses, like:
from ...
left join ... on ... and ...
left join ... on ... and ...
When they don't, you should most probably be using an inner join rather than a left join.
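Concretely, with the tables from the question, moving a predicate into the join condition would look something like this (a sketch; filtering s in the ON clause keeps addresses without a matching street in the result, while the same filter in the WHERE clause would discard them):
FROM "Addresses" a
LEFT JOIN "Streets" s ON s.id = a.street_id AND s.name = $5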
Second, there is an is not distinct from operator, which can occasionally be handy in place of =. a is not distinct from b is basically equivalent to a = b or a is null and b is null.
Note, however, that is not distinct from does NOT use an index, whereas = and is null actually do. You could use (field = $i or $i is null) instead in your particular case, and it will yield the optimal plan if you're using the latest version of Postgres:
https://gist.github.com/ddebernardy/5884267