how to implement contains operator in Postgres - postgresql

How can I implement a contains operator for strings which returns true if the left string is contained in the right string?
The operator name can be anything. I tried ## and the code below, but
select 'A' ## 'SAS'
returns false.
How can I fix this?
CREATE OR REPLACE FUNCTION public.contains(searchFor text, searchIn text)
  RETURNS bool
AS $BODY$
BEGIN
  RETURN position(searchFor in searchIn) <> 0;
END;
$BODY$ language plpgsql immutable RETURNS NULL ON NULL INPUT;

CREATE OPERATOR public.## (
  leftarg   = text,
  rightarg  = text,
  procedure = public.contains
);
Using Postgres 9.1 and above on Windows and Linux.
select contains('A', 'SAS')
returns true as expected.
Update
In 9.1 I tried the code from the answer:
CREATE OR REPLACE FUNCTION public.contains(searchFor text, searchIn text)
  RETURNS bool
  LANGUAGE sql IMMUTABLE
AS $BODY$
  SELECT position(searchFor in searchIn) <> 0;
$BODY$;

CREATE OPERATOR public.<# (
  leftarg   = text,
  rightarg  = text,
  procedure = public.contains
);
but got this error:
ERROR: column "searchin" does not exist
LINE 5: SELECT position( searchFor in searchIn )<>0;
How can I make it work in 9.1? In 9.3 it works.
Using
"PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit"

PostgreSQL already defines an ## operator on (text,text) in the pg_catalog schema, which is defined as:
regress=> \do ##
                              List of operators
   Schema   | Name | Left arg type | Right arg type | Result type |    Description
------------+------+---------------+----------------+-------------+-------------------
 pg_catalog | ##   | text          | text           | boolean     | text search match
That is taking precedence over the ## you defined in the public schema.
I suggest using the operator <# for contains, because that's consistent with the array contains and contained-by operators. There's no usage of it in pg_catalog for text ... but there's no guarantee one won't be added in a future version or by an extension.
If you want to guarantee that your operators take precedence over those in pg_catalog, you need to put them in a different schema and put it first on the search path, explicitly before pg_catalog. So something like:
CREATE SCHEMA my_operators;

CREATE OR REPLACE FUNCTION my_operators.contains(searchFor text, searchIn text)
  RETURNS bool
  LANGUAGE sql IMMUTABLE
AS $BODY$
  SELECT position(searchFor in searchIn) <> 0;
$BODY$;

CREATE OPERATOR my_operators.<# (
  leftarg   = text,
  rightarg  = text,
  procedure = my_operators.contains
);
then
SET search_path = my_operators, pg_catalog, public;
which you can do with ALTER USER and/or ALTER DATABASE to make it a default.
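As for the error in the update: SQL-language functions can only reference their parameters by name from PostgreSQL 9.2 on, so on 9.1 the body has to use positional parameters instead. A minimal 9.1-compatible sketch of the function:
CREATE OR REPLACE FUNCTION my_operators.contains(searchFor text, searchIn text)
  RETURNS bool
  LANGUAGE sql IMMUTABLE
AS $BODY$
  -- 9.1 cannot resolve searchFor/searchIn by name in a SQL function; use $1 and $2
  SELECT position($1 in $2) <> 0;
$BODY$;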

Related

How to pass schema name dynamically in a function?

I have a function called list_customers, taking i_entity_id and i_finyear as input parameters. The schema name is built from i_finyear, and I need to execute the query against the given schema.
I tried the code below:
CREATE OR REPLACE FUNCTION list_customers(i_entity_id integer,
                                          i_finyear integer)
  RETURNS TABLE(entity_id integer, client_id integer, financial_yr integer)
  LANGUAGE 'plpgsql' AS
$BODY$
declare
  finyear  integer := i_finyear;
  schema_1 text := 'tds'||''||i_finyear;
begin
  set search_path to schema_1;
  return query
  select d.entity_id, d.client_id, d.financial_yr
  from   schema_1.deductor d
  where  d.entity_id = 1331;
end;
$BODY$;
Then:
select tds2020.list_customers(1331,2022);
You need dynamic SQL with EXECUTE:
CREATE OR REPLACE FUNCTION list_customers(i_entity_id int, i_finyear int)
  RETURNS TABLE (entity_id int, client_id int, financial_yr int)
  LANGUAGE plpgsql AS
$func$
BEGIN
   RETURN QUERY EXECUTE
      'SELECT d.entity_id, d.client_id, d.financial_yr
       FROM   tds' || i_finyear || '.deductor d
       WHERE  d.entity_id = $1'
   USING i_entity_id;
END
$func$;
Since the input parameter i_finyear is type integer, there is no danger of SQL injection, and you can use plain concatenation to build your schema name like "tds2022". Else, you'd use format() to defend against that. See:
Table name as a PostgreSQL function parameter
You can also concatenate (properly quoted) values, but it's safer and more efficient to pass the value with the USING keyword. See:
Replace double quotes with single quotes in Postgres (plpgsql)
No need to change the search_path additionally. That would just add an expensive context switch.
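For comparison, a sketch of the same function using format() with %I instead of plain concatenation, for the case where the schema suffix ever arrives as untrusted text (names taken from the question; this variant is not part of the original answer):
CREATE OR REPLACE FUNCTION list_customers(i_entity_id int, i_finyear int)
  RETURNS TABLE (entity_id int, client_id int, financial_yr int)
  LANGUAGE plpgsql AS
$func$
BEGIN
   RETURN QUERY EXECUTE format(
      'SELECT d.entity_id, d.client_id, d.financial_yr
       FROM   %I.deductor d
       WHERE  d.entity_id = $1'
    , 'tds' || i_finyear)   -- %I quotes the schema name as an identifier
   USING i_entity_id;
END
$func$;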

How to check if a value is in an input array in a PostgreSQL function

I have a pure SQL function like this:
CREATE OR REPLACE FUNCTION get_buildings_by_type(
building_types TEXT
)
RETURNS TABLE (bldg_id TEXT, "SurfArea" FLOAT, geom GEOMETRY) AS
$$
SELECT
bldg."OBJECTID"::TEXT AS bldg_id,
bldg."SurfArea"::FLOAT,
bldg.geom
FROM
static.buildings AS bldg
WHERE
bldg."LandUse" = $1
$$
LANGUAGE SQL;
And it behaves as expected; everything is working. However, I would like it to work with an input array of building_types rather than a single value. When I try this instead:
CREATE OR REPLACE FUNCTION get_buildings_by_type(
building_types TEXT[]
)
RETURNS TABLE (bldg_id TEXT, "SurfArea" FLOAT, geom GEOMETRY) AS
$$
SELECT
bldg."OBJECTID"::TEXT AS bldg_id,
bldg."SurfArea"::FLOAT,
bldg.geom
FROM
static.buildings AS bldg
WHERE
bldg."LandUse" IN $1
$$
LANGUAGE SQL;
I get a syntax error:
ERROR: syntax error at or near "$1"
LINE 15: bldg."LandUse" IN $1
Any ideas?
The version is 9.6 if that is relevant.
The equivalent of the IN operator for arrays is the ANY operator. You need to use:
WHERE
    bldg."LandUse" = ANY($1);

How to create a function in PostgreSQL that accepts an array of parameters and returns a table

Here is what my function looks like:
create or replace function datafabric.test(psi_codes text[])
returns table (asset character varying (255), parent_asset character varying (255))
as
$$
select asset, parent_asset
from americas.asset a
left join americas.prod_int p on a.row_id = p.row_id
where root_asset_id in (select root_asset_id from americas.asset where p.name = ANY($1))
$$ LANGUAGE 'SQL' VOLATILE;
However, the problem is that I am getting this error during the creation of the function:
ERROR: function datafabric.test() does not exist (SQL state: 42883)
Please note that this function works; however, I want to output the results on the pgAdmin screen, and I am not able to do that now.
Please help. I am using PostgreSQL version 8.3.
PostgreSQL 8.3 doesn't support RETURNS TABLE for functions. You also have to specify the language of a function without the quotes.
You can achieve a similar behaviour through the following:
create or replace function datafabric.test(
    psi_codes text[],
    OUT asset character varying(255),
    OUT parent_asset character varying(255))
RETURNS SETOF RECORD as
$$
select asset, parent_asset
from americas.asset a
left join americas.prod_int p on a.row_id = p.row_id
where root_asset_id in (select root_asset_id from americas.asset where p.name = ANY($1))
$$ LANGUAGE SQL VOLATILE;
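To see the result set in pgAdmin, call the function in the FROM clause (the array elements below are placeholders):
SELECT * FROM datafabric.test(ARRAY['code_a', 'code_b']);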

PostgreSQL modifying fields dynamically in NEW record in a trigger function

I have a user table with IDs and usernames (and other details) and several other tables referring to this table with various column names (CONSTRAINT some_name FOREIGN KEY (columnname) REFERENCES "user" (userid)). What I need to do is add the usernames to the referring tables (in preparation for dropping the whole user table). This is of course easily accomplished with a single ALTER TABLE and UPDATE, and keeping these up-to-date with triggers is also (fairly) easy. But it's the trigger function that is causing me some annoyance. I could have used individual functions for each table, but this seemed redundant, so I created one common function for this purpose:
CREATE OR REPLACE FUNCTION public.add_username() RETURNS trigger AS
$BODY$
DECLARE
    sourcefield text;
    targetfield text;
    username    text;
    existing    text;
BEGIN
    IF (TG_NARGS != 2) THEN
        RAISE EXCEPTION 'Need source field and target field parameters';
    END IF;
    sourcefield = TG_ARGV[0];
    targetfield = TG_ARGV[1];
    EXECUTE 'SELECT username FROM "user" WHERE userid = ($1).' || sourcefield INTO username USING NEW;
    EXECUTE format('SELECT ($1).%I', targetfield) INTO existing USING NEW;
    IF ((TG_OP = 'INSERT' AND existing IS NULL) OR (TG_OP = 'UPDATE' AND (existing IS NULL OR username != existing))) THEN
        CASE targetfield
            WHEN 'username' THEN
                NEW.username := username;
            WHEN 'modifiername' THEN
                NEW.modifiername := username;
            WHEN 'creatorname' THEN
                NEW.creatorname := username;
            .....
        END CASE;
    END IF;
    RETURN NEW;
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;
And using the trigger function:
CREATE TRIGGER some_trigger_name
    BEFORE UPDATE OR INSERT ON my_schema.my_table
    FOR EACH ROW EXECUTE PROCEDURE public.add_username('userid', 'username');
The way this works is that the trigger function receives the original source field name (for example userid) and the target field name (username) via TG_ARGV. These are then used to fill in the (possibly) missing information. All this works nicely enough, but how can I get rid of that CASE-mess? Is there a way to dynamically modify the values in the NEW record when I don't know the name of the field in advance (or rather, it can be a lot of things)? It is in the targetfield parameter, but obviously NEW.targetfield does not work, nor does something like NEW[targetfield] (as in JavaScript, for example).
Any ideas how this could be accomplished, besides using, for instance, PL/Python?
There is no simple plpgsql-based solution. Some possible approaches:
Using the hstore extension:
CREATE TYPE footype AS (a int, b int, c int);
postgres=# select row(10,20,30);
row
------------
(10,20,30)
(1 row)
postgres=# select row(10,20,30)::footype #= 'b=>100';
?column?
-------------
(10,100,30)
(1 row)
An hstore-based function can be very simple:
create or replace function update_fields(r anyelement,
variadic changes text[])
returns anyelement as $$
select $1 #= hstore($2);
$$ language sql;
postgres=# select *
from update_fields(row(10,20,30)::footype,
'b', '1000', 'c', '800');
a | b | c
----+------+-----
10 | 1000 | 800
(1 row)
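Applied to the trigger function from the question, the whole CASE ... END CASE block could then collapse into a single assignment (a sketch, assuming the hstore extension is installed):
-- inside public.add_username(), instead of CASE targetfield ... END CASE
NEW := NEW #= hstore(targetfield, username);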
Some years ago I wrote an extension, pl toolbox. There is a function record_set_fields:
pavel=# select * from pst.record_expand(pst.record_set_fields(row(10,20),'f1',33));
name | value | typ
------+-------+---------
f1 | 33 | integer
f2 | 20 | integer
(2 rows)
You can probably find plpgsql-only solutions based on tricks with system tables and arrays, but I cannot recommend them. They are much less readable, and for a non-advanced user they are just black magic. hstore is simple and available almost everywhere, so it should be the preferred way.
On PostgreSQL 9.4 (maybe 9.3) you can try some black magic with JSON manipulation:
postgres=# select json_populate_record(NULL::footype, jo)
from (select json_object(array_agg(key),
array_agg(case key when 'b'
then 1000::text
else value
end)) jo
from json_each_text(row_to_json(row(10,20,30)::footype))) x;
json_populate_record
----------------------
(10,1000,30)
(1 row)
So I am able to write a function:
CREATE OR REPLACE FUNCTION public.update_field(r anyelement,
                                               fn text, val text,
                                               OUT result anyelement)
RETURNS anyelement
LANGUAGE plpgsql
AS $function$
declare jo json;
begin
  -- build a json object from the record, replacing the value of field fn with val
  jo := (select json_object(array_agg(key),
                            array_agg(case key when fn then val
                                      else value end))
         from json_each_text(row_to_json(r)));
  result := json_populate_record(r, jo);
end;
$function$;
postgres=# select * from update_field(row(10,20,30)::footype, 'b', '1000');
a | b | c
----+------+----
10 | 1000 | 30
(1 row)
The JSON-based function will not be terribly fast; hstore should be faster.
UPDATE/caution: Erwin points out that this is currently undocumented, and the docs indicate it should not be possible to alter records this way.
Use Pavel's solution or hstore.
The JSON-based solution is almost as fast as hstore when simplified. json_populate_record() modifies existing records for us, so we only have to create a JSON object from the keys we want to change.
See my similar answer, where you'll find benchmarks that compare the solutions.
The simplest solution requires Postgres 9.4:
SELECT json_populate_record (
record
,json_build_object('key', 'new-value')
);
But if you only have Postgres 9.3, you can use casting instead of json_build_object:
SELECT json_populate_record(
record
, ('{"'||'key'||'":"'||'new-value'||'"}')::json
);
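In the trigger function from the question, that boils down to something like this on 9.4 (an untested sketch, subject to the caution above):
-- 9.4+: replaces the CASE block in public.add_username()
NEW := json_populate_record(NEW, json_build_object(targetfield, username));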

"function does not exist," but I really think it does

Am I crazy or just plain dumb?
dev=# \df abuse_resolve
List of functions
-[ RECORD 1 ]-------+------------------------------------------------------------------------------------------------------------------------------------
Schema | public
Name | abuse_resolve
Result data type | record
Argument data types | INOUT __abuse_id bigint, OUT __msg character varying
Type | normal
dev=# select abuse_resolve('30'::bigint);
ERROR: function abuse_resolve(bigint) does not exist
LINE 1: select abuse_resolve('30'::bigint);
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
Here's the CREATE FUNCTION, I've omitted the meat of the code, but that should be irrelevant:
CREATE OR REPLACE FUNCTION abuse_resolve(INOUT __abuse_id bigint, OUT __msg character varying) RETURNS record AS $_$
DECLARE
    __abuse_status VARCHAR;
BEGIN
    ...snip...
    UPDATE abuse SET abuse_status = __abuse_status,
                     edate = now(),
                     closed_on = now()
    WHERE abuse_id = __abuse_id;
    __msg = 'SUCCESS';
END;
$_$ LANGUAGE plpgsql SECURITY DEFINER;
And just for giggles:
GRANT ALL ON FUNCTION abuse_resolve(INOUT __abuse_id, OUT __msg character varying) TO PUBLIC;
GRANT ALL ON FUNCTION abuse_resolve(INOUT __abuse_id, OUT __msg character varying) TO myuser;
That function seems like it exists. What could I be missing?
This is resolved, the answer is: I'm dumb. I had improperly defined the arguments originally, but my code was using the correct ones. There was an extra bigint that had no business being there.
Well, something is odd. I did:
steve#steve#[local] =# create function abuse_resolve(inout __abuse_id bigint,
out __msg text) returns record language plpgsql as
$$ begin __msg = 'ok'; end; $$;
CREATE FUNCTION
steve#steve#[local] =# \df abuse_resolve
List of functions
-[ RECORD 1 ]-------+----------------------------------------
Schema | so9679418
Name | abuse_resolve
Result data type | record
Argument data types | INOUT __abuse_id bigint, OUT __msg text
Type | normal
steve#steve#[local] =# select abuse_resolve('30'::bigint);
-[ RECORD 1 ]-+--------
abuse_resolve | (30,ok)
Have you had any other issues with this database? Can you copy it with dump/restore and try this on the new copy? Does explicitly qualifying the function name with the "public" schema help? Which version of PostgreSQL are you using?
Update: SQL function
It also worked fine for me using:
create function abuse_resolve(inout __abuse_id bigint, out __msg text)
language sql as $$ select $1, 'ok'::text $$;
If that is the problem, I recommend using
set search_path = mainSchemaName, secondOnes
to set the correct schema where the function is created, or directly specify the schema name in the place where you call it:
select schemaName.abuse_resolve('30'::bigint);
Try this syntax:
SELECT * FROM abuse_resolve('30'::bigint);
I had everything but no USAGE on the schema. Granting USAGE on the schema fixed it.
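For reference, that grant looks like this (schema and role names are illustrative):
GRANT USAGE ON SCHEMA public TO myuser;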