Limit cast scope to a single schema in PostgreSQL

Funambol's administration documentation says that, to run on newer PostgreSQL instances, which are stricter about types and casting, you have to add these casts:
CREATE FUNCTION pg_catalog.text(bigint) RETURNS text STRICT IMMUTABLE LANGUAGE SQL AS 'SELECT textin(int8out($1));';
CREATE CAST (bigint AS text) WITH FUNCTION pg_catalog.text(bigint) AS IMPLICIT;
CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT IMMUTABLE LANGUAGE SQL AS 'SELECT textin(int4out($1));';
CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer) AS IMPLICIT;
The problem is that the same database (in PostgreSQL terminology) also contains other schemas, and the applications using them broke because of those casts (failing with "operator is not unique: unknown || integer" and the hint "Could not choose a best candidate operator. You might need to add explicit type casts.") even though they worked before.
So one solution is of course to create an additional database and keep only Funambol there. But I am wondering whether there is a way to define those casts so that they take effect only in Funambol's schema and not in the whole database.

No, it's not possible in the way you imagine it. Casts are identified by source and target type, so if both types are built-in, all users of the database see the same casts between them. The only workaround along that line would be to create clones of the built-in data types, but don't go there. ;-)
So you either need to seek a fix with Funambol, or separate your applications into different databases, and perhaps link them back together with something like dblink.
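If you do decide the casts have to go, they can only be removed database-wide, since casts between built-in types cannot be scoped to a schema. A hedged sketch, using the function and cast names from the snippet above:

```sql
-- Remove the implicit casts added for Funambol.
-- Note: this affects every schema in the database, not just Funambol's.
DROP CAST IF EXISTS (bigint AS text);
DROP CAST IF EXISTS (integer AS text);
DROP FUNCTION IF EXISTS pg_catalog.text(bigint);
DROP FUNCTION IF EXISTS pg_catalog.text(integer);
```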

Related

Can we have Table Declaration in Snowflake similar to Oracle SQL?

I am trying to use table declaration syntax and create the equivalent in Snowflake SQL.
I've gone through the Snowflake community and documentation, and all I could find is a table variable being assigned directly, similar to a SQL Server table variable, or the use of a table function.
Generally, Snowflake allows neither indexes nor type declarations.
Other than that, you have two options:
use JavaScript stored procedures, but they are not meant to return a set of rows; all you can do is create a temporary table and put your result set in there
use UDFs, which are (or at least can be) SQL and can return a table, but have a limited set of functionality
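As a hedged sketch of the second option, a SQL table UDF in Snowflake can stand in for a declared table in the FROM clause (the table and column names below are made up for illustration):

```sql
-- A SQL UDTF: returns a table and is queried much like a declared table would be.
CREATE OR REPLACE FUNCTION emp_in_dept(dept VARCHAR)
  RETURNS TABLE (id NUMBER, name VARCHAR)
AS
$$
  SELECT id, name
  FROM employees
  WHERE department = dept
$$;

-- Usage: wrap the call in TABLE() in the FROM clause.
SELECT * FROM TABLE(emp_in_dept('SALES'));
```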

How to create a synonym for a table in PostgreSQL

I am migrating this Oracle command to PostgreSQL:
CREATE SYNONYM &user..emp FOR &schema..emp;
Please suggest how I can migrate the above command.
PostgreSQL does not support SYNONYM or ALIAS. Synonym is a non-SQL:2003 feature implemented by Microsoft SQL Server 2005 (I think). While it does provide an interesting abstraction, the lack of relational integrity makes it a risk.
That is, you can create a synonym and advertise it to your programmers; code gets written around it, including stored procedures; then one day the backend of the synonym (or link, or pointer) is changed or deleted, leading to a run-time error. I don't even think a PREPARE would catch that.
It is the same trap as symbolic links in Unix or null pointers in C/C++.
Instead, create a database view. It is even updatable, as long as it selects from a single table only:
CREATE VIEW schema_name.synonym_name AS SELECT * FROM schema_name.table_name;
You don't need synonyms.
There are two approaches:
using the schema search path:
ALTER DATABASE xyz SET search_path = schema1, schema2, ...;
Put the schema that holds the table on the search_path of the database (or user), then it can be used without schema qualification.
using a view:
CREATE VIEW dest_schema.tab AS SELECT * FROM source_schema.tab;
The first approach is good if you have a lot of synonyms for objects in the same schema.
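The search path can also be set per user rather than per database; a hedged sketch (the role and schema names are illustrative):

```sql
-- Resolve unqualified table names against app_schema first, for this role only.
ALTER ROLE app_user SET search_path = app_schema, public;

-- After app_user reconnects, this:
--   SELECT * FROM tab;
-- resolves to:
--   SELECT * FROM app_schema.tab;
```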

Dynamic SQL for trigger or rules for computed columns?

I would like to maintain some computed columns in several tables and columns (PostgreSQL). The calculation is always the same, but there might be practically any number of tables and columns. So if the input is table1.field1, then the calculated field is table1.field1_calculated, and so forth. I was thinking of using triggers to maintain this, one trigger per field. This would require passing the table and field name as arguments into the function, assembling a SQL statement, and executing it. I do not want to rely on PL/pgSQL, as it might not be available everywhere, and everything I can find about dynamic SQL in PostgreSQL (a) is old and (b) says you need PL/pgSQL for it.
So:
1. Is there a way in PostgreSQL 9.4+ (or perhaps 9.5+) to create and execute dynamic SQL without PL/pgSQL?
2. Is there a better way than triggering this function containing said dynamic SQL?
PS: I am aware of the function/column notation trick, but that doesn't really work here. For 2., I suspect rules might work, but I can't figure out how.

Why use stored procedures instead of querying the database directly?

My company is introducing a new policy because it wants to obtain certification under some international standards. Under that policy, DBAs are not allowed to query the database directly, like:
select * from some_table, update some_table, etc.
We have to use stored procedures for those queries.
Regarding my last question here: Postgres pl/pgsql ERROR: column "column_name" does not exist
I'm wondering: do we have to create a stored procedure per table, or per condition?
Is there a way to create stored procedures more efficiently?
Thanks for your answers,
and sorry for my bad English. :D
Some reasons to use stored procedures are:
- They have presumably undergone some testing to ensure that they do not allow business rules to be broken, as well as some optimization for performance.
- They ensure consistency in results. Every time you are asked to perform task X, you run the stored procedure associated with task X. If you write the query by hand, you may not write it the same way every time; maybe one day you forget something silly like forcing text to the same case before a comparison, and something gets missed.
- They take somewhat longer to write at first than just a query, but running the stored procedure takes less time than writing the query again. Run it enough times and it becomes more efficient to have written the stored procedure.
- They reduce or eliminate the need to know the relationships of the underlying tables.
- You can grant permission to execute the stored procedures (with SECURITY DEFINER) while denying permissions on the underlying tables.
- Programmers (if you separate DBAs and programmers) can be provided an API, and that's all they need to know. As long as you maintain the API while changing the database, you can make any changes necessary to the underlying relations without breaking their software; indeed, you don't even need to know what they have done with your API.
You will likely end up making one stored procedure per query you would otherwise execute.
I'm not sure why you consider this inefficient, or particularly time-consuming compared to just writing the query. If all you are doing is putting the query inside a stored procedure, the extra work should be minimal:
CREATE OR REPLACE FUNCTION aSchema.aProcedure(
    IN  var1 text,
    IN  var2 text,
    OUT col1 text,
    OUT col2 text)
  RETURNS SETOF record
  LANGUAGE plpgsql
  VOLATILE
  CALLED ON NULL INPUT
  SECURITY DEFINER
  SET search_path = aSchema, pg_temp
AS $body$
BEGIN
    RETURN QUERY /* the query you would have written anyway */;
END;
$body$;

GRANT EXECUTE ON FUNCTION aSchema.aProcedure(text, text) TO public;
As you used in your previous question, the function can be even more dynamic by passing columns/tables as parameters and using EXECUTE (though this increases how much the person executing the function needs to know about how the function works, so I try to avoid it).
If the "less efficient" is coming from additional logic that is included in the function, then the comparison to just using queries isn't fair, as the function is doing additional work.
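To round this out, a hedged usage sketch of the pattern above: because the function is SECURITY DEFINER, callers can be denied access to the tables themselves while still running the query through the API (schema and argument values are illustrative).

```sql
-- Lock the underlying tables down; only the SECURITY DEFINER
-- function can read them on the callers' behalf.
REVOKE ALL ON ALL TABLES IN SCHEMA aSchema FROM public;

-- Callers invoke the API function like a table source:
SELECT col1, col2
FROM aSchema.aProcedure('foo', 'bar');
```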

Postgresql - Edit function signature

PostgreSQL 8.4.3 - I created a function with this signature:
CREATE OR REPLACE FUNCTION logcountforlasthour()
RETURNS SETOF record AS
then realised I wanted to change it to this:
CREATE OR REPLACE FUNCTION logcountforlasthour()
RETURNS TABLE(ip bigint, count bigint) AS
but when I apply that change in the query tool it isn't accepted; or rather, it is accepted (there is no syntax error), but the text of the function has not been changed.
Even if I run "DROP FUNCTION logcountforlasthour()" between edits, the old signature comes back.
If I edit the body of the function, that's fine, it changes; but not the signature.
Is there something I'm missing?
Thanks.
From the PostgreSQL 8.4 manual:
To replace the current definition of an existing function, use CREATE OR REPLACE FUNCTION. It is not possible to change the name or argument types of a function this way (if you tried, you would actually be creating a new, distinct function). Also, CREATE OR REPLACE FUNCTION will not let you change the return type of an existing function. To do that, you must drop and recreate the function. (When using OUT parameters, that means you cannot change the names or types of any OUT parameters except by dropping the function.)
If you drop and then recreate a function, the new function is not the same entity as the old; you will have to drop existing rules, views, triggers, etc. that refer to the old function. Use CREATE OR REPLACE FUNCTION to change a function definition without breaking objects that refer to the function. Also, ALTER FUNCTION can be used to change most of the auxiliary properties of an existing function.
The user that creates the function becomes the owner of the function.
and also note:
...
PostgreSQL allows function overloading; that is, the same name can be used for several different functions so long as they have distinct argument types.
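Per the quoted documentation, changing the return type therefore requires a drop followed by a recreate; a hedged sketch, with a placeholder body standing in for the real query:

```sql
-- The return type of an existing function can only be changed
-- by dropping and recreating it.
DROP FUNCTION logcountforlasthour();

CREATE FUNCTION logcountforlasthour()
  RETURNS TABLE (ip bigint, count bigint)
  LANGUAGE sql
AS $$
  -- placeholder body; the real aggregation query goes here
  SELECT 0::bigint, 0::bigint;
$$;
```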