So I have a PG function create_order (language PL/pgSQL) which accepts quite a few arguments.
I have noticed that every time I rename an argument, change its type, or add a new argument, I have to drop the function first (CREATE OR REPLACE does not work).
So I have been thinking: what if I just accept one single argument of type jsonb and call it a day? The signature would then look like create_order(args jsonb).
My questions are:
1. Is this considered bad practice in the PG "world" (does it affect performance or some other aspect), or bad practice from a programming standpoint?
2. If #1 is a bad approach, would it be better to create a custom composite type and use it as the argument to the function?
I don't see a big problem with a jsonb function parameter, except maybe that individual parameters make it more obvious what the input values are. But that's nothing that can't be fixed with documentation.
On the other hand, I also see no problem with dropping and recreating functions when the signature changes. It may serve as a reminder that calling sites need to be updated.
I'd say that you should go with the approach that fits you and the problem at hand best – it does not matter from the PostgreSQL point of view.
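For concreteness, here is a minimal sketch of both variants, with a made-up orders table (all table, column, and type names here are illustrative assumptions):

-- Hypothetical target table for the examples below.
CREATE TABLE orders (
    id          bigserial PRIMARY KEY,
    customer_id bigint,
    total       numeric
);

-- Variant 1: a single jsonb argument; fields are extracted inside the function.
CREATE OR REPLACE FUNCTION create_order(args jsonb)
RETURNS bigint
LANGUAGE plpgsql AS $$
DECLARE
    new_id bigint;
BEGIN
    INSERT INTO orders (customer_id, total)
    VALUES ((args->>'customer_id')::bigint,
            (args->>'total')::numeric)
    RETURNING id INTO new_id;
    RETURN new_id;
END;
$$;

SELECT create_order('{"customer_id": 42, "total": 19.99}'::jsonb);

-- Variant 2: a custom composite type as the single argument. New fields can
-- later be added with ALTER TYPE ... ADD ATTRIBUTE without dropping the function.
CREATE TYPE order_input AS (customer_id bigint, total numeric);

CREATE OR REPLACE FUNCTION create_order(args order_input)
RETURNS bigint
LANGUAGE plpgsql AS $$
DECLARE
    new_id bigint;
BEGIN
    INSERT INTO orders (customer_id, total)
    VALUES (args.customer_id, args.total)
    RETURNING id INTO new_id;
    RETURN new_id;
END;
$$;

SELECT create_order(ROW(42, 19.99)::order_input);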
I have a pretty simple stored procedure that just updates the status value for a bunch of records. I send in an unknown number of record IDs and it works.
I love using Insight.Database and wouldn't want to use anything else if possible.
The problem is our DBAs created multiple user-defined table types to handle different situations, and their naming conventions are identical.
We have a [IntTable] with column [IntValue]
and another [TinyIntTable] with column [TinyIntValue]
Insight appears to inspect the UDTs and pick one that could work, and sometimes it chooses [TinyIntTable] (I am guessing because the values in the array being sent are all small enough to fit into a tinyint). But [TinyIntTable] isn't compatible with the stored procedure. How do I force Insight.Database to always use [IntTable]?
Is there an attribute I could use on my C# object definition?
If you're using stored procedures, then Insight will check the declared type of the parameter and use the correct one; i.e., it should just work.
So your problem is likely something else. Check out the table type section on the wiki, and if that doesn’t solve it for you, please post a simple example on github and we’ll help you out.
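For illustration, here is a hypothetical T-SQL setup matching the description above (the Records table, its Id/Status columns, and the UpdateStatus procedure are assumptions): when the call goes through a stored procedure, the parameter's declared type is what tells Insight which UDT to send.

CREATE TYPE IntTable AS TABLE (IntValue int);
CREATE TYPE TinyIntTable AS TABLE (TinyIntValue tinyint);

-- The parameter's declared type (IntTable) is what Insight reads when
-- binding a stored procedure call, so [TinyIntTable] should not be picked here.
CREATE PROCEDURE UpdateStatus
    @Ids IntTable READONLY,
    @Status int
AS
BEGIN
    UPDATE r
    SET    r.Status = @Status
    FROM   Records r
    INNER JOIN @Ids i ON i.IntValue = r.Id;
END;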
How do I get the column names and data types returned by a custom query in Postgres? There are built-in functions for tables/views, but not for custom queries. To clarify: I need a Postgres function which takes a SQL string as a parameter and returns the column names and their data types.
I don't think there's any built-in SQL function which does this for you.
If you want to do this purely at the SQL level, the simplest and cheapest way is probably to CREATE TEMP VIEW some_view AS <your_query>, dig the column definitions out of the catalog tables, and drop the view when you're done. However, this can have a non-trivial overhead depending on how often you do it (as it needs to write view definitions to the catalogs), can't be run in a read-only transaction, and can't be done on a standby server.
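A minimal sketch of that approach, with a throwaway view name (the SELECT below stands in for your query):

CREATE TEMP VIEW tmp_describe AS
SELECT 1 AS id, 'x'::text AS name;  -- <your_query> goes here

SELECT a.attname                            AS column_name,
       format_type(a.atttypid, a.atttypmod) AS data_type
FROM   pg_attribute a
WHERE  a.attrelid = 'tmp_describe'::regclass
AND    a.attnum > 0
AND    NOT a.attisdropped
ORDER  BY a.attnum;

DROP VIEW tmp_describe;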
The ideal solution, if it fits your use case, is to build a prepared query on the client side, and make use of the metadata returned by the server (in the form of a RowDescription message passed as part of the query protocol). Unfortunately, this depends very much on which client library you're using, and how much of this information it chooses to expose. For example, libpq will give you access to everything, whereas the JDBC driver limits you to the public methods on its ResultSetMetadata object (though you could probably pull more information from its private fields via reflection, if you're determined enough).
If you want a read-only, low-overhead, client-independent solution, then you could also write a server-side C function to prepare and describe the query via SPI. Writing and building C functions comes with a bit of a learning curve, but you can find numerous examples on PGXN, or within Postgres' own contrib modules.
I want to be sure that no empty text can be stored in my table. Therefore I created a domain type:
CREATE DOMAIN non_empty_text AS TEXT CHECK (VALUE ~ '\S');

and changed all text types to non_empty_text.
So far, so good. But would it be more efficient to change the type back to text and instead create a UNIQUE index and a row with empty values?
You of course need to benchmark this, but offhand, I'd say you should stick with your current approach.
The domain type logic you currently have evaluates the string in memory. The second approach requires accessing an index and looking for a block that may or may not be in the cache. Accessing storage, even if it doesn't happen all the time, is so expensive compared to an in-memory operation that this probably isn't a good idea.
I think your approach with the domain is correct. The alternative with the unique constraint is an interesting idea, but I would consider it premature optimization.
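As a quick sanity check, the domain can be exercised like this (the notes table is made up):

CREATE DOMAIN non_empty_text AS text CHECK (VALUE ~ '\S');

CREATE TABLE notes (body non_empty_text);

INSERT INTO notes VALUES ('hello');  -- succeeds
INSERT INTO notes VALUES ('   ');    -- fails: no non-whitespace character
INSERT INTO notes VALUES ('');       -- fails for the same reason
-- Note: NULL still passes the CHECK; add NOT NULL on the column if needed.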
When I define a CHECK constraint on a table, I find that the condition clause stored can differ from what I entered.
Example:
Alter table T1 add constraint C1 CHECK (field1 in (1,2,3))
Looking at what is stored:
select cc.Definition from sys.check_constraints cc
inner join sys.objects o on o.object_id = cc.parent_object_id
where cc.type = 'C' and cc.name = 'C1' and o.name = 'T1';
I see:
([field1]=(3) OR [field1]=(2) OR [field1]=(1))
Whilst these are equivalent, they are not the same text.
(A similar behaviour occurs when using a BETWEEN clause).
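For instance, a range predicate like this (illustrative; the stored form follows the same normalization seen above):

Alter table T1 add constraint C2 CHECK (field1 between 1 and 3)

would typically come back from sys.check_constraints as something like:

([field1]>=(1) AND [field1]<=(3))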
My reason for wishing this did not happen is that I am trying to programmatically ensure that all my CHECK constraints are correct by comparing the text I would use to define the constraint with what is stored in sys.check_constraints, and if they differ, dropping and recreating the constraint.
However, in these cases the two texts are always different, so the program would always think it needs to recreate the constraint.
My questions are:
Is there any known reason why SQL Server does this translation? Is it just removing a bit of syntactic sugar and storing the clause in a simpler form?
Is there a way to avoid the behaviour (other than to write my constraint clauses in the long form to match what SQL Server would change it to)?
Is there another way to tell if my check constraint is 'out of date' and needs recreating?
Is there any known reason why SQL Server does this translation? Is it just removing a bit of syntactic sugar and storing the clause in a simpler form?
I'm not aware of any reasons documented in Books Online, or elsewhere. However, my guess is that it's normalized for purposes that are internal to SQL Server. It might allow SQL Server to be a bit lenient when the expression is defined (such as accepting Database as a bare column name) while guaranteeing that column names are always appropriately escaped for whatever engine needs to parse the expression (i.e., [Database]).
Is there a way to avoid the behaviour (other than to write my constraint clauses in the long form to match what SQL Server would change it to)?
Probably not. But if your constraints aren't terribly complicated, is re-writing the constraint clauses in the long form such a bad idea?
Is there another way to tell if my check constraint is 'out of date' and needs recreating?
Before I answer this directly, I'd point out that there's a bit of programming philosophy involved here. The API that SQL Server provides for the text of a CHECK constraint only guarantees that you'll get something equivalent to the original expression. While you could certainly build some fancy methods to try to ensure that you'll always be able to reproduce SQL Server's normalized version of the expression, there's no guarantee that Microsoft won't change its normalization rules in the future. And indeed, there's probably no guarantee that two equivalent expressions will always be normalized identically!
So, I'd first advise you to re-examine your architecture, and see if you can accomplish the same result without having to rely on undocumented API behavior.
Having said that, there are a number of methods outlined in this question (and answer).
Another alternative, which is a bit more brute-force but perhaps acceptable, would be to always assume that the expression is "out of date" and simply drop/re-create the constraint every time you check. Unless you're expecting these constraints to frequently become out-of-date (or the tables are quite large), it seems this would be a decent solution. You could probably even run it in a transaction, so that if the new constraint is already violated, you simply roll back the transaction and report the error.
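A minimal T-SQL sketch of that brute-force variant, reusing the names from the example above (SET XACT_ABORT ON makes any failure roll back the whole transaction):

SET XACT_ABORT ON;
BEGIN TRANSACTION;
    ALTER TABLE T1 DROP CONSTRAINT C1;
    -- Re-add the constraint from its canonical definition. If existing rows
    -- violate it, the error aborts and rolls back the transaction, leaving
    -- the original constraint in place.
    ALTER TABLE T1 ADD CONSTRAINT C1 CHECK (field1 in (1,2,3));
COMMIT TRANSACTION;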
I want to use a record type as a parameter, but I got a message that a function cannot have record type parameters. I have a DAO function which performs various operations on an ArrayList passed through a parameter, and I need to implement it in a stored procedure. Any help will be greatly appreciated. Thanks!
The function I'm looking for is something like:
CREATE OR REPLACE FUNCTION est_fn_get_emp_report(rec record,...)
I am new to PostgreSQL; I have used stored functions before, but I never had to use record type parameters.
The simple issue is that you can't specify a record. You can specify some polymorphic types (ANYARRAY, ANYELEMENT) as function input, but the structure needs to be known at planning time, and polymorphic input arguments can lead to issues even on a good day. The problem with a record is that PostgreSQL won't necessarily know what its internal structure is when it is passed in; ROW(1, 'test') is not useful in a functional context.
Instead you want to specify composite types. You can actually take this very far in terms of relying on PostgreSQL. This allows you to pass in a specific, well-defined type of record.
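A minimal sketch, using a made-up composite type for the employee row (the type, field names, and report body are assumptions):

CREATE TYPE emp_row AS (emp_id int, emp_name text);

CREATE OR REPLACE FUNCTION est_fn_get_emp_report(rec emp_row)
RETURNS text
LANGUAGE plpgsql AS $$
BEGIN
    -- Unlike a bare record, the structure of rec is known at planning time.
    RETURN format('Employee %s: %s', rec.emp_id, rec.emp_name);
END;
$$;

SELECT est_fn_get_emp_report(ROW(1, 'Alice')::emp_row);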