PostgreSQL function call

I have a PostgreSQL function named test(integer) taking an integer parameter, and an overloaded function of the same name, test(character varying).
When calling this function with a null value, Postgres always executes the function taking an integer parameter. Why does this happen? Why doesn't Postgres choose the function with a varchar parameter?
Function call example:
select test(null);

That's decided by the rules of Function Type Resolution. Detailed explanation in the manual. Related:
Is there a way to disable function overloading in Postgres
NULL without explicit type cast starts out as type "unknown":
SELECT pg_typeof(NULL);
pg_typeof
-----------
unknown
Actually, I got suspicious and ran a quick test, only to find different results in Postgres 9.3 and 9.4: varchar is picked over integer (which oddly contradicts your findings):
SQL Fiddle.
I would think the relevant rule is point 4e in the list (none of the earlier points decide the match):
At each position, select the string category if any candidate accepts
that category. (This bias towards string is appropriate since an
unknown-type literal looks like a string.)
If you added another function with input type text to the overloaded mix, text would be picked over varchar.
Personally I almost always use text instead of varchar. While being binary compatible (so almost but not quite the same), text is closer to the heart of Postgres in every respect.
I added that to the fiddle, as well as another example where Postgres cannot decide and throws a tantrum.
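To make the resolution concrete, here is a minimal sketch of such an overloaded setup (my illustration; the function bodies are placeholders, not from the original example):
CREATE FUNCTION test(integer) RETURNS text LANGUAGE sql AS $$SELECT 'int'$$;
CREATE FUNCTION test(varchar) RETURNS text LANGUAGE sql AS $$SELECT 'varchar'$$;
CREATE FUNCTION test(text)    RETURNS text LANGUAGE sql AS $$SELECT 'text'$$;

SELECT test(null);  -- 'text': the string category wins, and text is its preferred type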
If you want to pick a particular function, add an explicit type cast (that's the way to go here!):
select test(null::int) AS func_int
, test(null::varchar) AS func_vc;


Function createtopology(unknown, integer, integer) does not exist. Not able to run PostGIS function in Postgres

I am trying to run this function
SELECT public.CreateTopology('topo1',4326,0);
which gives me
ERROR: function public.createtopology(unknown, integer, integer) does not exist
LINE 1: select public.CreateTopology('topo1',4326,0);
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
SQL state: 42883
Character: 8
I can use other PostGIS functions without trouble. This one, however, does not work. Note that there are many schemas in my database. Why is that?
The third parameter is optional, but if present it must be of data type double precision; if omitted, it defaults to 0 (see the PostGIS documentation for CreateTopology). So try
SELECT public.CreateTopology('topo1',4326,0::double precision);
OR
SELECT public.CreateTopology('topo1',4326,0.0);
OR
SELECT public.CreateTopology('topo1',4326);
UPDATE: As posted, 'topo1' is a string literal, but it needs to be a Topology. You need to correct the data type.
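Independently of the cast, it can help to check which signatures are actually installed and visible; a quick diagnostic sketch (my addition, not part of the original answer):
SELECT oid::regprocedure
FROM pg_proc
WHERE proname = 'createtopology';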

Postgres: getting "... is out of range for type integer" when using NULLIF

For context, this issue occurred in a Go program I am writing using the default postgres database driver.
I have been building a service to talk to a postgres database which has a table similar to the one listed below:
CREATE TABLE object (
    id SERIAL PRIMARY KEY NOT NULL,
    name VARCHAR(255) UNIQUE,
    some_other_id BIGINT UNIQUE
    ...
);
I have created some endpoints for this item including an "Install" endpoint which effectively acts as an upsert function like so:
INSERT INTO object (name, some_other_id)
VALUES ($1, $2)
ON CONFLICT (name) DO UPDATE SET
    some_other_id = COALESCE(NULLIF($2, 0), object.some_other_id)
I also have an "Update" endpoint with an underlying query like so:
UPDATE object
SET some_other_id = COALESCE(NULLIF($2, 0), object.some_other_id)
WHERE name = $1
The problem:
Whenever I run the update query I always run into the error, referencing the field "some_other_id":
pq: value "1010101010144" is out of range for type integer
However, this error never occurs with the "upsert" version of the query, even when the row already exists in the database (when it has to evaluate the COALESCE expression). I have been able to prevent this error by updating the COALESCE expression to be as follows:
COALESCE(NULLIF($2, CAST(0 AS BIGINT)), object.some_other_id)
But as it never occurs with the first query, I wondered whether this inconsistency comes from me doing something wrong or from something that I don't understand? And what is the best practice here; should I be casting all values?
I am definitely passing in a 64 bit integer to the query for "some_other_id", and the first query works with the Go implementation even without the explicit type cast.
If any more information (or Go implementation) is required then please let me know, many thanks in advance! (:
Edit:
To eliminate confusion, the queries are being executed directly in Go code like so:
res, err := s.db.ExecContext(ctx, `UPDATE object SET some_other_id = COALESCE(NULLIF($2, 0), object.some_other_id) WHERE name = $1`,
    "a name",
    1010101010144,
)
Both queries are executed in exactly the same way.
Edit: Also corrected parameter (from $51 to $2) in my current workaround.
I would also like to take this opportunity to note that the query does work with my proposed fix, which suggests that the issue is me confusing Postgres with the types in the NULLIF expression? There is no stored procedure asking for an INTEGER argument in between my code and the database, at least none that I have written.
This has to do with how the postgres parser resolves types for the parameters. I don't know how exactly it's implemented, but given the observed behaviour, I would assume that the INSERT query doesn't fail because it is clear from (name,some_other_id) VALUES ($1,$2) that the $2 parameter should have the same type as the target some_other_id column, which is of type int8. This type information is then also used in the NULLIF expression of the DO UPDATE SET part of the query.
You can also test this assumption by using (name) VALUES ($1) in the INSERT and you'll see that the NULLIF expression in DO UPDATE SET will then fail the same way as it does in the UPDATE query.
So the UPDATE query fails because there is not enough context for the parser to infer the accurate type of the $2 parameter. The "closest" thing that the parser can use to infer the type of $2 is the NULLIF call expression, specifically it uses the type of the second argument of the call expression, i.e. 0, which is of type int4, and it then uses that type information for the first argument, i.e. $2.
To avoid this issue, you should use an explicit type cast with any parameter where the type cannot be inferred accurately. i.e. use NULLIF($2::int8, 0).
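You can watch this inference happen in psql with a prepared statement; a small demonstration sketch (my addition) against the question's table:
PREPARE upd AS
UPDATE object
SET some_other_id = COALESCE(NULLIF($2, 0), object.some_other_id)
WHERE name = $1;

-- psql reports the inferred parameter types; $2 comes out as integer, not bigint:
SELECT name, parameter_types
FROM pg_prepared_statements
WHERE name = 'upd';

DEALLOCATE upd;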
COALESCE(NULLIF($51, CAST(0 AS BIGINT)), object.some_other_id)
Fifty-one? Really?
pq: value "1010101010144" is out of range for type integer
Pay attention, the data type in the error message is an integer, not bigint.
I think the reason for the error lies outside the code shown. So I take out my magic crystal ball and make a pass with my hands.
an "Install" endpoint which effectively acts as an upsert function like so
I also have an "Update" endpoint
By "endpoint", do you mean a PostgreSQL function (stored procedure)? I think yes.
Also, $1 and $2 look like PostgreSQL function arguments.
The magic crystal ball says: you have two PostgreSQL functions with different argument data types:
The "Install" endpoint has its $2 argument as a bigint data type. It looks like CREATE FUNCTION Install(VARCHAR(255), bigint).
The "Update" endpoint has its $2 argument as an integer data type, not bigint. It looks like CREATE FUNCTION Update(VARCHAR(255), integer).
Finally, I would rewrite your condition in a more understandable way:
UPDATE object
SET some_other_id =
    CASE
        WHEN $2 = 0 THEN object.some_other_id
        ELSE $2
    END
WHERE name = $1

Postgres bytea error when binding null to prepared statements

I am working with a Java application which uses JPA and a Postgres database, and I am trying to create a flexible prepared statement which can handle a variable number of input parameters. An example query would best explain this:
SELECT *
FROM my_table
WHERE
(string_col = :param1 OR :param1 IS NULL) AND
(double_col = :param2 OR :param2 IS NULL);
The idea behind this "trick" is that if a user specifies only one parameter, say :param1, we can just bind null to :param2, and the WHERE clause would then behave as if only the first parameter were even being checked. This approach lets us handle, in theory, any number of input parameters using a single prepared statement, instead of needing to maintain many different statements.
I have gotten a simple POC working locally using pure JDBC prepared statements. However, doing so required casting the parameter before comparing it to NULL, e.g.
WHERE (double_col = ? OR ?::numeric IS NULL) -- does not work without the cast
However, my actual application is using JPA, and I keep getting the following persistent error:
Caused by: org.postgresql.util.PSQLException: ERROR: operator does not exist: double precision = bytea
Hint: No operator matches the given name and argument type(s). You might need to add explicit type casts.
The problem does not occur with string/text columns, but only with columns which are double precision in my Postgres table. I have tried all combinations of casting, and nothing works:
(double_col = :param2 OR CAST(:param2 AS double precision) IS NULL);
(CAST(double_col AS double precision) = :param2 OR :param2 IS NULL);
(CAST(double_col AS double precision) = :param2 OR CAST(:param2 AS double precision) IS NULL);
The error seems to be saying that JDBC is sending Postgres a bytea type for the double columns, and then Postgres is falling over because it can't find a way to cast bytea to double precision.
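You can reproduce the same complaint in plain SQL; the following sketch (my illustration) simulates the driver binding an untyped null as bytea:
SELECT *
FROM my_table
WHERE (double_col = NULL::bytea OR NULL::bytea IS NULL);
-- ERROR: operator does not exist: double precision = bytea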
The Java code looks something like:
Query query = entityManager.createNativeQuery(sqlString, MyEntity.class);
query.setParameter("param1", "some value");
// bind other parameters here
List<MyEntity> results = query.getResultList();
For reference, here are the versions of everything I am using:
Hibernate version | 4.3.7.Final
Spring Data JPA version | 1.7.1.RELEASE
Postgres driver version | 42.2.2
Postgres database version | 9.6.10
Java version | 1.8.0_171
Not having received any feedback in the form of answers or even a comment, I was getting ready to give up, when I stumbled onto this excellent blog post:
How to bind custom Hibernate parameter types to JPA queries
The post gives two options for controlling the types which JPA passes through the driver to Postgres (or whatever the underlying database actually is). I went with the approach using TypedParameterValue. Here is what my code looks like continuing with the example given above:
Query query = entityManager.createNativeQuery(sqlString, MyEntity.class);
query.setParameter("param1", new TypedParameterValue(StringType.INSTANCE, null));
query.setParameter("param2", new TypedParameterValue(DoubleType.INSTANCE, null));
List<MyEntity> results = query.getResultList();
Of course, it is trivial to be passing null for every parameter in the query, but I am doing this mainly to show the syntax for the text and double columns. In practice, we would expect at least a few of the parameters to be non null, but the above syntax handles all values, null or otherwise.
If you want to keep using plain queries with automatic parameter binding, you could try the following.
WHERE (? IS NULL OR CAST(CAST(? AS TEXT) AS DOUBLE PRECISION) = double_col)
This seems to satisfy the PostgreSQL driver's type checks while yielding the correct results. I haven't done much testing, but the performance hit seems minimal because the CASTs happen on a constant value rather than on rows from the database.
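To see why the double cast is harmless for null and non-null values alike, here is a quick sketch (my illustration, not from the original answer):
SELECT CAST(CAST(NULL AS TEXT) AS DOUBLE PRECISION);    -- NULL
SELECT CAST(CAST('42.5' AS TEXT) AS DOUBLE PRECISION);  -- 42.5
The inner cast gives the driver a concrete type to bind (text), and the outer cast converts the constant to double precision before the comparison.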

Syntax error in create aggregate

Trying to create an aggregate function:
create aggregate min (my_type) (
    sfunc = least,
    stype = my_type
);
ERROR: syntax error at or near "least"
LINE 2: sfunc = least,
^
What am I missing?
Although the manual calls least a function:
The GREATEST and LEAST functions select the largest or smallest value from a list of any number of expressions.
I cannot find it:
\dfS least
List of functions
Schema | Name | Result data type | Argument data types | Type
--------+------+------------------+---------------------+------
(0 rows)
Like CASE, COALESCE and NULLIF, GREATEST and LEAST are listed in the chapter Conditional Expressions. These SQL constructs are not implemented as functions, as @Laurenz explained in the meantime.
The manual advises:
Tip: If your needs go beyond the capabilities of these conditional
expressions, you might want to consider writing a stored procedure in
a more expressive programming language.
The terminology is a bit off here as well, since Postgres does not support true "stored procedures", just functions. (Which is why there is an open TODO item "Implement stored procedures".)
This manual page might be sharpened to avoid confusion ...
@Laurenz also provided an example. I would just use LEAST in the function to get identical functionality:
CREATE FUNCTION f_least(anyelement, anyelement)
RETURNS anyelement LANGUAGE sql IMMUTABLE AS
'SELECT LEAST($1, $2)';
Do not make it STRICT, that would be incorrect. LEAST(1, NULL) returns 1 and not NULL.
Even if STRICT was correct, I would not use it, because it can prevent function inlining.
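A quick illustration of that NULL behaviour (my example, using f_least from above):
SELECT LEAST(1, NULL);    -- 1
SELECT f_least(1, NULL);  -- 1 as well; a STRICT version would return NULL here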
Note that this function is limited to exactly two parameters while LEAST takes any number of parameters. You might overload the function to cover 3, 4 etc. input parameters. Or you could write a VARIADIC function for up to 100 parameters.
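For example, a VARIADIC variant might look like this; a sketch that relies on min() ignoring NULLs just like LEAST does:
CREATE FUNCTION f_least_v(VARIADIC anyarray)
RETURNS anyelement LANGUAGE sql IMMUTABLE AS
'SELECT min(x) FROM unnest($1) x';

SELECT f_least_v(3, 1, 2);  -- returns 1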
LEAST and GREATEST are not real functions; internally they are parsed as MinMaxExpr (see src/include/nodes/primnodes.h).
You could achieve what you want with a generic function like this:
CREATE FUNCTION my_least(anyelement, anyelement) RETURNS anyelement
LANGUAGE sql IMMUTABLE CALLED ON NULL INPUT
AS 'SELECT LEAST($1, $2)';
(thanks to Erwin Brandstetter for the CALLED ON NULL INPUT and the idea to use LEAST.)
Then you can create your aggregate as
CREATE AGGREGATE min(my_type) (sfunc = my_least, stype = my_type);
This will only work if there are comparison functions for my_type, otherwise you have to come up with a different my_least function.

Rails 4, migration to change datatype of column from daterange to tsrange causing PG::DatatypeMismatch: ERROR:

I'm trying to change a column of type daterange to tsrange (I realized I need time as well as date) using a vanilla Rails migration
def self.up
change_column :events, :when, :tsrange
end
After running rake db:migrate the error is
PG::DatatypeMismatch: ERROR: column "when" cannot be cast automatically to type tsrange
HINT: Specify a USING expression to perform the conversion.
: ALTER TABLE "events" ALTER COLUMN "when" TYPE tsrange
I tried following the hint and used the following
def self.up
change_column :events, :when, :tsrange, 'tsrange USING CAST(when AS tsrange)'
end
but then got
no implicit conversion of Symbol into Integer
From what I can tell, USING CAST is mainly meant for use with ints. Assuming I don't want to drop and then recreate the column, what do you have to specify to alter the type from daterange to tsrange?
I'm using
Rails 4.0.1
ruby-2.0.0-p247
psql (9.2.4)
Some background, daterange and tsrange were introduced to Rails 4 in the following PR: https://github.com/rails/rails/pull/7345. Thanks.
The USING clause is used to specify how to convert the old values to the new ones:
The optional USING clause specifies how to compute the new column value from the old; if omitted, the default conversion is the same as an assignment cast from old data type to new. A USING clause must be provided if there is no implicit or assignment cast from old to new type.
So USING shows up any time there is no default cast from the old type to the new type. Also note that USING is specified as USING expression, so any expression (whose value is of the correct type) can be used; the most common is USING CAST(...), but the expression can be pretty much anything.
Hopefully that should clear up some confusion about USING.
So what's up with the ActiveRecord error? Well, change_column is expecting to see an options Hash in the fourth argument, but you're sending in a string. If you look at the change_column source, you'll see things like options[:limit], but String#[] expects integer arguments, so your string argument is triggering odd-looking complaints about Symbols.
AFAIK there is no way to get AR to add a USING clause to the ALTER TABLE ... ALTER COLUMN that change_column generates. This leaves connection.execute(some_sql) if you need a USING clause. Of course this is further complicated by the (apparent) lack of a built-in cast from daterange to tsrange but building the necessary expression isn't terribly difficult if you pull the daterange apart with the upper and lower functions:
connection.execute(%q{
alter table events
alter column "when"
type tsrange using tsrange(lower("when"), upper("when"))
})
You can see the table change in action over here: http://sqlfiddle.com/#!15/fb047/2
That assumes that you're using the default half-open intervals ([...)) for your ranges; if you have ranges that aren't closed on the left and open on the right then you'll have to build a more complicated USING expression using the other range functions to see if the left and right ends of the ranges are open or closed.
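For completeness, here is a sketch of such a USING expression that carries the original bounds over with lower_inc and upper_inc (my illustration, untested against the original schema):
alter table events
alter column "when"
type tsrange using tsrange(
    lower("when"),
    upper("when"),
    case when lower_inc("when") then '[' else '(' end ||
    case when upper_inc("when") then ']' else ')' end
);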
BTW, when is a PostgreSQL keyword so it isn't the best choice for an identifier, you'll have to say "when" every time you refer to that column in SQL snippets and that might get tiring. I'd recommend using a different name for that column so that you don't have to worry about quoting.