I have written a function in PostgreSQL with a single parameter of type NUMERIC. I am attempting to call that function from a webapp developed using the Yii framework via the SqlDataProvider component. However, each time the parameter value is left empty, I keep getting the following error:
Invalid input syntax for type numeric.
Whenever I try to execute the function directly via PhpPgAdmin, everything seems to work flawlessly.
Below is the code for the discussed PL/pgSQL function search(..):
search("itemcode" character varying DEFAULT NULL, "taxpercentage" numeric DEFAULT NULL)
LANGUAGE plpgsql
AS $$
BEGIN
SELECT * FROM item WHERE (itemcode IS NULL OR item.item_code LIKE itemcode||'%')
AND (taxpercentage IS NULL OR item.tax_percentage = taxpercentage);
END;
$$;
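For reference, the failure can be reproduced directly in SQL: the numeric parameter accepts NULL just fine, but not the empty string that an empty form field typically produces (the 'ABC' item code below is just a placeholder):

-- Works: NULL is a valid value for the numeric parameter
SELECT * FROM search('ABC', NULL);

-- Fails with "invalid input syntax for type numeric":
-- an empty string is not a valid numeric literal
SELECT * FROM search('ABC', '');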
In the Yii model, I have also created a search function which in turn calls the database-level PL/pgSQL function:
public function search()
{
    $count=Yii::app()->db->createCommand("SELECT COUNT(*) FROM search('$this->item_code','$this->tax_percentage')")->queryScalar();
    $sql="SELECT * FROM search('$this->item_code','$this->tax_percentage')";
    $dataProvider=new CSqlDataProvider($sql, array(
        'totalItemCount'=>$count,
        'pagination'=>array(
            'pageSize'=>10,
        ),
    ));
    return $dataProvider;
}
How can I pass a NULL value to a parameter of type NUMERIC?
All help will be much appreciated. Thanks in advance.
I solved it by checking for empty values in the model's search function and using a parameterized query. The modified function is:
public function search()
{
    if(empty($this->tax_percentage))
        $this->tax_percentage=null;

    $params=array(':item_code'=>$this->item_code, ':tax_percentage'=>$this->tax_percentage);
    $count=Yii::app()->db->createCommand("SELECT COUNT(*) FROM search(:item_code,:tax_percentage)")
        ->queryScalar($params);
    $sql="SELECT * FROM search(:item_code,:tax_percentage)";
    $dataProvider=new CSqlDataProvider($sql, array(
        'params'=>$params,
        'totalItemCount'=>$count,
        'pagination'=>array(
            'pageSize'=>10,
        ),
    ));
    return $dataProvider;
}
I've created a new function in Supabase as follows:
drop function if exists select_whitelist_airdrop_address;
create or replace function select_whitelist_airdrop_address(airdrop_address text)
returns table(owner text, hash text)
as $$
select a.owner, b.hash
from thirdparty_token_holders a
left join thirdparty_token_airdrops b
on a.id = b.holder_id
where (b.status = 1 AND b.hash != 'NOT PROCESSED' AND a.owner=airdrop_address)
limit 1
$$ language sql;
And deleted an older one called select_whitelist_airdrops. Upon deleting it, I've started getting the following error:
info: getAirdropWhitelistStatus -> error: {"error":{"message":"Could not find the public.select_whitelist_airdrops(hash) function or the public.select_whitelist_airdrops function with a single unnamed json or jsonb parameter in the schema cache","hint":"If a new function was created in the database with this name and parameters, try reloading the schema cache."},"data":null,"count":null,"status":404,"statusText":"Not Found","body":null}
Why is Supabase referring to this function when:
I no longer need it
I am using another function in my code, see below:
public async getAirdropWhitelistStatus(address: string) {
return this.supabase.rpc('select_whitelist_airdrop_address', {
airdrop_address: address
})
}
Why does Supabase keep referring back to the old proc when run?
I've been trying to dynamically build a PostgreSQL 13 native query:
public interface TasksRepository extends JpaRepository<Task, Long>, JpaSpecificationExecutor<Task> {
}
@AllArgsConstructor
public class TaskSpecification implements Specification<Task> {
private final String entityCode;
private final UUID entityId;
@Override
public Predicate toPredicate(Root<Task> root, CriteriaQuery<?> query, CriteriaBuilder builder) {
// see https://www.postgresql.org/docs/13/functions-json.html
// jsonb_path_exists ( target jsonb, path jsonpath [, vars jsonb [, silent boolean ]] ) → boolean
String template = "$[*] ? (@.entityCode == $code && @.entityId == $id)";
String variable = "{\"code\":\"?1\", \"id\":\"?2\"}"
.replace("?1", this.entityCode)
.replace("?2", this.entityId.toString());
return builder.isTrue(
builder.function("jsonb_path_exists", Boolean.class,
/* target */ root.<List<RelatedEntity>>get("taskTags"),
/* path */ builder.literal("'" + template + "'::jsonpath"), //DEBUG CAST
/* vars */ builder.literal("'" + variable + "'::jsonb"), //DEBUG CAST
/* silent */ builder.literal(Boolean.FALSE)
));
}
}
But I ended up with the following error, despite my casting attempt:
Hibernate:
select
task0_.id as id1_0_,
task0_.business_unit as business2_0_,
task0_.due_date as due_date3_0_,
task0_.is_urgent as is_urgen4_0_,
task0_.task_tags as task_tag5_0_,
task0_.task_text as task_tex6_0_,
task0_.task_type as task_typ7_0_
from
tasks_table task0_
where
jsonb_path_exists(task0_.task_tags,?,?,?)=true
binding parameter [1] as [VARCHAR] - ['$[*] ? (@.entityCode == $code && @.entityId == $id)'::jsonpath]
binding parameter [2] as [VARCHAR] - ['{"code":"ETY", "id":"bedb1903-3827-4507-883b-d41888d2ed68"}'::jsonb]
binding parameter [3] as [BOOLEAN] - [false]
SQL Error: 0, SQLState: 42883
ERROR: function jsonb_path_exists(jsonb, character varying, character varying, boolean) does not exist
Hint: No function matches the given name and argument types. You might need to add explicit type casts.
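For comparison, the call resolves to the right overload in plain SQL when the path and vars arguments are cast explicitly; a sketch built from the table, column, and bound values in the log above:

SELECT *
FROM tasks_table
WHERE jsonb_path_exists(
    task_tags,
    '$[*] ? (@.entityCode == $code && @.entityId == $id)'::jsonpath,
    '{"code":"ETY", "id":"bedb1903-3827-4507-883b-d41888d2ed68"}'::jsonb,
    false
);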
I've tried to cast the inner query parameters above, but I suspect this is a JPA-level issue; I couldn't find the corresponding types (jsonpath, jsonb) in my dependencies to apply them with builder/Expression#as.
Maybe the function is not visible (a schema issue or something similar)?
Thanks for any help
Try removing the ::jsonpath cast.
A json path is not a data type, so there is no need for the cast. Instead, this part root.<List<RelatedEntity>>get("taskTags") should be a valid json object; I am not sure how it is rendered in the query.
To verify what is rendered and the values bound to the query, enable logging for Hibernate in application.properties:
logging.level.org.hibernate.SQL=trace
logging.level.org.hibernate.type.descriptor.sql.BasicBinder=trace
This will show you the query and the values passed to it.
It is great to get prompt replies on the Npgsql queries; thanks to its owner! I am trying to port a sproc that took array-valued parameters over to Postgres with similar abstraction/semantics, and I am wondering how one would shape the Postgres sproc and use the Npgsql API to call it. I have another layer of application abstraction on top of our DAL abstraction: we use data adapters to go to SQL Server and associative arrays for Oracle, and I am trying to figure out what we could map this to for Postgres using Npgsql. We have some room in shaping the sproc, but we still need to keep the number of input parameters the same. We could certainly build this sproc much differently, but it still needs to sit behind the same app API, which supplies a set of typed arrays as shown below:
public static void Flush2OraWithAssocArrayInsnetworkdatabatch(string dbKey, int?[] ENDPOINTID, DateTime?[] INSERTEDDATETIME, int?[] RECORDTYPEID, long?[] RECORDVALUE, int?[] PACKETSIZE)
{
    Database db = Helper.GetDatabase(dbKey);
    using (DbConnection con = db.CreateConnection())
    {
        con.Open();
        using (DbCommand cmd = con.CreateCommand())
        {
            cmd.CommandText = "Insnetworkdatabatch";
            Helper.InitializeCommand(cmd, 300, "Insnetworkdatabatch");
            BuildInsnetworkdatabatchOracleAssocArrayCommandParameters(cmd, ENDPOINTID, INSERTEDDATETIME, RECORDTYPEID, RECORDVALUE, PACKETSIZE);
            try
            {
                Helper.ExecuteNonQuery(cmd, cmd.CommandText);
                con.Close();
            }
            catch (DALException)
            {
                throw;
            }
        }
    }
}
I have an Oracle sproc written as follows:
create or replace PROCEDURE InsNetworkDataBatch2
(
-- Add the parameters for the stored procedure here
v_endPointID IN arrays.t_number ,
v_insertedDateTime IN arrays.t_date ,
v_recordTypeID IN arrays.t_number ,
v_recordValue IN arrays.t_number ,
v_packetSize IN arrays.t_number )
AS
BEGIN
DECLARE
BEGIN
FORALL i IN v_endpointID.FIRST..v_endpointID.LAST SAVE EXCEPTIONS
INSERT
INTO STGNETWORKSTATS
(
INSERTEDDATE,
ENDPOINTID,
RECORDTYPEID,
RECORDVALUE,
PACKETSIZE
)
VALUES
(
v_insertedDateTime(i),
v_endPointID(i),
v_recordTypeID(i),
v_recordValue(i),
v_packetSize(i)
);
END;
END;
-- END PL/SQL BLOCK (do not remove this line) ----------------------------------
Here is the assoc array package in Oracle:
create or replace PACKAGE Arrays AS
type t_number is table of number index by binary_integer;
type t_date is table of date index by binary_integer;
END Arrays;
Here is how we build the Oracle parameter. I am wondering what its equivalent would be in Postgres (if one exists at all) and how Npgsql would support it:
public override void CreateAssociativeArrayParameter(DbCommand cmd, string parameterName, object parameterValue, string dbType, ParameterDirection direction)
{
OracleDbType oracleDbType = dbSpecificTypesMap[dbType];
OracleParameter param = new OracleParameter(parameterName, oracleDbType, direction);
param.CollectionType = OracleCollectionType.PLSQLAssociativeArray;
param.Value = parameterValue;
cmd.Parameters.Add(param);
}
I don't know anything about Oracle arrays or associative arrays. However, PostgreSQL has rich support for complex types. PostgreSQL arrays are a good way to store an array of values in a column, and PostgreSQL even provides indexing and database-side functions for working with arrays.
If you're looking for a dictionary type (associative array?), take a look at hstore or json.
EDITED: If your associative array has a fixed schema (i.e. the fields don't change), you can also consider a PostgreSQL composite type.
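To illustrate the array route, here is a minimal sketch with made-up table and column names: an array column can be indexed with GIN and searched with the built-in array operators.

CREATE TABLE sensor_readings (
    id       bigserial PRIMARY KEY,
    readings bigint[] NOT NULL
);
CREATE INDEX ON sensor_readings USING gin (readings);

-- Find rows whose array shares at least one element with the given array.
SELECT id, array_length(readings, 1) AS element_count
FROM sensor_readings
WHERE readings && ARRAY[42, 43]::bigint[];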
Here is an attempt with a Postgres stored procedure. This is now working. I got around some casting issues thrown from inside Npgsql, which resulted from my .NET types not being compatible with the sproc parameter data types in Postgres.
Here is the Postgres function that takes the arrays:
create or replace FUNCTION InsNetworkDataBatch
(
-- Add the parameters for the stored procedure here
v_endPointID IN int[] ,
v_insertedDateTime IN timestamp[] ,
v_recordTypeID IN int[] ,
v_recordValue IN bigint[] ,
v_packetSize IN int[] ) RETURNS void
LANGUAGE 'plpgsql'
AS $$
BEGIN
DECLARE
BEGIN
FOR i IN array_lower(v_endPointID, 1) .. array_upper(v_endPointID, 1)
loop
INSERT INTO STGNETWORKSTATS
(
INSERTEDDATE,
ENDPOINTID,
RECORDTYPEID,
RECORDVALUE,
PACKETSIZE
)
VALUES
(
v_insertedDateTime[i],
v_endPointID[i],
v_recordTypeID[i],
v_recordValue[i],
v_packetSize[i]
);
end loop;
END;
END;
$$
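As a side note, not required for the port: the per-row loop inside the function could also be replaced by a single set-based INSERT using the multi-argument form of unnest, something like:

INSERT INTO STGNETWORKSTATS (INSERTEDDATE, ENDPOINTID, RECORDTYPEID, RECORDVALUE, PACKETSIZE)
SELECT * FROM unnest(v_insertedDateTime, v_endPointID, v_recordTypeID, v_recordValue, v_packetSize);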
Here is how I am binding the app parameters to the command params:
public override void CreateAssociativeArrayParameter(DbCommand cmd, string parameterName, object parameterValue, string dbType, ParameterDirection direction)
{
NpgsqlDbType npgsqlDbType;
if (dbSpecificTypesMap.ContainsKey(dbType))
{
npgsqlDbType = dbSpecificTypesMap[dbType];
}
else
{
throw new ApplicationException($"The db type {dbType} could not be parsed into the target NpgsqlDbType. Please check the underlying type of the parameter");
}
NpgsqlParameter param = new NpgsqlParameter(parameterName.ToLower(), NpgsqlDbType.Array | npgsqlDbType);
param.Value = parameterValue;
cmd.Parameters.Add(param);
}
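For reference, the function can also be exercised directly from SQL with array literals before wiring it up through Npgsql (the values below are made up):

SELECT InsNetworkDataBatch(
    ARRAY[1, 2]::int[],
    ARRAY['2024-01-01 00:00:00', '2024-01-01 00:05:00']::timestamp[],
    ARRAY[10, 11]::int[],
    ARRAY[100, 200]::bigint[],
    ARRAY[512, 1024]::int[]
);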
I am implementing a function to get an estimate of the count, as described on the PostgreSQL wiki here: https://wiki.postgresql.org/wiki/Count_estimate
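That wiki page defines a count_estimate(query text) function that parses the planner's row estimate out of EXPLAIN output; roughly like this (see the wiki for the canonical version):

CREATE OR REPLACE FUNCTION count_estimate(query text) RETURNS integer AS $$
DECLARE
    rec  record;
    rows integer;
BEGIN
    FOR rec IN EXECUTE 'EXPLAIN ' || query LOOP
        rows := substring(rec."QUERY PLAN" FROM ' rows=([[:digit:]]+)');
        EXIT WHEN rows IS NOT NULL;
    END LOOP;
    RETURN rows;
END;
$$ LANGUAGE plpgsql VOLATILE STRICT;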
I'm using the function:
public static Field<Integer> countEstimate(final QueryPart query) {
final String sql = String.format("count_estimate(%s)", escape(query.toString()));
return field(sql(sql), PostgresDataType.INT);
}
Which looks fine until I pass it an IN clause array field in the query. When this happens jOOQ strips the array curly braces from within my SQL. e.g. Calling it with this java code:
final UUID[] ids = new UUID[]{UUID.randomUUID()};
return db.select(countEstimate(db.select(TABLE.ID)
.from(TABLE)
.where(overlaps(ids, TABLE.FILTER_IDS))));
Results in both the variable sql and DSL.sql(sql) in the above function rendering:
count_estimate(E'select "schema"."table"."id"
from "schema"."table"
where (
((\'{"75910f3b-83e6-41ed-bf57-085c225e0131"}\') && ("schema"."table"."filter_ids"))
)')
But field(sql(sql), PostgresDataType.INT) renders this:
count_estimate(E'select "schema"."table"."id"
from "schema"."table"
where (
((\'"75910f3b-83e6-41ed-bf57-085c225e0131"\') && ("schema"."table"."filter_ids"))
)')
Is there any way to work around this and to tell jOOQ to leave my query alone?
(jOOQ 3.8.3, PG 9.5.5, PG driver 9.4-1203-jdbc4)
It turns out it only strips '{}'-style array literals. Replacing the code that turns the UUID[] into SQL from
DSL.val(ids)
with
DSL.array(Arrays.stream(ids)
.map(UUID::toString)
.collect(Collectors.toList())
.toArray(new String[0]))
.cast(PostgresDataType.UUID.getArrayDataType())
results in it rendering cast(array[\'75910f3b-83e6-41ed-bf57-085c225e0131\'] as uuid[]), which prevents it from being stripped.
This is the query I am trying to run in PostgreSQL:
SELECT * FROM message WHERE id IN (
SELECT unnest(message_ids) "mid"
FROM session_messages WHERE session_id = '?' ORDER BY "mid" ASC
);
However, I am not able to do something like:
create.selectFrom(Tables.MESSAGE).where(Tables.MESSAGE.ID.in(
create.select(DSL.unnest(..))
Because DSL.unnest returns a Table<?>, which makes sense, since it takes a List-like object (mostly a literal) and converts it to a table.
I have a feeling that I need to find a way to wrap the function around my field name, but I have no clue as to how to proceed.
NOTE. The field message_ids is of type bigint[].
EDIT
So, this is how I am doing it now, and it works exactly as expected, but I am not sure if this is the best way to do it:
Field<Long> unnestMessageIdField = DSL.field(
"unnest(" + SESSION_MESSAGES.MESSAGE_IDS.getName() + ")",
Long.class)
.as("mid");
Field<Long> messageIdField = DSL.field("mid", Long.class);
MESSAGE.ID.in(
ctx.select(messageIdField).from(
ctx.select(unnestMessageIdField)
.from(Tables.CHAT_SESSION_MESSAGES)
.where(Tables.CHAT_SESSION_MESSAGES.SESSION_ID.eq(sessionId))
)
.where(condition)
)
EDIT2
After going through the code on https://github.com/jOOQ/jOOQ/blob/master/jOOQ/src/main/java/org/jooq/impl/DSL.java I guess the right way to do this would be:
DSL.function("unnest", SQLDataTypes.BIGINT.getArrayType(), SESSION_MESSAGES.MESSAGE_IDS)
EDIT3
Since, as always, Lukas is here for my jOOQ woes, I am going to capitalize on this :)
Trying to generalize this function into a signature of sorts:
public <T> Field<T> unnest(Field<T[]> arrayField) {
return DSL.function("unnest", <??>, arrayField);
}
I don't know how I can fetch the datatype. There seems to be a way to get DataType<T[]> from DataType<T> using DataType::getArrayDataType(), but the reverse is not possible. There is this class I found ArrayDataType, but it seems to be package-private, so I cannot use it (and even if I could, it does not expose the field elementType).
Old PostgreSQL versions had this funky idea that it is OK to produce a table from within the SELECT clause, and expand it into the "outer" table, as if it were declared in the FROM clause. That is a very obscure PostgreSQL legacy, and this example is a good chance to get rid of it, and use LATERAL instead. Your query is equivalent to this one:
SELECT *
FROM message
WHERE id IN (
SELECT "mid"
FROM session_messages
CROSS JOIN LATERAL unnest(message_ids) AS t("mid")
WHERE session_id = '?'
);
This can be translated to jOOQ much more easily as:
DSL.using(configuration)
.select()
.from(MESSAGE)
.where(MESSAGE.ID.in(
select(field(name("mid"), MESSAGE.ID.getDataType()))
.from(SESSION_MESSAGES)
.crossJoin(lateral(unnest(SESSION_MESSAGES.MESSAGE_IDS)).as("t", "mid"))
.where(SESSION_MESSAGES.SESSION_ID.eq("'?'"))
))
The Edit3 in the question is quite close to a decent solution for this problem.
We can create a custom generic unnest method for jOOQ which accepts a Field and use it in a jOOQ query normally.
Helper method:
public static <T> Field<T> unnest(Field<T[]> field) {
var type = (Class<T>) field.getType().getComponentType();
return DSL.function("unnest", type, field);
}
Usage:
public void query(SessionId sessionId) {
var field = unnest(SESSION_MESSAGES.MESSAGE_IDS);
dsl.select().from(MESSAGE).where(
MESSAGE.ID.in(
dsl.select(field).from(SESSION_MESSAGES)
.where(SESSION_MESSAGES.SESSION_ID.eq(sessionId.id))
.orderBy(field)
)
);
}
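For reference, unnest() is the building block both approaches rely on: it expands an array value into a set of rows, e.g.:

SELECT unnest(ARRAY[10, 20, 30]::bigint[]) AS mid;
-- returns three rows: 10, 20, 30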