Using xmlserialize in DB2 with a timestamp

I was looking for a way to combine multiple returned rows into a single row on a db2 database (I have an application that can query a database, but will only work if a single row is returned). I found this solution which worked pretty well and was a lot easier than using recursive SQL. However, I ran into a problem when I tried to include a column that was set as TIMESTAMP instead of VARCHAR.
So how can I make this work if a column is a TIMESTAMP type?
Error:
SQL0440N No authorized routine named "XMLTEXT" of type "FUNCTION" having
compatible arguments was found. SQLSTATE=42884
Example:
select xmlserialize(
xmlagg(
xmlconcat(
xmltext(column_name),
xmltext(':'),
xmltext(content),
xmltext(','),
xmltext(DATETIMESTAMP),
xmltext(',')
)
) as varchar(10000)
)
from
yourtable

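The usual suggestion (reconstructed here as an assumption, since the original suggestion isn't quoted) is to CAST the timestamp to VARCHAR before handing it to XMLTEXT:
select xmlserialize(
xmlagg(
xmlconcat(
xmltext(column_name),
xmltext(':'),
xmltext(content),
xmltext(','),
xmltext(CAST(DATETIMESTAMP AS VARCHAR(26))),
xmltext(',')
)
) as varchar(10000)
)
from
yourtable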
Instead of the suggested CAST you could wrap the TO_CHAR function (DB2's synonym for VARCHAR_FORMAT) around the timestamp value, supplying a format string:
select xmlserialize(
xmlagg(
xmlconcat(
xmltext(column_name),
xmltext(':'),
xmltext(content),
xmltext(','),
xmltext(TO_CHAR(DATETIMESTAMP, 'YYYY-MM-DD HH24:MI:SS')),
xmltext(',')
)
) as varchar(10000)
)
from
yourtable
If you are on a recent version of DB2 and have LISTAGG available, I would recommend using that function instead. It is much faster than converting the SQL values to XML types and back, which costs CPU cycles because all of the official XML processing rules have to be applied.
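For instance, a minimal LISTAGG sketch under the same assumptions as the queries above (made-up table and column names; the TIMESTAMP column still needs explicit formatting):
select listagg(column_name || ':' || content || ',' ||
varchar_format(DATETIMESTAMP, 'YYYY-MM-DD HH24:MI:SS') || ',')
from
yourtable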

Related

Postgres: getting "... is out of range for type integer" when using NULLIF

For context, this issue occurred in a Go program I am writing using the default postgres database driver.
I have been building a service to talk to a postgres database which has a table similar to the one listed below:
CREATE TABLE object (
id SERIAL PRIMARY KEY NOT NULL,
name VARCHAR(255) UNIQUE,
some_other_id BIGINT UNIQUE
...
);
I have created some endpoints for this item including an "Install" endpoint which effectively acts as an upsert function like so:
INSERT INTO object (name, some_other_id)
VALUES ($1, $2)
ON CONFLICT (name) DO UPDATE SET
some_other_id = COALESCE(NULLIF($2, 0), object.some_other_id)
I also have an "Update" endpoint with an underlying query like so:
UPDATE object
SET some_other_id = COALESCE(NULLIF($2, 0), object.some_other_id)
WHERE name = $1
The problem:
Whenever I run the update query I always run into the error, referencing the field "some_other_id":
pq: value "1010101010144" is out of range for type integer
However, this error never occurs with the "upsert" version of the query, even when the row already exists in the database (when it has to evaluate the COALESCE expression). I have been able to prevent this error by updating the COALESCE expression as follows:
COALESCE(NULLIF($2, CAST(0 AS BIGINT)), object.some_other_id)
But as it never occurs with the first query, I wondered whether this inconsistency comes from me doing something wrong or something I don't understand? And also, what is best practice here, should I be casting all values?
I am definitely passing a 64-bit integer to the query for "some_other_id", and the first query works in the Go implementation even without the explicit type cast.
If any more information (or Go implementation) is required then please let me know, many thanks in advance! (:
Edit:
To eliminate confusion, the queries are being executed directly in Go code like so:
res, err := s.db.ExecContext(ctx, `UPDATE object SET some_other_id = COALESCE(NULLIF($2, 0), object.some_other_id) WHERE name = $1`,
"a name",
1010101010144,
)
Both queries are executed in exactly the same way.
Edit: Also corrected parameter (from $51 to $2) in my current workaround.
I would also like to take this opportunity to note that the query does work with my proposed fix, which suggests that the issue is me confusing postgres with the types in the NULLIF expression? There is no stored procedure asking for an INTEGER arg in between my code and the database, at least none that I have written.
This has to do with how the postgres parser resolves types for the parameters. I don't know how exactly it's implemented, but given the observed behaviour, I would assume that the INSERT query doesn't fail because it is clear from (name,some_other_id) VALUES ($1,$2) that the $2 parameter should have the same type as the target some_other_id column, which is of type int8. This type information is then also used in the NULLIF expression of the DO UPDATE SET part of the query.
You can also test this assumption by using (name) VALUES ($1) in the INSERT and you'll see that the NULLIF expression in DO UPDATE SET will then fail the same way as it does in the UPDATE query.
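A sketch of that test, with $2 now appearing only inside the NULLIF expression, where its type gets inferred as int4 from the 0 literal:
INSERT INTO object (name)
VALUES ($1)
ON CONFLICT (name) DO UPDATE SET
some_other_id = COALESCE(NULLIF($2, 0), object.some_other_id)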
So the UPDATE query fails because there is not enough context for the parser to infer the accurate type of the $2 parameter. The "closest" thing that the parser can use to infer the type of $2 is the NULLIF call expression, specifically it uses the type of the second argument of the call expression, i.e. 0, which is of type int4, and it then uses that type information for the first argument, i.e. $2.
To avoid this issue, you should use an explicit type cast with any parameter where the type cannot be inferred accurately. i.e. use NULLIF($2::int8, 0).
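Applied to the failing UPDATE, a minimal sketch (the cast could equally be written on the 0 literal instead):
UPDATE object
SET some_other_id = COALESCE(NULLIF($2::int8, 0), object.some_other_id)
WHERE name = $1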
COALESCE(NULLIF($51, CAST(0 AS BIGINT)), object.some_other_id)
Fifty-one? Really?
pq: value "1010101010144" is out of range for type integer
Pay attention: the data type in the error message is integer, not bigint.
I think the reason for the error lies outside the code shown. So I take out a magic crystal ball and make a pass with my hands.
an "Install" endpoint which effectively acts as an upsert function like so
I also have an "Update" endpoint
Do you call endpoint a PostgreSQL function (stored procedure)? I think yes.
Also $1, $2 looks like PostgreSQL function arguments.
The magic crystal ball says: you have two PostgreSQL functions with different argument data types:
"Install" endpoint has $2 function argument as a bigint data type. It looks like CREATE FUNCTION Install(VARCHAR(255), bigint)
"Update" endpoint has $2 function argument as an integer data type, not bigint. It looks like CREATE FUNCTION Update(VARCHAR(255), integer).
Finally, I would rewrite your condition to be more understandable:
UPDATE object
SET some_other_id =
CASE
WHEN $2 = 0 THEN object.some_other_id
ELSE $2
END
WHERE name = $1

How to do parameter replacement within single quote for @@ postgres operator

I am having issues combining a couple of different things. I have tried a lot of things suggested on SO but nothing worked, hence posting my question.
So the requirements are
I need to build the dynamic query (to search jsonb column type) in Java
I need to use prepared statements of Java (to avoid sql injection)
I need to replace parameter which is inside single quotes
This query works perfectly in the CLI:
select * from items where cnt @@ '$.**.text like_regex "#finance" flag "i"';
The bit that I want to parameterise is "#finance". So I build this query:
select * from items where cnt @@ '$.**.text like_regex ? flag "i"';
When executing it I get the following error:
"The column index is out of range: 1, number of columns: 0."
It's because the ? is within single quotes, so the JDBC driver is presumably not able to identify it as a replaceable parameter.
The closest question/discussion I could find is in this post: JDBC Prepared statement parameter inside json.
I have tried it, however this does not work for me for some reason. The query I now run is
select * from items where cnt @@ ?::jsonb
but with this I get following error
org.postgresql.util.PSQLException: ERROR: operator does not exist: jsonb @@ jsonb
Hint: No operator matches the given name and argument types. You might need to add explicit type casts.
Position: 106
I have tried various other ways like escaping single quotes etc but nothing really worked. I have very limited knowledge of PostgreSQL, so any help would be appreciated.
Postgres version: 13.1
Java version: 15
select * from items where cnt @@ ?::jsonb
The thing on the right of the @@ needs to be a jsonpath, not jsonb. So casting it to jsonb is clearly wrong. Try casting it to jsonpath instead.
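For reference, the equivalent statement with an explicit literal (in JDBC, the literal's content would instead be bound as the ? parameter of cnt @@ ?::jsonpath):
select * from items where cnt @@ CAST('$.**.text like_regex "#finance" flag "i"' AS jsonpath);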

Snowflake : Unsupported subquery type cannot be evaluated

I am using Snowflake as a data warehouse. I have a CSV file in AWS S3, and I am writing a merge SQL to merge the data received in the CSV into a table in Snowflake. I have a column in the time dimension table with data type NUMBER(38,0). This table holds all date-times, for example:
time_id = 232 and time = 12:00
In the CSV I am getting a column labelled time with values like 12:00. In the merge SQL I fetch this value and try to look up the time_id for it.
update table_name set start_time_dim_id = (select time_id from time_dim t where t.time_name = csv_data.start_time_dim_id)
On this statement I am getting this error "SQL compilation error: Unsupported subquery type cannot be evaluated"
I am struggling to solve it. While googling I found one reference:
https://github.com/snowflakedb/snowflake-connector-python/issues/251
So I want to check whether anyone has encountered this issue? If yes, I would appreciate any pointers.
It seems like a conversion issue. I suggest you check the data in the CSV file; maybe there is a wrong or missing value. Please check your data and make sure it contains numeric values:
create table simpleone ( id number );
insert into simpleone values ( True );
The last statement fails with:
SQL compilation error: Expression type does not match column data type, expecting NUMBER(38,0) but got BOOLEAN for column ID
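For comparison, the same insert succeeds with a numeric value:
insert into simpleone values ( 1 );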
If you provide sample data, and SQL to produce this error, maybe we can provide a solution.
Unfortunately, correlated and nested subqueries in Snowflake are a bit limited at this stage.
I would try running something like this:
update table_name
set start_time_dim_id = t.time_id
from time_dim t
where t.time_name = csv_data.start_time_dim_id

how to use subquery with aggregate function in hive

SELECT peridle, CPU
FROM (SELECT MAX(peridle) FROM try2);
While executing this query in Hive I am getting the following error:
Parse Error: line 1:47 cannot recognize input near 'select' 'MAX' '(' in expression specification
Please suggest how to use aggregate functions in a Hive subquery.
At least two things need to be fixed here:
You are not returning fields named peridle or CPU from the sub-query, yet you are trying to select them.
Hive requires you to alias all sub-queries, even if you don't reference the alias. You can quickly do this by changing the ); at the end to ) x; (or whatever alias you prefer). A combined sketch follows.
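Putting both fixes together (assuming the goal is just the maximum peridle; the aggregate needs an alias so the outer query can select it):
SELECT x.max_peridle
FROM (SELECT MAX(peridle) AS max_peridle FROM try2) x;
If the intent was instead to get the CPU value from the row with the highest peridle, something like SELECT peridle, CPU FROM try2 ORDER BY peridle DESC LIMIT 1 may be closer to what you want.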

Rails 4, migration to change datatype of column from daterange to tsrange causing PG::DatatypeMismatch: ERROR:

I'm trying to change a column of type daterange to tsrange (I realized I need time as well as date) using a vanilla Rails migration
def self.up
change_column :events, :when, :tsrange
end
After running rake db:migrate the error is
PG::DatatypeMismatch: ERROR: column "when" cannot be cast automatically to type tsrange
HINT: Specify a USING expression to perform the conversion.
: ALTER TABLE "events" ALTER COLUMN "when" TYPE tsrange
I tried following the hint and used the following
def self.up
change_column :events, :when, :tsrange, 'tsrange USING CAST(when AS tsrange)'
end
but then got
no implicit conversion of Symbol into Integer
From what I can tell, USING CAST is mainly meant for use with ints. Assuming I don't want to drop and then recreate the column, what do you have to specify to alter the type from daterange to tsrange?
I'm using
Rails 4.0.1
ruby-2.0.0-p247
psql (9.2.4)
Some background, daterange and tsrange were introduced to Rails 4 in the following PR: https://github.com/rails/rails/pull/7345. Thanks.
The USING clause is used to specify how to convert the old values to the new ones:
The optional USING clause specifies how to compute the new column value from the old; if omitted, the default conversion is the same as an assignment cast from old data type to new. A USING clause must be provided if there is no implicit or assignment cast from old to new type.
So USING shows up any time there is no default cast from the old type to the new type. Also note that USING is specified as USING expression so any expression (whose value is of the correct type) can be used with a USING, the most common is USING CAST(...) but the expression can be pretty much anything.
Hopefully that should clear up some confusion about USING.
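For example (table and column names made up for illustration), text has no implicit or assignment cast to integer, so this conversion needs a USING expression:
ALTER TABLE t ALTER COLUMN c TYPE integer USING CAST(c AS integer);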
So what's up with the ActiveRecord error? Well, change_column is expecting an options Hash as the fourth argument, but you're sending in a string. If you look at the change_column source, you'll see things like options[:limit], but String#[] expects integer arguments, so your string argument triggers odd-looking complaints about Symbols.
AFAIK there is no way to get AR to add a USING clause to the ALTER TABLE ... ALTER COLUMN that change_column generates. This leaves connection.execute(some_sql) if you need a USING clause. Of course this is further complicated by the (apparent) lack of a built-in cast from daterange to tsrange but building the necessary expression isn't terribly difficult if you pull the daterange apart with the upper and lower functions:
connection.execute(%q{
alter table events
alter column "when"
type tsrange using tsrange(lower("when"), upper("when"))
})
You can see the table change in action over here: http://sqlfiddle.com/#!15/fb047/2
That assumes that you're using the default half-open intervals ([...)) for your ranges; if you have ranges that aren't closed on the left and open on the right then you'll have to build a more complicated USING expression using the other range functions to see if the left and right ends of the ranges are open or closed.
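If you do have mixed bounds, a sketch of such an expression using the lower_inc and upper_inc range functions to rebuild the bound flags (not tested against empty or unbounded ranges):
alter table events
alter column "when"
type tsrange using tsrange(
lower("when"),
upper("when"),
case when lower_inc("when") then '[' else '(' end ||
case when upper_inc("when") then ']' else ')' end
)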
BTW, when is a PostgreSQL keyword so it isn't the best choice for an identifier, you'll have to say "when" every time you refer to that column in SQL snippets and that might get tiring. I'd recommend using a different name for that column so that you don't have to worry about quoting.