DataType Conversions in Db2

I am currently stuck on concatenating three fields from a table into a single string. These three fields are of different data types.
Select
  CASE COALESCE(CHAR_COLUMN,'XXX')
       WHEN 'XXX' THEN 'CHAR_COLUMN is null'
       ELSE 'CHAR_COLUMN=''' || CHAR_COLUMN || ''''
  END
  || ' and ' ||
  CASE COALESCE(DT_COLUMN,TIMESTAMP('1980-01-01-00.00.00'))
       WHEN TIMESTAMP('1980-01-01-00.00.00') THEN 'DT_COLUMN is null'
       ELSE 'DT_COLUMN=''' || DT_COLUMN || ''''
  END
  || ' and ' ||
  CASE COALESCE(NUM_COLUMN,111)
       WHEN 111 THEN 'NUM_COLUMN is null'
       ELSE 'NUM_COLUMN=''' || NUM_COLUMN || ''''
  END
from S_DATATABLE
This works perfectly fine in DB2/AIX64 9.1.7 but not in DB2 z/OS 10.1.5.
Error
When run separately for the numeric column:
An unexpected token ",111" was found following ",111". Expected tokens may include: "CONCAT || / MICROSECONDS MICROSECOND SECONDS SECOND MINUTES". SQLSTATE=42601
When run separately for the date column:
SQL0171N The data type, length or value of the argument for the parameter in position "2" of routine "||" is incorrect. Parameter name: "||". SQLSTATE=42815
Please suggest what changes need to be done for this DB2 version. Thanks in advance.

First of all, regardless of the DB2 version, concatenation requires character operands; you cannot directly concatenate a string and an integer. Some DB2 versions will attempt to implicitly convert non-character data types to character, but it is best to do explicit conversion to avoid errors.
Secondly, your SQL seems unnecessarily complex. Instead of
CASE COALESCE(NUM_COLUMN,111)
WHEN 111
THEN 'NUM_COLUMN is null'
else 'NUM_COLUMN='''||NUM_COLUMN||''''
END
you can simply do this:
CASE WHEN NUM_COLUMN IS NULL
THEN 'NUM_COLUMN is null'
ELSE 'NUM_COLUMN='||VARCHAR(NUM_COLUMN)
END
Note also that your original code concatenates NUM_COLUMN and DT_COLUMN with character literals, which likewise relies on implicit conversion. Not all DB2 platforms support implicit conversion between all data types, so once again, do not rely on it; use explicit conversion instead.
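Putting both changes together, the full statement might look like this. This is a sketch (not tested on z/OS) that uses the VARCHAR() scalar function for the explicit conversions, assuming the table and column names from the question:
SELECT CASE WHEN CHAR_COLUMN IS NULL
            THEN 'CHAR_COLUMN is null'
            ELSE 'CHAR_COLUMN=''' || CHAR_COLUMN || ''''
       END
       || ' and ' ||
       CASE WHEN DT_COLUMN IS NULL
            THEN 'DT_COLUMN is null'
            -- explicit conversion: VARCHAR() turns the timestamp into a string
            ELSE 'DT_COLUMN=''' || VARCHAR(DT_COLUMN) || ''''
       END
       || ' and ' ||
       CASE WHEN NUM_COLUMN IS NULL
            THEN 'NUM_COLUMN is null'
            -- explicit conversion: VARCHAR() turns the number into a string
            ELSE 'NUM_COLUMN=' || VARCHAR(NUM_COLUMN)
       END
FROM S_DATATABLE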

Related

Postgres, query error: ERROR: operator does not exist: character varying = bigint?

I am trying to run this query:
select *
from my_table
where column_one=${myValue}
I get the following error in DataGrip:
[42883] ERROR: operator does not exist: character varying = bigint Hint: No operator matches the given name and argument types. You might need to add explicit type casts.
Now, I have found this question, and I can fix the error by putting in a string literal like this:
select *
from my_table
where column_one='123'
What I need is a way to pass in the '123' as a parameter. I usually do this with ${myValue} and it works, but I am not sure how to keep my variable there as an input so I can run dynamic queries in code and let Postgres understand that I want to pass in a string and not a number.
Any suggestions?
Ok, so, I just tried putting quotes around the value in the DataGrip parameters input field for myValue, per thirumal's answer below, and things work. I didn't know I had to quote the value for it to work.
Type-cast ${myValue} using standard SQL:
cast(${myValue} AS varchar)
or using Postgres syntax:
${myValue}::varchar
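With the cast in place, the original query keeps its parameter; a sketch, assuming the same table, column, and parameter names:
select *
from my_table
-- the explicit cast tells Postgres the parameter is a string, not a number
where column_one = cast(${myValue} as varchar)
-- equivalently: where column_one = ${myValue}::varchar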

PostgreSQL: how to extract value attribute from XML

I am trying to convert the following SQL Server query to PostgreSQL:
select CAST(CAST('<IncludeSettle/><StartTime value="2019-03-26 08:45:48.780"></StartTime>' as XML).value('(//StartTime/@value)[1]', 'datetime') AS varchar(40)) + ''')';
I tried converting it to the following and got back this error:
select unnest(xpath('//StartTime/@value', xmlparse(document '<IncludeSettle/><StartTime value="2019-03-26 08:45:48.780"></StartTime>')))
Error:
ERROR: invalid XML document
DETAIL: line 1: Extra content at the end of the document
<IncludeSettle/><StartTime value="2010-03-26 08:45:48.780"></StartTim
^
SQL state: 2200M
As a hack, I made the following change to get it to work:
select unnest(xpath('//StartTime/@value', xmlparse(document '<tempzz>'||'<IncludeSettle/><StartTime value="2019-03-26 08:45:48.780"></StartTime>'||'</tempzz>')))
Output for Postgresql:
2019-03-26 08:45:48.780
I am looking for a better solution. Any help really appreciated.
You can process that by adding a dummy root element, as you did. The value is already formatted as an ISO timestamp, so you can simply cast it to a timestamp. But as there is no direct cast from xml to timestamp, you need to cast the result of xpath() to text first:
with data (input) as (
  values ('<IncludeSettle/><StartTime value="2019-03-26 08:45:48.780"></StartTime>')
)
select (xpath('//StartTime/@value', ('<dummy_root>'||input||'</dummy_root>')::xml))[1]::text::timestamp
from data
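The same expression works against a table column rather than a literal; a sketch, where raw_data and its text column payload are hypothetical names:
select (xpath('//StartTime/@value',
              ('<dummy_root>' || payload || '</dummy_root>')::xml)  -- wrap the fragment in a dummy root
       )[1]::text::timestamp as start_time
from raw_data;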

Passing a row type to jsonb_to_recordset syntax errors

I'm trying to work out how to expand a JSONB field with a jsonb_to_recordset and a custom type.
Postgres 11.4.
This is just a test case, but for the minute I'm trying to work out the syntax. I've got a table named data_file_info with a JSONB field named table_stats. The JSONB field always includes a JSON array with the same structure:
[
{"table_name":"Activity","record_count":0,"table_number":214},
{"table_name":"Assembly","record_count":1,"table_number":15},
{"table_name":"AssemblyProds","record_count":0,"table_number":154}
]
The following code works correctly:
select table_stats_splat.*
from data_file_info,
     jsonb_to_recordset(table_stats) as table_stats_splat (
         table_name text,
         record_count integer,
         table_number integer
     )
What I would like to do is pass in a custom type definition instead of the long-form column definition list shown above. Here's the matching type:
create type data.table_stats_type as (
table_name text,
record_count integer,
table_number integer)
Some examples I've seen, and the docs, say that you can supply a type name using a null::row_type cast in the first parameter to jsonb_to_recordset. The examples that I've found use in-line JSON, while I'm trying to pull stored JSON. I've made a few attempts, and all have failed. Below are two of the trials, with errors. Can someone point me towards the correct syntax?
FAIL:
select table_stats_splat.*
from data_file_info,
jsonb_populate_recordset(null::table_stats_type, data_file_info) as table_stats_splat;
-- ERROR: function jsonb_populate_recordset(table_stats_type, data_file_info) does not exist
-- LINE 4: jsonb_populate_recordset(null::table_stats_type, dat...
^
-- HINT: No function matches the given name and argument types. You might need to add explicit type casts. (Line 4)
FAIL:
select *
from jsonb_populate_recordset(NULL::table_stats_type, (select table_stats from data_file_info)) as table_stats_splat;
-- ERROR: more than one row returned by a subquery used as an expression. (Line 2)
I'm doubtlessly missing something pretty obvious, and am hoping someone can suggest what that is.
Use the column as the second parameter:
select table_stats_splat.*
from data_file_info,
jsonb_populate_recordset(null::table_stats_type, table_stats) as table_stats_splat;
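To see the call shape in isolation, you can feed jsonb_populate_recordset an inline jsonb literal; a sketch using the sample array from the question (and assuming the data schema holding table_stats_type is on the search_path):
select *
from jsonb_populate_recordset(
         null::table_stats_type,  -- null::row_type supplies the column definitions
         '[{"table_name":"Activity","record_count":0,"table_number":214},
           {"table_name":"Assembly","record_count":1,"table_number":15}]'::jsonb
     );
-- returns two rows typed (text, integer, integer)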

PostgreSQL: Parameter substitution for LISTEN?

Common sense dictates that SQL query strings should never be assembled by hand. Thus, all database interfaces offer parameter substitution, and all users use it, without exceptions.*
I'm using PostgreSQL v10.5, nodejs v8.12.0, node-postgres 7.6.1.
Parameter substitution works as expected for SELECT statements:
> await db.query("select from users where id = 'mic'");
(success, 1 row returned)
> await db.query("select from users where id = $1", ["mic"]);
(success, 1 row returned)
But it doesn't work for LISTEN statements:
> await db.query("listen topicname");
(success)
> await db.query("listen $1", ["topicname"]);
(error: syntax error at or near "$1")
The name of the topic I want to listen to is dynamic. It comes from semi-trustworthy sources that should not be user-controllable. But why go against all established best practice and take any chances?
Unfortunately, from my tests I fear that PostgreSQL simply can't do parameter substitution for LISTEN queries.
Is there any solution or workaround for this?
*) This statement may only be true in some utopian future society.
I don't have enough reputation to comment on the answer, but the proposed solution doesn't work for me.
Using %L results in a quoted string, which causes the following error:
ERROR: syntax error at or near "'topic'"
The %I format should be used instead (SQL identifier; this is documented for table and column names, but it also works for the channel name). You can also use the quote_ident function. See the documentation on creating dynamic queries here.
The following PL/pgSQL function works for us:
CREATE OR REPLACE FUNCTION listenForChannel(
channel_ TEXT
) RETURNS VOID AS $$
BEGIN
EXECUTE format('LISTEN %I', channel_);
END
$$ LANGUAGE PLPGSQL;
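Calling it with a hypothetical channel name:
SELECT listenForChannel('my_topic');
Inside the function body, the quote_ident variant mentioned above would be:
EXECUTE 'LISTEN ' || quote_ident(channel_);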
You are right that this cannot be done in PostgreSQL.
As a workaround, write a PL/pgSQL function that uses dynamic SQL like this:
EXECUTE format('LISTEN %L', topicname);
The format function escapes strings properly; in this case, the %L format that produces a properly quoted string Literal is the appropriate one.

T-SQL Conversion failed when converting the nvarchar value '12-02' to data type int

I'm trying to convert this NVARCHAR value into a periodeId.
The raw data could be '12-02'.
My solution for this was first to try
(1000+CONVERT(INT,LEFT(2,T1.PERIOD_NAME)))*100+CONVERT(Int,RIGHT(2,t1.PERIOD_NAME))
but I got the same error message here and couldn't find any quick solution for it.
I also tried just a simple
LEFT(2,T1.PERIOD_NAME)
to see if it was the formula itself that crashed it, but the same error came up.
If you want '12-02' to be 1202, then use replace() to remove the hyphen before conversion:
select cast(replace(period_name, '-', '') as int)
In SQL Server 2012+, you should use try_convert(), in case there are other unexpected values.
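For example, a sketch of the same conversion with try_convert(), which returns NULL instead of raising an error when a value cannot be converted:
select try_convert(int, replace(period_name, '-', ''))  -- NULL for bad values instead of an error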
You can try:
SELECT (1000+CONVERT(INT,LEFT(t1.PERIOD_NAME,2)))*100+CONVERT(Int,RIGHT(t1.PERIOD_NAME,2))
The character_expression that LEFT operates on comes first, and the integer expression that specifies how many characters of the character_expression to return comes second; your attempts had the arguments reversed.
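For example, with the raw value from the question:
SELECT LEFT('12-02', 2);   -- returns '12'
SELECT RIGHT('12-02', 2);  -- returns '02'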