Display text or Unicode in a column - T-SQL

I have two columns in my table: one contains Latin characters (varchar) and the other contains Unicode text (nvarchar).
What I want is to display the second column when the first column is null:
isnull(column_1, column_2)
This doesn't work, so I tried to convert the result to nvarchar like this:
convert(nvarchar(50), isnull(column_1, column_2))
But that doesn't work either; the Unicode value is displayed as question marks ('?').
So, do you have an idea how to display the text or the Unicode value (when the text is null, take the Unicode)?
Sorry for my bad English.

You must do the conversion inside the ISNULL:
select isnull(cast(column_1 as nvarchar(50)), column_2) from table
Your query converts the Unicode text to non-Unicode (at this point the characters that cannot be represented become '?' and are effectively lost) and then converts it back to Unicode (but it is too late, because the data is already gone).
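A quick way to see that loss (a minimal sketch; the Greek literal is only an illustration, and the exact result depends on your default collation):
select convert(varchar(50), N'Ωμέγα')                          -- typically '?????' under a Latin1 collation
select convert(nvarchar(50), convert(varchar(50), N'Ωμέγα'))   -- still '?????': the original characters are already gone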

IsNull returns a value with the same data type as the first argument, in this case VarChar(n). The second argument, an NVarChar(n), is therefore converted to a VarChar(n). (More precisely: "Returns the same type as check_expression. If a literal NULL is provided as check_expression, returns the datatype of the replacement_value. If a literal NULL is provided as check_expression and no replacement_value is provided, returns an int.")
Coalesce returns a value with the highest data type precedence from its arguments. According to the rules for data type precedence an NVarChar is higher than a VarChar and the implicit conversion will occur the way you want.
Coalesce can accept more than two arguments, is ISO/ANSI SQL, and respects data type precedence.
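So, using the names from the query above, a COALESCE version would simply be:
select coalesce(column_1, column_2) from table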

Related

ADF String to Decimal returns NULL value

I have an imported CSV file with string values.
In this file there are amounts, several lines of which equal 0,00.
I want to create a TotalCA column by adding several fields in my table, and to convert it to a numeric value.
I use the toDecimal function, but the values are all returned as NULL and the created column is grayed out.
I have done a lot of research and I can't find a solution. Can you help me?
Thank you
Lea
I made some example CSV data, if I understand you correctly:
Like you said, some rows contain values greater than 0, and others contain "0.00" when the value is zero. In effect, the rows contain different data types, int and decimal.
For this reason, as I tested, none of toDecimal(), toFloat() or toDouble() works. I used a Derived Column expression to do the data conversion.
We can't keep both representations and can only choose one type. If you choose decimal or float, the other rows' data would be converted to something like '11.0', which I think is also not what you want.
Source Projection: I preset the column type to double:
(Decimal can't keep '0.00'; it only returns '0'.)
In short, the only way is to use the String data type to keep the data, and also to use the String data type to receive the data in the sink dataset.
HTH.
Thank you all for your answers.
Here is my CSV file
If I go to the Source Projection module and change the type of my column LFC1_UM01S to decimal this is what I get:
Why are some values considered as NULL?
To decimal column

Getting NULL Value in Stored Procedure TEXT Column

I am using the query below to get the SP definition, but in the TEXT column I am getting a NULL value in IBM Data Studio, even though I am able to CALL the SP.
SELECT PROCNAME, TEXT FROM SYSCAT.PROCEDURES WHERE PROCNAME LIKE '%USP_ABC%'
Please Help
You have confirmed that the syscat.procedures.language is SQL, and that your query-tool is able to display a substr() of the text.
The workaround depends on the length(text) of the row of interest:
SELECT PROCNAME, substr(TEXT,1, 1024) FROM SYSCAT.PROCEDURES WHERE PROCNAME LIKE '%USP_ABC%'
You may need to adjust the length of the substr extract depending on the length of the text and your configuration. For example, substr(TEXT, 1, 2048) or a higher value, as long as your query tool can cope with it.
You can find the length of the text column with LENGTH(TEXT) for the row of interest.
You can also CAST the CLOB to char or varchar, with a length that fits within their limits and within whatever query-tool limitations you have.
Another option is to use a different query tool that can work with CLOB.
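As a sketch of the LENGTH and CAST options above (the VARCHAR length here is an assumption; pick one that fits your text and your tool):
SELECT PROCNAME, LENGTH(TEXT) FROM SYSCAT.PROCEDURES WHERE PROCNAME LIKE '%USP_ABC%'
SELECT PROCNAME, CAST(TEXT AS VARCHAR(4000)) FROM SYSCAT.PROCEDURES WHERE PROCNAME LIKE '%USP_ABC%'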
Are you using the latest version of Data Studio with the latest fix? It sounds like you might have an invalid UTF-8 character in your SP, or, since you are using SUBSTR and SUBSTRING, you may be breaking a multi-byte character in two.
You could try setting
-Ddb2.jcc.charsetDecoderEncoder=3
in your eclipse.ini to get Java to use a replacement character rather than replacing the invalid string with null.
See this tech note
https://www-01.ibm.com/support/docview.wss?uid=swg21684365
Otherwise, do raise this with IBM Support.

How to handle NaNs in pandas dataframe integer column to postgresql database

I have a pandas dataframe with a "year" column. However, some rows have an np.NaN value due to an outer merge. The data type of the column in pandas is therefore converted to float64 instead of integer (integer cannot store NaNs?). Next, I want to store the dataframe in a PostgreSQL database. For this I use:
df.to_sql()
Everything works fine, but my PostgreSQL column is now of type "double precision" and the np.NaN values are now [null]. This all makes sense, since the input column type was float64 and not an integer type.
I was wondering if there is a way to store the results in an integer type column with [nans].
Example Notebook
Result of Ami's answer:
(integer cannot store NaNs?)
No, they cannot. If you look at the PostgreSQL numeric types documentation, you can see that the number of bytes and the ranges are completely specified, and an integer column cannot store a NaN.
A common solution in this case is to decide, by convention, that some number logically represents a NaN. In your case, since it is a year, you might choose a negative value (or just -1) for that. Before writing, you could use
df.year = df.year.fillna(-1).astype(int)
Alternatively, you can define another column as year_is_none.
Alternatively, you can store them as floats.
These solutions range from most efficient, to least efficient in terms of memory.
You could use this:
df.year = df.year.fillna(-1)  # or fillna(0)

How do I format a number of arbitrary length?

If I have data that includes a numeric column with values into the billions (e.g. 63254830038), and I want to format the number as a US dollar amount (e.g. $63,254,830,038), I know I can use:
SELECT numeric_column, to_char(numeric_column, '$999G999G999G999') from table
to format the values, but to do so reliably I either have to include an unnecessarily long text string ('$999G999G999G999') or know the maximum number of possible digits. Is there a way to say, broadly, "group numbers with a comma" instead of explicitly saying "group the hundreds, group the thousands, Oh! and please group the millions"?
You just need to cast the integer to the money type.
E.g.:
tests=> select cast(63254830038 as money);
Or alternative syntax:
tests=> select 63254830038::money;
And the output (I'm from Poland, so the money type picks up my locale and sets the correct currency symbol):
money
----------------------
63.254.830.038,00 zł
Monetary Types documentation.
You can try something like this (works in SQL Server, not sure about PostgreSQL):
select convert(varchar,cast('63254830038' as money),1)
You could do things the hard way using regular expressions: convert the number into a string, reverse it, use regexp_replace to insert a comma after each group of 3 digits, and then reverse it again:
select '$' || reverse(regexp_replace(
reverse(numeric_column::varchar),
E'(\\d\\d\\d)(?=\\d)', '\1,', 'g'))
Explanation
The first argument to regexp_replace is the expression to match, which contains two parts:
(\\d\\d\\d) means 3 digits, which are captured
(?=\\d) is a positive lookahead constraint of a single digit, meaning the match only counts if there is a digit following it. (That is, this digit is checked to exist, but it does not count as part of the match.)
The second argument is what to replace with: the 3 captured digits, plus a comma.
The third argument 'g' is a flag indicating that it should match and replace as many times as possible.
For more information on regular expressions in PostgreSQL, see the documentation.
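For instance, applied directly to the example value from the question (a quick sketch with the number inlined as a literal):
select '$' || reverse(regexp_replace(reverse(63254830038::varchar), E'(\\d\\d\\d)(?=\\d)', '\1,', 'g'))
-- $63,254,830,038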

UNION with different data types in db2 server

I have built a query which contains UNION ALL, but the two parts of it
do not have the same data type. I mean, I have to display one column, but the
two columns I take the data from have different formats.
So, to give an example:
select a,b
from c
union all
select d,b
from e
a and d are numbers, but they have different formats: a's length is 15
and d's length is 13. There are no digits after the decimal point.
Using digits, varchar, integer and decimal didn't work.
I always get the message: Data conversion or data mapping error.
How can I convert these fields to the same format?
I have no DB2 experience, but can't you just cast 'a' and 'd' to the same type, one that is large enough to handle both formats, obviously?
I used the cast function to convert the columns to the same type (varchar with a large length), so I could use the union without problems. When I needed their original type back again, I used the same cast function (this time converting the values to float), and I got the result I wanted.
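For example, the casts could look something like this (a sketch based on the example query above; the varchar length is an assumption, pick one large enough for both formats):
select cast(a as varchar(20)), b
from c
union all
select cast(d as varchar(20)), b
from e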