TSql SqlParameters - Illegal Value or Collation issue?

The following SQL runs and returns a row:
SELECT * FROM MyTable WHERE MyCol='01««01'
However, if the same SQL is parameterised and executed:
EXEC sp_executesql N'SELECT * FROM MyTable WHERE MyCol=@p1', N'@p1 nvarchar(6)', @p1=N'01««01'
It returns 0 rows.
I have tried explicitly adding various collations (Latin1_General_CI_AI, SQL_Latin1_General_CP1_CI_AI) to both the column and the parameter placeholder in the filter, e.g.
WHERE MyCol COLLATE collation_name = @p1 COLLATE collation_name
but I can't seem to get any rows returned.
A further complication, which I'm not sure is relevant, is that the character « is ASCII 174, but if I run
SELECT convert(varbinary(max),'01««01')
it returns 0x3031ABAB3031, i.e. 171,
so I've also tried playing around with the value, replacing ASCII 174 with ASCII 171, even though it is a different character.
So is my problem a collation issue, or do SQL parameters not accept values which contain certain ASCII values... or something else?
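One cause worth ruling out (an assumption, not something confirmed above) is a varchar/nvarchar mismatch: if MyCol is varchar, the N'...' parameter forces the comparison to be evaluated as nvarchar, and the single-byte 0xAB stored in the column may not map back to the « typed in the literal. A minimal sketch that types the parameter to match the column instead:
-- Sketch: assumes MyCol is varchar(6); parameter declared with the column's own type
EXEC sp_executesql
    N'SELECT * FROM MyTable WHERE MyCol = @p1',
    N'@p1 varchar(6)',
    @p1 = '01««01';
If this returns the row, the problem is the implicit Unicode conversion, not an illegal parameter value.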

Related

VARCHAR comparison on an indexed column

Postgres is behaving differently from the expected, 'common sense' behavior:
Given a table 'my_table' and a VARCHAR(250) column named 'MyVarcharColumn', with an index IDX_MyvarcharColumn created on that column.
Collation: Default
Postgres version: 11.12
LC_COLLATE: en_US.utf8
Encoding: UTF8
CTYPE: en_US.utf8
The problem is presented below:
Given a query (A)
SELECT * FROM my_table t
WHERE 'mystring' = t.MyVarcharColumn
When running the query above, no records are returned even though there is a value 'mystring' present in 'my_table'.
Workaround:
SELECT * FROM my_table t
WHERE 'mystring' = t.MyVarcharColumn collate "C"
By adding collate "C" the query works fine; obviously, no one wants to have to add a COLLATE clause at the end of every query.
Second 'Workaround':
By recreating the database's indexes (REINDEX DATABASE myDB) the query also starts to work as expected, without the need for the COLLATE clause.
The question is: is there a way to avoid using the collate statement and/or the REINDEX to make this work without a workaround?
Re-creating the database with a different collation is also not an option at the moment.
Using lower(column_name) to compare isn't an option since it does not use indexes and it would make the query slow.
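Given that REINDEX makes the comparison work again, a plausible root cause (an inference, not stated in the question) is an index corrupted by an operating-system collation change, e.g. a glibc upgrade that altered the en_US.utf8 sort order underneath existing indexes. A sketch of how one might confirm this with the amcheck extension and rebuild only the affected index:
-- Hypothetical check; index name taken from the question
CREATE EXTENSION IF NOT EXISTS amcheck;
SELECT bt_index_check('idx_myvarcharcolumn');
-- If corruption is reported, rebuilding just this index is cheaper than REINDEX DATABASE:
REINDEX INDEX idx_myvarcharcolumn;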

Postgres - insert NUMERIC[] value '0' in lower UNION select statement

In Postgres I am trying to create a view from 2 tables. When the value 0 is coded as a value for the EAST_LONGITUDE_NMBR column of datatype NUMERIC[24,20] in the lower portion of a UNION select statement, an error message is generated.
The view EXTENTS' column EAST_LONGITUDE_NMBR comes from the table column CELL_EXTENT.EAST_LONGITUDE_NMBR, which has a datatype of NUMERIC[24,20].
The following is the code.
CREATE VIEW EXTENTS
(
    ID,
    EXTENT_TYPE,
    NAME,
    EAST_LONGITUDE_NMBR
)
AS
SELECT
    "CELL_EXTENT"."CELL_ID_NMBR",
    'CELL',
    UPPER ("CELL_EXTENT"."CELL_NAME"),
    "CELL_EXTENT"."EAST_LONGITUDE_NMBR"
FROM "EARTH"."CELL_EXTENT"
UNION
(SELECT
    "AREA_INTEREST"."AREA_ID_NMBR",
    'GEOPOLITICAL',
    UPPER ("AREA_INTEREST"."AREA_NAME"),
    0
FROM "EARTH"."AREA_INTEREST");
The inserted value '0' in the lower UNION select causes the following error in the creation of view EXTENTS.
ERROR: UNION types numeric[] and integer cannot be matched
I have tried the following and received the errors shown:
0               ERROR: UNION types numeric[] and integer cannot be matched
0.0             ERROR: UNION types numeric[] and numeric cannot be matched
0.0::NUMERIC[]  ERROR: cannot cast type numeric to numeric[]
0::NUMERIC[]    ERROR: cannot cast type integer to numeric[]
I have checked numerous websites with discussions about the Postgres datatypes, particularly NUMERIC, NUMERIC[], INTEGER, DECIMAL
Difference between DECIMAL and NUMERIC datatype in PSQL
https://github.com/npgsql/npgsql/issues/655
https://www.cybertec-postgresql.com/en/mapping-oracle-datatypes-to-postgresql/
http://www.postgresqltutorial.com/postgresql-cast/
http://www.postgresqltutorial.com/postgresql-to_number/
I could go on, but you get the picture. There is a lot about datatypes but there are no examples for '0' as an actual value in Postgres code for a column of datatype NUMERIC[] in a UNION statement.
I feel this is a simple fix, a couple of keystrokes here or there to set the value proper, but it eludes me. I am using pgAdmin4.
Can you help?
Thanks,
Margaret
Seems easy: use an array instead of 0.
Depending on what you prefer, you could use
ARRAY[]::numeric[] -- empty array
or
ARRAY[0]::numeric[] -- array with a single 0
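Applied to the view in the question, the lower branch of the UNION would then look like this (a sketch using the single-element variant):
SELECT
    "AREA_INTEREST"."AREA_ID_NMBR",
    'GEOPOLITICAL',
    UPPER ("AREA_INTEREST"."AREA_NAME"),
    ARRAY[0]::numeric[]  -- matches the numeric[] type of the first branch
FROM "EARTH"."AREA_INTEREST";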

SQL Command to insert Chinese Letters

I have a database with one column of the type nvarchar. If I write
INSERT INTO table VALUES ("玄真")
It shows ¿¿ in the table. What should I do?
I'm using SQL Developer.
Use single quotes, rather than double quotes, to create a text literal; for an NVARCHAR2/NCHAR text literal you also need to prefix it with N.
SQL Fiddle
Oracle 11g R2 Schema Setup:
CREATE TABLE table_name ( value NVARCHAR2(20) );
INSERT INTO table_name VALUES (N'玄真');
Query 1:
SELECT * FROM table_name
Results:
| VALUE |
|-------|
| 玄真 |
First, using NVARCHAR might not even be necessary.
The 'N' character data types are for storing data that doesn't 'fit' in the database's defined character set. There's an auxiliary character set, defined as the NCHAR character set. It's kind of a band-aid: once you create a database, it can be difficult to change its character set. The moral of this story: take great care in defining the character set when creating your database, and do not just accept the defaults.
Here's a scenario (LiveSQL) where we're storing a Chinese string in both NVARCHAR and VARCHAR2.
CREATE TABLE SO_CHINESE ( value1 NVARCHAR2(20), value2 varchar2(20 char));
INSERT INTO SO_CHINESE VALUES (N'玄真', '我很高興谷歌翻譯。' )
select * from SO_CHINESE;
Note that both character sets are in the Unicode family. Note also that I told my VARCHAR2 column to hold 20 characters (20 char). That's because some characters may require up to 4 bytes of storage; a plain definition of (20), i.e. 20 bytes, would only give you room to store 5 of those characters.
Let's look at the same scenario using SQL Developer and my local database, and confirm the character sets:
SQL> clear screen
SQL> set echo on
SQL> set sqlformat ansiconsole
SQL> select *
2 from database_properties
3 where PROPERTY_NAME in
4 ('NLS_CHARACTERSET',
5 'NLS_NCHAR_CHARACTERSET');
PROPERTY_NAME             PROPERTY_VALUE   DESCRIPTION
NLS_NCHAR_CHARACTERSET    AL16UTF16        NCHAR Character set
NLS_CHARACTERSET          AL32UTF8         Character set
First of all, you should establish the Chinese character encoding on your database, for example:
UTF-8, Chinese_Hong_Kong_Stroke_90_BIN, Chinese_PRC_90_BIN, Chinese_Simplified_Pinyin_100_BIN ...
I'll show you an example with SQL Server 2008 (Management Studio), which incorporates all of these collations; however, you can find the same character encodings in other databases (MySQL, SQLite, MongoDB, MariaDB...).
Create the database with Chinese_PRC_90_BIN, but you can choose another collation:
Select a Page (left header) > Options > Collation > choose the collation
Create a table with the same collation:
Execute the insert statement:
INSERT INTO ChineseTable VALUES ('玄真');
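If the column is declared as nvarchar instead of relying on a Chinese collation, remember to prefix the literal with N, just as in the Oracle answer above; otherwise SQL Server first converts the literal through the database's default code page and the characters are lost:
INSERT INTO ChineseTable VALUES (N'玄真');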

How to define VARCHAR columns to support special characters?

In my table product I have a column product_name with type VARCHAR and a size of 100: product_name varchar(100).
When I try to insert a name with special characters like this one:
°%âä°%âä°%âä°%âä°%âä°%âä°%âä°%âä°%âä°%âä°%âä°%âä°%°°%âä°%âä°%âä°%âä°%âä°%âä°%âä°%âä°%âä°%âä°%âä°%âä°
I get this error:
ERROR : org.hibernate.util.JDBCExceptionReporter:78 : logExceptions() : Error for batch element #1: DB2 SQL Error: SQLCODE=-302, SQLSTATE=22001, SQLERRMC=null, DRIVER=3.57.82
My product_name column can hold 100 characters; for me, 'â' is 1 character.
Is there another type in DB2 (other than varchar) that I could set for the product_name column,
so that I can execute this query:
alter table product alter column product_name set data type otherType(100);
Check out the STRING_UNITS database configuration parameter. You can use it to switch from the default byte length semantics to character length semantics. By default, char(100) is interpreted by DB2 as 100 bytes; as a character in a Unicode database can span 1-4 bytes, 100 bytes are not enough to store 100 characters. After switching to STRING_UNITS = CODEUNITS32 you get 100 characters when defining a column varchar(100).
So you do not need another type, just another db cfg setting.
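If changing the database-wide default is too invasive, newer Db2 releases (11.1 and later) also accept string units directly in the column definition; a sketch against the table from the question, assuming a Unicode database:
alter table product alter column product_name set data type varchar(100 CODEUNITS32);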

How does Redshift treat guillemets?

I am trying to run a CSV import using the COPY command for some data that includes a guillemet (»). Redshift complains that the column value is too long for the varchar column I have defined. The error in the "Loads" tab in the Redshift GUI displays this character as two dots (..); had it been treated as one, it would have fit in the varchar column. It's not clear whether there is some sort of conversion error occurring or if there is a display issue.
When trying to do plain INSERTs I run into strange behavior as well:
dev=# create table test (name varchar(3));
CREATE TABLE
dev=# insert into test values ('bla');
INSERT 0 1
3 characters treated as 4?
dev=# insert into test values ('bl»');
ERROR: value too long for type character varying(3)
dev=# insert into test values ('b»');
INSERT 0 1
Why does char_length return 2?
dev=# select char_length(name), name from test;
char_length | name
-------------+------
2 | b»
I've checked the client encoding and database encodings and those all seem to be UTF8/UNICODE.
You need to increase the length of your varchar field. Multibyte characters use more than one byte, and the length in the definition of a varchar field is byte-based, so your special character might be taking more than one byte. If it still doesn't work, refer to the Redshift doc page below:
http://docs.aws.amazon.com/redshift/latest/dg/multi-byte-character-load-errors.html
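That is exactly what the transcript shows (a sketch, with a hypothetical table name): » is U+00BB, which UTF-8 encodes as the two bytes 0xC2 0xBB, so 'bl»' needs 4 bytes while varchar(3) holds only 3. char_length counts characters, octet_length counts bytes:
-- Sketch: '»' occupies two bytes in UTF-8
select char_length('bl»'), octet_length('bl»');  -- returns 3 and 4
create table test2 (name varchar(4));            -- 4 bytes: the minimum that fits 'bl»'
insert into test2 values ('bl»');                -- succeeds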