I don't have much experience with MS SQL Server 2008 R2, but here is the issue; I would appreciate your help:
I have a table with a column (type: nvarchar) that stores text. The text is read from a text file and written to the database by a VB.NET application.
The text file contains Turkish characters such as ü (the u with two dots on top); in the future it will be in different languages.
When I open the table, the text in the column is not readable: the Turkish special characters have been converted to unreadable ones.
Is there any way to make the text readable in the table?
Thank you so much.
SQL Server doesn't change any character stored in tables; I think the problem is that the text is being displayed in a different character set. Try using the UTF-8 character set.
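One way to tell a storage problem from a display problem (a sketch; the table and column names here are hypothetical): compare the stored bytes with the rendered text, and prefix Unicode literals with N when testing.

-- nvarchar stores UTF-16, so correctly written data keeps the ü intact.
CREATE TABLE dbo.ImportedText (Body nvarchar(200));

INSERT INTO dbo.ImportedText VALUES (N'Gül');  -- N prefix: the literal stays Unicode
INSERT INTO dbo.ImportedText VALUES ('Gül');   -- no N prefix: the literal passes through
                                               -- the database code page and may be mangled

-- If RawBytes differs between the two rows, the data was damaged on the way in
-- (for example the file was read with the wrong encoding), not by the display.
SELECT Body, CAST(Body AS varbinary(40)) AS RawBytes FROM dbo.ImportedText;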
In an ADF Copy activity, I am reading data from Databricks Delta tables whose columns may contain non-English characters. It reads the data perfectly, as I can see in the data preview. Next, I sink the data into a CSV file.
When I open the CSV file, non-English characters show as either unreadable characters or question marks, depending on which encoding I use.
With UTF-8 (the default), non-English characters become unreadable.
With ISO 8859-15, they become question marks.
[Samples of the non-English characters, as rendered with UTF-8 (default) and with ISO 8859-15 encoding, were shown in screenshots that are not included here.]
Any suggestions, please?
When you create the sink table, you need to define a UTF-8 collation, as below:
create table xxxx
(
    col1 varchar(500) collate Latin1_General_100_CI_AI_SC_UTF8  -- col1 is a placeholder column name
)
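Note that the _UTF8 collations are only available from SQL Server 2019 / Azure SQL Database onward; with one in place, a plain varchar column stores UTF-8 directly, so the non-English characters survive without switching the column to nvarchar.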
Please help me display Unicode characters (Chinese characters) in the Report Viewer.
I have one column to display; if I run the query in the database, Chinese characters exist in the table for that column.
But when I display it in the report design, it comes out like (????????????TES/??/060/CN): question marks and square boxes instead of Chinese characters.
Please help me resolve this issue.
I am showing only one column, of type NVARCHAR; how do I enable Unicode characters (both English and Chinese) in Crystal Reports?
[Screenshot omitted: the displayed data comes out as shown above instead of the Unicode characters.]
Switch to Crystal Reports XI R2 or later; only those versions provide full Unicode support.
I have a table in PostgreSQL with some text that can contain emoji, and I want to find which emoji is used most. How can I do that without separately counting, for each emoji, the number of texts containing it?
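One single-pass approach (a sketch; the table messages and column body are hypothetical, and the character ranges only approximate the emoji blocks): extract every emoji occurrence with regexp_matches and its 'g' flag, so each match becomes its own row and a single GROUP BY ranks them all.

-- LATERAL turns the set of matches into rows; m.emoji[1] is the matched character.
SELECT m.emoji[1] AS emoji, count(*) AS uses
FROM messages,
     LATERAL regexp_matches(body, '[\U0001F300-\U0001FAFF\u2600-\u27BF]', 'g') AS m(emoji)
GROUP BY m.emoji[1]
ORDER BY uses DESC
LIMIT 1;

Note that emoji built from several code points (skin tones, ZWJ sequences) will be counted per component with this pattern.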
I am using the query below to get the SP definition, but the TEXT column comes back as a NULL value in IBM Data Studio, even though I am able to CALL the SP.
SELECT PROCNAME, TEXT FROM SYSCAT.PROCEDURES WHERE PROCNAME LIKE '%USP_ABC%'
Please help.
You have confirmed that the syscat.procedures.language is SQL, and that your query-tool is able to display a substr() of the text.
The workaround depends on the LENGTH(TEXT) of the row of interest:
SELECT PROCNAME, substr(TEXT,1, 1024) FROM SYSCAT.PROCEDURES WHERE PROCNAME LIKE '%USP_ABC%'
You may need to adjust the length of the SUBSTR extract depending on the length of the text and your configuration: for example, SUBSTR(TEXT, 1, 2048), or whatever higher value your query-tool can cope with.
You can find the length of the TEXT column with LENGTH(TEXT) for the row of interest.
You can also CAST the CLOB to CHAR or VARCHAR with a length that fits within their limits and whatever query-tool limitations you have.
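For example (a sketch; the 8000 length is arbitrary, chosen to stay under DB2's VARCHAR limit):

SELECT PROCNAME,
       LENGTH(TEXT) AS TEXT_LEN,               -- how long the definition actually is
       CAST(TEXT AS VARCHAR(8000)) AS TEXT_VC  -- CLOB cast down so the tool can render it
FROM SYSCAT.PROCEDURES
WHERE PROCNAME LIKE '%USP_ABC%'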
Another option is to use a different query tool that can work with CLOB.
Are you using the latest version of Data Studio with the latest fix? It sounds like you might have an invalid UTF-8 character in your SP, or, since you are using SUBSTR and SUBSTRING, you may be breaking a multi-byte character in two.
You could try setting
-Ddb2.jcc.charsetDecoderEncoder=3
in your eclipse.ini to get Java to use a replacement character rather than replacing the invalid string with null.
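For instance, system properties in eclipse.ini belong after the -vmargs line (a sketch; keep whatever entries your install already has):

-vmargs
-Ddb2.jcc.charsetDecoderEncoder=3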
See this tech note
https://www-01.ibm.com/support/docview.wss?uid=swg21684365
Otherwise, do raise this with IBM Support.
I have a string table with 1,000,000+ rows that has some garbage inside due to encoding errors.
The garbage is minimal, but needs to be found.
The column in question is an NVARCHAR column that normally contains text in one of 11 languages.
All of the text should be Unicode (UTF-8 when we process it application-side).
The corrupt values contain ? characters and/or a very limited, unusual glyph set; by eye they can very easily be seen not to be valid language. It is likely that these values have been encoded back and forth into total garbage.
So in the name of speed, is there anything I can do on SQL Server to detect bad encoding / string garbage?
Thanks.
EDIT to add garbage example:
This was Russian или Ð˜Ð¼Ñ Ð£Ñ‡Ð°Ñтника
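One cheap first pass (a sketch; the table dbo.Strings and columns Id/Txt are hypothetical, and the glyph list is only a heuristic built from mojibake like the sample above): UTF-8 text mis-decoded as an 8-bit code page is dominated by a few lead characters such as Ã, Ð, Ñ and Â, so a PATINDEX over that small set, plus runs of literal question marks, flags candidate rows for review.

-- The binary collation stops accent-insensitive collations from matching plain A/D/N.
SELECT Id, Txt
FROM dbo.Strings
WHERE PATINDEX(N'%[ÃÐÑÂ]%', Txt COLLATE Latin1_General_BIN2) > 0  -- typical mojibake lead characters
   OR Txt LIKE N'%??%';                                           -- runs of ? from lossy conversions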