I have a query that I can run on a DB2 table using my Python SQL tester, and it returns the string values I'm looking for.
However, when I run it directly on my database it returns a hex value. Any help in getting the results as a character string would be greatly appreciated!
Here is the field definition:
ORCTL CCDATA 243 A 14 256 Order Control File Data
My query on the iSeries is:
select ccdata from ORCTL where ccctlk = 'BUYRAK'
Using your query, you can cast the column to a string with another CCSID, e.g.:
select cast(ccdata as char(14) CCSID 37) from ORCTL where ccctlk = 'BUYRAK'
My guess is that it isn't returning hex; rather, it's returning EBCDIC.
Your column is probably tagged with CCSID 65535, which tells the system not to translate it.
The right way to fix this issue is to make sure the column is tagged with the appropriate CCSID, for example 37 for US English.
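For instance, something along these lines should re-tag the column in place on DB2 for i, where the CCSID clause is part of the character type (a sketch; verify the exact ALTER syntax for your release):
ALTER TABLE ORCTL ALTER COLUMN CCDATA SET DATA TYPE CHAR(14) CCSID 37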
The alternative is to look for a "force translate" option in the settings of the driver you're using.
Related
I have a CLOB(2000000) field in a DB2 (v10) database, and I would like to run a simple UPDATE query on it to replace each occurrence of "foo" with "baaz".
Since the contents of the field are more than 32k, I get the following error:
"{some char data from field}" is too long.. SQLCODE=-433, SQLSTATE=22001
How can I replace the values?
UPDATE:
The query was the following (changed UPDATE into SELECT for easier testing):
SELECT REPLACE(my_clob_column, 'foo', 'baaz') FROM my_table WHERE id = 10726
UPDATE 2:
As mustaccio pointed out, REPLACE does not work on CLOB fields (or at least not without casting the data to VARCHAR, which in my case is not possible since the data is larger than 32k). The question is about finding an alternative way to achieve the REPLACE functionality for CLOB fields.
Thanks,
krisy
Finally, since I found no way to do this with an SQL query, I ended up exporting the table, editing its LOB content in Notepad++, and importing the table back again.
Not sure if this applies to your case: there are two different REPLACE functions offered by DB2, SYSIBM.REPLACE and SYSFUN.REPLACE. The version of REPLACE in SYSFUN accepts CLOBs and supports values up to 1 MB. If your values are longer than that, you would need to write your own (SQL-based?) function.
BTW: You can check function resolution by executing "values(current path)"
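Assuming your values fit within the 1 MB limit mentioned above, a sketch that qualifies the function explicitly, so the SYSFUN version is used regardless of the current path (reusing the table and column names from the question):
UPDATE my_table
SET my_clob_column = SYSFUN.REPLACE(my_clob_column, 'foo', 'baaz')
WHERE id = 10726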
I am getting this error running an insert query for a single record:
DB2 SQL Error: SQLCODE=-302, SQLSTATE=22001, SQLERRMC=null,
DRIVER=3.62.56
Exception: org.springframework.dao.DataIntegrityViolationException
I looked this up on IBM's help site, but with no parameter index given, I am stuck. The SQL state also seems to indicate something other than a value being too big.
The format of the query is INSERT INTO [[TABLE_NAME]] VALUES (?,?,?,...) using Spring's JdbcTemplate.update(String sql, Object... params).
This being for work, I cannot post the schema or the query. I am looking for general advice on debugging this issue. I already know that using Arrays.toString(Object[]) does not print out in SQL format.
To find the explanation for SQLCODE -302 in the manual you need to search for SQL0302N (the general rule for DB2 SQLCODE values is this: "SQL" plus four digits, padded if necessary with zeros on the left, plus "N" for "negative" because -302 is a negative number).
If you have the DB2 command line processor installed, you can also use it to look up error codes:
db2 ? sql302
which would produce something like this:
SQL0302N The value of a host variable in the EXECUTE or OPEN statement
is out of range for its corresponding use.
Explanation:
The value of an input host variable was found to be out of range for
its use in the SELECT, VALUES, or prepared statement.
In other words, one of the bind variables in your INSERT is too large for the target column. You'll need to compare the table column definitions with the actual values you're trying to insert.
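If you are on DB2 for Linux/Unix/Windows, the catalog can list the column definitions for comparison; the schema and table names here are hypothetical:
SELECT colname, typename, length, scale
FROM syscat.columns
WHERE tabschema = 'MYSCHEMA' AND tabname = 'MYTABLE'
ORDER BY colno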
In addition to mustaccio's answer, you can also get the info from SQL with SYSPROC.SQLERRM. Example:
values SYSPROC.SQLERRM ('SQL302', '', '', 'en_US', 0)
SQL0302N The value of a host variable in the EXECUTE or OPEN statement
is out of range for its corresponding use.
Explanation:
...
I have tried to group multiple rows into a single row using the WM_CONCAT function on Oracle 10g.
But when running my query, I am getting the following error:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at "WMSYS.WM_CONCAT_IMPL", line 30
06502. 00000 - "PL/SQL: numeric or value error%s"
The output of my query is:
5319| 64764011907| 6893,1109,1120,1297,1327 (I need a comma-separated list of stores for a client ID)
I will not be able to create types due to lack of privileges.
Please let me know if I can do the grouping using some different method in Oracle 10g.
Tim Hall has a pretty canonical list of string aggregation techniques in Oracle. Unfortunately, I don't believe that any of them are going to work for you if the result needs to be able to exceed 4000 bytes and you cannot create any sort of object in the database and you're using 10.2. The sys_connect_by_path approach would be the only one worth testing but that's almost certainly limited to 4000 bytes as well.
If you have access to the various XML functions, you could potentially use the xmlagg function to produce a CLOB.
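A sketch of the XMLAGG approach, which doesn't require creating any database objects; the table stores(client_id, store_id) is hypothetical:
SELECT client_id,
       RTRIM(XMLAGG(XMLELEMENT(e, store_id || ',') ORDER BY store_id)
             .EXTRACT('//text()').getClobVal(), ',') AS store_list
FROM stores
GROUP BY client_id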
I have an SSIS package where I am using an OLE DB source linked to a SQL Server 2005 table. All columns except a date column are NVARCHAR(255). I am using an Excel destination and a SQL statement to create the sheet in the Excel workbook; the SQL is in the Excel connection manager (effectively a CREATE TABLE statement that creates a sheet) and is derived from the mapping of the columns from the DB.
No matter what I do I keep getting this Unicode --> non-Unicode conversion error between my source and destination. I tried a conversion to string [DT_STR] between source and destination, removed it, changed the SQL table from VARCHAR to NVARCHAR, and still get this flippin' error.
Because I am creating the sheet in Excel with a SQL statement, I do not see any way to pre-define what the data types of the columns in the Excel sheet will be. I imagine they get default metadata, but I do not know.
So between my SQL table destination and the creation of my Excel sheet with this SSIS sql statement how can I stop this error coming up?
My error is:
Error at Data Flow Task [OLE DB Source [1]]: Column "MyColumn" cannot convert between unicode and non-unicode string data types.
And for all nvarchar columns.
Appreciate any help
Thanks
Andrew
The steps below worked for me:
1. Right-click the source task.
2. Click "Show Advanced Editor".
3. Go to the "Input and Output Properties" tab.
4. Select the output column for which you are getting the error.
5. Its data type will be string [DT_STR].
6. Change that data type to Unicode string [DT_WSTR].
7. Save and close.
Add Data Conversion transformations to convert string columns from non-Unicode (DT_STR) to Unicode (DT_WSTR) strings.
You need to do this for all the string columns...
The missing piece here is a Data Conversion object. It should sit between the OLE DB Source and the Destination object.
First, add a Data Conversion block to your data flow diagram.
Open the Data Conversion block and tick the column for which the error is shown. Below it, change the data type to Unicode string [DT_WSTR] (or whatever data type is expected) and save.
Go to the destination block, open Mapping, map the newly created element to its corresponding address, and save.
Right-click your project in the Solution Explorer and select Properties. Under Configuration Properties, select Debugging and set the Run64BitRuntime option to False (Excel does not handle 64-bit applications very well).
Instead of adding the earlier suggested Data Conversion, you can cast the nvarchar column to a varchar column. This saves an unnecessary step and performs better than the alternative.
In the SELECT of your SQL statement, replace date with CAST(date AS varchar([size])). For some reason this does not yet change the output data type. To do that, do the following:
1. Right-click your OLE DB Source step and open the advanced editor.
2. Go to Input and Output Properties.
3. Select Output Columns.
4. Select your column.
5. Under Data Type Properties, change DataType to string [DT_STR].
6. Change Length to the length you specified in your CAST statement.
After doing this your source data will be output as a varchar and your error will disappear.
I had the same issue and tried everything written here, but it was still giving me the same error. It turned out to be a NULL value in the column I was trying to convert. Removing the NULL value solved my issue.
Cheers,
Ahmed
No one seems to mention this, but converting varchar to nvarchar in the source query also solves the issue.
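For example (table and column names are hypothetical):
SELECT CAST(MyColumn AS nvarchar(255)) AS MyColumn FROM dbo.MyTable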
In the example above I kept losing the values; I think delaying the validation allows the new data types to be saved as part of the metadata.
In the Properties of the 'Excel Connection Manager', set Delay Validation to True.
Then, in the Properties of the data flow's Excel destination task, set ValidateExternalMetadata to False.
This will now allow you to right-click the Excel destination task and go to Advanced Editor for Excel Destination --> far-right tab, Input and Output Properties. In the External Columns folder section you will now be able to change the Data Types and Length values of the problematic columns, and this can be saved.
Good Luck!
I experienced this condition when I had the 32-bit Oracle 12 client installed, connected to an Oracle 12 server running on Windows.
Although both the Oracle source and the SQL Server destination are NOT Unicode, I kept getting this message, as if the Oracle columns were Unicode.
I solved the problem by inserting a Data Conversion box, selecting type DT_STR (not Unicode) for varchar2 fields and DT_WSTR (Unicode) for numeric fields, and then dropping the 'Copy of' prefix from the output field names.
Note that I kept getting the error because I had connected the source box arrow to the conversion box BEFORE setting the conversion types, so I had to reconnect the source box, which cleared all the errors in the destination box.
When creating the table in SQL Server, make your table columns NVARCHAR instead of VARCHAR.
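A minimal sketch, with hypothetical names:
CREATE TABLE dbo.MyTarget (
    Id INT NOT NULL,
    MyColumn NVARCHAR(255) NULL -- NVARCHAR rather than VARCHAR avoids the conversion error
)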
I think people are missing this. In my case I had 100 character columns to convert between Oracle and MS SQL. All this stuff about Data Conversion and the Advanced Editor is incredibly tedious if you have 100 separate character columns to assign. Plus, SSIS being SSIS, it will sometimes reset all your 100 Advanced Editor changes even if you set ValidateExternalMetadata to false, which is incredibly obnoxious. I wouldn't mind doing the Data Conversion if there were some value to it, but 20 years ago ETL tools moved Oracle character data to MS SQL character data without fussing. What Bakalolo and Zafer say is the answer if you have a lot of character columns and can live with nvarchar: just declare all your output MS SQL columns as nvarchar, and your data task will automatically assign your Oracle fields to the MS SQL fields with no manual overrides. I have also found that the new Oracle Source (2021) doesn't complain about a Unicode conversion to varchar in MS SQL. A colleague just told me that the SSIS wizard (it may be only in VS 2019+) for assigning Oracle character columns to MS SQL varchar will do the assignments automatically with no overrides, but I haven't tried that personally.
2022 update: I think this applies only to packages created in VS 2019 and later. An ADO.NET task reading a varchar MS SQL table and feeding an OLE DB (and, I think, ADO.NET) MS SQL varchar destination will throw the Unicode error. If you switch the input task to an OLE DB task reading the MS SQL varchar table, you won't have to do the Advanced Editor overrides for the varchar fields. If you don't want to do Advanced Editor overrides (who does?), try different tasks and more OLE DB tasks.
I just encountered the same issue and solved it in my SQL query by using CONVERT directly:
CONVERT(NVARCHAR(50), '') AS MyVarName
I needed to put an empty (or fixed-size) string into the Excel file. The conversion forces the type of MyVarName from DT_STR to DT_WSTR (Unicode).
I know this is a very old post, but I ran into the same issue and found that I had to manually select the conversion component's output alias as the mapping in the Excel destination component. Since the names from the OLE DB Source match the Excel column names, it was mapping to the OLE DB columns and not to the output alias, e.g. the SourceID column from the OLE DB component being named Copy of SourceID after conversion. I don't see the original question saying they specifically selected the new alias name, just that they mapped to DB columns. Serge Voloshenko's post comes the closest but also does not mention making sure the mapping happens. To a new SSIS user this might be overlooked.
I'm attempting to construct a LIKE operator in my query on DB2 that is checking if a varchar is just two digits. I've looked online and it seems like DB2 does not support a character range i.e. [0-9]. I've tried LIKE '[0-9][0-9]' and I didn't get an error from DB2, but no rows showed up in my result set from that query when I can see rows that exactly match this through looking at a SELECT * of the same table.
Is there any way I can replicate this in DB2, if this is indeed true? Is my syntax for the LIKE wrong? Thanks in advance.
The TRANSLATE function is more appropriate for validating an expression that contains a limited number of valid values.
WHERE TRANSLATE( yourExpressionOrColumn, '000000000', '123456789') = '00'
Every digit 1 through 9 is translated to '0' (and '0' already is '0'), while any other character is left alone, so the comparison succeeds only when the value is exactly two digits.
Found it. No, you cannot, and there are no symbols that can represent an OR in LIKE.