Issue with a numeric field in an SSIS dtsx package - DataReader

I have an SSIS dtsx package that loads data from a remote MAS database server using a DSN-based connection. We load data from many tables into their replica tables in SQL Server. Everything was working fine until we made some changes to a table in MAS. The dtsx has been failing with the following error:
Error: 0xC02090F8 at Data Flow Task, Import Data, DataReader Source
[28866]: The value was too large to fit in the output column
"UDF_TREAD_DEPTH" (29160).
Actually I believe it is related to a single table field, "UDF_TREAD_DEPTH", which is a decimal field. This field is shown in the DataReader source as "numeric [DT_NUMERIC]" with Length: 0, Precision: 4 and Scale: 2.
In the past we had simple data in the format xx.xx. After the change I now see data like xx.xx and xxx; however, the data type still did not change after I refreshed the DataReader source.
I believe the precision should be updated to 5 for the data we now have, based on this description: precision 4 with scale 2 only holds two integer digits (xx.xx), while a three-digit integer part (xxx.xx) needs precision 5.
I am unable to change the data type, as shown in the attached screenshot (Data Source Output column.png). When I debug this dtsx package, it fails while loading the DataReader Source. If my diagnosis is right, how can I fix it? If there are any other possibilities, kindly let me know.

Have you tried editing the source with the Advanced Editor? (Right-click the source and select "Show Advanced Editor...".) Navigate to the input and output properties section (generally the last tab), go into the output columns (for OLE DB, click the + next to OLE DB Source Output, then the + next to Output Columns, then highlight the column name you want to change) and change the properties of the column in question (look for Data Type Properties and change Precision and Scale as needed). If you are not able to do that, you can try deleting the source and replacing it with a new source to the same data (i.e. recreating the object will re-query the connection for column properties).

I had the data updated to the xxx.xx mask, so 100 became 100.00. This helped the DataReader source in SSIS infer the type correctly.
In addition, I found another easy way of doing this which did not require any cast/convert function:
UDF_TREAD_DEPTH * 1.00 as UDF_TREAD_DEPTH
This also allowed the DataReader to infer the type (i.e. precision & scale) correctly.
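For illustration, a hedged sketch of what the DataReader source query could look like with this workaround; the table name and the extra column are hypothetical placeholders for your MAS schema:
-- Hypothetical table/column names; only the multiplication is the actual trick.
SELECT
    ITEM_CODE,
    UDF_TREAD_DEPTH * 1.00 AS UDF_TREAD_DEPTH  -- widens the inferred precision/scale
FROM IM_ITEM_UDF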

Related

LibreOffice Base import table data type problem

I'm importing a table from LibreOffice Calc into a new embedded LibreOffice Base database. I select the data, copy and paste it, the wizard pops up, and I select "Use first line as column names".
I then select all the fields and move to the third step of the import wizard. I can right-click my ID field & make it the primary key, fine. The problem is that if I set the field data types to anything other than double or varchar the import crashes with error "incorrect type for setstring". I want to use integer and date types - how am I supposed to import them?
If I leave all fields at either double or varchar and try to edit the table later it won't let me change data types. Same problem if I first define the table and then append records.
This would be easy if I was making a new database from scratch, but I have lots of existing records to import. I need to preserve the keys to set up relationships with other tables.
I've tried both HSQLDB and firebird embedded.
This bug stops me from ditching Microsoft Access in favour of LibreOffice Base. Can anyone suggest a work-around?
Edit
Thank you Jim K for your response; this solves half the problem.
I have found two problematic columns - a date field and a boolean field. Although Calc does understand that my date field is a date, it crashes the import to Base as described. I then told Calc to display the date as YYYY-MM-DD and the import to Base worked perfectly.
The next problem is the boolean (YES/NO) field. A blank cell in Calc imports OK as boolean false. Anything else I tried - YES, NO, TRUE, FALSE, 1, 0 - all crashed the import to Base with the error message "incorrect type for setstring".
Moving boolean data from Base back into Calc shows values as TRUE or FALSE, so it looks like that is what the Base import is expecting. This works correctly for the HSQLDB engine but not for Firebird Embedded.
The bug has already been reported, so all you need to do is wait for it to be fixed.
In the meantime, it's possible to write a Calc macro to read the values from the spreadsheet and run a SQL UPDATE statement to get the correct values into Base. My answer here has some code to get started.
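For illustration only (this is not the code from the linked answer), a hedged sketch of the kind of statement such a macro could run; the table and column names are hypothetical, and a helper text column is assumed to hold the imported YES/NO strings:
-- Hypothetical names: "MyTable" has a BOOLEAN column "IsActive" and a helper
-- text column "IsActiveText" containing the values imported from Calc.
UPDATE "MyTable" SET "IsActive" = TRUE  WHERE "IsActiveText" = 'YES';
UPDATE "MyTable" SET "IsActive" = FALSE WHERE "IsActiveText" = 'NO';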
However, there is an easier way. Create a temporary Base file that uses HSQLDB and import the data into it from Calc. Then, close Calc and open both the Firebird Embedded and the HSQLDB Base files. Drag the table from the HSQLDB Base window into the other window, which imports seamlessly.

How to populate a table via Pentaho Data Integration's table_output step?

I am performing an ETL job via Pentaho 7.1.
The job is to populate a table 'PRO_T_TICKETS' in PostgreSQL 9.2 via Pentaho jobs and transformations.
I have mapped the table fields to the stream fields (screenshot: Mapped Fields).
My table PRO_T_TICKETS has its schema (column names) in UPPERCASE.
Is this the reason I can't populate the table PRO_T_TICKETS with my ETL Job?
I duplicated the step TABLE_OUTPUT to PRO_T_TICKETS and changed the Target table field to 'PRO_T_TICKETS2'. Pentaho created a new table with lowercase schema and populated the data in it.
But I want this data to be uploaded in the table PRO_T_TICKETS only and with the UPPERCASE schema if possible.
I am attaching the whole job here along with the error thrown by Pentaho (screenshot: Pentaho Error). I have also tried my query with double quotes added to the column names, as you can see in the error, but it didn't help.
What do you think I should do?
When you create (or modify) the connection, select Advanced on the left panel and tick Force to upper case or Force to lower case or, even better, Preserve case of reserved words.
To know which option to choose, copy the 4th line of your error log, the line starting with INSERT INTO "public"."PRO_T_TICKETS"("OID"..., run it in your SQL development tool, and change the connection's advanced parameters until it works.
Also, at debug time, don't use batch updates, don't use lazy conversion on previous steps, and try with one (1) field rather than all (25).
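For background, the reason case matters here is PostgreSQL's identifier folding: unquoted names are folded to lowercase, while quoted names keep their exact case. A quick illustration you can run in any SQL tool (statements are illustrative only):
-- PostgreSQL folds unquoted identifiers to lowercase:
SELECT * FROM public.pro_t_tickets;        -- resolves to pro_t_tickets
-- Quoted identifiers keep their exact case:
SELECT * FROM "public"."PRO_T_TICKETS";    -- only matches an UPPERCASE table name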
Just as a complement: it worked for me following the tips from AlainD and using specific configurations that I'd like to share with you. I have a transformation streaming data from MySQL to PostgreSQL using a Table Input and a Table Output. In both DBs I have uppercase objects.
I did the following steps to work in the right way:
In the table input (MySQL) the objects are uppercase too, but I typed them in lowercase and it worked; I didn't set any special option in the DB connection.
In the table output (PostgreSQL) I typed everything in uppercase (schema, table name and columns) and I also set "specify the database fields" (clicking on "Get fields").
In the target DB Connection (PostgreSQL) I put the options (in "Advanced" section): "Quote all in database" and "Preserve case of reserved words".
PS: the last option is there because I found one more problem with my fields: there was a column called "Admin" (yes guys, they created a camel-case column using a reserved word!), and for that reason I had to set "Preserve case of reserved words" and type it as "Admin" (without quotes and in camel case) in the Table Output.

SSIS Convert Between Unicode and Non-Unicode Error

I have an SSIS package where I am using an OLE DB source linked to a SQL Server 2005 table. All columns except a date column are NVARCHAR(255). I am using an Excel destination and a SQL statement to create the sheet in the Excel workbook; the SQL is in the Excel connection manager (effectively a CREATE TABLE statement that creates a sheet) and is derived from the mapping of the columns from the DB.
No matter what I have done I keep getting this Unicode to non-Unicode conversion error between my source and destination. I tried a conversion to string [DT_STR] between source and destination, removed it, changed the SQL table from VARCHAR to NVARCHAR, and still get this error.
Because I am creating the sheet in Excel with a SQL statement, I do not see any way to pre-define what the data types of the columns in the Excel sheet will be. I imagine it uses default metadata, but I do not know.
So, between my SQL table and the creation of my Excel sheet with this SSIS SQL statement, how can I stop this error from coming up?
My error is:
Error at Data Flow Task [OLE DB Source [1]]: Column "MyColumn" cannot convert between unicode and non-unicode string data types.
And for all nvarchar columns.
Appreciate any help
Thanks
Andrew
The steps below worked for me:
Right-click on the source task.
Click on "Show Advanced Editor".
Go to the "Input and Output Properties" tab.
Select the output column for which you are getting the error.
Its data type will be "string [DT_STR]".
Change that data type to "Unicode string [DT_WSTR]".
Save and close.
Add Data Conversion transformations to convert string columns from non-Unicode (DT_STR) to Unicode (DT_WSTR) strings.
You need to do this for all the string columns...
The missing piece here is the Data Conversion object. It should go between the OLE DB Source and the Destination object.
First, add a data conversion block into your data flow diagram.
Open the data conversion block and tick the column for which the error is showing. Below it, change the data type to Unicode string (DT_WSTR), or whatever data type is expected, and save.
Go to the destination block. Go to mapping in it and map the newly created element to its corresponding address and save.
Right-click your project in Solution Explorer and select Properties. Select Configuration Properties, then Debugging. There, set the Run64BitRuntime option to False (as the Excel driver does not handle 64-bit very well).
Instead of adding the earlier suggested Data Conversion, you can cast the NVARCHAR column to a VARCHAR column. This spares you an unnecessary step and performs better than the alternative.
In the SELECT of your SQL statement, replace date with CAST(date AS varchar([size])). For some reason this does not yet change the output data type. To change it, do the following (a hedged example query follows these steps):
Right click your OLE DB Source step and open the advanced editor.
Go to Input and Output Properties
Select Output Columns
Select your column
Under Data Type Properties change DataType to string [DT_STR]
Change Length to the length you specified in your CAST statement
After doing this your source data will be output as a varchar and your error will disappear.
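As a hedged sketch of what that source query could look like (table and column names are placeholders; choose the varchar length to match your CAST):
-- Placeholder names; cast each NVARCHAR column you need as non-Unicode output.
SELECT
    CAST(MyColumn AS varchar(255)) AS MyColumn,
    SomeDateColumn
FROM dbo.MySourceTable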
I had been having the same issue and tried everything written here, but it was still giving me the same error.
It turned out to be a NULL value in the column I was trying to convert.
Removing the NULL value solved my issue.
Cheers,
Ahmed
No one seems to mention this, but converting VARCHAR to NVARCHAR in the source query also solves the issue.
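A hedged example of what that conversion could look like in the source query (names are placeholders):
-- Placeholder names; the CAST makes SSIS treat the column as Unicode (DT_WSTR).
SELECT
    CAST(MyColumn AS nvarchar(255)) AS MyColumn
FROM dbo.MySourceTable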
With the Advanced Editor changes described above I kept losing the values; I think delaying the validation allows the new data types to be saved as part of the metadata.
On the connection manager for 'Excel Connection Manager', set Delay Validation to False from the Properties.
Then, on the data flow's Excel destination task, set ValidateExternalMetadata to False, again from the Properties.
This will now allow you to right-click the Excel destination task and go to Advanced Editor for Excel Destination --> far-right tab, Input and Output Properties. In the External Columns folder you will now be able to change the Data Type and Length values of the problematic columns, and this can be saved.
Good Luck!
I experienced this condition when I had the 32-bit Oracle 12 client installed, connected to an Oracle 12 server running on Windows.
Although both the Oracle source and the SQL Server destination are NOT Unicode, I kept getting this message, as if the Oracle columns were Unicode.
I solved the problem by inserting a Data Conversion box and selecting type DT_STR (non-Unicode) for VARCHAR2 fields and DT_WSTR (Unicode) for numeric fields, then dropping the 'Copy of' prefix from the output field names.
Note that I kept getting the error because I had connected the source box arrow to the conversion box BEFORE setting the conversion types, so I had to redo the source connection, and this cleared all the errors in the destination box.
When creating the table in SQL Server, make your table columns NVARCHAR instead of VARCHAR.
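For instance, a minimal sketch of such a table definition (the table name, column, and length are placeholders):
-- Placeholder definition; NVARCHAR columns already match the Unicode source data.
CREATE TABLE dbo.MyDestinationTable (
    MyColumn NVARCHAR(255) NULL
);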
I think people are missing this. In my case I had 100 character columns to convert between Oracle and MS SQL. All the Data Conversion and Advanced Editor work is incredibly tedious if you have 100 separate character columns to assign. Plus, SSIS being SSIS, it will sometimes reset all 100 of your Advanced Editor changes even if you set ValidateExternalMetadata to false, which is incredibly obnoxious. I wouldn't mind doing the Data Conversion if there were some value to it, but 20 years ago ETL tools took Oracle character columns to MS SQL character columns without fussing. What Bakalolo and Zafer say is the answer if you have a lot of character columns and you can live with NVARCHAR: just declare all your output MS SQL columns as NVARCHAR, and your data flow task will automatically assign your Oracle fields to the MS SQL fields with no manual overrides. I have also found that the new Oracle Source (2021) doesn't complain about a Unicode conversion to VARCHAR in MS SQL. A colleague just told me that the SSIS wizard (it may be only in VS 2019+) for assigning Oracle character columns to MS SQL VARCHAR will do the assignments automatically with no overrides, but I haven't tried that personally.
2022 update - I think this applies just to packages created in VS 2019 and later. An ADO.NET task reading a VARCHAR MS SQL table going to an OLE DB (and, I think, ADO.NET) MS SQL VARCHAR destination will throw the Unicode error. If you switch the input task to OLE DB reading the MS SQL VARCHAR table, you won't have to do the Advanced Editor overrides for the VARCHAR fields. If you don't want to do Advanced Editor overrides (who does?), try different tasks and more OLE DB tasks.
I just encountered the same issue and solved it in my SQL query by using CONVERT directly:
CONVERT(NVARCHAR(50),'') AS MyVarName
I needed to put an empty (or fixed-size) string into the Excel file. The conversion forces the type of MyVarName from DT_STR to DT_WSTR (Unicode).
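In context, a hedged sketch of the full query (the other column and table name are placeholders):
-- Placeholder table/column; the CONVERT fixes the Excel column as Unicode text.
SELECT
    CONVERT(NVARCHAR(50), '') AS MyVarName,
    OtherColumn
FROM dbo.MySourceTable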
I know this is a very old post, but I ran into the same issue and found that I had to manually select the conversion component's output alias as the mapping in the Excel destination component. Since the names from the OLE DB Source match the Excel column names, it was mapping to the OLE DB columns and not to the output alias, such as the SourceID column from the OLE DB component being named "Copy of SourceID" after conversion. I don't see the original question saying they specifically selected the new alias name, only that they mapped to the DB columns. Serge Voloshenko's post comes the closest but also does not mention making sure the mapping happens. To a new SSIS user this might be overlooked.

How to convert number to words (iReport)

I want to convert, for example, 1000 to one thousand (currency). How can I do it in Jasper?
See http://www.rgagnon.com/javadetails/java-0426.html
Create a class based on the given implementation.
Compile the class and put it in a directory where iReport can read the file.
Update the CLASSPATH in iReport to point to the directory containing the class (be aware of directory relationships to package namespaces).
Restart iReport.
Change the text field expression to: EnglishNumberToWords.convert( $F{field_name} )
You will have to change field_name and the data type of the convert method according to your implementation details.
An alternative to Dave's response:
1) If your RDBMS supports it (like HSQLDB, for example) you can create a user-defined, user-invoked function that takes the data model representation for a field and converts it to a presentation-layer representation. For example, a database stores timestamps internally as Modified Julian Day numbers (doubles). A Java function can be written and stored with the database (SQL/JRT) to convert from a UTC double to a localized time/date string.
2) Write an SQL Query to produce a table containing the data you want in the report. The difference is that you use your user-invoked SQL/JRT function on the source column to convert it to the presentation-layer representation in the Result Table.
3) Use your SQL Query (once you have it working) as the basis for a CREATE VIEW (DDL) statement (a hedged sketch follows this list).
4) Build your report using the newly defined View as the iReport datasource.
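As an illustration of steps 2) and 3), a hedged sketch (the TO_WORDS function, table, and column names are hypothetical, and the exact syntax for registering a SQL/JRT function depends on your RDBMS):
-- Hypothetical: TO_WORDS is a user-defined SQL/JRT function backed by the Java
-- number-to-words implementation; table and column names are placeholders.
CREATE VIEW invoice_report_v AS
SELECT
    invoice_id,
    amount,
    TO_WORDS(amount) AS amount_in_words
FROM invoices;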
Advantages:
No customization of iReport is needed. The view you create can serve as the basis for any reporting tool, not only iReport.
Disadvantages:
This creates a dependency between your database and a JRE, and (most likely) on your RDBMS. In order to use your user-invoked function, you'll need to store it in the database, and the database will need access to a JRE in order to create the view. There is a SQL/JRT standard, so it is possible that your migration target RDBMS would support it, but this is certainly not guaranteed.

OpenRowSet command in TSQL is returning NULLS

Been investigating for a while now and I keep hitting a brick wall. I am importing from .xls files into temp tables via the OPENROWSET command. Now I have a problem importing a certain column that has a range of values, but the most common are the following: columns structured as long numbers, i.e. 15598, and some columns as strings, i.e. 15598-E.
Now OPENROWSET is reading the string version no problem but is returning the number version as NULL. I read (http://www.sqldts.com/254.aspx) that OPENROWSET has this issue and the author speaks of adding "HDR=YES;IMEX=1" to the query string, but that's not working for me at all.
Have any of you guys ever encountered this?
Just some more info as well: I may not use the JET engine (Microsoft.Jet.OLEDB.4.0), so this is what my query looks like:
SELECT *
FROM
OPENROWSET('MSDASQL'
, 'Driver=Microsoft Excel Driver (*.xls);HDR=YES;IMEX=1;DBQ=C:\ImportFile.xls;'
, 'SELECT * FROM [Sheet1$]')
I notice you are using the Excel ODBC driver. Have you tried the JET OLEDB Provider with the equivalent connection string?
select * from openrowset(
'Microsoft.Jet.OLEDB.4.0',
'Data Source=C:\ImportFile.xls;Extended Properties="Excel 8.0;HDR=Yes;IMEX=1"',
'SELECT * FROM [Sheet1$]')
EDIT: Sorry, just noticed your last paragraph. Surely the Excel ODBC driver still goes via the JET engine, so what difference would it make?
EDIT: I have looked at the KB194124 link, and the registry values it recommends are the default values on my machine, which I have never changed. I have used the above method several times myself without problems. Maybe it's an environmental issue?
If you don't mind opening the file in Excel, take the columns that have the problem, select the column, and do
Data -> Text to Columns -> Next -> Next -> Text
Save the spreadsheet and they should all come in as Text in OPENROWSET
I've found that using .CSV files instead of Excel, opened by setting up a linked server with the file format defined in schema.ini, is a more practical approach for handling imports like this; with that method you can explicitly choose each column's format.
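A hedged sketch of that setup (the linked server name, folder path, file name, and schema.ini columns are all placeholders):
-- schema.ini sits in the same folder as the .CSV files and would contain e.g.:
--   [ImportFile.csv]
--   ColNameHeader=True
--   Format=CSVDelimited
--   Col1=ItemNumber Text
--   Col2=Description Text
EXEC sp_addlinkedserver
    @server     = 'CsvImport',
    @srvproduct = 'Jet',
    @provider   = 'Microsoft.Jet.OLEDB.4.0',
    @datasrc    = 'C:\ImportFiles',
    @provstr    = 'Text';

SELECT * FROM CsvImport...[ImportFile#csv];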
We've come across the same issue. Unfortunately we've not found a solution either. There's more information here which indicates that there might be a registry fix.
I had the same problem. I fixed it by cutting and pasting a row that contains a column with the string/numeric value (for example 123ABC) into the first row position of the sheet. For some reason T-SQL reads the first row and assumes that all the values are numeric.
The response by SqlACID at this link worked great [https://wikigurus.com/Article/Show/185717/OpenRowSet-command-in-TSQL-is-returning-NULLS]; it is the same "Text to Columns" and schema.ini approach already given above.