Unable to create DB2 table with DATE data type

I am using DB2 9.7 (LUW) on a Windows server, with multiple databases in a single DB2 instance. I just found that in one of these databases I am unable to add a column with the DATE data type, either during table creation or when altering the table. The column being added gets changed to TIMESTAMP instead.
Any help on this will be welcome.

Check your Oracle compatibility setting.
Depending on that setting, DATE is interpreted as TIMESTAMP(0), exactly as in your example.
Because the setting only takes effect for databases created after the DB2_COMPATIBILITY_VECTOR registry variable was set, databases in the same instance can show different behaviour.
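To confirm what is going on, a quick diagnostic sketch, assuming you can query the SYSIBMADM and SYSCAT administrative views (MY_TABLE is a placeholder for your table name):

-- Is the compatibility vector set for this instance?
SELECT REG_VAR_NAME, REG_VAR_VALUE
FROM SYSIBMADM.REG_VARIABLES
WHERE REG_VAR_NAME = 'DB2_COMPATIBILITY_VECTOR';

-- What did the DATE column actually become in the affected database?
-- In Oracle compatibility mode it shows up as TIMESTAMP with SCALE 0.
SELECT COLNAME, TYPENAME, SCALE
FROM SYSCAT.COLUMNS
WHERE TABNAME = 'MY_TABLE';   -- placeholder table name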

Related

What could cause Firebird to silently turn calculated fields into "normal" fields?

I'm using Firebird 2.5.8 to store information for software I designed.
A customer contacted me today to report multiple errors that I couldn't understand, so I used the IBExpert tool to inspect their database.
To my surprise, all the calculated fields had been transformed into "standard" fields. This is clearly visible in the "DDL" tab of the database tool, which displays table definitions as SQL code.
For instance, the following table definition:
CREATE TABLE TVERSIONS (
...
PARENTPATH COMPUTED BY (((SELECT TFILES.FILEPATH FROM TFILES WHERE ID = TVERSIONS.FILEID))),
....
ISCOMPLETE COMPUTED BY ((((SELECT TBACKUPVERSIONS.ISCOMPLETE FROM TBACKUPVERSIONS WHERE ID = TVERSIONS.CVERSION)))),
CDATE COMPUTED BY (((SELECT TBACKUPVERSIONS.SERVERSTARTDATE FROM TBACKUPVERSIONS WHERE ID = TVERSIONS.CVERSION))),
DDATE COMPUTED BY (((SELECT TBACKUPVERSIONS.SERVERSTARTDATE FROM TBACKUPVERSIONS WHERE ID = TVERSIONS.DVERSION))),
...
);
has been "changed" in the client database into this:
CREATE TABLE TVERSIONS (
...
PARENTPATH VARCHAR(512) CHARACTER SET UTF8 COLLATE UNICODE,
...
ISCOMPLETE SMALLINT,
CDATE TIMESTAMP,
DDATE TIMESTAMP,
...
);
How can such a thing be possible?
I've been using Firebird for more than 10 years, and I've never seen such a behavior until now. Is it possible that it could be a corruption of RDB$FIELDS.RDB$COMPUTED_SOURCE fields?
What would you advise?
To summarize the discussion on firebird-support (and comments above):
The likely cause is that the database was backed up and restored using gbak, and the restore did not complete successfully. If this happens, gbak will have ended with an error, and the database is left in single shutdown state (which means only SYSDBA or the database owner is allowed to create a single connection). If the database is not currently in single shutdown mode, someone used gfix to bring the database back online in normal state.
When a database is restored using gbak, calculated fields are initially created as normal fields (their values are not part of the backup). After the data has been restored successfully, those fields are altered into calculated fields. If any error occurs before or during this redefinition, the restore fails, the database stays in single shutdown state, and the calculated fields remain "normal" fields.
I recommend doing a structural comparison of the database to check if calculated fields are the only problem, or if other things (e.g. constraints) are missing. A simple way to do this is to export the DDL of the database and a "known-good" database, for example using ISQL (command line option -extract), and comparing them with a diff tool.
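As an additional quick check from inside the database itself, a query against the Firebird system tables shows which columns of TVERSIONS still carry a COMPUTED BY source (a minimal sketch; the affected columns will show NULL here, so compare the output with the same query on a known-good database):

SELECT rf.RDB$FIELD_NAME, f.RDB$COMPUTED_SOURCE
FROM RDB$RELATION_FIELDS rf
JOIN RDB$FIELDS f ON f.RDB$FIELD_NAME = rf.RDB$FIELD_SOURCE
WHERE rf.RDB$RELATION_NAME = 'TVERSIONS'
ORDER BY rf.RDB$FIELD_POSITION;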
Then either fix the existing database by executing the necessary DDL to restore calculated fields (and other things), or create a new empty database, and move the data from the old to the new (using a datapump tool).
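For the first option, a hedged sketch of re-creating one of the lost computed columns, using ISCOMPLETE from the DDL above as the example (take the exact expression from your known-good DDL, and check first that nothing such as a view or trigger depends on the column before dropping it):

-- A computed column stores no data of its own, so dropping it loses nothing.
ALTER TABLE TVERSIONS DROP ISCOMPLETE;
ALTER TABLE TVERSIONS ADD ISCOMPLETE COMPUTED BY
  ((SELECT TBACKUPVERSIONS.ISCOMPLETE FROM TBACKUPVERSIONS WHERE ID = TVERSIONS.CVERSION));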
Also check if any data is missing. By default, gbak restores the data in a single transaction, so in that case either all data is present or all data is missing. However, gbak also has a "transaction-per-table" mode (-ONE_AT_A_TIME or -O), which could mean some tables have data, and others have no data.

How to populate a table via Pentaho Data Integration's Table Output step?

I am performing an ETL job via Pentaho 7.1.
The job populates the table PRO_T_TICKETS in PostgreSQL 9.2 via Pentaho jobs and transformations.
I have mapped the table fields to the stream fields:
Mapped Fields
My table PRO_T_TICKETS has its column names in UPPERCASE.
Is this the reason I can't populate the table PRO_T_TICKETS with my ETL job?
I duplicated the Table Output step for PRO_T_TICKETS and changed the Target table field to 'PRO_T_TICKETS2'. Pentaho created a new table with a lowercase schema and populated the data in it.
But I want the data loaded into the existing PRO_T_TICKETS table, with the UPPERCASE column names if possible.
I am attaching the whole job and the error thrown by Pentaho (Pentaho Error). I have also tried adding double quotes around the column names in my query, as you can see in the error, but it didn't help.
What do you think I should do?
When you create (or modify) the connection, select Advanced on the left panel and click on the Force to upper case or Force to lower case or, even better, Preserve case of reserved words.
To know which option to choose, copy the 4th line of your error log, the line starting with INSERT INTO "public"."PRO_T_TICKETS("OID"..., into your SQL developer tool and change the connection's advanced parameters until it works.
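The case matters because PostgreSQL folds unquoted identifiers to lower case while quoted identifiers keep their exact case; a minimal illustration (TICKET_ID is a hypothetical column):

-- These two statements target different tables and columns:
INSERT INTO PRO_T_TICKETS (TICKET_ID) VALUES (1);       -- resolves to pro_t_tickets.ticket_id
INSERT INTO "PRO_T_TICKETS" ("TICKET_ID") VALUES (1);   -- resolves to the UPPERCASE objects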
Also, at debug time, don't use batch updates, don't use lazy conversion on previous steps, and try with one (1) field rather than all (25).
Just as a complement: it worked for me following the tips from AlainD, with a specific configuration that I'd like to share. I have a transformation streaming data from MySQL to PostgreSQL using a Table Input and a Table Output. Both databases have uppercase objects.
These are the steps that made it work:
In the Table Input (MySQL) the objects are uppercase too, but I typed them in lowercase and it worked; I didn't set any special option in the DB connection.
In the Table Output (PostgreSQL) I typed everything in uppercase (schema, table name and columns) and I also enabled "Specify database fields" (clicking "Get fields").
In the target DB connection (PostgreSQL) I set the options (in the "Advanced" section) "Quote all in database" and "Preserve case of reserved words".
PS: the last option is because I found one more problem with my fields: there was a column called "Admin" (yes, a camel-case column using a reserved word!), so I had to set "Preserve case of reserved words" and type it as Admin (without quotes and in camel case) in the Table Output.

SSIS Convert Between Unicode and Non-Unicode Error

I have an ssis package where I am using an OLEDB source linking to SQL Server 2005 table. All columns except a date column are NVARCHAR(255). I am using an Excel destination and using a SQL statement to create the sheet in the Excel workbook, the SQL is in the excel connection manager (effectively a create table statement that creates a sheet) and is derived from the mapping of the columns from the DB.
No matter what I have done, I keep getting this Unicode --> non-Unicode conversion error between my source and destination. I tried a conversion to string [DT_STR] between source and destination, removed it, changed the SQL table from VARCHAR to NVARCHAR, and I still get this flippin' error.
Because I am creating the sheet in Excel with a SQL statement, I do not see any way to pre-define what the data types of the columns in the Excel sheet will be. I imagine it falls back to default metadata, but I do not know.
So, between my SQL Server source table and the Excel sheet created by this SSIS SQL statement, how can I stop this error from coming up?
My error is:
Error at Data Flow Task [OLE DB Source [1]]: Column "MyColumn" cannot convert between unicode and non-unicode string data types.
And for all nvarchar columns.
Appreciate any help
Thanks
Andrew
The steps below worked for me:
Right-click on the source task.
Click "Show Advanced Editor".
Go to the "Input and Output Properties" tab.
Select the output column for which you are getting the error.
Its data type will be string [DT_STR].
Change that data type to Unicode string [DT_WSTR].
Save and close.
Add Data Conversion transformations to convert string columns from non-Unicode (DT_STR) to Unicode (DT_WSTR) strings.
You need to do this for all the string columns...
The missing piece here is the Data Conversion object. It should sit between the OLE DB Source and the Destination objects.
First, add a Data Conversion block to your data flow diagram.
Open the Data Conversion block and tick the column for which the error is shown. Below, change its data type to Unicode string [DT_WSTR] (or whatever data type is expected) and save.
Go to the destination block, open its mappings, map the newly created column to its corresponding destination column, and save.
Right-click your project in Solution Explorer and select Properties. Under Configuration Properties, select Debugging and set the Run64BitRuntime option to False (Excel does not handle 64-bit applications very well).
Instead of adding the Data Conversion step suggested earlier, you can cast the nvarchar column to a varchar column. This spares you an unnecessary step and performs better than the alternative.
In the SELECT of your SQL statement, replace date with CAST(date AS varchar([size])). For some reason this does not yet change the output data type. To change it, do the following:
Right click your OLE DB Source step and open the advanced editor.
Go to Input and Output Properties
Select Output Columns
Select your column
Under Data Type Properties change DataType to string [DT_STR]
Change Length to the length you specified in your CAST statement
After doing this your source data will be output as a varchar and your error will disappear.
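For reference, a minimal version of the cast in the source query might look like this (MyColumn, OtherColumn, dbo.MyTable and the length are placeholders):

SELECT CAST(MyColumn AS varchar(255)) AS MyColumn,
       OtherColumn
FROM dbo.MyTable;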
I had the same issue and tried everything written here, but it kept giving me the same error.
It turned out to be a NULL value in the column I was trying to convert.
Removing the NULL value solved my issue.
Cheers,
Ahmed
No one seems to mention this, but converting varchar to nvarchar in the source query also solves the issue.
With the above approach I kept losing the values; I think delaying the validation allows the new data types to be saved as part of the metadata.
On the connection manager for 'Excel Connection Manager', set Delay Validation to False in the Properties.
Then, on the Excel destination task in the data flow, set ValidateExternalMetadata to False, again from the Properties.
This will now allow you to right-click the Excel Destination task and go to Advanced Editor for Excel Destination --> far-right tab, Input and Output Properties. In the External Columns folder you will now be able to change the Data Type and Length values of the problematic columns, and this can be saved.
Good Luck!
I experienced this condition with the 32-bit Oracle 12 client installed, connected to an Oracle 12 server running on Windows.
Although neither the Oracle source nor the SQL Server destination is Unicode, I kept getting this message, as if the Oracle columns were Unicode.
I solved the problem by inserting a Data Conversion box,
selecting type DT_STR (not Unicode) for VARCHAR2 fields and DT_WSTR (Unicode) for numeric fields, and then dropping the 'Copy of' prefix from the output field names.
Note that I kept getting the error because I had connected the source box's arrow to the conversion box BEFORE setting the conversion types. I had to reconnect the source box, and that cleared all the errors in the destination box.
When creating the table in SQL Server, make your table columns NVARCHAR instead of VARCHAR.
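A minimal sketch of what that looks like, with placeholder table and column names:

-- Declaring the destination columns as NVARCHAR up front means no conversion is needed.
CREATE TABLE dbo.MyDestination (
    MyColumn    NVARCHAR(255) NULL,
    OtherColumn NVARCHAR(255) NULL
);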
I think people are missing this. In my case I had 100 character columns to convert between Oracle and MS SQL. All this stuff about Data Conversion and the Advanced Editor is incredibly tedious if you have 100 separate character columns to assign. Plus, SSIS being SSIS, it will sometimes reset all 100 of your Advanced Editor changes even if you set ValidateExternalMetadata to false, which is incredibly obnoxious. I wouldn't mind doing the Data Conversion if there were some value to it, but 20 years ago ETL tools used to take Oracle character columns to MS SQL character columns without fussing. What Bakalolo and Zafer say is the answer if you have a lot of character columns and you can live with nvarchar: just declare all your output MS SQL columns as nvarchar, and your data task will automatically assign your Oracle fields to the MS SQL fields with no manual overrides. I have also found that the new Oracle Source (2021) doesn't complain about a Unicode conversion to varchar in MS SQL. A colleague just told me that the SSIS wizard (it may only exist in VS 2019+) for assigning Oracle character columns to MS SQL varchar will do the assignments automatically with no overrides, but I haven't tried that personally.
2022 update - I think this applies only to packages created in VS 2019 and later. An ADO.NET task reading a varchar MS SQL table and writing to an OLE DB (and, I think, ADO.NET) MS SQL varchar destination will throw the Unicode error. If you switch the input task to OLE DB reading the MS SQL varchar table, you won't have to do the Advanced Editor overrides for the varchar fields. If you don't want to do Advanced Editor overrides (who does?), try different tasks and more OLE DB tasks.
I just encountered the same issue and solved it in my SQL query by using CONVERT directly:
CONVERT(NVARCHAR(50),'') AS MyVarName
I needed to put an empty (or fixed-size) string into the Excel file. The conversion forces the type of MyVarName from DT_STR to DT_WSTR (Unicode).
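A slightly fuller sketch of the same idea in a source query (SomeVarcharColumn and dbo.SomeTable are placeholders):

SELECT CONVERT(NVARCHAR(50), '') AS MyVarName,
       CONVERT(NVARCHAR(255), SomeVarcharColumn) AS SomeVarcharColumn
FROM dbo.SomeTable;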
I know this is a very old post, but I ran into the same issue and found that I had to manually select the conversion component's output alias as the mapping in the Excel destination component. Since the names of the OLE DB Source columns match the Excel column names, it was mapping to the OLE DB columns and not to the output aliases, e.g. the SourceID column from the OLE DB component is named Copy of SourceID after conversion. I don't see the original question saying they specifically selected the new alias names, just that they mapped to the DB columns. Serge Voloshenko's post comes the closest but also does not mention making sure the mapping happens. To a new SSIS user this might be overlooked.

SELECT old data state with DB2

Is it possible to access an old data state in a DB2 database?
Oracle has the clause select ... as of timestamp to do it. Does DB2 have something like it?
Yes, you can select the set of rows that was valid at a past time (or will be valid at a future time). This is called time travel in DB2, but you have to create or alter the table with extra columns in order to activate the feature. It is new in DB2 10, but I think it is not available in all editions.
For more information, take a look at this: http://www.ibm.com/developerworks/data/library/techarticle/dm-1204db2temporaldata/
Remember, there are two concepts: system time and business time (also called application time); a table that uses both is called bitemporal.
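A minimal sketch of a system-period temporal table and an AS OF query (table and column names are placeholders; syntax per DB2 10 system-time support):

CREATE TABLE policy (
    id        INT NOT NULL PRIMARY KEY,
    coverage  INT,
    sys_start TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
    sys_end   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
    trans_id  TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
    PERIOD SYSTEM_TIME (sys_start, sys_end)
);
CREATE TABLE policy_history LIKE policy;
ALTER TABLE policy ADD VERSIONING USE HISTORY TABLE policy_history;

-- Read the rows as they looked at a point in the past
SELECT * FROM policy FOR SYSTEM_TIME AS OF TIMESTAMP('2024-01-01-00.00.00');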

Using the Time data type in Postgres with a Microsoft Access Front-end

I have a field in my postgres database using the time (without time zone) data type. I have a Microsoft Access front-end for the database connected using psqlODBC, which reads this field as a "Date/Time" data type.
If I try to insert something into the field through the front end, I get the following error:
ODBC - insert on a linked table "table_name" failed.
ERROR: column "column_name" is of type time without time zone but expression is of type date;
I'm assuming that Access is trying to insert a timestamp instead.
Basically, my question is: is it even possible to use the time data type with Access, or should I just be using the timestamp data type instead?
If you are manually typing data into a linked table, then no, this won't be possible at present. If you have the option of updating your table via forms or VBA, you could try this to get Access to produce only a time value:
TimeSerial(Hour(Now()), Minute(Now()), Second(Now()))
Otherwise, as you say, it's probably a good idea to change your data type to timestamp.
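If you can issue the INSERT yourself (e.g. via a pass-through query), an explicit cast avoids the type mismatch; a minimal sketch using the placeholder names from the question:

INSERT INTO table_name (column_name)
VALUES (CAST(now() AS time));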