Creating a table with a column of data type DATE creates a column of data type TIMESTAMP - db2

The following SQL Query:
CREATE TABLE "SomeTable" ("dateEnd" DATE)
creates a table SomeTable with a column dateEnd. However, the column's data type is TIMESTAMP, not DATE. This used to work, but after reimporting a whole database dump, all the DATE data types have been replaced by TIMESTAMP data types. Even if I create a very simple table, like the one above, the data type jumps to TIMESTAMP. I am using Db2 Express-C version 11.1.0.

If your Db2 database was created in Oracle Compatibility mode, then DATE columns are implemented as TIMESTAMP(0) columns to match what Oracle does.
https://www.ibm.com/support/knowledgecenter/SSEPGG_11.1.0/com.ibm.db2.luw.apdv.porting.doc/doc/r0053667.html
https://www.ibm.com/support/knowledgecenter/SSEPGG_11.1.0/com.ibm.db2.luw.admin.config.doc/doc/r0054912.html
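If you want to verify that this is what happened, you could check what the catalog actually recorded for the column, and whether the compatibility vector is set on the instance. A rough sketch (the db2set call is run from an OS shell, not from SQL, and the quoted mixed-case names must match exactly):
SELECT COLNAME, TYPENAME, LENGTH, SCALE
FROM SYSCAT.COLUMNS
WHERE TABNAME = 'SomeTable' AND COLNAME = 'dateEnd';
-- from an OS shell:
-- db2set DB2_COMPATIBILITY_VECTOR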
BTW, you may want to use either Db2 Developer-C or Db2 Developer Community Edition; those effectively replace the old Express-C edition:
https://www.ibm.com/uk-en/marketplace/ibm-db2-direct-and-developer-editions

Related

Create new Date column of DATE type from existing Date column of TEXT type in PostgreSQL

I have a PostgreSQL table that has a date column of type TEXT.
Values look like this: 2019-07-19 00:00
I want to either cast this column to type DATE so I can query based on the latest values, etc., or create a new column of type DATE and cast into it (so I have both for the future). I would appreciate any advice on both options!
Hope this isn't a dupe, but I haven't found any answers on SO.
Some context: I will need to add more data later on to the table that only has the TEXT column, which is why I want to keep the original, but I am open to suggestions.
You can alter the column type with the simple command:
alter table my_table alter my_col type date using my_col::date
This seems to be the best solution as maintaining duplicate columns of different types is a potential source of future trouble.
Note that all values in the column have to be null or be recognizable by Postgres as a date, otherwise the conversion will fail.
Test it in db<>fiddle.
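If you want to spot offending values before running the conversion, a quick pre-check along these lines can help (just a sketch, reusing the placeholder names above; the regex only flags strings that do not even start with a YYYY-MM-DD pattern):
select my_col
from my_table
where my_col is not null
  and my_col !~ '^\d{4}-\d{2}-\d{2}';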
However, if you insist on creating a new column, use the update command:
alter table my_table add my_date_col date;
update my_table
set my_date_col = my_col::date;
Db<>fiddle.

Timestamp in PostgreSQL during Oracle to PostgreSQL migration

I have a table in Oracle with timestamp data in "JAN-16-15 05.10.14.034000000 PM".
When I created the table in postgresql with "col_name timestamp" it is showing data as "2015-01-16 17:10:14.034".
Any suggestions on how I can set up the column so that the data format in Postgres is the same as what I have in Oracle?
Timestamps (or dates or numbers) do not have any "format", neither in Postgres, nor in Oracle, nor in any other relational database.
Any "format" you see is applied by the SQL client displaying those values.
You need to configure your SQL client to use a different format for timestamps, or use the to_char() function to format the value as you want.
In particular, to get the format you desire, use
SELECT to_char(current_timestamp, 'MON-DD-YY HH.MI.SS.US000 AM');
The output format can be changed in psql by changing the DateStyle parameter, but I would strongly recommend to not change it away from the default ISO format as that also affects the input that is parsed.
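For illustration only, given the warning above, changing it in a session looks like this (the rendering shown in the comment is approximate):
set datestyle = 'SQL, MDY';
select timestamp '2015-01-16 17:10:14.034';
-- now rendered in the non-ISO "SQL" style, e.g. 01/16/2015 17:10:14.034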

The type of column conflicts with the type of other columns specified in the UNPIVOT list

In SQL Server 2005, I built a trigger that contains a SQL statement that unpivots some data. It is somewhat similar to the following simple example: http://sqlfiddle.com/#!3/cdc1b/1/0. Let's say that the table the trigger is built on is "table1" and it is set to run after updates.
Within SSMS, whenever I update "table1" everything works fine. Unfortunately, whenever I update "table1" in a proprietary application (which I don't have the source code to), it fails with the message "The type of column conflicts with the type of other columns specified in the UNPIVOT list".
After doing a bit of searching I added COLLATE DATABASE_DEFAULT to my casts in the view, without any luck. It was a bit of a long shot, because the collations all matched whenever I queried INFORMATION_SCHEMA.COLUMNS.
I then changed the casts from VARCHAR to CHAR and it worked without issue. For obvious reasons, I'd rather use VARCHAR. What is different between an SSMS connection and the application's connection? I assume the application isn't setting a connection property that SSMS sets.
PS: The database is a bit funky because it does not use NULLs and uses CHAR instead of VARCHAR.
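For context, UNPIVOT requires every column in its IN list to have exactly the same type, length and collation, so a common workaround is to cast them all explicitly in the source of the UNPIVOT. A rough sketch with hypothetical column names, not the asker's actual schema:
SELECT id, col_name, col_value
FROM (
    SELECT id,
           CAST(col_a AS VARCHAR(100)) AS col_a,  -- identical type, length
           CAST(col_b AS VARCHAR(100)) AS col_b,  -- and collation for every
           CAST(col_c AS VARCHAR(100)) AS col_c   -- column in the IN list
    FROM table1
) AS src
UNPIVOT (col_value FOR col_name IN (col_a, col_b, col_c)) AS unpvt;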

Importing csv into Postgres database with improper date value

I have a query which has a date field with values that look like this in the query results window:
2013-10-01 00:00:00
However, when I save the results to csv, it gets saved like this:
2013-10-01T00:00:00
This is causing a problem when I'm trying to COPY the csv into a table in Redshift, where it gives me an error stating that the value is not a valid timestamp (the field I'm importing to is a timestamp field).
How can I get it so that it either strips out the time component completely, leaving just the date, or at least that the "T" is removed from the results?
I'm exporting results to csv using Aginity SQL Workbench for Redshift.
According to this knowledgebase article:
After import, add new TIMESTAMP columns and use the CAST() function to populate them:
ALTER TABLE events ADD COLUMN received_at TIMESTAMP DEFAULT NULL;
UPDATE events SET received_at = CAST(received_at_raw as timestamp);
ALTER TABLE events ADD COLUMN generated_at TIMESTAMP DEFAULT NULL;
UPDATE events SET generated_at = CAST(generated_at_raw as timestamp);
Finally, if you foresee no more imports to this table, the raw VARCHAR timestamp columns may be removed. If you foresee importing more events from S3, do not remove these columns. To remove the columns, run:
ALTER TABLE events DROP COLUMN received_at_raw;
ALTER TABLE events DROP COLUMN generated_at_raw;
Hope that helps...
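Alternatively, if I remember correctly, COPY can be told to recognize the ISO 8601 value directly via TIMEFORMAT, which avoids the raw VARCHAR columns entirely. A sketch with a hypothetical bucket and IAM role:
COPY events
FROM 's3://my-bucket/exported-results.csv'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
CSV
TIMEFORMAT 'auto';  -- 'auto' accepts values like 2013-10-01T00:00:00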

Postgres equivalent to SQL Server's @@DBTS

I am mainly from a SQL Server background, and following some issues with getting MySQL to work with the Microsoft Sync Framework (namely, it does not cater for snapshots), I am having to look into Postgres and try to get that working with the Sync Framework.
The triggers that are needed include a call to the function @@DBTS, but I am having trouble finding an equivalent in Postgres for this.
The Microsoft documentation for it says:
@@DBTS returns the current database's last-used timestamp value. A new timestamp value is generated when a row with a timestamp column is inserted or updated.
In MySQL it was the following:
USE INFORMATION_SCHEMA;
SELECT MAX(UPDATE_TIME) FROM TABLES WHERE UPDATE_TIME < NOW();
Can anyone tell me what this would be in Postgres?
PostgreSQL does not keep track of when a table was last modified, so there is no equivalent for SQL Server's @@DBTS or for MySQL's INFORMATION_SCHEMA.TABLES.UPDATE_TIME.
You also might be interested in this discussion:
http://archives.postgresql.org/pgsql-general/2009-02/msg01171.php
which essentially says: "if you need to know when a table was last modified, you have to add a timestamp column to each table that records the last time the row was updated".
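A minimal sketch of that pattern, with placeholder names, would be a last_modified column maintained by a row-level trigger:
alter table my_table add column last_modified timestamptz not null default now();

create or replace function set_last_modified() returns trigger as $$
begin
    -- stamp the row with the time of this insert/update
    new.last_modified := now();
    return new;
end;
$$ language plpgsql;

create trigger trg_set_last_modified
before insert or update on my_table
for each row execute procedure set_last_modified();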