Postgres alter column from time without time zone to int

I am having trouble converting a table column's type from time without time zone to integer, where the integer value represents seconds.
Is that possible?
I am using ALTER TABLE exampleTable ALTER COLUMN time TYPE integer.

This should do it:
ALTER TABLE your_table
ALTER COLUMN time TYPE integer USING 0;
ALTER COLUMN ... TYPE is usually only used when you want to preserve the data in that column. In your case the above works, but doesn't buy you anything: dropping the column and then adding a new integer column with a default value of 0 would be just as efficient.
(Btw: you should not use a reserved word like time as a column name)
Edit
If you do want to convert the existing data then you can use this:
ALTER TABLE your_table
ALTER COLUMN time_column TYPE integer USING extract(epoch from time_column)::integer;

Following up on the horse's answer, you can also add an expression in the using part, e.g.:
ALTER TABLE your_table
ALTER COLUMN time TYPE integer USING (
extract('hours' from time) * 3600 +
extract('minutes' from time) * 60 +
extract('seconds' from time)
);
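The USING expression above is plain seconds-since-midnight arithmetic, so you can sanity-check it outside the database. A quick Python sketch (Python is used here only for illustration; it is not part of the original answer):

```python
from datetime import time

def seconds_since_midnight(t: time) -> int:
    # Same arithmetic as the USING expression:
    # hours * 3600 + minutes * 60 + seconds
    return t.hour * 3600 + t.minute * 60 + t.second

print(seconds_since_midnight(time(1, 2, 3)))  # 3723
```

For 01:02:03 this gives 3723, which matches extract(epoch from time '01:02:03').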
http://www.postgresql.org/docs/current/static/sql-altertable.html

Related

Set UTC timestamp as default value for column in DB2

I want to use a UTC timestamp as the default value for a TIMESTAMP column. We're using DB2 9.5 on Linux.
I'm aware of using CURRENT TIMESTAMP, but it provides the local time of the DB2 server (e.g. CEST). In queries you can use
SELECT JOB_ID, (CURRENT TIMESTAMP - CURRENT TIMEZONE) as tsp FROM SYSTEM_JOBS;
but this does not work in the column definition
ALTER TABLE SYSTEM_JOBS ALTER COLUMN CREATED SET DEFAULT CURRENT TIMESTAMP - CURRENT TIMEZONE
[42601][-104] An unexpected token "ALTER TABLE SYSTEM_JOBS ALTER COLUMN CREAT" was found following "BEGIN-OF-STATEMENT". Expected tokens may include: "<values>".. SQLCODE=-104, SQLSTATE=42601, DRIVER=4.23.42
I also tried to define a function that does the calculation.
CREATE OR REPLACE FUNCTION UTCTIMESTAMP ()
RETURNS TIMESTAMP
LANGUAGE SQL
DETERMINISTIC
NO EXTERNAL ACTION
BEGIN ATOMIC
DECLARE L TIMESTAMP;
DECLARE U TIMESTAMP;
SET L = CURRENT TIMESTAMP;
SET U = L - CURRENT TIMEZONE;
RETURN U;
END
;
But it's also not accepted in the column definition:
ALTER TABLE SYSTEM_JOBS ALTER COLUMN CREATED SET DEFAULT UTCTIMESTAMP();
[42894][-574] DEFAULT value or IDENTITY attribute value is not valid for column "CREATED" in table "DB2INST1.SYSTEM_JOBS". Reason code: "7".. SQLCODE=-574, SQLSTATE=42894, DRIVER=4.23.42
I'm looking for a method to set the default value in neutral UTC.
You cannot use expressions in the DEFAULT clause. See the description of the default-clause of the CREATE TABLE statement.
You can use a BEFORE INSERT trigger instead, for example, to achieve the same functionality:
CREATE TABLE SYSTEM_JOBS (ID INT NOT NULL, CREATED TIMESTAMP NOT NULL) IN USERSPACE1;
CREATE TRIGGER SYSTEM_JOBS_BIR
BEFORE INSERT ON SYSTEM_JOBS
REFERENCING NEW AS N
FOR EACH ROW
WHEN (N.CREATED IS NULL)
SET N.CREATED = CURRENT TIMESTAMP - CURRENT TIMEZONE;
INSERT INTO SYSTEM_JOBS(ID) VALUES 1;

PostgreSQL - to_timestamp is not properly converting unix timestamp

I'm trying to get current UTC time, and insert it into PostgreSQL timestamp. But it's not working properly.
I am using the following command:
INSERT INTO public.rt_block_height
VALUES(to_timestamp('2018-09-09 00:36:00.778653', 'yyyy-mm-dd hh24:mi:ss.MS.US'), 83.7)
However, when I check the result, it looks like this:
tutorial=# select * from rt_block_height;
time | block_height
-------------------------+--------------
2018-09-09 00:48:58.653 | 83.7
(1 row)
I don't know what's causing this mismatch.
FYI, here is my source code for the table:
CREATE TABLE IF NOT EXISTS public.rt_BLOCK_HEIGHT
(
"time" timestamp without time zone,
BLOCK_HEIGHT double precision
)
WITH (
OIDS = FALSE
)
TABLESPACE pg_default;
ALTER TABLE public.rt_BLOCK_HEIGHT
OWNER to postgres;
SELECT create_hypertable('rt_BLOCK_HEIGHT', 'time');
There is a logical error in the format string: you should not use MS and US at the same time. However, you do not need the function at all; just cast the string to timestamp:
INSERT INTO public.rt_block_height
VALUES('2018-09-09 00:36:00.778653'::timestamp, 83.7)
From the documentation:
to_timestamp and to_date exist to handle input formats that cannot be converted by simple casting. For most standard date/time formats, simply casting the source string to the required data type works, and is much easier.
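As an aside, the literal in the question is unambiguous with a single fixed pattern, which is why the simple cast works. A quick check in Python (used here only as an illustration, not part of the original answer), where %f covers the fractional seconds in one step:

```python
from datetime import datetime

s = '2018-09-09 00:36:00.778653'
# One pattern parses the whole fractional-seconds part;
# mixing two sub-second fields (like MS and US) is what corrupted the value.
ts = datetime.strptime(s, '%Y-%m-%d %H:%M:%S.%f')
print(ts.isoformat(sep=' '))  # 2018-09-09 00:36:00.778653
```

The parsed value round-trips exactly, with no spurious shift of the seconds.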

Can I get Unix timestamp automatically converted to a TIMESTAMP column when importing from CSV to a PostgreSQL database?

This is basically a duplicate of this with s/mysql/postgresql/g.
I created a table that has a timestamp TIMESTAMP column and I am trying to import data from CSV files that have Unix timestamped rows.
However, when I try to COPY the file into the table, I get errors to the tune of
2:1: conversion failed: "1394755260" to timestamp
3:1: conversion failed: "1394755320" to timestamp
4:1: conversion failed: "1394755800" to timestamp
5:1: conversion failed: "1394755920" to timestamp
Obviously this works if I set the column to be INT.
In the MySQL variant, I solved with a trick like
LOAD DATA LOCAL INFILE 'file.csv'
INTO TABLE raw_data
fields terminated by ','
lines terminated by '\n'
IGNORE 1 LINES
(#timestamp, other_column)
SET timestamp = FROM_UNIXTIME(#timestamp),
third_column = 'SomeSpecialValue'
;
Note two things: I can map the #timestamp variable from the CSV file using a function to turn it into a proper DATETIME, and I can set extra columns to certain values (this is necessary because I have more columns in the database than in the CSV).
I'm switching to postgresql because mysql lacks some functions that make my life so much easier with the queries I need to write.
Is there a way of configuring the table so that the conversion happens automatically?
I think you could accomplish this by importing the data as-is, creating a second column with the converted timestamp and then using a trigger to make sure any time a row is inserted it populates the new column:
alter table raw_table
add column time_stamp timestamp;
CREATE OR REPLACE FUNCTION raw_table_insert()
RETURNS trigger AS
$BODY$
BEGIN
NEW.time_stamp = timestamp 'epoch' + NEW.unix_time_stamp * interval '1 second';
return NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
CREATE TRIGGER insert_raw_table_trigger
BEFORE INSERT
ON raw_table
FOR EACH ROW
EXECUTE PROCEDURE raw_table_insert();
If the timestamp column can be modified, then you will want to make sure the trigger applies to updates as well.
Alternatively, you can create a view that generates the timestamp on the fly, but the advantages/disadvantages depend on how often you search on the column, how large the table is going to be and how much DML you expect.
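The trigger body's conversion, timestamp 'epoch' + n * interval '1 second', is the standard Unix-epoch-to-timestamp idiom. A small Python sketch of the same conversion (Python used only as a cross-check, not part of the original answer), applied to one of the failing values from the question:

```python
from datetime import datetime, timezone

def epoch_to_utc(seconds: int) -> datetime:
    # Mirrors timestamp 'epoch' + NEW.unix_time_stamp * interval '1 second'
    return datetime.fromtimestamp(seconds, tz=timezone.utc)

print(epoch_to_utc(1394755260).isoformat())  # 2014-03-14T00:01:00+00:00
```

So the CSV value "1394755260" that COPY rejected corresponds to 2014-03-14 00:01:00 UTC once converted.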

PostgreSQL create index on cast from string to date

I'm trying to create an index on the cast of a varchar column to date. I'm doing something like this:
CREATE INDEX date_index ON table_name (CAST(varchar_column AS DATE));
I'm getting the error: functions in index expression must be marked IMMUTABLE. But I don't get why; the cast to date doesn't depend on the time zone or anything like that (which is what makes a cast to timestamp with time zone give this error).
Any help?
Your first error was to store a date as a varchar column. You should not do that.
The proper fix for your problem is to convert the column to a real date column.
Now I'm pretty sure the answer to that statement is "I didn't design the database and I cannot change it", so here is a workaround:
CAST and to_char() are not immutable because they can return different values for the same input value depending on the current session's settings.
If you know you have a consistent format of all values in the table (which - if you had - would mean you can convert the column to a real date column) then you can create your own function that converts a varchar to a date and is marked as immutable.
create or replace function fix_bad_datatype(the_date varchar)
returns date
language sql
immutable
as
$body$
select to_date(the_date, 'yyyy-mm-dd');
$body$;
With that definition you can create an index on the expression:
CREATE INDEX date_index ON table_name (fix_bad_datatype(varchar_column));
But you have to use exactly that function call in your query so that Postgres uses it:
select *
from foo
where fix_bad_datatype(varchar_column) < current_date;
Note that this approach will fail badly if you have just one "illegal" value in your varchar column. The only sensible solution is to store dates as dates.
Please provide the database version, table ddl, and some example data.
Would making your own immutable function do what you want, like this? Also look into creating a new cast in the docs and see if that does anything for you.
create table emp2 (emp2_id integer, hire_date VARCHAR(100));
insert into emp2(hire_date)
select now();
select cast(hire_date as DATE)
from emp2;
CREATE FUNCTION my_date_cast(VARCHAR) RETURNS DATE
AS 'select cast($1 as DATE)'
LANGUAGE SQL
IMMUTABLE
RETURNS NULL ON NULL INPUT;
CREATE INDEX idx_emp2_hire_date ON emp2 (my_date_cast(hire_date));

Cast varchar type to date

I'd like to change a specific column in my PostgreSQL database from character varying type to date type. The dates are in the format yyyy:mm:dd.
I tried to do:
alter table table_name
alter column date_time type date using (date_time::text::date);
But I received an error message:
date/time field value out of range: "2011:06:15"
When you cast text or varchar to date, the default date format of your installation is expected, depending on the datestyle setting of your session, typically set in your postgresql.conf. If in doubt, check with:
SHOW datestyle;
Generally, a colon (:) is a time separator. In a simple cast, PostgreSQL will probably try to interpret '2011:06:15' as a time value and fail.
To remove ambiguity use to_date() with a matching pattern for your dates:
ALTER TABLE table_name
ALTER COLUMN date_time type date
USING to_date(date_time, 'YYYY:MM:DD'); -- pattern for your example
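The same idea, checked outside the database: with an explicit pattern the colon-separated date parses cleanly, while a pattern-free parse would have to guess. A short Python sketch (only an illustration of the principle, not part of the original answer):

```python
from datetime import datetime, date

def parse_colon_date(s: str) -> date:
    # Explicit pattern removes the ambiguity,
    # just like to_date(date_time, 'YYYY:MM:DD')
    return datetime.strptime(s, '%Y:%m:%d').date()

print(parse_colon_date('2011:06:15'))  # 2011-06-15
```

With the pattern supplied, '2011:06:15' comes back as the date 2011-06-15 instead of raising the "out of range" error from the plain cast.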