One of the columns in my CSV file is a date column, and the date format is DD-Mon-YY. I'm getting a "GDK-05058 non-numeric character found" error. Can anyone solve this issue, please? I need to import this date column.
I have a PostgreSQL table that contains an empty column of type 'date'.
I'm trying to copy date values from a CSV file into it using:
COPY books (publication_date) FROM 'path/to/file/pub.csv' CSV;
But it raises this:
ERROR: date/time field value out of range: "11/31/2000"
This value is at index 8178 of the CSV, so it's not the entire file that's faulty.
I don't understand why, as the date seems perfectly fine.
So, how can I fix this or make Postgres ignore the faulty dates?
ERROR: date/time field value out of range: "11/31/2000"
I don't understand why, as the date seems perfectly fine.
Well, November has only 30 days, so the date is indeed invalid.
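A quick way to see this in psql (a sketch, assuming the default MDY reading of m/d/y input):

SELECT '11/30/2000'::date;  -- 2000-11-30, a valid date
SELECT '11/31/2000'::date;  -- ERROR: date/time field value out of range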
You need to set the datestyle to the required format.
https://www.postgresql.org/docs/7.2/sql-set.html
https://www.postgresql.org/docs/9.1/runtime-config-client.html#GUC-DATESTYLE
SET datestyle = DMY;
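For the "ignore the faulty dates" part of the question, one possible workaround (a sketch only; the books_staging table and the try_date helper are names made up here) is to COPY the raw values into a text column first and only cast the rows that parse as valid dates:

-- Stage the raw CSV values as text so COPY accepts every row.
CREATE TABLE books_staging (publication_date text);
COPY books_staging (publication_date) FROM 'path/to/file/pub.csv' CSV;

-- Helper that returns NULL instead of raising an error on invalid dates.
CREATE OR REPLACE FUNCTION try_date(p text) RETURNS date AS $$
BEGIN
    RETURN p::date;
EXCEPTION WHEN others THEN
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

-- The file uses month/day/year values, so parse them that way.
SET datestyle = MDY;

-- Invalid values such as 11/31/2000 become NULL and are filtered out.
INSERT INTO books (publication_date)
SELECT try_date(publication_date)
FROM books_staging
WHERE try_date(publication_date) IS NOT NULL;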
I am trying to get the current timestamp in a column in my DataFrame. I am using the code below for that.
df_new = df.withColumn('LOAD_DATE_TIME' , F.current_timestamp())
But this code generates LOAD_DATE_TIME in the format below when exported to a CSV file:
2019-11-19T16:59:44.000+05:30
I don't want the timezone part and want the datetime in the format below:
2019-11-19 16:59:44
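A minimal sketch of one way to get that (assuming the goal is a plain string column; date_format drops the fractional seconds and the timezone offset on output):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1,)], ['id'])

# Render the current timestamp as 'yyyy-MM-dd HH:mm:ss', e.g. 2019-11-19 16:59:44.
df_new = df.withColumn('LOAD_DATE_TIME',
                       F.date_format(F.current_timestamp(), 'yyyy-MM-dd HH:mm:ss'))
df_new.show(truncate=False)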
I'm very new to SQL/Hive. At first, I loaded a txt file into Hive using:
drop table if exists Tran_data;
create table Tran_data(tran_time string,
resort string, settled double)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';
Load data local inpath 'C:\Users\me\Documents\transaction_data.txt' into table Tran_Data;
The variable tran_time in the txt file looks like this: 10-APR-2014 15:01. After loading the Tran_data table, I tried to convert tran_time to a "standard" format so that I can join this table to another table using tran_time as the join key. The desired date format is 'yyyymmdd'. I searched online resources and found this: unix_timestamp(substr(tran_time,1,11),'dd-MMM-yyyy')
So essentially, I'm doing this: unix_timestamp('10-APR-2014','dd-MMM-yyyy'). However, the output is "NULL".
So my question is: how to convert the date format to a "standard" format, and then further convert it to 'yyyymmdd' format?
from_unixtime(unix_timestamp('20150101' ,'yyyyMMdd'), 'yyyy-MM-dd')
My current Hive Version: Hive 0.12.0-cdh5.1.5
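Run as a standalone query, this pattern just re-parses the string and re-emits it in the new format:

select from_unixtime(unix_timestamp('20150101', 'yyyyMMdd'), 'yyyy-MM-dd');
-- returns 2015-01-01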
I converted the datetime in the first column to a date in the second column using the Hive date functions below. Hope this helps!
select inp_dt, from_unixtime(unix_timestamp(substr(inp_dt,0,11),'dd-MMM-yyyy')) as todateformat from table;
inp_dt                  todateformat
12-Mar-2015 07:24:55    2015-03-12 00:00:00
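To get the yyyymmdd value the question asks for, the same pattern can be extended by passing an output format to from_unixtime (a sketch, using the table and column names from the question; whether the upper-case APR month abbreviation parses can depend on the Hive build and locale, which may explain the NULL the asker saw):

-- Parse the dd-MMM-yyyy prefix of tran_time and re-emit it as yyyyMMdd.
select tran_time,
       from_unixtime(unix_timestamp(substr(tran_time, 1, 11), 'dd-MMM-yyyy'),
                     'yyyyMMdd') as tran_date
from Tran_data;
-- e.g. 12-Mar-2015 07:24:55  ->  20150312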
The unix_timestamp function will convert a given string date to a Unix timestamp in seconds, but not one in a format like dd-mm-yyyy.
You would need to write your own custom UDF to convert a given string date to the format you need, as Hive currently has no predefined function for it. There is a to_date function to convert a timestamp to a date, but the remaining unix_timestamp functions won't help with your problem.
select from_unixtime(unix_timestamp('01032018' ,'MMddyyyy'), 'yyyyMMdd');
input (MMddyyyy): 01032018
output after query (yyyyMMdd): 20180103
To help someone in the future, the following expression should work, as it did in my case:
to_date(from_unixtime(UNIX_TIMESTAMP('10-APR-2014','dd-MMM-yyyy')))
unix_timestamp('2014-05-01','dd-mmm-yyyy') will work; for Hive, your input string should be in the format yyyy-mm-dd or yyyy-mm-dd hh:mm:ss.
Whereas you are trying it with '01-MAY-2014', which Hive won't understand as a date string.
I am reading a CSV file with date fields formatted as mm/dd/yyyy. I expected the same format from the Postgres table after the import, but I see yyyy-mm-dd hh:mm:ss instead.
The date fields in my table are defined as timestamp without time zone data type.
How do I maintain the same format of data? I am using PostgreSQL 9.3.
PostgreSQL only stores the value; it doesn't store formatting (which would waste space).
You can use the to_char function in your query if you want the output formatted in a particular way. Details are in the manual.
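A short sketch of that (the my_table and created_at names are made up for illustration; to_char formats the stored timestamp on output only):

-- Render a timestamp column as mm/dd/yyyy in the query result;
-- the value stored in the table is unchanged.
SELECT to_char(created_at, 'MM/DD/YYYY') AS created_at_mdy
FROM my_table;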