SQLDeveloper not able to handle empty dates? - oracle-sqldeveloper

I have a DB table export (.csv file generated using SQLDeveloper) that I need to import into another DB table.
The issue is that there are nullable date columns, and their empty values are exported as empty strings. When I try to import that file, SQLDeveloper internally seems to generate an insert statement for each line, since I get the following error message:
INSERT INTO <tablename> (<fieldnames here>) VALUES (... ,to_date('', 'DD.MM.RRRR HH24:MI:SS'), ...);
Error report -
ORA-01830: date format picture ends before converting entire input string
In that insert, SQLDeveloper apparently tries to convert the empty string into a date using to_date(...), which then yields an error.
Is there some workaround that allows such dates to be imported as NULL into the DB? After all, it should somehow be feasible to re-import .csv files that were generated by SQLDeveloper, shouldn't it?

It's working for me.
Since you didn't provide a table definition or sample data, I made up my own scenario. Compare what I did to what you're doing.
create table csv_null_dates (id integer, dates date);
insert into csv_null_dates values (1, sysdate);
insert into csv_null_dates values (2, sysdate-1);
insert into csv_null_dates values (3, sysdate+1);
insert into csv_null_dates values (4, null);
insert into csv_null_dates values (5, sysdate);
set sqlformat csv
cd c:\users\jdsmith\desktop
spool null_dates.csv
select * from csv_null_dates;
spool off
The output:
Table CSV_NULL_DATES created.
1 row inserted.
1 row inserted.
1 row inserted.
1 row inserted.
1 row inserted.
"ID","DATES"
1,26-SEP-19
2,25-SEP-19
3,27-SEP-19
4,
5,26-SEP-19
I then opened the table import wizard and pointed it to my CSV file.
I finished the wizard and the import ran to completion; here's my log:
** Import Start ** at 2019.09.26-08.12.59
Import C:\Users\jdsmith\Desktop\null_dates.csv to HR.HR.CSV_NULL_DATES
Load Method: Insert
** Import End ** at 2019.09.26-08.13.00
And when I go to browse my table, I see that I have twice the records I had before, including the row with the NULL date.
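If the import wizard still generates to_date('') calls for your data, a common workaround (a sketch, not part of the answer above; the staging table and column names are made up) is to import every column as text into a staging table and convert the dates yourself. Oracle treats empty strings as NULL, so TO_DATE over an explicit text column handles missing dates cleanly:

-- Hypothetical staging table; all names here are assumptions.
CREATE TABLE import_stage (
  id      VARCHAR2(20),
  created VARCHAR2(30)   -- keep the date as plain text during import
);
-- Import the CSV into import_stage via the wizard (all columns as text),
-- then convert explicitly; TO_DATE(NULL, ...) simply yields NULL:
INSERT INTO target_table (id, created)
SELECT TO_NUMBER(id),
       TO_DATE(created, 'DD.MM.RRRR HH24:MI:SS')
FROM import_stage;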

Related

Importing CSV file PostgreSQL using pgAdmin 4

I'm trying to import a CSV file into my PostgreSQL database, but I get this error:
ERROR: invalid input syntax for integer: "id;date;time;latitude;longitude"
CONTEXT: COPY test, line 1, column id: "id;date;time;latitude;longitude"
My CSV file is simple:
id;date;time;latitude;longitude
12980;2015-10-22;14:13:44.1430000;59,86411203;17,64274849
The table is created with the following code:
CREATE TABLE kordinater.test
(
id integer NOT NULL,
date date,
"time" time without time zone,
latitude real,
longitude real
)
WITH (
OIDS = FALSE
)
TABLESPACE pg_default;
ALTER TABLE kordinater.test
OWNER to postgres;
You can use the Import/Export option for this task:
Right-click on your table.
Select the "Import/Export" option.
Provide the proper options.
Click the OK button.
Alternatively, you could try this; it should work:
COPY kordinater.test(id,date,time,latitude,longitude)
FROM 'C:\tmp\yourfile.csv' DELIMITER ',' CSV HEADER;
Note that your CSV must be comma-separated, not semicolon-separated. You could also try changing the id column type to bigint.
I believe the quickest way to overcome this issue is to create an intermediary temporary table, so that you can import your data and cast the coordinates as you please.
Create a similar temporary table with the problematic columns as text:
CREATE TEMPORARY TABLE tmp
(
id integer,
date date,
time time without time zone,
latitude text,
longitude text
);
And import your file using COPY:
COPY tmp FROM '/path/to/file.csv' DELIMITER ';' CSV HEADER;
Once you have your data in the tmp table, you can cast the coordinates and insert them into the test table with this command:
INSERT INTO test (id, date, time, latitude, longitude)
SELECT id, date, time, replace(latitude,',','.')::numeric, replace(longitude,',','.')::numeric from tmp;
One more thing:
Since you're working with geographic coordinates, I sincerely recommend taking a look at PostGIS. It is quite easy to install and makes your life much easier when you start doing your first calculations with geospatial data.

hive insert current date into a table using date function errors

I have to insert the current date (a timestamp) into a table via a Hive query, but the query is failing for some reason. Can someone please help me out?
CREATE EXTERNAL TABLE IF NOT EXISTS dataFlagTest(
date string
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION 's3://bckt1/hive_test/dateFlag/';
Now, to insert into it, I run the following query:
INSERT OVERWRITE TABLE dataFlagTest
SELECT from_unixtime(unix_timestamp()) ;
It failed with the following error:
FAILED: NullPointerException null
The solution is to do the SELECT from a table; in Hive you cannot run a SELECT without a FROM clause.
So, create a sample table with 1 row, or use an existing table, like below:
Insert OVERWRITE TABLE dataflagtest SELECT from_unixtime(unix_timestamp()) as date FROM EXISTING_TABLE TABLESAMPLE(1 ROWS);
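If no existing table is handy, a one-row helper table can be created first. This is a minimal sketch assuming Hive 0.14+ (which added INSERT ... VALUES); the name one_row is made up:

-- Hypothetical one-row helper table (the name is an assumption)
CREATE TABLE one_row (dummy STRING);
INSERT INTO TABLE one_row VALUES ('x');  -- requires Hive 0.14+

INSERT OVERWRITE TABLE dataFlagTest
SELECT from_unixtime(unix_timestamp()) FROM one_row;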

Postgres import csv duplicate keys

I have a table called measurement with 3 columns: value, moment, seriesid.
51.02|2006-12-31 23:00:00|1
24.88|2006-12-31 23:00:00|2
55|2006-12-31 23:00:00|3
3.34823004011|2006-12-31 23:00:00|5
I am trying to load a CSV into this table and I am getting the following error:
Key (moment, seriesid)=(2009-05-25 00:00:00,186) already exists.
After reading some posts here on Stack Overflow, the best I managed to come up with was this:
CREATE TEMP TABLE measurement_tmp AS SELECT * FROM measurement LIMIT 0;
COPY measurement_tmp FROM '/home/airquality/dat/import.csv'
WITH DELIMITER ',';
INSERT INTO measurement
SELECT DISTINCT ON (moment,seriesid)
value,moment,seriesid
FROM measurement_tmp
As far as I understand:
1) A table measurement_tmp is created.
2) The structure of the measurement table is copied to measurement_tmp (LIMIT 0 copies no rows).
3) All contents of the import.csv file are loaded into measurement_tmp without the Key (moment, seriesid) restriction.
4) Selecting DISTINCT ON (moment, seriesid) should return only 'sane' data and import it into measurement.
Still, I get the same error:
2014-11-20 10:06:24 GMT-2 ERROR: duplicate key value violates unique constraint "measurement_pkey"
2014-11-20 10:06:24 GMT-2 DETAIL: Key (moment, seriesid)=(2009-05-25 00:00:00, 186) already exists.
Any ideas?
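One possible explanation (a guess, since the question is left unresolved here): DISTINCT ON de-duplicates rows within the CSV, but not against rows already present in measurement. A sketch that also skips keys that already exist (assuming a pre-9.5 PostgreSQL, so no ON CONFLICT):

INSERT INTO measurement
SELECT DISTINCT ON (moment, seriesid)
       value, moment, seriesid
FROM measurement_tmp t
WHERE NOT EXISTS (
    SELECT 1 FROM measurement m
    WHERE m.moment = t.moment
      AND m.seriesid = t.seriesid
);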

Importing variable number of columns into SQLite database

I have a list of synonyms in a CSV file with the format: word,meaning1,meaning2,meaning3....
Different words have different numbers of synonyms, which means that rows are likely to have a variable number of columns. I am trying to import the CSV file into an SQLite database like so:
sqlite3 synonyms
sqlite> create table list(word text, meaning0 text, meaning1 text, meaning2 text, meaning3 text, meaning4 text, meaning5 text, meaning6 text, meaning7 text, meaning8 text, meaning9 text);
sqlite> .mode list
sqlite> .separator ,
sqlite> .import ./csv/synonyms.csv list
To be on the safe side, I assumed a maximum of 10 synonyms per word. For words with fewer than 10 synonyms, the remaining columns should be null. The error I get on executing the import command is:
Error: ./csv/synonyms.csv line 1: expected 11 columns of data but found 3
My questions:
1. In case the number of columns is less than 10, how can I tell SQLite to substitute the missing values with null?
2. Is there some way of specifying that I want 10 meaning columns after word instead of typing them all out manually?
You can do the following:
Import all data into a single column;
Update the table, splitting the column contents into the other columns.
Sample:
-- Create a table with only one column;
CREATE TABLE table_name(first);
-- Choose a separator which doesn't exist within file
.separator ~
-- Import data
.import file.csv table_name
-- Add another column to split data
ALTER TABLE table_name ADD COLUMN second;
-- Split data between first and second column
UPDATE table_name SET first=SUBSTR(first, 1, INSTR(first, ",")-1), second=SUBSTR(first, INSTR(first, ",")+1) WHERE INSTR(first, ",")>0;
-- Repeat to next column
ALTER TABLE table_name ADD COLUMN third;
-- Split data between second and third column
UPDATE table_name SET second=SUBSTR(second, 1, INSTR(second, ",")-1), third=SUBSTR(second, INSTR(second, ",")+1) WHERE INSTR(second, ",")>0;
-- And so on...
ALTER TABLE table_name ADD COLUMN fourth;
UPDATE table_name SET third=SUBSTR(third, 1, INSTR(third, ",")-1), fourth=SUBSTR(third, INSTR(third, ",")+1) WHERE INSTR(third, ",")>0;
-- Many times as needed...
While not an optimal method, SQLite's performance should make it fast enough.
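For illustration, here is how one imported row would evolve under the updates above (the sample word is made up). Columns that never receive a split simply stay NULL, which answers question 1:

-- Immediately after .import, the whole line sits in "first":
--   first = 'cold,chilly,freezing'
-- After the first UPDATE:
--   first = 'cold', second = 'chilly,freezing'
-- After the second UPDATE:
--   first = 'cold', second = 'chilly', third = 'freezing'
-- The third UPDATE's WHERE clause no longer matches, so "fourth" stays NULL.
SELECT first, second, third, fourth FROM table_name;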

Dump file from Sqlite3 to PostgreSQL: why do I always get errors when importing it?

I have many tables in an Sqlite3 DB and now I want to export them to PostgreSQL, but I keep getting errors.
I've used different techniques to dump from sqlite:
.mode csv
.header on
.out ddd.sql
select * from my_table;
and
.mode insert
.out ddd.sql
select * from my_table;
And when I try to import it through phpPgAdmin, I get errors like this:
ERROR: column "1" of relation "my_table" does not exist
LINE 1: INSERT INTO "public"."my_table" ("1", "1", "Vitas", "a#i.ua", "..
How can I avoid this error?
Thanks in advance!
Rant
You get this "column ... does not exist" error with INSERT INTO "public"."my_table" ("1", ... - because quotes around the "1" mean this is an identifier, not literal.
Even if you fix this, the query still will give error, because of missing VAULES keyword, as Jan noticed in other answer.
The correct form would be:
INSERT INTO "public"."my_table" VALUES ('1', ...
If this SQL was autogenerated by sqlite, that's bad for sqlite.
This great chapter about SQL syntax is only about 20 pages in print. My advice to whoever generated this INSERT is: read it :-) it will pay off.
Real solution
Now, to the point... To transfer a table from sqlite to postgres, you should use COPY, because it's way faster than INSERT.
Use the CSV format, as it's understood on both sides.
In sqlite3:
create table tbl1(one varchar(20), two smallint);
insert into tbl1 values('hello',10);
insert into tbl1 values('with,comma', 20);
insert into tbl1 values('with "quotes"', 30);
insert into tbl1 values('with
enter', 40);
.mode csv
.header on
.out tbl1.csv
select * from tbl1;
In PostgreSQL (psql client):
create table tbl1(one varchar(20), two smallint);
\copy tbl1 from 'tbl1.csv' with csv header delimiter ','
select * from tbl1;
See http://wiki.postgresql.org/wiki/COPY.
It seems the "VALUES" keyword is missing:
INSERT INTO "public"."my_table" VALUES (...)
But you have to insert the values with appropriate quoting: single quotes for text, and no quotes for numbers.
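For illustration, a minimal sketch of those quoting rules (the table and values here are made up):

CREATE TABLE demo (id integer, name text);
-- the number is unquoted, the text is in single quotes:
INSERT INTO demo VALUES (1, 'Vitas');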