How to import a .csv file in PostgreSQL? - postgresql-14

create table covid19(
FIPS varchar(6),
Admin2 VARCHAR(41),
Province_State VARCHAR(40),
Country_Region VARCHAR(32),
Last_Update VARCHAR(19),
Lat VARCHAR(19),
Long_ varchar(19),
Confirmed integer,
Deaths integer,
Recovered varchar(10),
Active varchar(10),
Combined_Key VARCHAR(42),
Incident_Rate float,
Case_Fatality_Ratio float
);
copy covid19
from 'C:\Users\ryan\Downloads\10.12.2022.csv'
delimiter ','
csv header;
I have two questions.
The first is about the path of the .csv file. I copied the path directly, like this: 'C:\Users\ryan\Downloads\10.12.2022.csv'. But I get the error: could not open file "C:\Users\ryan\Downloads\10.12.2022.csv" for reading: No such file or directory. What should I do?
The other error is: syntax error at or near "csv". I guess the mistake is with 'header'. How do I correct the header?
I think the header is the first row of my .csv file, but I really don't know how to fix it.
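Regarding the first error: a plain COPY ... FROM runs on the server and reads the file from the server's filesystem, so a path that exists only on your own PC fails with exactly this "No such file or directory" message. A minimal sketch of the usual workaround, psql's client-side \copy (same options, but the file is opened by the client; the path is the one from the question):

```sql
-- \copy is a psql meta-command: the file is read on the client machine,
-- then streamed to the server
\copy covid19 from 'C:\Users\ryan\Downloads\10.12.2022.csv' with (format csv, header, delimiter ',')
```

The WITH (FORMAT csv, HEADER, ...) option-list form shown here is the modern spelling of the same DELIMITER ',' CSV HEADER options.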

Related

value too long for type character varying(512) -- why can't I import the data?

The maximum size of limited character types (e.g. varchar(n)) in Postgres is 10485760.
description on max length of postgresql's varchar
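That limit is easy to verify in a scratch database; a declared length above it is rejected at CREATE TABLE time (a quick sketch, with the error message as reported by PostgreSQL):

```sql
-- the declared length may go up to 10485760:
CREATE TABLE varchar_limit_demo (v varchar(10485760));

-- one more character fails:
-- CREATE TABLE varchar_limit_bad (v varchar(10485761));
-- ERROR:  length for type varchar cannot exceed 10485760
```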
Please download the file for testing and extract it to /tmp/2019q4; we only use pre.txt to import data.
sample data
Enter your psql session and create a database:
postgres=# create database edgar;
postgres=# \c edgar;
Create table according to the webpage:
fields in the pre table definitions
edgar=# create table pre(
id serial ,
adsh varchar(20),
report numeric(6,0),
line numeric(6,0),
stmt varchar(2),
inpth boolean,
rfile char(1),
tag varchar(256),
version varchar(20),
plabel varchar(512),
negating boolean
);
CREATE TABLE
Try to import data:
edgar=# \copy pre(adsh,report,line,stmt,inpth,rfile,tag,version,plabel,negating) from '/tmp/2019q4/pre.txt' with delimiter E'\t' csv header;
We analyse the error info:
ERROR: value too long for type character varying(512)
CONTEXT: COPY pre, line 1005798, column plabel: "LIABILITIES AND STOCKHOLDERS EQUITY 0
0001493152-19-017173 2 11 BS 0 H LiabilitiesAndStockholdersEqu..."
Time: 1481.566 ms (00:01.482)
1. The size I set for the field is just 512, far less than 10485760.
2. The content at line 1005798 is not the same as in the error info:
0001654954-19-012748 6 20 EQ 0 H ReclassificationAdjustmentRelatingToAvailableforsaleSecuritiesNetOfTaxEffect 0001654954-19-012748 Reclassification adjustment relating to available-for-sale securities, net of tax effect" 0
Now I drop the previous table, change the plabel field to text, and re-create it:
edgar=# drop table pre;
DROP TABLE
Time: 22.763 ms
edgar=# create table pre(
id serial ,
adsh varchar(20),
report numeric(6,0),
line numeric(6,0),
stmt varchar(2),
inpth boolean,
rfile char(1),
tag varchar(256),
version varchar(20),
plabel text,
negating boolean
);
CREATE TABLE
Time: 81.895 ms
Import the same data with the same copy command:
edgar=# \copy pre(adsh,report,line,stmt,inpth,rfile,tag,version,plabel,negating) from '/tmp/2019q4/pre.txt' with delimiter E'\t' csv header;
COPY 275079
Time: 2964.898 ms (00:02.965)
edgar=#
There is no error info in the psql console, so let me check the raw data '/tmp/2019q4/pre.txt':
wc -l /tmp/2019q4/pre.txt
1043000 /tmp/2019q4/pre.txt
There are 1043000 lines in the file, so how many of them were imported?
edgar=# select count(*) from pre;
count
--------
275079
(1 row)
Why was so little data imported, with no error info?
The sample data you provided is obviously not the data you are really loading. It does still show the same error, but of course the line numbers and markers are different.
That file occasionally has double quote marks where there should be single quote marks (apostrophes). Because you are using CSV mode, these stray double quotes will start multi-line strings, which span all the way until the next stray double quote mark. That is why you have fewer rows of data than lines of input, because some of the data values are giant multiline strings.
Since your data clearly isn't CSV, you probably shouldn't be using \copy in CSV format. It loads fine in text format as long as you specify "header", although that option didn't become available in text format until v15. For versions before that, you could manually remove the header line, or use PROGRAM to skip the header, like FROM PROGRAM 'tail +2 /tmp/pre.txt'. Alternatively, you could keep using CSV format but choose a different quote character, one that never shows up in your data, such as with (delimiter E'\t', format csv, header, quote E'\b')
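Applied to the command from the question, that last suggestion would look something like this (a sketch; E'\b', the backspace character, is simply assumed never to occur in the data):

```sql
\copy pre(adsh,report,line,stmt,inpth,rfile,tag,version,plabel,negating) from '/tmp/2019q4/pre.txt' with (delimiter E'\t', format csv, header, quote E'\b')
```

With the stray double quotes no longer treated as string delimiters, each input line should load as exactly one row, so the row count should match the line count (minus the header).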

Importing CSV file PostgreSQL using pgAdmin 4

I'm trying to import a CSV file into my PostgreSQL database, but I get this error:
ERROR: invalid input syntax for integer: "id;date;time;latitude;longitude"
CONTEXT: COPY test, line 1, column id: "id;date;time;latitude;longitude"
My CSV file is simple:
id;date;time;latitude;longitude
12980;2015-10-22;14:13:44.1430000;59,86411203;17,64274849
The table is created with the following code:
CREATE TABLE kordinater.test
(
id integer NOT NULL,
date date,
"time" time without time zone,
latitude real,
longitude real
)
WITH (
OIDS = FALSE
)
TABLESPACE pg_default;
ALTER TABLE kordinater.test
OWNER to postgres;
You can use the Import/Export option for this task:
Right-click on your table
Select the "Import/Export" option
Provide the proper options
Click the OK button
Alternatively, try this; it should work:
COPY kordinater.test(id,date,time,latitude,longitude)
FROM 'C:\tmp\yourfile.csv' DELIMITER ',' CSV HEADER;
Note that this command expects comma-separated values; since your file uses semicolons, either change the DELIMITER to ';' or convert the file to commas first. You could also try changing the id column type to bigint. See the COPY documentation to know more.
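If you would rather keep the file as-is, a minimal variant of the command above just declares the actual separator (the path is the hypothetical one from the answer):

```sql
COPY kordinater.test(id, date, time, latitude, longitude)
FROM 'C:\tmp\yourfile.csv' DELIMITER ';' CSV HEADER;
```

This alone still trips over the comma decimal marks in latitude and longitude, which is what the staging-table approach in the next answer solves.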
I believe the quickest way to overcome this issue is to create an intermediary temporary table, so that you can import your data and cast the coordinates as you please.
Create a similar temporary table with the problematic columns as text:
CREATE TEMPORARY TABLE tmp
(
id integer,
date date,
time time without time zone,
latitude text,
longitude text
);
And import your file using COPY:
COPY tmp FROM '/path/to/file.csv' DELIMITER ';' CSV HEADER;
Once you have your data in the tmp table, you can cast the coordinates and insert them into the test table with this command:
INSERT INTO test (id, date, time, latitude, longitude)
SELECT id, date, time, replace(latitude,',','.')::numeric, replace(longitude,',','.')::numeric from tmp;
One more thing: since you're working with geographic coordinates, I sincerely recommend taking a look at PostGIS. It is quite easy to install and makes your life much easier once you start doing calculations with geospatial data.
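For instance, with the PostGIS extension installed, the cast from the tmp table could build point geometries directly (a sketch; SRID 4326, i.e. WGS 84 lon/lat, is an assumption about your data):

```sql
CREATE EXTENSION IF NOT EXISTS postgis;

-- note the argument order: ST_MakePoint takes x (longitude) first
SELECT id,
       ST_SetSRID(ST_MakePoint(replace(longitude, ',', '.')::float,
                               replace(latitude,  ',', '.')::float),
                  4326) AS geom
FROM tmp;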

How to store spaces as NULL in a numeric column

I want to load records from my local txt file into a PostgreSQL table.
I have created the following table:
create table player_info
(
Name varchar(20),
City varchar(30),
State varchar(30),
DateOfTour date,
pay numeric(5),
flag char
)
And my local txt file contains the following data:
John|Mumbai| |20170203|55555|Y
David|Mumbai| |20170305| |N
Romcy|Mumbai| |20170405|55555|N
Gotry|Mumbai| |20170708| |Y
I am executing this:
copy player_info (Name,
City,
State,
DateOfTour,
pay_id,
flag)
from local 'D:\sample_player_info.txt'
delimiter '|' null as ''
exceptions 'D:\Logs\player_info'
What I want is: for my numeric column, if the field contains 3 spaces,
then NULL should be inserted as pay; otherwise, the 5-digit number.
pay is a column in my table whose datatype is numeric.
Is it possible to do this?
You cannot store strings in a numeric column, at all. Three spaces form a string, so they cannot be stored in the column pay, as that is defined as numeric.
A common approach to this conundrum is to create a staging table which uses less precise data types in the column definitions. Import the source data into the staging table. Then process that data so that it can be reliably added to the final table. e.g. in the staging table set a column called pay_str to NULL where pay_str = ' ' (or perhaps LIKE ' %')
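A minimal sketch of that staging approach against the sample data (the staging table and its pay_str column are illustrative names, not from the original answer):

```sql
-- staging table: everything lands as text, so any input loads
CREATE TABLE player_info_stg (
    name        varchar(20),
    city        varchar(30),
    state       varchar(30),
    dateoftour  varchar(8),
    pay_str     varchar(10),
    flag        char
);

\copy player_info_stg from 'D:\sample_player_info.txt' with (delimiter '|')

-- move into the real table, turning blank pay values into NULL
INSERT INTO player_info (Name, City, State, DateOfTour, pay, flag)
SELECT name, city, state,
       to_date(dateoftour, 'YYYYMMDD'),
       nullif(trim(pay_str), '')::numeric,
       flag
FROM player_info_stg;
```

nullif(trim(pay_str), '') collapses any run of spaces to NULL before the cast, which is exactly the "3 spaces become NULL" behaviour asked for.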

ERROR: extra data after last expected column in postgres table

I am working on a project where I need to create a new table, then import data from a CSV. I've read many similar questions ("extra data after last expected column") and answers on StackOverflow, but I still haven't found the culprit.
CREATE TABLE colleges2014_15 (
unitid integer,
intsnm text,
city text,
stabbr text,
zip_clean char,
control integer,
latitude float,
longitude float,
tutionfee_in float,
tuitionfee_out float,
pctpell float,
inc_pct_lo float,
dep_stat_pct_ind float,
dep_debt_mdn float,
ind_debt_mdn float,
pell_debt_mdn float,
ugds_men float,
ubds_women float,
locale integer,
PRIMARY KEY(unitid)
);
The table is created successfully with the 19 different columns. Then I try to import the data into the new table.
COPY colleges2014_15(
unitid,
intsnm,
city,
stabbr,
zip_clean,
control,
latitude,
longitude,
tutionfee_in,
tuitionfee_out,
pctpell,
inc_pct_lo,
dep_stat_pct_ind,
dep_debt_mdn,
ind_debt_mdn,
pell_debt_mdn,
ugds_men,
ubds_women,
locale
)
FROM '/Users/compose/Downloads/CollegeScorecard_Raw_Data x/MERGED2014_15_cleaned.csv' CSV HEADER
;
And I get the error message. I've done the following in the CSV:
Made sure it's saved as UTF-8 CSV (working on a Mac)
Already cleaned out all commas in every row
Cleaned out all NULL values
Confirmed that all the data types (integer, float, text, etc.) are correct
I've tried to simply COPY only the first column, unitid; it failed. I've tried importing only the second column (intsnm) and it failed with the same error.
The full error message when trying to COPY over all 19 columns is as follows:
An error occurred when executing the SQL command: COPY
colleges2014_15( unitid, intsnm, city, stabbr, zip_clean,
control, latitude, longitude, tutionfee_in, tuitionfee_out,
pctpell, inc_pct_...
ERROR: extra data after last expected column Where: COPY
colleges2014_15, line 2: "100654,Alabama A & M
University,Normal,AL,35762,35762,1,34.783368,-86.568502,9096,16596,0.7356,0.651..."
1 statement failed.
Execution time: 0.03s
The full error message when trying to copy simply the first column only is:
An error occurred when executing the SQL command: COPY
colleges2014_15( unitid ) FROM
'/Users/compose/Downloads/CollegeScorecard_Raw_Data
x/MERGED2014_15_cleaned.csv' CSV HEADER
ERROR: extra data after last expected column Where: COPY
colleges2014_15, line 2: "100654,Alabama A & M
University,Normal,AL,35762,35762,1,34.783368,-86.568502,9096,16596,0.7356,0.651..."
1 statement failed.
Execution time: 0.01s
Hugely appreciate any help.
It took me a while to figure out what was wrong searching on the error, so I have posted my problem to help others. My issue was inexperience with pgAdmin: pgAdmin requires the table to be created with its columns before the data is imported. I had expected that the headers would be taken from the .csv file, as most other packages I have used work that way.
If you are working with a GIS system using PostGIS, there is an easy solution. I am using QGIS 3.4, with Postgres and PostGIS installed.
In QGIS:
Select the Database menu option
Select DBManager
On the left, choose a location for the table
Select Import Layer/File
On the next window, select the following:
Input - choose the file
Table - enter a table name
Click OK
Your CSV has a ZIP column which your table and COPY statement omit.
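That matches the error context, where 35762 appears twice in the failing line (once for the raw ZIP, once for zip_clean). COPY cannot skip a column in the input, so one hedged fix is to add the missing column to both the table and the column list (a sketch; the column name zip is an assumption about the file):

```sql
ALTER TABLE colleges2014_15 ADD COLUMN zip text;

COPY colleges2014_15(
    unitid, intsnm, city, stabbr, zip, zip_clean,
    control, latitude, longitude, tutionfee_in, tuitionfee_out,
    pctpell, inc_pct_lo, dep_stat_pct_ind, dep_debt_mdn,
    ind_debt_mdn, pell_debt_mdn, ugds_men, ubds_women, locale
)
FROM '/Users/compose/Downloads/CollegeScorecard_Raw_Data x/MERGED2014_15_cleaned.csv'
CSV HEADER;
```

Note that zip_clean is declared as bare char (i.e. char(1)), so a five-digit ZIP would still be rejected as too long; you may also want to widen that column. Alternatively, load into a staging table with all of the CSV's columns and INSERT ... SELECT only the ones you need.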

SQL: auto-generated id when importing from file

I want to import some data into a Postgres database from a local CSV file. My SQL:
CREATE TABLE trajectory (
id serial,
lat varchar(40),
lon varchar(40)
);
The CSV file looks as follows:
28.218273, 21.12938
...
And my import clause:
COPY trajectory FROM 'my directory\20081023025304.plt' DELIMITER ',' CSV;
But this gives an error:
ERROR: invalid input syntax for integer: "39.984702"
SQL state: 22P02
Context: COPY trajectory, line 1, column id: "39.984702"
The main problem is that I definitely need an ID column in the DB, but the CSV file does not contain one. How can I add an auto-generated id when importing the data (i.e. ID=1 for the first row, ID=2 for the second, etc.)?
Just specify the column list, so the serial id column is filled from its default - see the COPY documentation:
copy trajectory (lat, lon) from 'my directory\20081023025304.plt' delimiter ',' csv;