sql auto generated id when importing from file - postgresql

I want to import some data into the Postgres database from local CSV file. My SQL:
CREATE TABLE trajectory (
id serial,
lat varchar(40),
lon varchar(40)
);
The CSV file looks as follows:
28.218273, 21.12938
...
And my import clause:
COPY trajectory FROM 'my directory\20081023025304.plt' DELIMITER ',' CSV;
But this gives an error:
ERROR: invalid input syntax for integer: "39.984702"
SQL state: 22P02
Context: COPY trajectory, line 1, column id: "39.984702"
The main problem is that I definitely need an ID column in the DB, but the CSV file does not contain one. How can I add an auto-generated id when importing the data (I mean ID=1 for the first row, ID=2 for the second, etc.)?

Just list the target columns explicitly; see the COPY documentation:
copy trajectory (lat, lon) from 'my directory\20081023025304.plt' delimiter ',' csv;
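With the column list, the omitted id column falls back to its serial default, so rows are numbered 1, 2, 3, … in import order. As a quick sanity check (a hypothetical query, not from the original post):
-- the serial default has filled id automatically during COPY
select id, lat, lon from trajectory order by id limit 3;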

value too long for type character varying(512) -- why can't I import the data?

The maximum size of limited character types (e.g. varchar(n)) in Postgres is 10485760.
description on max length of postgresql's varchar
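As a quick illustration of that limit (a hypothetical session, not part of the original post), declaring a longer varchar is rejected outright:
create table too_wide (c varchar(10485761));
-- ERROR: length for type varchar cannot exceed 10485760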
Please download the file for testing and extract it into /tmp/2019q4; we only use pre.txt to import data from.
sample data
Enter your psql session and create a database:
postgres=# create database edgar;
postgres=# \c edgar;
Create the table according to the webpage:
fields in pre table definitions
edgar=# create table pre(
id serial,
adsh varchar(20),
report numeric(6,0),
line numeric(6,0),
stmt varchar(2),
inpth boolean,
rfile char(1),
tag varchar(256),
version varchar(20),
plabel varchar(512),
negating boolean
);
CREATE TABLE
Try to import data:
edgar=# \copy pre(adsh,report,line,stmt,inpth,rfile,tag,version,plabel,negating) from '/tmp/2019q4/pre.txt' with delimiter E'\t' csv header;
We analyse the error info:
ERROR: value too long for type character varying(512)
CONTEXT: COPY pre, line 1005798, column plabel: "LIABILITIES AND STOCKHOLDERS EQUITY 0
0001493152-19-017173 2 11 BS 0 H LiabilitiesAndStockholdersEqu..."
Time: 1481.566 ms (00:01.482)
1. The size I set for the field is just 512, far less than 10485760.
2. The content on line 1005798 is not the same as in the error info:
0001654954-19-012748 6 20 EQ 0 H ReclassificationAdjustmentRelatingToAvailableforsaleSecuritiesNetOfTaxEffect 0001654954-19-012748 Reclassification adjustment relating to available-for-sale securities, net of tax effect" 0
Now I drop the previous table, change the plabel field to text, and re-create it:
edgar=# drop table pre;
DROP TABLE
Time: 22.763 ms
edgar=# create table pre(
id serial,
adsh varchar(20),
report numeric(6,0),
line numeric(6,0),
stmt varchar(2),
inpth boolean,
rfile char(1),
tag varchar(256),
version varchar(20),
plabel text,
negating boolean
);
CREATE TABLE
Time: 81.895 ms
Import the same data with the same copy command:
edgar=# \copy pre(adsh,report,line,stmt,inpth,rfile,tag,version,plabel,negating) from '/tmp/2019q4/pre.txt' with delimiter E'\t' csv header;
COPY 275079
Time: 2964.898 ms (00:02.965)
edgar=#
No error info in the psql console. Let me check the raw data '/tmp/2019q4/pre.txt':
wc -l /tmp/2019q4/pre.txt
1043000 /tmp/2019q4/pre.txt
There are 1043000 lines; how many rows were imported, then?
edgar=# select count(*) from pre;
count
--------
275079
(1 row)
Why was so little data imported, with no error info?
The sample data you provided is obviously not the data you are really loading. It does still show the same error, but of course the line numbers and markers are different.
That file occasionally has double quote marks where there should be single quote marks (apostrophes). Because you are using CSV mode, these stray double quotes will start multi-line strings, which span all the way until the next stray double quote mark. That is why you have fewer rows of data than lines of input, because some of the data values are giant multiline strings.
Since your data clearly isn't CSV, you probably shouldn't be using \copy in CSV format. It loads fine in text format as long as you specify "header", although that option didn't become available in text format until v15. For versions before that, you could manually remove the header line, or use PROGRAM to skip it, like FROM PROGRAM 'tail -n +2 /tmp/2019q4/pre.txt'. Alternatively, you could keep using CSV format but choose a different quote character, one that never shows up in your data, such as with (delimiter E'\t', format csv, header, quote E'\b').
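Concretely, the two workarounds might look like this (sketches assuming PostgreSQL 15+ for the header option in text format, with the same column list as above):
-- text format: the stray double quotes are then just ordinary characters
\copy pre(adsh,report,line,stmt,inpth,rfile,tag,version,plabel,negating) from '/tmp/2019q4/pre.txt' with (format text, header)
-- or keep CSV, but pick a quote character that never occurs in the data
\copy pre(adsh,report,line,stmt,inpth,rfile,tag,version,plabel,negating) from '/tmp/2019q4/pre.txt' with (delimiter E'\t', format csv, header, quote E'\b')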

how to import .csv file in PostgreSQL?

create table covid19(
FIPS varchar(6),
Admin2 VARCHAR(41),
Province_State VARCHAR(40),
Country_Region VARCHAR(32),
Last_Update VARCHAR(19),
Lat VARCHAR(19),
Long_ varchar(19),
Confirmed integer,
Deaths integer,
Recovered varchar(10),
Active varchar(10),
Combined_Key VARCHAR(42),
Incident_Rate float,
Case_Fatality_Ratio float
);
copy covid19
from 'C:\Users\ryan\Downloads\10.12.2022.csv'
delimiter ','
csv header;
I have two questions.
First, the path of the .csv file. I copied the path directly, like 'C:\Users\ryan\Downloads\10.12.2022.csv', but I get the error could not open file "C:\Users\ryan\Downloads\10.12.2022.csv" for reading: No such file or directory. What should I do?
There is another error: syntax error at or near "csv". I guess the mistake is 'header'. How do I correct the header?
I think the header is the first row of my .csv file. I really don't know how to correct it.
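A common cause of the first error: a plain COPY runs on the database server and opens the path there, so a file on the client machine is often not visible to it. A sketch of the usual workaround, psql's client-side \copy with the same options (assuming the table and file above):
-- run inside psql on the client machine; \copy reads the local file
\copy covid19 from 'C:\Users\ryan\Downloads\10.12.2022.csv' with (format csv, header)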

Error in query (7): ERROR: syntax error at or near "\" LINE 1: \copy persons (supervisor_lname, supervisor_fname, lname, fn

How do I fix this error? When I try to copy a csv into my personal database table, it gives this error.
The bigger background is that I want to import a local csv file into my database as a table. I don't know how to directly load the file as a table, so I first create an empty table, then copy the csv file into that empty table.
This is the command I used:
\copy persons (supervisor_lname, supervisor_fname, lname, fname, supervisor_id)
FROM '/Users/baoying/Downloads/sql.csv'
DELIMITER ','
CSV HEADER;
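A likely explanation for the error message itself: \copy is a psql meta-command, not SQL, so a server-side query tool rejects the leading backslash, and even in psql the whole meta-command must fit on a single line. A sketch of the single-line form (same table and path as above):
-- the entire \copy command must be on one line in psql
\copy persons (supervisor_lname, supervisor_fname, lname, fname, supervisor_id) from '/Users/baoying/Downloads/sql.csv' with (format csv, header)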

Postgres: INVALID input syntax for type numeric: "2021-02-14" ... but it's in datetime format?

I'm very confused about this error I'm getting in the Query Tool in pgAdmin. I've been working on this for days and cannot find a way to fix it when attempting to upload this csv file to my Postgres table.
ERROR: invalid input syntax for type numeric: "2021-02-14"
CONTEXT: COPY CardData, line 2, column sold_price: "2021-02-14"
SQL state: 22P02
Here is the code I am running in the Query Tool:
CREATE TABLE Public."CardData"(Title text, Sold_Price decimal, Bids int, Sold_Date date, Card_Link text, Image_Link text);
select * from Public."CardData";
COPY Public."CardData" FROM 'W:\Python_Projects\cardscrapper_project\ebay_api\card_data_test.csv' DELIMITER ',' CSV HEADER;
Here is a sample from the first row of my csv file.
Title,Sold_Date,Sold_Price,Bids,Card_Link,Image_Link
2018 Contenders Optic Sam Darnold #103 Red Rookie #/99 PSA 8 NM-MT AUTO 10,2021-02-14,104.5,26,https://www.ebay.com/itm/2018-Contenders-Optic-Sam-Darnold-103-Red-Rookie-99-PSA-8-NM-MT-AUTO-10/143935698791?hash=item21833c7767%3Ag%3AjewAAOSwNb9gGEvi&LH_Auction=1,https://i.ebayimg.com/thumbs/images/g/jewAAOSwNb9gGEvi/s-l225.jpg
The "Sold_Date" column is in the correct datetime format that is easy for Postgres to understand, but the error is calling on the "Sold-Price" column?
I'm very confused. Any help is greatly appreciated.
Notice that the columns are not in the same order in the csv file and in the table.
You would have to specify the proper column order:
COPY Public."CardData" (Title,Sold_Date,Sold_Price,Bids,Card_Link,Image_Link)
FROM 'W:\Python_Projects\cardscrapper_project\ebay_api\card_data_test.csv'
DELIMITER ',' CSV HEADER ;
You have created the table with sold_price as the second column, so the COPY command will expect a price/number to be the second column in your CSV file. Your CSV file, however has sold_date as the second column, which will lead to the data type mismatch error that you see.
Either you can re-define your CREATE TABLE statement with sold_date as the second column and sold_price as the fourth, or you can specify the column parsing order in your COPY statement as COPY public."CardData" (<column order>).
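A sketch of that first option, re-declaring the table in the CSV's column order (same column types as the original statement; assumes the existing table is dropped first):
-- columns now match the CSV header: Title,Sold_Date,Sold_Price,Bids,Card_Link,Image_Link
CREATE TABLE Public."CardData"(Title text, Sold_Date date, Sold_Price decimal, Bids int, Card_Link text, Image_Link text);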
Another option is to open up the CSV file in Excel and re-order the columns and do a Save As...

Importing CSV file PostgreSQL using pgAdmin 4

I'm trying to import a CSV file to my PostgreSQL but I get this error
ERROR: invalid input syntax for integer: "id;date;time;latitude;longitude"
CONTEXT: COPY test, line 1, column id: "id;date;time;latitude;longitude"
My csv file is simple:
id;date;time;latitude;longitude
12980;2015-10-22;14:13:44.1430000;59,86411203;17,64274849
The table is created with the following code:
CREATE TABLE kordinater.test
(
id integer NOT NULL,
date date,
"time" time without time zone,
latitude real,
longitude real
)
WITH (
OIDS = FALSE
)
TABLESPACE pg_default;
ALTER TABLE kordinater.test
OWNER to postgres;
You can use the Import/Export option for this task:
1. Right-click on your table.
2. Select the "Import/Export" option.
3. Provide the proper options.
4. Click the OK button.
You should try this; it should work:
COPY kordinater.test(id,date,time,latitude,longitude)
FROM 'C:\tmp\yourfile.csv' DELIMITER ',' CSV HEADER;
Your csv values must be separated by commas, not by semicolons; or try changing the id column type to bigint.
I believe the quickest way to overcome this issue is to create an intermediary temporary table, so that you can import your data and cast the coordinates as you please.
Create a similar temporary table with the problematic columns as text:
CREATE TEMPORARY TABLE tmp
(
id integer,
date date,
time time without time zone,
latitude text,
longitude text
);
And import your file using COPY:
COPY tmp FROM '/path/to/file.csv' DELIMITER ';' CSV HEADER;
Once you have your data in the tmp table, you can cast the coordinates and insert them into the test table with this command:
INSERT INTO kordinater.test (id, date, time, latitude, longitude)
SELECT id, date, time, replace(latitude,',','.')::numeric, replace(longitude,',','.')::numeric from tmp;
One more thing:
Since you're working with geographic coordinates, I sincerely recommend that you take a look at PostGIS. It is quite easy to install and makes your life much easier when you start your first calculations with geospatial data.