COPY FROM CSV file into PostgreSQL table and skip the first 'id' column - postgresql

A simple question, I think, but I can't seem to find the answer through Googling etc.
I am importing CSV data into a PostgreSQL table via psql. I can do this fine through the pgAdmin III GUI, but I am now using the Codio online IDE, where it is all done through psql.
How can I import into the PostgreSQL table and skip the first 'id' auto-incrementing column?
In pgAdmin it was as simple as unselecting the id column on the 'columns to import' tab.
So far, I have this in the SQL Query toolbox:
COPY products FROM '/media/username/rails_projects/app/db/import/bdname_products.csv' DELIMITER ',' CSV;
Alternatively, is it possible to get an output of the SQL that pgAdmin III used after you execute an import via the Import menu command?
Thank you for your consideration.

As explained in the manual, COPY allows you to specify a column list to read, like this:
COPY table_name ( column_name , ... ) FROM 'filename'
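For example, assuming the products table has hypothetical columns name, price and description alongside id, listing them explicitly makes COPY leave id to the sequence:
COPY products (name, price, description)
FROM '/media/username/rails_projects/app/db/import/bdname_products.csv'
DELIMITER ',' CSV;
The CSV file must then contain exactly those columns, in that order.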

Related

Populate a table using COPY into an extension?

I am writing an extension in Postgres.
To do that, I made a plain-text backup of my functions, types, etc. and I use this file for my extension.
Now I want to add an auxiliary table too. But the dump in the file for the table looks like this (after it has created the table "tAcero" and the sequence):
COPY sdmed."tAcero" (id, area, masa, tipo, tamanno) FROM stdin;
44 65.30 502.000 HEB 180
45 78.10 601.000 HEB 200
.....
more values
\.
and I wonder whether it is possible to use this COPY statement to populate the table in the extension, or whether I can only do it using "INSERT"?
Thank you.
You can indeed load tables in PostgreSQL using the COPY statement.
An example using the psql client and a CSV file:
CREATE TABLE test_of_copy (my_column text);
\COPY test_of_copy FROM './a_file_stored_locally' CSV HEADER;
Where the contents of a_file_stored_locally are:
my_column
"test_input"
Please have a read of the documentation: https://www.postgresql.org/docs/9.2/sql-copy.html. If you have any issues with this, perhaps add some more detail to your question.
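For what it is worth, a dump-style COPY ... FROM stdin block like the one in the question can also be replayed through psql as-is, since that is exactly how plain-format pg_dump output is restored. A minimal sketch, assuming the block is saved in a hypothetical file tacero_dump.sql (the values between COPY and \. are tab-separated):
COPY sdmed."tAcero" (id, area, masa, tipo, tamanno) FROM stdin;
44	65.30	502.000	HEB	180
45	78.10	601.000	HEB	200
\.
Run it with psql -d your_database -f tacero_dump.sql.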

migration from SQLite to PostgreSQL

I am trying to export a table from SQLite and import it into a PostgreSQL db, but when I try to import it into the PostgreSQL db it throws a delimiter issue. My table is already created in the PostgreSQL database. I am following the export procedure from the link below:
https://www.sqlitetutorial.net/sqlite-tutorial/sqlite-export-csv/
and got the error below on import:
DELIMITER ',' CSV HEADER QUOTE '\"' ESCAPE '''';""
Can anyone please help?
I had the same issue, but I solved it in a different way. Maybe this won't fit here, but you might still like to try it.
I first converted/transformed the data to a .csv file.
Then I created a table in the PostgreSQL database with the same number of columns and the same data types as before.
Then I used a query like the following:
COPY sports(playerid, name, age) FROM '<file location>\sports.csv' DELIMITER ',' CSV HEADER;
With this, all the columns in that table were imported into PostgreSQL.
If this worked for you, you're welcome! ;)
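For reference, here is a minimal sketch of the whole round trip, assuming a hypothetical sports(playerid, name, age) table on both sides. Export from the sqlite3 shell:
.headers on
.mode csv
.output sports.csv
SELECT playerid, name, age FROM sports;
.output stdout
Then import from the psql shell; the client-side \copy variant avoids server file-permission issues:
\copy sports(playerid, name, age) FROM 'sports.csv' DELIMITER ',' CSV HEADER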

How to COPY CSV as JSON fields

Is there a way to COPY the CSV file data directly into a JSON or JSONb array?
Example:
CREATE TABLE mytable (
id serial PRIMARY KEY,
info jsonb -- or JSON
);
COPY mytable(info) FROM '/tmp/myfile.csv' HEADER csv;
NOTE: each CSV line should be mapped to a JSON array. It is a normal CSV.
Normal CSV (no embedded JSON)... /tmp/myfile.csv =
a,b,c
100,Mum,Dad
200,Hello,Bye
The correct COPY command should be equivalent to the usual COPY below.
Usual COPY (ugly but works fine):
CREATE TEMPORARY TABLE temp1 (
a int, b text, c text
);
COPY temp1(a,b,c) FROM '/tmp/myfile.csv' HEADER csv;
INSERT INTO mytable(info) SELECT json_build_array(a,b,c) FROM temp1;
It is ugly because:
it needs a priori knowledge of the fields, and a previous CREATE TABLE with them;
for "big data" it needs a big temporary table, wasting CPU, disk and my time (the table mytable has CHECK and UNIQUE constraints for each line);
it needs more than one SQL command.
Perfect solution!
No need to know all the CSV columns; only extract the ones you know.
In SQL, use CREATE EXTENSION plpythonu;. If the command produces an error like "could not open extension control file ... No such file", you need to install the extra PL/Python packages. On standard Ubuntu (16.04 LTS) it is simple: apt install postgresql-contrib postgresql-plpython.
CREATE FUNCTION get_csvfile(
    file text,
    delim_char char(1) = ',',
    quote_char char(1) = '"'
) RETURNS setof text[] STABLE LANGUAGE plpythonu AS $$
# read the CSV file server-side and return each line as a text[] row
import csv
return csv.reader(
    open(file, 'rb'),
    quotechar=quote_char,
    delimiter=delim_char,
    skipinitialspace=True,
    escapechar='\\'
)
$$;
INSERT INTO mytable(info)
SELECT jsonb_build_array(c[1],c[2],c[3])
FROM get_csvfile('/tmp/myfile1.csv') c;
The split_csv() function was defined here. csv.reader is very reliable (!).
Not tested with very large CSVs, but Python is expected to do the job.
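To illustrate extracting only the columns you know, here is a sketch that keeps just the first and third CSV fields (a hypothetical choice) and ignores the rest:
INSERT INTO mytable(info)
  SELECT jsonb_build_array(c[1], c[3])
  FROM get_csvfile('/tmp/myfile1.csv') c;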
PostgreSQL workaround
It is not a perfect solution, but it solves the main problem, namely the
"... big temporary table, wasting CPU, disk and my time" issue.
This is the way we do it, a workaround with file_fdw!
1. Adopt a convention that avoids file-copy and file-permission confusion: use a standard file path for the CSV, e.g. /tmp/pg_myPrj_file.csv.
2. Initialise your database or SQL script with the magic extension:
CREATE EXTENSION file_fdw;
CREATE SERVER files FOREIGN DATA WRAPPER file_fdw;
3. For each CSV file (e.g. myNewData.csv):
3.1. Make a symbolic link (or a remote copy with scp) for your new file: ln -sf $PWD/myNewData.csv /tmp/pg_socKer_file.csv
3.2. Configure file_fdw for your new table (suppose mytable):
CREATE FOREIGN TABLE temp1 (a int, b text, c text)
SERVER files OPTIONS (
filename '/tmp/pg_socKer_file.csv',
format 'csv',
header 'true'
);
PS: if you hit a permission problem after running the SQL script with psql, change the owner of the link with sudo chown -h postgres:postgres /tmp/pg_socKer_file.csv.
3.3. Use the file_fdw table as the source (suppose we are populating mytable):
INSERT INTO mytable(info)
SELECT json_build_array(a,b,c) FROM temp1;
Thanks to #JosMac (and his tutorial)!
NOTE: if there is a STDIN way to do it (does one exist?), it would be easier, avoiding permission problems and the use of absolute paths. See this answer/discussion.

How should I import data from CSV into a Postgres table using pgAdmin 3?

Is there any plugin or library which I need to use for this?
I want to try this on my local system first and then do the same on Heroku PostgreSQL.
pgAdmin has had a GUI for data import since 1.16. You have to create your table first, and then you can import data easily: just right-click on the table name and click Import.
Assuming you have a SQL table called mydata, you can load data from a CSV file as follows:
COPY MYDATA FROM '<PATH>/MYDATA.CSV' CSV HEADER;
For more details refer to: http://www.postgresql.org/docs/9.2/static/sql-copy.html
You may have a table called 'test':
COPY test(gid, "name", the_geom)
FROM '/home/data/sample.csv'
WITH DELIMITER ','
CSV HEADER

SELECT using schema name

I have an issue with psql. I am trying to select the records from a table, but psql acts like the table doesn't exist. I tried to find it and found that it resides in the 'public' schema. I have tried selecting from this table like so:
highways=# SELECT * FROM public.CLUSTER_128000M;
This does not work, giving the following error:
ERROR: relation 'public.CLUSTER_128000M' does not exist
I know that it definitely exists and that it is definitely in the 'public' schema, so how can I perform a SELECT statement on it?
Edit:
This was caused by using FME to create my tables. As a result, FME put " marks around the table names, making them case-sensitive. To reverse this, see the comments below.
This issue was caused by the third-party software FME using quotes around the names of the tables at creation time. The solution to make the tables usable again was to use the following command:
ALTER TABLE "SOME_NAME" RENAME TO some_name
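For completeness, a quoted identifier can also be queried without renaming it, as long as you quote it the same way it was created. Assuming the FME-created name, either form works:
SELECT * FROM public."CLUSTER_128000M";          -- quotes preserve the original case
ALTER TABLE public."CLUSTER_128000M" RENAME TO cluster_128000m;  -- or rename once
SELECT * FROM public.cluster_128000m;            -- and drop the quotes afterwards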