I am a beginner with PostgreSQL. I am trying to import a CSV file through pgAdmin 4, but I seem to be having some trouble. The CSV file is saved on my desktop and has no header row. Here is what my query currently looks like:
COPY opioid_csv(Substance, Source, Specific_Measure, Type_event, Region, PRUID, Time_Period, Year_Quarter, Aggregator, Disaggregator, Unit, Value)
FROM 'Users/myusername/Desktop/opioid_csv.csv'
DELIMITER ','
I have created a table with all the column names as well, but I am getting the following error:
ERROR: relation "opioid_csv" does not exist SQL state: 42P01
Try this, using psql:
postgres=# \dt *.opioid_csv
List of relations
Schema | Name | Type | Owner
--------------+------------+-------+-------
marcothesane | opioid_csv | table | marco
(1 row)
Very probably, if you did succeed in creating your opioid_csv table, you created it in a schema other than public. You would have to put the schema (marcothesane in my example) before the table name: use marcothesane.opioid_csv instead of just opioid_csv.
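For example, a sketch of the schema-qualified command (assuming the table really does live in a schema like marcothesane; note also that COPY needs an absolute path, so the leading slash in /Users/... matters):
COPY marcothesane.opioid_csv (Substance, Source, Specific_Measure, Type_event, Region, PRUID, Time_Period, Year_Quarter, Aggregator, Disaggregator, Unit, Value)
FROM '/Users/myusername/Desktop/opioid_csv.csv'
DELIMITER ','
CSV;
Alternatively, run SET search_path TO marcothesane; first, and the unqualified table name will resolve.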
Let me try to do it with a CSV file of mine:
marco ~/1/Vertica/supp $ head test.csv
id,text
1,1VIAgFg
2,IfPLHbT
3,EWOmXAx
4,zl8paoh
5,9EpQ9Kx
6,ZpagcCh
7,6xoVoit
8,mCniu1U
9,euieQZa
Matching table DDL - using my own d2l DDL inferrer:
marco ~/1/Vertica/supp $ d2lm -coldelcomma test.csv | tee test.ddl.sql
CREATE TABLE IF NOT EXISTS test (
id INTEGER NOT NULL
, text CHAR(7) NOT NULL
);
Then, I run psql with that newly generated script:
marco ~/1/Vertica/supp $ psql -af test.ddl.sql
CREATE TABLE IF NOT EXISTS test (
id INTEGER NOT NULL
, text CHAR(7) NOT NULL
);
CREATE TABLE
Then, as I have a header line in my file, I COPY with csv header:
marco ~/1/Vertica/supp $ psql -c "copy test from '/Users/marco/1/Vertica/supp/test.csv' delimiter ',' csv header"
COPY 50000
marco ~/1/Vertica/supp $ psql -c "select count(*) from test"
count
-------
50000
What are you doing differently?
I am trying to run the below command on the Unix console:
env 'PGOPTIONS=-c search_path=admin -c client_min_messages=error' psql -h hostname -U user -p 1111 -d platform -c "CREATE TEMP TABLE admin.tmp_213 AS SELECT * FROM admin.file_status limit 0;\copy admin.tmp_213(feed_id,run_id,extracted_date,file_name,start_time,file_size,status,crypt_flag) FROM '/opt/temp/213/list.up' delimiter ',' csv;UPDATE admin.file_status SET file_size = admin.tmp_213.file_size FROM admin.tmp_213 A WHERE admin.file_status.feed_id = A.feed_id and admin.file_status.file_name = A.file_name;"
I am getting the below error:
ERROR: syntax error at or near "\"
LINE 1: ...* FROM admin.file_status limit 0;\copy admi...
If I use the above command without a backslash before COPY, it gives me the below error:
ERROR: cannot create temporary relation in non-temporary schema
I am doing the above to implement the solution as mentioned here:
How to update selected rows with values from a CSV file in Postgres?
Your first error occurs because psql metacommands like \copy cannot be combined with regular SQL commands in the same -c string. You can give several -c options instead, with one command in each.
The second error is self-explanatory. You don't get to decide what schema your temp table goes to. Just omit the schema.
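A sketch of the split invocation (untested; it assumes psql 9.6 or later, which accepts multiple -c options, all run over the same connection, so the session-scoped temp table survives from one command to the next; the UPDATE references the temp table through its alias):
env 'PGOPTIONS=-c search_path=admin -c client_min_messages=error' psql -h hostname -U user -p 1111 -d platform \
  -c "CREATE TEMP TABLE tmp_213 AS SELECT * FROM admin.file_status LIMIT 0" \
  -c "\copy tmp_213(feed_id,run_id,extracted_date,file_name,start_time,file_size,status,crypt_flag) FROM '/opt/temp/213/list.up' delimiter ',' csv" \
  -c "UPDATE admin.file_status SET file_size = A.file_size FROM tmp_213 A WHERE admin.file_status.feed_id = A.feed_id AND admin.file_status.file_name = A.file_name"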
I want to use the COPY ... CSV method to load a CSV file containing an array of JSON. I tried looking around on Stack Overflow, but most of the solutions go through a language like Python; I haven't found one using COPY ... CSV.
My Postgres table is this:
CREATE TABLE individuals
(
arr jsonb[]
)
My CSV is something like this:
arr
-------
{{"y": 0, "x": 0}}
And here is how I do the copy command:
cat 'file.csv' | psql -h localhost -p 5432 -d testdb -U user -c "COPY individuals(arr) FROM STDIN DELIMITER ',' CSV HEADER;"
I got this error:
ERROR: malformed array literal: "{{"y": 0, "x": 0}}"
DETAIL: Unexpected array element.
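The text form PostgreSQL expects for a jsonb[] value quotes each array element and backslash-escapes the quotes inside it, which the plain field above does not do. A minimal sketch of the expected literal, shown via INSERT (an assumption about the cause, not a tested fix for the COPY itself):
INSERT INTO individuals (arr)
VALUES (ARRAY['{"y": 0, "x": 0}'::jsonb]);
-- Render the stored value back in its text form:
SELECT arr::text FROM individuals;
-- {"{\"y\": 0, \"x\": 0}"}
A CSV field would need that backslash escaping plus CSV quote doubling on top, which is why a plain jsonb column holding a JSON array is usually far easier to bulk-load than jsonb[].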
I have a table like this in the server:
CREATE TABLE example_table (
id BIGSERIAL PRIMARY KEY,
name VARCHAR(70) NOT NULL,
status VARCHAR(70) NOT NULL CONSTRAINT status_enum CHECK (status IN ('old', 'new')),
UNIQUE (id, name)
);
And I have an SQL file, example.sql. The first line contains a header:
name_of_class,status
'CLASSNAME','old';
And I try to run a psql \copy against the Google server:
PGPASSWORD=password psql -d database --username username --port 5432 --host 11.111.111 << EOF
BEGIN;
\copy example_table(name,status) FROM example.sql DELIMITER ',' CSV Header
COMMIT;
EOF
I then get this error:
ERROR: new row for relation "example_table" violates check constraint "status_enum"
DETAIL: Failing row contains (1, 'CLASSNAME', 'old';).
CONTEXT: COPY example_table, line 2: "'CLASSNAME','old';"
ROLLBACK
Any idea how to solve this? 🙂
It appears that your source CSV uses ' (single quote) to quote all the columns. You can specify that as the quote character using the QUOTE option.
As it stands, the \copy command is trying to load the literal string 'old', quotes included, into the status column, whose constraint checks that the value is either new or old. The extra quotes violate the constraint.
\copy example_table(name,status) FROM example.sql DELIMITER ',' CSV Header QUOTE ''''
Four single quotes are required: the outer two delimit the string literal, and the inner two are one escaped quote character, which is the actual quote char. Note that the trailing semicolon in the sample line ('CLASSNAME','old';) would still end up in the status value and violate the constraint, so it needs stripping from the file as well.
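For illustration, the same escaping rule in plain SQL:
SELECT '''';
-- returns a single character: '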
I'm trying to copy a table from one database to another database (NOT another schema). The command I used in the terminal is below:
pg_dump -U postgres -t OldSchema.TableToCopy OldDatabase | psql -U postgres -d NewDatabase
When I press Enter, it requests the postgres password; I enter my password, and then it requests the psql password. I enter that and press Enter. I receive lots of:
invalid command \N
ERROR: relation "TableToCopy" does not exist
Both databases have UTF8 encoding. Am I doing something wrong?
OS: Windows XP
Error output:
psql:TblToCopy.sql:39236: invalid command \N
psql:TblToCopy.sql:39237: invalid command \N
psql:TblToCopy.sql:39238: invalid command \N
...
After hundreds of these errors, the terminal echoes:
psql:TblToCopy.sql:39245: ERROR: syntax error at or near "509"
LINE 1: 509 some gibberish words and letters here
And finally:
psql:TblToCopy.sql:39245: ERROR: relation "TableToCopy" does not exist
EDIT
I read this response to the same problem, \N error with psql; it says to use INSERT instead of COPY, but the file pg_dump created uses COPY. How do I tell pg_dump to use INSERT instead of COPY?
I converted the file to UTF-8 with iconv. That error is now gone, but I have a new one: when I use psql to import the data into the database, the table gets created but without data. It says:
SET
SET
SET
SET
SET
SET
SET
SET
CREATE TABLE
ALTER TABLE
psql:tblNew.sql:39610: ERROR: value too long for type character(3)
CONTEXT: COPY words, line 1, column first_two_letters: "سر"
ALTER TABLE
ALTER TABLE
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE TRIGGER
I tried creating a database with encoding UTF8, adding a table, and inserting the two UTF-8 encoded characters the COPY command is trying to load, and it works when using INSERT.
CREATE DATABASE test
WITH OWNER = postgres
ENCODING = 'UTF8'
TABLESPACE = pg_default
LC_COLLATE = 'English_United States.1252'
LC_CTYPE = 'English_United States.1252'
CONNECTION LIMIT = -1;
CREATE TABLE x
(
first_two_letters character(3)
)
WITH (
OIDS=FALSE
);
ALTER TABLE x
OWNER TO postgres;
INSERT INTO x(
first_two_letters)
VALUES ('سر');
According to http://rishida.net/tools/conversion/, the Unicode code points for the failing COPY are:
U+0633 U+0631
That is two characters, so you should be able to store them in a column defined as character(3), which holds strings up to 3 characters (not bytes) in length.
And if we try to INSERT, it succeeds:
INSERT INTO x(
first_two_letters)
VALUES (U&'\0633\0631');
From the pg_dump documentation, you can emit INSERT commands instead of COPY by using the --inserts option:
--inserts
Dump data as INSERT commands (rather than COPY). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-PostgreSQL databases. However, since this option generates a separate command for each row, an error in reloading a row causes only that row to be lost rather than the entire table contents. Note that the restore might fail altogether if you have rearranged column order. The --column-inserts option is safe against column order changes, though even slower.
Try this instead for step 1:
pg_dump -U postgres -t OldSchema."TableToCopy" --inserts OldDatabase > Table.sql
I've also tried COPYing from a table to a file and using COPY to import it back, and for me it works.
Are you sure both your client and your server database encoding are UTF8?
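You can check both from within psql:
SHOW server_encoding;
SHOW client_encoding;
-- or, for the client side only:
\encoding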
Firstly, export the table named "x" from schema "public" on database "test" to a plain text SQL file:
pg_dump -U postgres -t public."x" test > x.sql
which creates the x.sql file that contains:
--
-- PostgreSQL database dump
--
SET statement_timeout = 0;
SET lock_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET search_path = public, pg_catalog;
SET default_tablespace = '';
SET default_with_oids = false;
--
-- Name: x; Type: TABLE; Schema: public; Owner: postgres; Tablespace:
--
CREATE TABLE x (
first_two_letters character(3)
);
ALTER TABLE public.x OWNER TO postgres;
--
-- Data for Name: x; Type: TABLE DATA; Schema: public; Owner: postgres
--
COPY x (first_two_letters) FROM stdin;
سر
\.
--
-- PostgreSQL database dump complete
--
Secondly, import with:
psql -U postgres -d test -f x.sql
The table name should be quoted, as follows:
pg_dump -U postgres -t OldSchema."TableToCopy" OldDatabase | psql -U postgres -d NewDatabase
And I suggest you do the job in two steps:
Step 1
pg_dump -U postgres -t OldSchema."TableToCopy" OldDatabase > Table.sql
If step 1 goes OK, then do step 2.
Step 2
psql -U postgres -d NewDatabase -f Table.sql
Using psql, is there a way to run a SELECT statement whose output is a list of INSERT statements, so that I can execute those INSERT statements somewhere else?
SELECT * FROM foo where some_fk=123;
should output:
INSERT INTO foo
(column1,column2,...) VALUES
('abc','xyyz',...),
('aaa','cccc',...),
.... ;
I could then redirect that to a file, say export.sql, and import it with psql -f export.sql.
My goal is to export the result of a SELECT statement in a format that I can import into another database instance with exactly the same table structure.
Have a look at the --inserts option of pg_dump:
pg_dump -t your_table --inserts -f somefile.txt your_db
Edit the resulting file if necessary.
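The dump will then contain one INSERT per row, roughly of this shape (illustrative, not real output; column lists appear only if you use --column-inserts):
INSERT INTO your_table VALUES ('abc', 'xyyz');
INSERT INTO your_table VALUES ('aaa', 'cccc');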
For a subset, as IgorRomanchenko mentioned, you can use COPY with a SELECT statement.
Example of COPYing as CSV:
COPY (select * from table where foo='bar') TO '/path/to/file.csv' CSV HEADER
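To pull that file into the other instance, a sketch assuming a table with the same structure already exists there:
\copy foo FROM '/path/to/file.csv' CSV HEADER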