I have an empty table in PostgreSQL:
CREATE TABLE public.tbltesting
(
"ID" integer,
"testValue" numeric
)
WITH (
OIDS = FALSE
)
TABLESPACE pg_default;
I have a CSV file with the following data:
ID,testValue
1,2.0
2,3.33
3,4
The file is huge and requires a bulk copy, so I am trying to run the following command from psql:
\copy tblfoodnutrients FROM 'C:\temp\tbltestingbulk.csv' with CSV HEADER
ERROR: relation "tblfoodnutrients" does not exist
I have also tried the following :
\copy public.tblfoodnutrients FROM 'C:\temp\tbltestingbulk.csv' with CSV HEADER
ERROR: relation "public.tblfoodnutrients" does not exist
DUH! My connection was missing the database name to begin with.
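For reference, a minimal sketch of the working sequence once the database name is included in the connection, assuming the database is called testdb (substitute your own):
psql -U postgres -d testdb
\copy public.tbltesting FROM 'C:\temp\tbltestingbulk.csv' WITH CSV HEADER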
I am running a script that creates a database and some tables with foreign keys and inserts some data, but somehow creating the tables is not working, although it doesn't throw any error: I go to pgAdmin, look for the created tables, and there are none...
When I copy the text of my script and execute it in the Query Tool, it works fine and creates the tables.
Can you please explain what I am doing wrong?
Script:
DROP DATABASE IF EXISTS test01 WITH (FORCE); --drops even if in use
CREATE DATABASE test01
WITH
OWNER = postgres
ENCODING = 'UTF8'
LC_COLLATE = 'German_Germany.1252'
LC_CTYPE = 'German_Germany.1252'
TABLESPACE = pg_default
CONNECTION LIMIT = -1
IS_TEMPLATE = False
;
CREATE TABLE customers
(
customer_id INT GENERATED ALWAYS AS IDENTITY,
customer_name VARCHAR(255) NOT NULL,
PRIMARY KEY(customer_id)
);
CREATE TABLE contacts
(
contact_id INT GENERATED ALWAYS AS IDENTITY,
customer_id INT,
contact_name VARCHAR(255) NOT NULL,
phone VARCHAR(15),
email VARCHAR(100),
PRIMARY KEY(contact_id),
CONSTRAINT fk_customer
FOREIGN KEY(customer_id)
REFERENCES customers(customer_id)
ON DELETE CASCADE
);
INSERT INTO customers(customer_name)
VALUES('BlueBird Inc'),
('Dolphin LLC');
INSERT INTO contacts(customer_id, contact_name, phone, email)
VALUES(1,'John Doe','(408)-111-1234','john.doe@bluebird.dev'),
(1,'Jane Doe','(408)-111-1235','jane.doe@bluebird.dev'),
(2,'David Wright','(408)-222-1234','david.wright@dolphin.dev');
I am calling the script from a Windows console like this:
"C:\Program Files\PostgreSQL\15\bin\psql.exe" -U postgres -f "C:\Users\my user name\Desktop\db_create.sql" postgres
My script is edited in Notepad++ and saved with Encoding set to UTF-8 without BOM, as per a suggestion found here.
I see you are using the -U postgres command-line parameter and also passing the database name as the last parameter (postgres).
So all of your SQL commands were executed while you were connected to the postgres database. CREATE DATABASE did create the test01 database, but CREATE TABLE and INSERT INTO ran against the postgres database, not test01, so all of your tables are in postgres rather than test01.
You need to split your SQL script into two scripts (files): the first for CREATE DATABASE, the second for the rest.
You need to execute first script as before, like
psql.exe -U postgres -f "db_create_1.sql" postgres
And for second one need to choose the database which was created at 1st step, like
psql.exe -U postgres -f "db_create_2.sql" test01
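Alternatively, if you would rather keep a single file, psql's \connect (\c) meta-command switches the session to another database mid-script, so everything after it runs in test01. A minimal sketch showing just the first table:
DROP DATABASE IF EXISTS test01 WITH (FORCE);
CREATE DATABASE test01;
\connect test01
CREATE TABLE customers
(
customer_id INT GENERATED ALWAYS AS IDENTITY,
customer_name VARCHAR(255) NOT NULL,
PRIMARY KEY(customer_id)
);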
I just set up a new foreign table, and it works as intended if I select only the "ID" (integer) field.
When I add the "Description" (text) field and try to select from the table, it fails with this error message:
'utf-8' codec can't decode byte 0xfc in position 10: invalid start byte
After checking the remote table, I found that "Description" contains special characters like "ö, ü, ä".
What can I do to fix this?
Table definitions (only the first two columns):
Remote table:
CREATE TABLE test (
[Id] [char](8) NOT NULL,
[Description] [nvarchar](50) NOT NULL
)
Foreign table:
Create Foreign Table "Test" (
"Id" Char(8),
"Description" VarChar(50)
) Server "Remote" Options (
schema_name 'dbo', table_name 'test'
);
Additional information:
Foreign data wrapper: tds_fdw
Local server: Postgres 12, encoding: UTF8
Remote server: Sql Server, encoding: Latin1_General_CI_AS
As Laurenz Albe suggested in the comments, I created a freetds.conf in my PostgreSQL folder with the following content:
[global]
tds version = auto
client charset = UTF-8
Don't forget to set the path to the configuration file in the environment variable FREETDS.
Powershell:
[System.Environment]::SetEnvironmentVariable('FREETDS','C:\Program Files\PostgreSQL\12',[System.EnvironmentVariableTarget]::Machine)
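Since a machine-level environment variable only reaches processes started after it is set, restart the PostgreSQL service so the server picks up FREETDS. A sketch, assuming the default service name used by the version 12 installer:
Restart-Service postgresql-x64-12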
I have a table xml_table_data. Below are the structure of the table and the sample data I am inserting.
But I want to insert multiple XML files (here there are 9) into the table in one go. These files reside in a DB directory.
My code to insert xml file into the table:
CREATE TABLE xml_table_data (
file_name VARCHAR2(100),
insert_date TIMESTAMP,
xml_data XMLTYPE
);
INSERT INTO xml_table_data (file_name, insert_date, xml_data)
VALUES ('DataTransfer_HH_TWWholesale_001_004_12142020113003.xml',
SYSTIMESTAMP,
XMLTYPE (BFILENAME ('TESTING', 'DataTransfer_HH_TWWholesale_001_004_12142020113003.xml'), NLS_CHARSET_ID ('AL32UTF8')));
Please help me with this. Thanks for reading my query.
You can use an external table with preprocessing to read the filenames from the directory.
ALTER SESSION SET CONTAINER=pdb1;
CREATE DIRECTORY data_dir AS '/u02/data';
CREATE DIRECTORY script_dir AS '/u02/scripts';
CREATE DIRECTORY log_dir AS '/u02/logs';
GRANT READ, WRITE ON DIRECTORY data_dir TO demo1;
GRANT READ, EXECUTE ON DIRECTORY script_dir TO demo1;
GRANT READ, WRITE ON DIRECTORY log_dir TO demo1;
Create a list_files.sh file in the scripts directory. Make sure oracle is the owner and the privileges on the file are 755.
The preprocessing script does not inherit the $PATH environment variable, so you have to prepend /usr/bin to all commands.
#!/bin/sh
/usr/bin/ls -1 /u02/data/test*.xml | /usr/bin/xargs -n1 /usr/bin/basename
You also need a source file for the external table, but this can be an empty dummy file.
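For example, the empty placeholder can be created in the data directory from above:
/usr/bin/touch /u02/data/dummy.txt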
CREATE TABLE data_files
( file_name VARCHAR2(255))
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY data_dir
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE CHARACTERSET AL32UTF8
PREPROCESSOR script_dir:'list_files.sh'
BADFILE log_dir:'list_files_%a_%p.bad'
LOGFILE log_dir:'list_files_%a_%p.log'
FIELDS TERMINATED BY WHITESPACE
)
LOCATION ('dummy.txt')
)
REJECT LIMIT UNLIMITED;
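Before loading, it is worth a quick sanity check that the external table (and therefore the preprocessor script) returns the expected file names:
SELECT file_name FROM data_files;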
Now you can insert the xml data into your table.
INSERT INTO xml_table_data
( file_name,
insert_date,
xml_data
)
SELECT file_name,
SYSTIMESTAMP,
XMLTYPE (BFILENAME ('DATA_DIR', file_name), NLS_CHARSET_ID ('AL32UTF8'))
FROM data_files;
You will still need to adapt the example to your own environment.
I have exported Oracle database table data into a CSV file and am importing the same data into a PostgreSQL database using the \copy command via the command prompt. While importing, I'm getting the error below because of a timestamp issue.
psql command:
\copy "CSV_IMPORT"."DUMMY_TABLE" FROM 'D:\Database_Auto\DUMMY_TABLE_DATA.csv' DELIMITER ',' CSV HEADER;
CSV_IMPORT is the schema name
DUMMY_TABLE is the table name
Error:
ERROR: invalid input syntax for type timestamp with time zone: "21-JUN-07 06.42.43.950926000 PM"
CONTEXT: COPY DUMMY_TABLE, line 2, column updated_date: "21-JUN-07 06.42.43.950926000 PM"
If I modify the timestamp data to use : instead of ., as in 21-JUN-07 06:42:43.950926000 PM, the record imports without any error. I can't do that manually for millions of records in the CSV file. Is there any solution via the psql command?
Table Create Script:
CREATE TABLE "CSV_IMPORT"."DUMMY_TABLE"
(
ID VARCHAR(100) NOT NULL
, DOCK_TYPE VARCHAR(1) NOT NULL
, START_DATE TIMESTAMP(6) WITH TIME ZONE NOT NULL
, UPDATE_SEQ_NBR DOUBLE PRECISION NOT NULL
, END_DATE TIMESTAMP(6) WITH TIME ZONE
, CONSTRAINT PK_DUMMY_TABLE PRIMARY KEY
(
ID
, DOCK_TYPE
, START_DATE
, UPDATE_SEQ_NBR
)
);
Table Data in CSV file:
"ID","DOCK_TYPE","START_DATE","UPDATE_SEQ_NBR","END_DATE"
"756748","L",21-JUN-07 06.42.43.950926000 PM,1,21-JUN-07 06.42.43.950926000 PM
"658399","T",15-NOV-03 02.59.54.000000000 AM,2,15-NOV-03 02.59.54.000000000 AM
"647388","F",19-NOV-04 11.09.05.000000000 PM,3,19-NOV-04 11.09.05.000000000 PM
Your best option is to re-do the export from Oracle and use to_char() to format the timestamp correctly.
If that is not feasible, then change your DUMMY_TABLE column to text instead of timestamptz and use to_timestamp(<tstz_column>, 'DD-MON-YY HH.MI.SS.US000 PM') to parse it inside of PostgreSQL.
If you were not stuck on Windows, you could use \copy ... from program and use sed to clean up your export on the fly.
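For completeness, a rough sketch of that sed approach on Linux; the regular expression assumes exactly the layout shown above (two-digit hours and minutes, then seconds with nine fractional digits), so test it on a copy of the data first:
sed -E 's/([0-9]{2})\.([0-9]{2})\.([0-9]{2}\.[0-9]{9})/\1:\2:\3/g' DUMMY_TABLE_DATA.csv > DUMMY_TABLE_DATA_fixed.csv
Then import the rewritten file as before:
\copy "CSV_IMPORT"."DUMMY_TABLE" FROM 'DUMMY_TABLE_DATA_fixed.csv' DELIMITER ',' CSV HEADER;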
I want to import a CSV file into a database table, but it is not working.
I am running a bash shell in a Linux environment.
CREATE TABLE test.targetDB (
no int4 NOT NULL GENERATED ALWAYS AS IDENTITY,
year varchar(15) NOT NULL,
name bpchar(12) NOT NULL,
city varchar(15) NOT NULL,
ts_load timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (no)
);
test.csv file
"2019","112 ","1123",2019-07-26-05.33.43.000000
Linux Cmd
psql -d $database -c " COPY test.targetDB from 'test.csv' delimiter ',' csv "
Error
ERROR: invalid input syntax for integer: "2019"
CONTEXT: COPY targetDB, line 1, column no: "2019"
How can I resolve this issue?
You need to tell COPY that the no column is not part of the CSV file by specifying the columns that should be populated:
COPY test.targetDB(year, name, city, ts_load) from 'test.csv' delimiter ',' csv
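Mirroring your original invocation, the column list simply goes inside the quoted command (note that server-side COPY resolves a relative path against the server's data directory):
psql -d $database -c "COPY test.targetDB(year, name, city, ts_load) from 'test.csv' delimiter ',' csv"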
I would recommend using DataGrip, a PostgreSQL client tool. You can use an evaluation version if you don't wish to purchase it. Importing a file from the UI is pretty simple compared to using the command line.