I am writing a PL/pgSQL function that needs to import files into a table.
I have created a temporary table with 4 columns:
CREATE TEMPORARY TABLE IF NOT EXISTS tmp_ID_Customer (
ID int4 NULL,
Name varchar(2000) NULL,
CodeEx varchar(256) NULL,
AccountID varchar(256) NULL
) ON COMMIT DROP;
I am then trying to copy a file into this table with the following:
EXECUTE format('COPY tmp_ID_Customer FROM %L (FORMAT CSV, HEADER TRUE, DELIMITER(''|''))', _fileName);
The issue I have is that some of these files only contain the first 3 columns.
So I am receiving an error saying
extra data after last expected column
I've tried specifying the columns, but as the final column doesn't always exist, I get an error.
Specify the columns you are copying:
COPY tmp_ID_Customer (id, name, codeex) FROM ...
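Building on that, here is a minimal sketch of how the dynamic COPY inside the PL/pgSQL function could pick its column list per file. The function name import_id_customer and the _has_account_id flag are hypothetical (replace the flag with however you detect the file layout), and the sketch assumes tmp_ID_Customer already exists in the session:
CREATE OR REPLACE FUNCTION import_id_customer(_fileName text, _has_account_id boolean)
RETURNS void
LANGUAGE plpgsql
AS $$
DECLARE
    _cols text;
BEGIN
    -- Pick the column list that matches the incoming file's layout.
    _cols := CASE WHEN _has_account_id
                  THEN 'id, name, codeex, accountid'
                  ELSE 'id, name, codeex'
             END;
    -- %s injects the trusted column list, %L safely quotes the file path.
    EXECUTE format(
        'COPY tmp_ID_Customer (%s) FROM %L WITH (FORMAT csv, HEADER true, DELIMITER ''|'')',
        _cols, _fileName);
END;
$$;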
This is my first question here on SO, so let me describe the setup:
I have a PostgreSQL database (version 12) with a table guilds (containing an internal guild_id and a few other pieces of information). The guild_id is used as a foreign key in many other tables, such as a teams table. Now, if a team is inserted into teams for a guild other than the guild with guild_id = 1, I want a trigger function to create the same team entry, but with a modified guild_id (it should then be 1).
Definitions of the relevant objects I have at the moment:
create table if not exists bot.guilds
(
guild_id bigserial not null
constraint guilds_pk
primary key,
guild_dc_id bigint not null
);
create table if not exists bot.teams
(
team_id bigserial not null
constraint teams_pk
primary key,
guild_id bigserial not null
constraint teams_guilds_guild_id_fk
references bot.guilds
on delete cascade,
team_name varchar(20) not null,
team_nickname varchar(10) not null
);
alter table bot.teams
owner to postgres;
create unique index if not exists teams_guild_id_team_name_uindex
on bot.teams (guild_id, team_name);
create unique index if not exists teams_guild_id_team_nickname_uindex
on bot.teams (guild_id, team_nickname);
create function duplicate_teams() returns trigger
language plpgsql
as
$$
BEGIN
INSERT INTO bot.teams VALUES(1,NEW."team_name",NEW."team_nickname");
RETURN NEW;
END;
$$;
create trigger duplicate_team
after insert
on bot.teams
for each row
execute procedure bot.duplicate_teams();
If I now try to insert a new row into teams (INSERT INTO bot.teams ("guild_id", "team_name", "team_nickname") VALUES (14, 'test2', 'test2');), I get the following error message (originally German, translated by me to English):
[42804] ERROR: Column »guild_id« has type integer, but the expression has type character varying
HINT: You have to rewrite the expression or cast the value.
WHERE: PL/pgSQL function duplicate_teams() line 3 at SQL statement
After execution, neither the original insert statement nor the copy is in the table.
I tried to cast the values for the guild_id to serial, integer, bigserial... but every time I get the same error. I'm confused by the part of the error message that says "has the type character varying".
So my questions are:
Is my understanding correct that the error is caused by the trigger, and that because of the error in the trigger the original insert statement doesn't work either?
Why is the type varying even with a cast?
Where is the error in the code?
I tried to search for the problem, but found nothing helpful. Any hints are welcome. Thank you for your help!
EDIT:
The answer from @Lukas Thaler works, but now I get a new error:
[23505] ERROR: duplicate key value violates unique constraint »teams_guild_id_team_name_uindex«
Detail: Key (guild_id, team_name)=(1, test3) already exists.
WHERE: SQL statement »INSERT INTO bot.teams(guild_id, team_name, team_nickname) VALUES(1,NEW."team_name",NEW."team_nickname")«
PL/pgSQL function duplicate_teams() line 3 at SQL statement
SQL statement »INSERT INTO bot.teams(guild_id, team_name, team_nickname) VALUES(1,NEW."team_name",NEW."team_nickname")«
PL/pgSQL function duplicate_teams() line 3 at SQL statement
But the table only contains "3,11,TeamUtils,TU"...
bot.teams has four columns: team_id, guild_id (both numerical data types), team_name and team_nickname (both varchars). In your INSERT statement in the function definition, you only provide three values and no association to particular columns. The default is to insert them in order, which assigns 1 to team_id and (crucially) NEW."team_name" to guild_id, hence the insert fails with a type mismatch error.
Specifying
INSERT INTO bot.teams(guild_id, team_name, team_nickname) VALUES(1,NEW."team_name",NEW."team_nickname");
in your function should resolve your problem.
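For context, a minimal sketch of the whole function with that change applied; it is schema-qualified as bot.duplicate_teams to match the trigger definition (the question created it without a schema):
CREATE OR REPLACE FUNCTION bot.duplicate_teams() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    -- Naming the target columns lets team_id keep its bigserial default
    -- and sends the literal 1 to guild_id instead of to team_id.
    INSERT INTO bot.teams(guild_id, team_name, team_nickname)
    VALUES (1, NEW.team_name, NEW.team_nickname);
    RETURN NEW;
END;
$$;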
To answer your other questions:
The INSERT statement is being executed inside a transaction, and a failure in the trigger causes the entire transaction to be aborted and rolled back, hence you don't see the original row inserted into the table either.
The type is not deviating from the cast; it was the wrong value being inserted that caused the data type mismatch.
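Regarding the duplicate-key error from the EDIT: an AFTER INSERT trigger on bot.teams also fires on the row the function itself inserts, so the mirrored row for guild 1 triggers a second, identical insert and hits teams_guild_id_team_name_uindex. A hedged sketch of one way to break that loop, assuming rows that already belong to guild 1 should simply not be mirrored:
CREATE OR REPLACE FUNCTION bot.duplicate_teams() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    -- Guard clause: only mirror rows coming from other guilds; otherwise
    -- the trigger re-fires on its own INSERT and violates the unique index.
    IF NEW.guild_id <> 1 THEN
        INSERT INTO bot.teams(guild_id, team_name, team_nickname)
        VALUES (1, NEW.team_name, NEW.team_nickname);
    END IF;
    RETURN NEW;
END;
$$;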
PostgreSQL version: 9.6
I have a table xyz with columns a integer, b integer, c text, d integer.
The column c has a non-null constraint.
I have a CSV looking like so:
foo.csv
1,4
2,5
3,7
How can I import the CSV above so that only columns a and d are populated?
I tried COPY xyz (a,d) FROM '/foo.csv' DELIMITER ','; but it gave me an error
ERROR: null value in column "c" violates not-null constraint
DETAIL: Failing row contains (1, null, null, 4).
Usually, a NOT NULL column should have a default value defined. You can define a default value for an existing column, e.g.
alter table xyz alter c set default '';
After that, the copy command will succeed.
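Putting both steps together, a minimal sketch (assuming the PostgreSQL server can read /foo.csv directly; otherwise run the same COPY as psql's \copy):
-- Columns not listed in COPY receive their defaults, so c becomes '' and b stays NULL.
ALTER TABLE xyz ALTER COLUMN c SET DEFAULT '';
COPY xyz (a, d) FROM '/foo.csv' DELIMITER ',';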
Well, consider a table created like this:
CREATE TABLE public.test
(
id integer NOT NULL DEFAULT nextval('user_id_seq'::regclass),
name text,
PRIMARY KEY (id)
)
So the table has a unique 'id' column that auto generates default values using a sequence.
Now I wish to import data from a CSV file, extending this table. However, "obviously" the ids need to be unique, so I wish to let the database itself generate them; the CSV file (coming from a completely different source) hence has an "empty column" for the ids:
,username
,username2
However, if I then import this CSV using psql:
\copy public."user" FROM '/home/paul/Downloads/test.csv' WITH (FORMAT csv);
The following error pops up:
ERROR: null value in column "id" violates not-null constraint
So how can I do this?
The empty column from the CSV file is interpreted as SQL NULL, and inserting that value overrides the DEFAULT and leads to the error.
You should omit the empty column from the file and use:
\copy public."user"(name) FROM '...' (FORMAT 'csv')
Then the default value will be used for id.
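If editing the file to drop the empty column is not an option, a hedged alternative is to stage the CSV in a temporary table first and insert only the name column; the staging table name user_staging is hypothetical:
-- Stage both CSV columns; the empty first column simply lands as NULL here.
CREATE TEMP TABLE user_staging (id text, name text);
\copy user_staging FROM '/home/paul/Downloads/test.csv' WITH (FORMAT csv)
-- Insert only name, so id is generated from its sequence.
INSERT INTO public."user" (name)
SELECT name FROM user_staging;
DROP TABLE user_staging;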
I have a table structure like this in SQL Server:
CREATE TABLE [dbo].[taname](
[ID] [char](7) NOT NULL,
[SOURCE] [char](14) NOT NULL,
[TARGET] [char](14) NOT NULL,
[ID1] [char](100) NULL
)
This is the similar table I'm trying to create in DB2:
CREATE TABLE schema.taname(
ID char(7) NOT NULL,
SOURCE char(14) NOT NULL,
TARGET char(14) NOT NULL,
ID1 char(100) NULL --error is here
);
However, I'm getting an error at "ID1":
Keyword NULL not expected. Valid tokens: AS NO FOR NOT FILE WITH CCSID CHECK LOGGED UNIQUE COMPACT.
Cause: The keyword NULL was not expected here. A syntax error was detected at keyword NULL. The partial list of valid tokens is AS NO FOR NOT FILE WITH CCSID CHECK LOGGED UNIQUE COMPACT. This list assumes that the statement is correct up to the unexpected keyword. The error may be earlier in the statement but the syntax of the statement seems to be valid up to this point.
Recovery: Examine the SQL statement in the area of the specified keyword. A colon or SQL delimiter may be missing. SQL requires reserved words to be delimited when they are used as a name. Correct the SQL statement and try the request again.
Processing ended because the highlighted statement did not complete successfully
I would like to create table similar to SQL Server and allow NULL in the ID field. How can I correct this?
NULL is the default... you can just leave it off...
CREATE TABLE schema.taname(
ID char(7) NOT NULL,
SOURCE char(14) NOT NULL,
TARGET char(14) NOT NULL,
ID1 char(100)
);
Alternatively, specify the DEFAULT clause...
CREATE TABLE schema.taname(
ID char(7) NOT NULL,
SOURCE char(14) NOT NULL,
TARGET char(14) NOT NULL,
ID1 char(100) DEFAULT NULL
);
I have a simple table (4 text columns, and an ID column). I am trying to import my CSV file which has no ID column.
In Postico I have the schema setup as such:
DROP TABLE changes;
CREATE TABLE changes(
id SERIAL PRIMARY KEY,
commit_id TEXT,
additions INTEGER,
deletions INTEGER,
file_id TEXT
);
CREATE TEMP TABLE tmp_x AS SELECT * FROM changes LIMIT 0;
COPY tmp_x(commit_id,additions,deletions,file_id) FROM '/Users/George/git-parser/change_file' (format csv, delimiter E'\t');
INSERT INTO changes SELECT * FROM tmp_x
ON CONFLICT DO NOTHING;
DROP TABLE tmp_x;
But I am getting the error ERROR: null value in column "id" violates not-null constraint
You need to specify the columns:
COPY tmp_x (commit_id, additions, deletions, file_id)
FROM '/Users/George/git-parser/change_file' (format csv, delimiter E'\t');
The order of columns specified in the copy statement must obviously match the order of the columns in the input file.
You need to change your insert statement as well.
INSERT INTO changes SELECT * FROM tmp_x
will insert all columns from tmp_x into the target table, but as the id column in tmp_x is not defined as serial (CREATE TABLE ... AS does not copy column defaults), nothing got generated and NULL values ended up in that column. Your insert statement then just copies those NULL values.
You need to skip the id column in the insert statement:
INSERT INTO changes (commit_id,additions,deletions,file_id)
SELECT commit_id,additions,deletions,file_id
FROM tmp_x
ON CONFLICT DO NOTHING;
You can actually remove the id column from tmp_x entirely:
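For example, a minimal sketch of the same workflow with the staging table built without id:
-- The staging table now has exactly the four columns present in the file.
CREATE TEMP TABLE tmp_x AS
SELECT commit_id, additions, deletions, file_id
FROM changes
LIMIT 0;

COPY tmp_x FROM '/Users/George/git-parser/change_file' (format csv, delimiter E'\t');

INSERT INTO changes (commit_id, additions, deletions, file_id)
SELECT * FROM tmp_x
ON CONFLICT DO NOTHING;

DROP TABLE tmp_x;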