PostgreSQL insert query - postgresql

I'm trying to insert a single row into the log table, but it throws an error message.
The log table structure is like this:
no integer NOT NULL nextval('log_no_seq'::regclass)
ip character varying(50)
country character varying(10)
region character varying(10)
city character varying(50)
postalCode character varying(10)
taken numeric
date date
and my query:
INSERT INTO log (ip,country,region,city,postalCode,taken,date) VALUES
("24.24.24.24","US","NY","Binghamton","11111",1,"2011-11-09")
=> ERROR: column "postalcode" of relation "log" does not exist
second try query : (without postalcode)
INSERT INTO log (ip,country,region,city,taken,date) VALUES
("24.24.24.24","US","NY","11111",1,"2011-11-09")
=> ERROR: column "24.24.24.24" does not exist
I don't know what I did wrong...
And does PostgreSQL not have a datetime type (e.g. 2011-11-09 11:00:10)?

Try single quotes (e.g. '2011-11-09')

PostgreSQL has a "datetime" type: timestamp. Read the manual here.
Double quotes "" are used for identifiers when you want them kept as-is. It's best if you never have to use them, as #wildplasser advised.
String literals are enclosed in single quotes ''.
Start by reading the chapter Lexical Structure. It is very informative. :)
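For the datetime part, a minimal sketch with a hypothetical table (timestamp, or timestamptz if you need the time zone, holds a date plus a time of day):
-- hypothetical table; '2011-11-09 11:00:10' is a valid timestamp literal
CREATE TABLE visit_log (visited_at timestamp);
INSERT INTO visit_log (visited_at) VALUES ('2011-11-09 11:00:10');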

Try rewriting it this way:
INSERT INTO log (ip,country,region,city,"postalCode",taken,date) VALUES
('24.24.24.24','US','NY','Binghamton','11111',1,'2011-11-09');
When a column name uses mixed case or is a reserved word (such as "column", "row", etc.), you have to wrap it in double quotes. Values, on the other hand, take single quotes, as you can see in the example.

Related

How to store word "é" in postgres using limited varchar

I've been having some problems trying to save a word into a column limited to varchar(9).
create database big_text
LOCALE 'en_US.utf8'
ENCODING UTF8
create table big_text(
description VARCHAR(9) not null
)
-- OK
insert into big_text (description) values ('sintético')
-- I get an error here
insert into big_text (description) values ('sintético')
I already know that the problem is that one word uses 'é' as Latin Small Letter E with Acute (a single code point), while the other uses Latin Small Letter E followed by Combining Acute Accent (two code points).
How can I store the same word in both representations in a limited varchar(9)? Is there some configuration that lets the database understand both forms? I thought the database being UTF8 would be enough, but apparently not.
I'd appreciate any explanation that could help me understand where I'm wrong. Thank you!
Edit: actually, I would like to know if there is any way for Postgres to automatically normalize the string for me.
A possible workaround is a CHECK constraint that enforces the character length after normalization.
show lc_ctype;
lc_ctype
-------------
en_US.UTF-8
create table big_text(
description VARCHAR not null CHECK (length(normalize(description)) <= 9)
)
-- Note shortened string. Explanation below.
select 'sintético'::varchar(9);
varchar
----------
sintétic
insert into big_text values ('sintético');
INSERT 0 1
select description, length(description) from big_text;
description | length
-------------+--------
sintético | 10
insert into big_text values ('sintético test');
ERROR: new row for relation "big_text" violates check constraint "big_text_description_check"
DETAIL: Failing row contains (sintético test).
From Character Types in the documentation, here is the explanation for the string truncation vs. the error you got when inserting:
An attempt to store a longer string into a column of these types will result in an error, unless the excess characters are all spaces, in which case the string will be truncated to the maximum length. (This somewhat bizarre exception is required by the SQL standard.)
If one explicitly casts a value to character varying(n) or character(n), then an over-length value will be truncated to n characters without raising an error. (This too is required by the SQL standard.)
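If you are on PostgreSQL 13 or later, you can also normalize the value yourself on the way in; a sketch against the table above (normalize() and IS ... NORMALIZED do not exist in older versions):
-- force NFC so the combining-accent spelling also fits within 9 characters
INSERT INTO big_text (description) VALUES (normalize('sintético', NFC));
-- check which stored values are already in NFC form
SELECT description, description IS NFC NORMALIZED AS is_nfc FROM big_text;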

Troubled trying to convert a column data type in pgAdmin from varchar to integer (SQL state: 22P02)

I am simply trying to convert a column with character varying data type to integer by running this bit of Postgres script:
ALTER TABLE tbl.test ALTER COLUMN "XX" TYPE integer USING ("XX"::integer);
and I get this error:
ERROR: invalid input syntax for integer: "XX" SQL state: 22P02
Can someone help me to resolve this issue, please?
Based on the error you get, it looks like you have some values in XX that cannot be converted to an integer. You will need to correct those values before issuing the ALTER TABLE.
Find your incorrect values with this query:
select "XX" from tbl.test where "XX" !~ '^-{0,1}\d+$';
The !~ is the NOT regex match. The regex anchors to the beginning of the value with ^, accounts for an optional minus sign with -{0,1}, which matches zero or one hyphen character, and then ensures that the remaining characters to the end of the value are all digits with \d+$.
Any values of XX that fail to match this pattern will be retrieved, and you can figure out how to deal with them either by updating the table or modifying the using part of your alter table.
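If it is acceptable to discard the bad values (an assumption on my part), one possible cleanup is:
-- assumption: non-numeric values of "XX" may be replaced with NULL
UPDATE tbl.test SET "XX" = NULL WHERE "XX" !~ '^-{0,1}\d+$';
-- empty strings also fail the regex, so the UPDATE nulls them out as well;
-- after the cleanup the original conversion should go through
ALTER TABLE tbl.test ALTER COLUMN "XX" TYPE integer USING ("XX"::integer);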

Postgres \copy a file with double quotes

This is what my data looks like -
"to_claim_id" "NEW_PATIENT" "from_rend" "from_bill" "to_rend" "to_bill" "from_date" "to_date" "days_diff"
"10193136348200818391" "102657" "103325" "174597" "1830139" "17497" 20180904 20181002 28
How do I import this data into my database using \copy?
I have tried \copy public.data from '/data/test' with delimiter E'\t' csv header quote '"' but I get: ERROR: value too long for type character varying(25).
That means at least one column in the target table public.data is type varchar(25) and a corresponding value in the CSV file has more characters.
You might change the data type of such columns (temporarily) to just varchar or text, import, and then identify and trim offending values - or just live happily ever after as you probably don't need that restriction to begin with.
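A sketch of that approach, where offending_col stands in for whichever column is varchar(25):
-- relax the limit (text has no length cap), then re-run the \copy
ALTER TABLE public.data ALTER COLUMN offending_col TYPE text;
-- afterwards, find the values that would not have fit in varchar(25)
SELECT offending_col FROM public.data WHERE char_length(offending_col) > 25;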
Related:
Any downsides of using data type "text" for storing strings?

How does Redshift treat guillemets?

I am trying to run a CSV import using the COPY command for some data that includes a guillemet (»). Redshift complains that the column value is too long for the varchar column I have defined. The error in the "Loads" tab in the Redshift GUI displays this character as two dots: .. - had it been treated as one, it would have fit in the varchar column. It's not clear whether there is some sort of conversion error occurring or if there is a display issue.
When trying to do plain INSERTs I run into strange behavior as well:
dev=# create table test (name varchar(3));
CREATE TABLE
dev=# insert into test values ('bla');
INSERT 0 1
3 characters treated as 4?
dev=# insert into test values ('bl»');
ERROR: value too long for type character varying(3)
dev=# insert into test values ('b»');
INSERT 0 1
Why does char_length return 2?
dev=# select char_length(name), name from test;
char_length | name
-------------+------
2 | b»
I've checked the client encoding and database encodings and those all seem to be UTF8/UNICODE.
You need to increase the length of your varchar field. Multi-byte characters take more than one byte, and the length in a Redshift varchar definition is byte-based, so your special character is probably using more than one byte. If it still doesn't work, refer to the Redshift doc page below:
http://docs.aws.amazon.com/redshift/latest/dg/multi-byte-character-load-errors.html
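A quick way to see the byte-vs-character difference, assuming a UTF-8 client where » takes two bytes:
-- char_length counts characters, octet_length counts bytes;
-- Redshift varchar(n) limits bytes, so 'bl»' (4 bytes) overflows varchar(3)
select char_length('bl»') as chars, octet_length('bl»') as bytes;
-- sizing the column by bytes avoids the error
create table test_wide (name varchar(4));
insert into test_wide values ('bl»');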

Load NULL TIMESTAMP with TIME ZONE using COPY FROM in PostgreSQL

I have a CSV file that I'm trying to load into a PostgreSQL 9.2.4 database using the COPY FROM command. In particular, there is a timestamp field that is allowed to be null; however, when I load "null values" (actually just "") I get the following error:
ERROR: invalid input syntax for type timestamp with time zone: ""
An example CSV file looks as follows:
id,name,joined
1,"bob","2013-10-02 15:27:44-05"
2,"jane",""
The SQL looks as follows:
CREATE TABLE "users"
(
"id" BIGSERIAL NOT NULL PRIMARY KEY,
"name" VARCHAR(255),
"joined" TIMESTAMP WITH TIME ZONE,
);
COPY "users" ("id", "name", "joined")
FROM '/path/to/data.csv'
WITH (
ENCODING 'utf-8',
HEADER 1,
FORMAT 'csv'
);
According to the documentation, null values should be represented by an empty string that cannot contain the quote character, which is double quote (") in this case:
NULL
Specifies the string that represents a null value. The default is \N (backslash-N) in text format, and an unquoted empty string in CSV format. You might prefer an empty string even in text format for cases where you don't want to distinguish nulls from empty strings. This option is not allowed when using binary format.
Note: When using COPY FROM, any data item that matches this string will be stored as a null value, so you should make sure that you use the same string as you used with COPY TO.
I've tried the option NULL '' but that seems to have no effect. Advice, please!
An empty string without quotes works normally:
id,name,joined
1,"bob","2013-10-02 15:27:44-05"
2,"jane",
select * from users;
id | name | joined
----+------+------------------------
1 | bob | 2013-10-03 03:27:44+07
2 | jane |
Maybe it would be simpler to strip the quotes from "" (leaving an unquoted empty string) using sed.
The FORCE_NULL option for COPY FROM in Postgres 9.4+ would be the most elegant way to solve your problem. Per documentation:
FORCE_NULL
Match the specified columns' values against the null string, even if
it has been quoted, and if a match is found set the value to NULL. In
the default case where the null string is empty, this converts a
quoted empty string into NULL. This option is allowed only in COPY
FROM, and only when using CSV format.
Of course, it converts all matching values in all columns.
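Applied to the table and file from the question, the 9.4+ command could look like this (a sketch, not tested against your data):
COPY "users" ("id", "name", "joined")
FROM '/path/to/data.csv'
WITH (
    FORMAT csv,
    HEADER,
    FORCE_NULL ("joined")  -- turns the quoted empty string "" into NULL
);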
In older versions, you can COPY to a temporary table with the same table layout - except data type text for the problem column. Then fix offending values and INSERT from there:
single quotes appear arround value after running copy in postgres 9.2
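A sketch of that staging approach, assuming the users table from the question:
-- stage the raw file with text for the problem column
CREATE TEMP TABLE users_stage (id bigint, name varchar(255), joined text);
COPY users_stage FROM '/path/to/data.csv' WITH (FORMAT csv, HEADER);
-- fix offending values while moving the rows into the real table
INSERT INTO "users" ("id", "name", "joined")
SELECT id, name, NULLIF(joined, '')::timestamptz
FROM users_stage;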
Could not get it to work. Ended up using this program:
http://neilb.bitbucket.org/csvfix/
With that you can replace empty fields with other values.
So, for example, in your case column 3 needs to have a timestamp value, so I give it a fake one, in this case '1900-01-01 00:00:00'. If needed, you can delete or filter those rows out once the data is imported.
$CSVFIXHOME/csvfix map -f 3 -fv '' -tv '1900-01-01 00:00:00' -rsep ',' $YOURFILE > $FILEWITHDATES
After that you can import the newly created file.