PostgreSQL constraint to check for non-ASCII characters

I have a PostgreSQL 9.3 database that is encoded 'UTF8'. However, there is a column in the database that should never contain anything but ASCII. If non-ASCII data gets in there, it causes a problem in another system that I have no control over. Therefore, I want to add a constraint to the column. Note: I already have a BEFORE INSERT trigger, so that might be a good place to do the check.
What's the best way to accomplish this in PostgreSQL?

You can define ASCII as ordinal 1 to 127 for this purpose, so the following query will identify a string with "non-ascii" values:
SELECT exists(SELECT 1 from regexp_split_to_table('abcdéfg','') x where ascii(x) not between 1 and 127);
but it's not likely to be super-efficient, and the use of subqueries would force you to do it in a trigger rather than a CHECK constraint.
Instead I'd use a regular expression. If you want all printable characters then you can use a range in a check constraint, like:
CHECK (my_column ~ '^[ -~]*$')
This will match everything from the space to the tilde, which is the printable ASCII range.
If you want all ASCII, printable and nonprintable, you can use byte escapes:
CHECK (my_column ~ '^[\x00-\x7F]*$')
The most strictly correct approach would be to convert_to(my_string, 'ascii') and let an exception be raised if it fails ... but PostgreSQL doesn't offer an ascii (i.e. 7-bit) encoding, so that approach isn't possible.
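Putting the regex approach together, here is a minimal sketch (the table and column names are hypothetical):
ALTER TABLE my_table
  ADD CONSTRAINT my_column_ascii_only
  CHECK (my_column ~ '^[ -~]*$');
-- quick sanity test: the constraint rejects non-ASCII input
INSERT INTO my_table (my_column) VALUES ('plain ascii'); -- succeeds
INSERT INTO my_table (my_column) VALUES ('abcdéfg');     -- ERROR: violates check constraint
Note that adding the constraint will fail if existing rows already contain non-ASCII data, so those would have to be cleaned up first.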

Use a CHECK constraint built around a regular expression.
Assuming that you mean a certain column should never contain anything but the lowercase letters from a to z, the uppercase letters from A to Z, and the numbers 0 through 9, something like this should work.
alter table your_table
add constraint allow_ascii_only
check (your_column ~ '^[a-zA-Z0-9]+$');
This is what people usually mean when they talk about "only ASCII" with respect to database columns, but ASCII also includes glyphs for punctuation, arithmetic operators, etc. Characters you want to allow go between the square brackets.
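If, say, you also wanted to permit spaces, periods, and hyphens (an assumption about the requirements), widen the class, keeping the hyphen last so it is taken literally:
check (your_column ~ '^[a-zA-Z0-9 .-]+$');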

Postgres Varchar(n) or text, which is better? [duplicate]

What's the difference between the text data type and the character varying (varchar) data types?
According to the documentation
If character varying is used without length specifier, the type accepts strings of any size. The latter is a PostgreSQL extension.
and
In addition, PostgreSQL provides the text type, which stores strings of any length. Although the type text is not in the SQL standard, several other SQL database management systems have it as well.
So what's the difference?
There is no difference: under the hood it's all varlena (variable-length array).
Check this article from Depesz: http://www.depesz.com/index.php/2010/03/02/charx-vs-varcharx-vs-varchar-vs-text/
A couple of highlights:
To sum it all up:
char(n) – takes too much space when dealing with values shorter than n (pads them to n), and can lead to subtle errors because of adding trailing spaces, plus it is problematic to change the limit
varchar(n) – it's problematic to change the limit in live environment (requires exclusive lock while altering table)
varchar – just like text
text – for me a winner – over (n) data types because it lacks their problems, and over varchar – because it has distinct name
The article does detailed testing to show that the performance of inserts and selects for all 4 data types is similar. It also takes a detailed look at alternative ways of constraining the length when needed. Function-based constraints or domains provide the advantage of an instant increase of the length constraint, and on the basis that decreasing a string length constraint is rare, depesz concludes that one of them is usually the best choice for a length limit.
As "Character Types" in the documentation points out, varchar(n), char(n), and text are all stored the same way. The only difference is extra cycles are needed to check the length, if one is given, and the extra space and time required if padding is needed for char(n).
However, when you only need to store a single character, there is a slight performance advantage to using the special type "char" (keep the double-quotes — they're part of the type name). You get faster access to the field, and there is no overhead to store the length.
I just made a table of 1,000,000 random "char" chosen from the lower-case alphabet. A query to get a frequency distribution (select count(*), field ... group by field) takes about 650 milliseconds, vs about 760 on the same data using a text field.
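A sketch of that test, assuming a simple one-column table:
CREATE TABLE char_test (field "char");
INSERT INTO char_test
  SELECT chr(97 + (random() * 25)::int)::"char"  -- random lower-case letter
  FROM generate_series(1, 1000000);
SELECT count(*), field FROM char_test GROUP BY field;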
(this answer is a Wiki, you can edit - please correct and improve!)
UPDATING BENCHMARKS FOR 2016 (pg9.5+)
And using "pure SQL" benchmarks (without any external script)
1. Use any string_generator with UTF8.
2. Main benchmarks:
2.1. INSERT
2.2. SELECT comparing and counting
CREATE FUNCTION string_generator(int DEFAULT 20, int DEFAULT 10) RETURNS text AS $f$
  SELECT array_to_string( array_agg(
    substring(md5(random()::text), 1, $1) || chr(9824 + (random()*10)::int)
  ), ' ' ) AS s
  FROM generate_series(1, $2) i(x);
$f$ LANGUAGE SQL VOLATILE;  -- VOLATILE, not IMMUTABLE: the function calls random()
Prepare specific test (examples)
DROP TABLE IF EXISTS test;
-- CREATE TABLE test ( f varchar(500));
-- CREATE TABLE test ( f text);
CREATE TABLE test ( f text CHECK(char_length(f)<=500) );
Perform a basic test:
INSERT INTO test
SELECT string_generator(20+(random()*(i%11))::int)
FROM generate_series(1, 99000) t(i);
And other tests,
CREATE INDEX q on test (f);
SELECT count(*) FROM (
SELECT substring(f,1,1) || f FROM test WHERE f<'a0' ORDER BY 1 LIMIT 80000
) t;
... And use EXPLAIN ANALYZE.
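For example, to time the SELECT test above:
EXPLAIN ANALYZE
SELECT count(*) FROM (
  SELECT substring(f,1,1) || f FROM test WHERE f < 'a0' ORDER BY 1 LIMIT 80000
) t;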
UPDATED AGAIN 2018 (pg10)
A little edit to add 2018's results and reinforce the recommendations.
Results in 2016 and 2018
My results, averaged across many machines and many tests: all the same (differences statistically smaller than the standard deviation).
Recommendation
Use the text datatype and avoid legacy varchar(x): in some contexts it does not behave as the standard suggests, e.g. in CREATE FUNCTION argument declarations the (x) length modifier is not enforced.
Express limits (with the same performance as varchar!) with a CHECK clause in the CREATE TABLE, e.g. CHECK(char_length(x)<=10). With a negligible loss of performance in INSERT/UPDATE you can also control ranges and string structure, e.g. CHECK(char_length(x)>5 AND char_length(x)<=20 AND x LIKE 'Hello%')
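As a sketch of that recommendation (the table and constraint names are made up), note that relaxing a CHECK later is a quick constraint swap rather than a change of column type:
CREATE TABLE customer (
  code text CONSTRAINT code_length_chk CHECK (char_length(code) <= 10)
);
-- raising the limit later: swap the constraint, no table rewrite needed
ALTER TABLE customer DROP CONSTRAINT code_length_chk;
ALTER TABLE customer ADD CONSTRAINT code_length_chk CHECK (char_length(code) <= 40);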
On PostgreSQL manual
There is no performance difference among these three types, apart from increased storage space when using the blank-padded type, and a few extra CPU cycles to check the length when storing into a length-constrained column. While character(n) has performance advantages in some other database systems, there is no such advantage in PostgreSQL; in fact character(n) is usually the slowest of the three because of its additional storage costs. In most situations text or character varying should be used instead.
I usually use text
References: http://www.postgresql.org/docs/current/static/datatype-character.html
In my opinion, varchar(n) has its own advantages. Yes, they all use the same underlying type and all that. But it should be pointed out that indexes in PostgreSQL have a size limit of 2712 bytes per index row.
TL;DR:
If you use the text type without a constraint and have indexes on these columns, it is quite possible that you will hit this limit for some of your columns and get an error when you try to insert data; with varchar(n), you can prevent it.
Some more details: the problem is that PostgreSQL doesn't raise any exception when creating an index on a text column, or on a varchar(n) column where n is greater than 2712. However, it will raise an error when a record whose compressed size is greater than 2712 bytes is inserted. This means you can easily insert a 100,000-character string composed of repetitive characters, because it compresses to far below 2712 bytes, but you may not be able to insert a 4,000-character string whose compressed size is greater than 2712 bytes. Using varchar(n), where n is not too much greater than 2712, keeps you safe from these errors.
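A hypothetical illustration of that failure mode (the exact limit varies with page size and btree version, and whether the second insert fails depends on how well the data compresses):
CREATE TABLE docs (body text);
CREATE INDEX docs_body_idx ON docs (body);
-- 100,000 repeated characters compress to almost nothing, so this fits:
INSERT INTO docs VALUES (repeat('x', 100000));
-- a few thousand characters of high-entropy text may not compress enough,
-- failing with something like: ERROR: index row size ... exceeds btree maximum
INSERT INTO docs
  SELECT string_agg(md5(random()::text), '') FROM generate_series(1, 200);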
text and varchar have different implicit type conversions. The biggest impact that I've noticed is handling of trailing spaces. For example ...
select ' '::char = ' '::varchar, ' '::char = ' '::text, ' '::varchar = ' '::text
returns true, false, true rather than the true, true, true you might expect.
Somewhat OT: If you're using Rails, the standard formatting of webpages may be different. For data entry forms text boxes are scrollable, but character varying (Rails string) boxes are one-line. Show views are as long as needed.
A good explanation from http://www.sqlines.com/postgresql/datatypes/text:
The only difference between TEXT and VARCHAR(n) is that you can limit the maximum length of a VARCHAR column, for example, VARCHAR(255) does not allow inserting a string more than 255 characters long.
Both TEXT and VARCHAR have the upper limit at 1 GB, and there is no performance difference between them (according to the PostgreSQL documentation).
The difference is between tradition and modern.
Traditionally you were required to specify the width of each table column. If you specified too much width, expensive storage space was wasted, but if you specified too little width, some data would not fit. Then you would have to resize the column, change a lot of connected software, and fix the bugs that introduced, all of which is very cumbersome.
Modern systems allow for unlimited string storage with dynamic storage allocation, so the incidental large string would be stored just fine without much waste of storage of small data items.
While a lot of programming languages have adopted a 'string' data type of unlimited size, such as C#, JavaScript, and Java, a database like Oracle did not.
Now that PostgreSQL supports 'text', a lot of programmers are still used to VARCHAR(N), and reason like: yes, text is the same as VARCHAR, except that with VARCHAR you MAY add a limit N, so VARCHAR is more flexible.
You might as well reason, why should we bother using VARCHAR without N, now that we can simplify our life with TEXT?
In my recent years with Oracle, I have used CHAR(N) or VARCHAR(N) on very few occasions. Because Oracle does (did?) not have an unlimited string type, I used for most string columns VARCHAR(2000), where 2000 was at some time the maximum for VARCHAR, and in all practical purposes not much different from 'infinite'.
Now that I am working with PostgreSQL, I see TEXT as real progress. No more emphasis on the VAR feature of the CHAR type. No more emphasis on let's use VARCHAR without N. Besides, typing TEXT saves 3 keystrokes compared to VARCHAR.
Younger colleagues would now grow up without even knowing that in the old days there were no unlimited strings. Just like that in most projects they don't have to know about assembly programming.
I wasted way too much time because of using varchar instead of text for PostgreSQL arrays.
PostgreSQL array operators do not work with string columns. Refer to these links for more details: (https://github.com/rails/rails/issues/13127) and (http://adamsanderson.github.io/railsconf_2013/?full#10).
If you only use TEXT type you can run into issues when using AWS Database Migration Service:
Large objects (LOBs) are used but target LOB columns are not nullable
Due to their unknown and sometimes large size, large objects (LOBs) require more processing and resources than standard objects. To help with tuning migrations of systems that contain LOBs, AWS DMS offers the following options
If you are only sticking to PostgreSQL for everything, you're probably fine. But if you are going to interact with your db via ODBC or external tools like DMS, you should consider not using TEXT for everything.
character varying(n), varchar(n) - (Both the same). An over-length value is rejected with an error unless the excess characters are all spaces, in which case the value is truncated to n characters; only an explicit cast to varchar(n) truncates silently.
character(n), char(n) - (Both the same). Fixed length: values are blank-padded with spaces out to n.
text - Unlimited length.
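In practice the over-length rules for varchar(n) look like this (this mirrors the example in the PostgreSQL documentation):
CREATE TABLE test2 (b varchar(5));
INSERT INTO test2 VALUES ('ok');                    -- fits
INSERT INTO test2 VALUES ('good      ');            -- OK: excess characters are all spaces, stored as 'good '
INSERT INTO test2 VALUES ('too long');              -- ERROR: value too long for type character varying(5)
INSERT INTO test2 VALUES ('too long'::varchar(5));  -- OK: explicit cast truncates to 'too l'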
Example, comparing character(7) and varchar(7):
CREATE TABLE test (
  a character(7),
  b varchar(7)
);
INSERT INTO test VALUES ('ok', 'ok');
SELECT a, char_length(a), b, char_length(b) FROM test;

    a    | (a)char_length |  b | (b)char_length
---------+----------------+----+----------------
 ok      |              7 | ok |              2

How would I diagnose what error seems to lead to non-functional underscore wildcard queries in PostgreSQL 15?

I am working through a quick refresher ('SQL Handbook' by Flavio Copes), and any LIKE or ILIKE query I use with the underscore wildcard returns no results.
The table is created as such:
CREATE TABLE people (
names CHAR(20)
);
INSERT INTO people VALUES ('Joe'), ('John'), ('Johanna'), ('Zoe');
Given this table, I use the following query:
SELECT * FROM people WHERE names LIKE '_oe';
I expect it to return:
 names
 -------
 Joe
 Zoe
Instead, it returns an empty result: the names header with no rows.
The install is PostgreSQL 15 (x64), pgAdmin 4, and PostGIS v3.3.1
Using char(20) means all strings are exactly 20 chars long, being padded with spaces out to that length. The spaces make it not match the pattern, as there is nothing in the pattern to accommodate spaces at the end.
If you make the pattern '_oe%', it would work. Or better yet, don't use char(20).
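A minimal sketch of the fix, using text instead of char(20):
CREATE TABLE people (
  names text
);
INSERT INTO people VALUES ('Joe'), ('John'), ('Johanna'), ('Zoe');
SELECT * FROM people WHERE names LIKE '_oe';  -- now returns Joe and Zoe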

Postgresql ORDER BY not working as expected

Let's try this simple example to represent the problem I'm facing.
Assume this table:
CREATE TABLE testing1
(
id serial NOT NULL,
word text,
CONSTRAINT testing1_pkey PRIMARY KEY (id)
);
and that data:
insert into testing1 (word) values ('Heliod, God');
insert into testing1 (word) values ('Heliod''s Inter');
insert into testing1 (word) values ('Heliod''s Pilg');
insert into testing1 (word) values ('Heliod, Sun');
Then I want to run this query to get the results ordered by the word column:
SELECT
id, word
FROM testing1
WHERE UPPER(word::text) LIKE UPPER('heliod%')
ORDER BY word asc;
But the output is not ordered the way I expect. I would expect the rows to come back in this order, by id: 2, 3, 1, 4 (or, using the word values: Heliod's Inter, Heliod's Pilg, Heliod, God, Heliod, Sun). That is not what I get.
I thought that maybe the WHERE criterion was somehow confusing PostgreSQL, but the same thing happens if I just ORDER BY over all the rows.
Am I missing something here? I couldn't find anything in the docs about ordering values that contain quotes (I suspect the quotes cause this behaviour because of their special meaning in PostgreSQL, but I may be wrong).
I am using UTF-8 encoding for my database (not sure if it matters though) and this issue is happening on Postgresql version 12.7.
The output of
show lc_ctype;
is
"en_GB.UTF-8"
and the output of
show lc_collate;
is
"en_GB.UTF-8"
That is the correct way to order the rows under en_GB.UTF-8 (en_US.UTF-8 behaves the same way). Such 'dictionary' locales do 'weird' (to someone used to ASCII) things with punctuation and whitespace, skipping them on a first pass and considering them only for otherwise-tied values.
If you don't want those rules, maybe use the C collation instead.
Indeed, I've tried @jjanes's suggestion to use the C collation, and the output is the one I would expect:
SELECT
id, word
FROM testing1
ORDER BY word collate "C" ;
How weird, I have been using postgresql for some years now and I never noticed that behaviour.
Relevant section from the docs:
23.2.2.1. Standard Collations
On all platforms, the collations named default, C, and POSIX are available. Additional collations may be available depending on operating system support. The default collation selects the LC_COLLATE and LC_CTYPE values specified at database creation time. The C and POSIX collations both specify "traditional C" behavior, in which only the ASCII letters "A" through "Z" are treated as letters, and sorting is done strictly by character code byte values.
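To see both behaviours side by side, or to bake the byte-order collation into the column itself, a sketch:
SELECT word FROM testing1 ORDER BY word;              -- linguistic en_GB.UTF-8 rules
SELECT word FROM testing1 ORDER BY word COLLATE "C";  -- strict byte order
-- or make it the column default:
ALTER TABLE testing1 ALTER COLUMN word TYPE text COLLATE "C";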

Postgres CHAR check constraint not evaluating correctly

My goal is to have a column which accepts only values of the type letters and underscores in Postgres.
For some reason though I'm not managing to get Postgres to enforce the regex correctly.
Things I tried:
Produces column which won't accept strings with digits
action_type CHAR(100) check (action_type ~ '^(?:[^[:digit:]]*)$')
Produces column which won't accept anything
action_type CHAR(100) check (action_type ~ '^(?:[^[:digit:] ]*)$')
Produces column which won't accept anything
action_type CHAR(100) check (action_type ~ '^([[:alpha:]_]*)$'),
I have tried multiple variations of the above, as well as using SIMILAR TO instead of ~. In my experience the column either accepts everything or nothing, depending on the given constraint.
I'm running this on the timescaledb docker image locally which is running PostgreSQL 12.5.
Any help would be greatly appreciated as I am at my wits end.
Try this:
CREATE TABLE your_table(
action_type text
);
ALTER TABLE your_table ADD CONSTRAINT check_letters
CHECK (action_type ~ '^[a-z_]*$');
This should only allow the characters a-z and the underscore _ in the action_type column.
Also note that this is case sensitive; if you need a case-insensitive match, use ~* instead of ~.
And also, this allows the empty string; if you don't want that, use '^[a-z_]+$' instead.
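For completeness, the reason the original CHAR(100) versions misbehave is the blank padding: char(100) pads every stored value with spaces out to 100 characters, and pattern matching on char(n) sees those trailing spaces. A quick way to confirm it (a sketch):
SELECT 'abc'::char(100) ~ '^[[:alpha:]_]*$';  -- false: the padding spaces fail the match
SELECT 'abc'::text ~ '^[[:alpha:]_]*$';       -- true
That is why [^[:digit:]]* matched everything (spaces are not digits), while the stricter classes rejected every padded value.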

How to get DB2 data without any appended values

select rtrim(char(PKG_AGR_IDR)),rtrim(char(STA_DTE))
from test FETCH FIRST 10 ROW ONLY
"0010000010. 2014-03-14"
"0010000010. 2014-03-14"
I need data as below:
0010000010 2014-03-14
I am planning to write a script that applies rtrim(char(field_name)) to every field. Is there any combination of functions with which I can get proper output for both fields?
One might presume that the OP could have been written more like the following, to better describe the scenario:
Some background about what is being done would be included, such that later references [such as to field_name] are explained in advance rather than having to be intuited by a reviewer.
The intention is to enable dynamically generating an SQL SELECT statement that will retrieve a character representation of the data from the columns of a specified TABLE. Given the DDL create table THE_SCHEMA.TEST ( PKG_AGR_IDR NUMERIC(10, 0), STA_DTE DATE ) and the following DML used to populate that TABLE with a sample row, insert into THE_SCHEMA.TEST VALUES(10000010, '2014-03-14'), what is desired is a result set [limited to the first ten rows for the purpose of testing] that includes the data from each column [of the TABLE named TEST in THE_SCHEMA] as VARCHAR data, as produced from the following query that would have been generated from the metadata stored in the SYSCOLUMNS catalog VIEW:
select rtrim(char(PKG_AGR_IDR)), rtrim(char(STA_DTE)) from test FETCH FIRST 10 ROW ONLY
The single expression generated as 'RTRIM(CHAR(' CONCAT COLUMN_NAME CONCAT '))' from the SYSCOLUMNS data, as seen twice in the query noted just prior, seems unable to provide desirable results when applied to a column name irrespective of the DATA_TYPE of the COLUMN_NAME being formatted by that character expression. Specifically, for example, the dynamically generated query select RTRIM(CHAR(PKG_AGR_IDR)), RTRIM(CHAR(STA_DTE)) from THE_SCHEMA.TEST FETCH FIRST 10 ROW ONLY produces the following output:
0010000010. 2014-03-14
However the expected\desired output would be:
0010000010 2014-03-14
Is there any expression like RTRIM(CHAR(column_name)) that will function for all the columns in a TABLE, to obtain the data as a character string regardless of the data type of the columns, whether they be numeric, varchar or date?
Yet even with that more complete description of the scenario\background:
The claimed output from the original expression is unexpected for the CHAR scalar effecting decimal-to-character casting, at least for the DB2 for i SQL, for which the zero-scale packed decimal (DECIMAL) and zoned decimal (NUMERIC) SQL data types are represented without a decimal separator [aka decimal point] despite the optional decimal-character as the second argument. As well, the CHAR scalar omits leading zeroes when casting from numeric. Thus the DB2 for i SQL would have obtained the string '10000010' rather than either '0010000010.' or '10000010.'
I suppose the issue may be specific to DB2 for Z or DB2 LUW, and perhaps this topic was incorrectly tagged with DB2 for i? Or perhaps there may be a[n unstated] concern about an apparent incompatibility betwixt the DB2 variants? Yet having read the documentation, the described results seem contrary to what is documented, so I suspect the actual problem for the OP may be due to having encountered a defect [in whatever is the unstated variant of DB2 and release level being used]?
I do not expect that there will be any one expression that will perform what is desired for each of NUMERIC, VARCHAR, and DATE [nor for each of INTEGER, SMALLINT, NUMERIC, DECIMAL, VARCHAR, and DATE]. For omission of the decimal point, the DB2 for i SQL is probably the closest to what is expressed as desired, but then the leading zeroes are always trimmed: http://www.ibm.com/support/knowledgecenter/ssw_ibm_i_72/db2/rbafzscachar.htm
... Leading zeros are not returned. Trailing zeros are returned. If the scale of decimal-expression is zero, the decimal character is not returned. ...
The DB2 LUW SQL seems at least somewhat incoherent with regard to the topic of leading zeroes, as example 6 suggests none and then example 7 shows they are there, but like the above doc reference, clearly there should be no leading zero characters http://www.ibm.com/support/knowledgecenter/SSEPGG_10.1.0/com.ibm.db2.luw.sql.ref.doc/doc/r0000777.html
... Leading zeros are not included. Trailing zeros are included. ... If the scale of decimal-expression is zero, the decimal character is not returned. ...
I did not research a DB2 for Z doc link.
I would expect that the solution will entail using a CASE expression, probably keyed on the DATA_TYPE value. That is what I did when coding something similar, though I just used the VARCHAR casting scalar and did not do any trimming. However, my requirement for CASE was not about keeping leading zero characters; it was mostly about choosing the correct decimal-separator character.
Because the second-argument decimal-character [for CHAR or VARCHAR] is disallowed for the INTEGER numeric types [sqlcode -171 aka SQL0171], the CASE expression for just the numeric types would be sufficiently resolved by appending, after the 'VARCHAR(' concat, the expression CASE WHEN DATA_TYPE IN ('INTEGER', 'SMALLINT', 'BIGINT') THEN ')' ELSE ', ' concat DecSep concat ')' END, where DecSep is the one-character variable holding either the comma or the period as the chosen decimal separator.
And because the second argument [for CHAR or VARCHAR] is specific to the data type of the first argument, the character and date\time data types had their own CASE expression, CASE WHEN DATA_TYPE IN ('DATE', 'TIME') THEN ', ' concat StdFmt concat ')' ELSE ')' END, appended to the 'VARCHAR(' concat, where StdFmt is the three-character variable holding one of the standard format specifications ISO, USA, EUR, or JIS.
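As a rough sketch of that generation step (DB2 for i flavor, querying the SYSCOLUMNS catalog VIEW mentioned above; hard-coding the ISO format and omitting the decimal-separator branch are simplifications of what is described above):
SELECT 'RTRIM(VARCHAR(' CONCAT COLUMN_NAME CONCAT
       CASE WHEN DATA_TYPE IN ('DATE', 'TIME') THEN ', ISO))'
            ELSE '))'
       END
  FROM QSYS2.SYSCOLUMNS
 WHERE TABLE_SCHEMA = 'THE_SCHEMA' AND TABLE_NAME = 'TEST';
This would emit one expression per column, e.g. RTRIM(VARCHAR(STA_DTE, ISO)), ready to be pasted into the generated SELECT.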
Not sure what you are asking. Remove double quotes? Remove the dot?
You can do a substr by providing the start position and length, and you can concatenate the two values:
select substr(trim(char(PKG_AGR_IDR)), 1, 10) || ' ' || trim(char(STA_DTE))
from test FETCH FIRST 10 ROW ONLY