Encountered an issue in InterBase (tested with XE7 and IB 2017) when changing the size of a domain definition from something like VarChar(6) to VarChar(10). The end result is that the field using the domain ends up allowing 2 more characters than the specified size (12 in this case).
Here are the steps to reproduce:
Create a new IB database
Create a domain definition of type VarChar(6)
Add new table "TestTable" with the following new fields:
TestPK Integer, not null, Primary Key
TestDomainField <-- use domain definition from step above
Test entry into the new table. It will allow 6 characters max (correct)
Alter the domain definition to VarChar(10)
Test entry in the table. It will allow 12 characters instead of the 10 specified on the domain (incorrect)
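For reference, the same steps expressed as SQL might look roughly like this (a sketch only; the domain name D_TEST is made up, and the table is created with a quoted identifier so the mixed-case name matches the metadata query below):
/* Hypothetical repro script */
CREATE DOMAIN D_TEST AS VARCHAR(6);
CREATE TABLE "TestTable" (
  TestPK          INTEGER NOT NULL PRIMARY KEY,
  TestDomainField D_TEST
);
/* The step that triggers the problem */
ALTER DOMAIN D_TEST TYPE VARCHAR(10);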
Metadata selection also confirms the allowed 12 characters:
SELECT distinct r.RDB$FIELD_NAME AS field_name,
f.RDB$FIELD_LENGTH AS field_length
FROM RDB$RELATION_FIELDS r
LEFT JOIN RDB$FIELDS f ON r.RDB$FIELD_SOURCE = f.RDB$FIELD_NAME
WHERE r.RDB$RELATION_NAME='TestTable'
While I don't know exactly what is going on here, my hunch is that it's some sort of character set issue. In other words, some character sets allow multi-byte characters, so the declared character length and the stored byte length can differ. You might experiment with adding a character set to your domain definition and see if that makes a difference.
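For example, something along these lines (again just a sketch; the domain name is hypothetical and ISO8859_1 is just one single-byte character set you could try):
/* Domain with an explicit single-byte character set, so one declared
   character corresponds to one stored byte */
CREATE DOMAIN D_TEST_CODE AS VARCHAR(10) CHARACTER SET ISO8859_1;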
I've got a Postgres ORDER BY issue with the following table:
em_code  name
EM001    AAA
EM999    BBB
EM1000   CCC
To insert a new record to the table,
I select the last record with SELECT * FROM employees ORDER BY em_code DESC
Strip the alphabetic prefix from em_code using a regexp and store it in ec_alpha
Cast the remaining part to an integer, ec_num
Increment it by one: ec_num++
Pad with sufficient zeros and prefix ec_alpha again
When em_code reaches EM1000, the above algorithm fails.
The first step will return EM999 instead of EM1000, and it will again generate EM1000 as the new em_code, breaking the unique key constraint.
Any idea how to select EM1000?
Since Postgres 10, it is possible to specify an ICU collation which will sort columns with numbers naturally.
https://www.postgresql.org/docs/10/collation.html
-- First create a collation with numeric sorting
CREATE COLLATION numeric (provider = icu, locale = 'en@colNumeric=yes');
-- Alter table to use the collation
ALTER TABLE "employees" ALTER COLUMN "em_code" type TEXT COLLATE numeric;
Now just query as you would otherwise.
SELECT * FROM employees ORDER BY em_code
On my data, I get results in this order (note that it also sorts foreign numerals):
Value
0
0001
001
1
06
6
13
۱۳
14
One approach you can take is to create a naturalsort function for this. Here's an example, written by Postgres legend RhodiumToad.
create or replace function naturalsort(text)
returns bytea language sql immutable strict as $f$
select string_agg(convert_to(coalesce(r[2], length(length(r[1])::text) || length(r[1])::text || r[1]), 'SQL_ASCII'),'\x00')
from regexp_matches($1, '0*([0-9]+)|([^0-9]+)', 'g') r;
$f$;
Source: http://www.rhodiumtoad.org.uk/junk/naturalsort.sql
To use it simply call the function in your order by:
SELECT * FROM employees ORDER BY naturalsort(em_code) DESC
The reason is that the string sorts alphabetically (instead of numerically like you would want it) and 1 sorts before 9.
You could solve it like this:
SELECT * FROM employees
ORDER BY substring(em_code, 3)::int DESC;
It would be more efficient to drop the redundant 'EM' from your em_code - if you can - and save an integer number to begin with.
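If that's an option, the migration could be as simple as something like this (a sketch; the column name em_num is made up):
-- Store the numeric part once instead of parsing it on every insert
ALTER TABLE employees ADD COLUMN em_num integer;
UPDATE employees SET em_num = substring(em_code, 3)::int;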
Answer to question in comment
To strip any and all non-digits from a string:
SELECT regexp_replace(em_code, E'\\D','','g')
FROM employees;
\D is the regular expression class-shorthand for "non-digits".
'g' as 4th parameter is the "globally" switch to apply the replacement to every occurrence in the string, not just the first.
After replacing every non-digit with the empty string, only digits remain.
This always comes up in questions and in my own development and I finally tired of tricky ways of doing this. I finally broke down and implemented it as a PostgreSQL extension:
https://github.com/Bjond/pg_natural_sort_order
It's free to use, MIT license.
Basically it just normalizes the numerics within strings (prepending zeros to them) so that you can create an index column for full-speed sorting au naturel. The readme explains.
The advantage is that you can have a trigger do the work rather than your application code. It will be calculated at machine speed on the PostgreSQL server, and migrations adding columns become simple and fast.
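A rough sketch of the trigger idea (natural_sort_order() here is a hypothetical stand-in for whatever normalization function the extension actually provides; check its readme for the real name):
-- Keep a pre-normalized column up to date and index it
ALTER TABLE employees ADD COLUMN em_code_nso text;
CREATE OR REPLACE FUNCTION employees_nso_trg() RETURNS trigger AS $$
BEGIN
    NEW.em_code_nso := natural_sort_order(NEW.em_code);  -- hypothetical function name
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER employees_nso
    BEFORE INSERT OR UPDATE ON employees
    FOR EACH ROW EXECUTE PROCEDURE employees_nso_trg();
CREATE INDEX ON employees (em_code_nso);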
You can use just this line:
ORDER BY length(substring(em_code FROM '[0-9]+')), em_code
I wrote about this in detail in this related question:
Humanized or natural number sorting of mixed word-and-number strings
(I'm posting this answer as a useful cross-reference only, so it's community wiki).
I came up with something slightly different.
The basic idea is to create an array of tuples (integer, string) and then order by these. The magic number 2147483647 is int32_max, used so that strings are sorted after numbers.
ORDER BY ARRAY(
    SELECT ROW(
        CAST(COALESCE(NULLIF(match[1], ''), '2147483647') AS INTEGER),
        match[2]
    )
    FROM REGEXP_MATCHES(col_to_sort_by, '(\d*)|(\D*)', 'g')
        AS match
)
I thought about another way of doing this that uses less DB storage than padding and saves time compared to calculating on the fly.
https://stackoverflow.com/a/47522040/935122
I've also put it on GitHub
https://github.com/ccsalway/dbNaturalSort
The following solution is a combination of various ideas presented in another question, as well as some ideas from the classic solution:
create function natsort(s text) returns text immutable language sql as $$
select string_agg(r[1] || E'\x01' || lpad(r[2], 20, '0'), '')
from regexp_matches(s, '(\D*)(\d*)', 'g') r;
$$;
The design goals of this function were simplicity and pure string operations (no custom types and no arrays), so it can easily be used as a drop-in solution, and is trivial to be indexed over.
Note: If you expect numbers with more than 20 digits, you'll have to replace the hard-coded maximum length 20 in the function with a suitable larger length. Note that this will directly affect the length of the resulting strings, so don't make that value larger than needed.
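For example, since natsort() is declared immutable, an expression index along these lines should work (the index name is just an example):
-- Expression index so that ORDER BY natsort(...) can be served from the index
CREATE INDEX employees_em_code_natsort_idx ON employees (natsort(em_code));
SELECT * FROM employees ORDER BY natsort(em_code);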
I am trying to run a query from PHP that has a WHERE clause comparing a string against a column of type VARCHAR(50), yet for some reason it does not work in either PHP or MySQL Workbench. My database looks like this:
The table is 'paranoia', where the column 'codename' is VARCHAR(50) and 'target' is VARCHAR(50). The query I am trying to run, when searching for a codename entry clearly named '13Brownie' with no spaces, takes the form:
UPDATE paranoia SET target='sd' WHERE codename='13Brownie'
Yet for some reason passing a string as the codename argument is ineffective. The WHERE clause works when I do codename=7 or codename=3 and returns the rows with those respective integer codenames, and I can do codename=0 to get all the other lettered codenames. The string input works in neither MySQL Workbench nor the PHP script I will be using to update the selected rows, but again the integer input does.
It seems like the WHERE clause is only taking the integer values of my string input or the column is actually made up of the integer values of each entry, but the column codename is clearly defined as VARCHAR(50). I have been searching for hours but to no avail.
It is likely that there are white-space characters in the data. Things to try:
SELECT * FROM paranoia WHERE codename like '13%'
SELECT * FROM paranoia WHERE codename = '13Brownie '
SELECT codename, LENGTH(codename) FROM paranoia
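If trailing whitespace does turn out to be the problem, a one-off cleanup along these lines would fix the data (a sketch; take a backup first):
UPDATE paranoia SET codename = TRIM(codename);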
VARCHAR(10) is a valid type to accept a string of at most 10 characters. I think this can possibly happen because of a foreign key constraint enforced with another table. Check if you have such a constraint using the "relation view" if you are on phpMyAdmin.
I have to change the value of a certain column in Postgres, but I have never worked with maskbits before (it's legacy code). The current value is 1, but it's stored as an integer maskbit, so if I try to do
update table set field=0 where ...
It doesn't work and it keeps returning 1. So I read the documentation and tried to do an AND:
update table set field=field&0 where ...
But that didn't work either. Then I tried to cover all 30 bits, again without success:
update table set field=field&0000000000000000000000000000000 where ...
Can someone please show me how to properly change the value of an integer maskbit in PostgreSQL?
EDIT: I found this case here on Stack Overflow:
UPDATE users SET permission = permission & ~16
which seems to be the closest to mine, so I tried to do
UPDATE table SET field = field & ~1
because that's the only bit I have to deactivate, but it stays active and keeps returning 1 when I do a SELECT.
PostgreSQL may be pickier than you're used to when it comes to combining numbers of different types. You can't just use a 0, you need to use a bitstring constant of the right length, or explicitly cast an integer to the proper bit(n) type.
Bitstring constants are entered like B'00101', which would have type bit(5).
Some examples you can type into psql:
select 5;
select (5::bit(5));
select (30::bit(5));
select ~(1::bit(5));
select ~B'00001';
select (B'00101' & B'11110');
select (B'00101' & ~(1::bit(5)));
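Applied to the original UPDATE, and assuming the column is actually a bit(n) type (say bit(30), matching the 30 zeroes in the question), that would look something like this (table name and WHERE condition are placeholders):
-- Clear the lowest bit of a bit(30) column
UPDATE my_table
SET field = field & ~(1::bit(30))
WHERE my_id = 123;  -- whatever condition selects the rows to change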
I am using Ubuntu and PostgreSQL 8.4.9.
Now, for any table in my database, if I do select table_name.name from table_name, it shows a result of concatenated columns for each row, although I don't have any name column in the table. For the tables which do have a name column, there is no issue. Any idea why?
My results are like this:
select taggings.name from taggings limit 3;
                             name
---------------------------------------------------------------
(1,4,84,,,PlantCategory,soil_pref_tags,"2010-03-18 00:37:55")
(2,5,84,,,PlantCategory,soil_pref_tags,"2010-03-18 00:37:55")
(3,6,84,,,PlantCategory,soil_pref_tags,"2010-03-18 00:37:55")
(3 rows)
select name from taggings limit 3;
ERROR: column "name" does not exist
LINE 1: select name from taggings limit 3;
This is a known confusing "feature" with a bit of history. Specifically, you could refer to tuples from the table as a whole with the table name, and then appending .name would invoke the name function on them (i.e. it would be interpreted as select name(t) from t).
At some point in the PostgreSQL 9 development, I seem to recall this was cleaned up a bit. You can still do select t from t explicitly to get the rows-as-tuples effect, but you can't apply a function in the same way. So on PostgreSQL 8.4.9, this:
create table t(id serial primary key, value text not null);
insert into t(value) values('foo');
select t.name from t;
produces the bizarre:
name
---------
(1,foo)
(1 row)
but on 9.1.1 produces:
ERROR: column t.name does not exist
LINE 1: select t.name from t;
^
as you would expect.
So, to specifically answer your question: name is a standard type in PostgreSQL (used in the catalogue for table names etc.), and there are also standard functions to convert things to the name type. It's not actually reserved; it's just that the existing objects called that, plus some historically strange syntax, made things confusing, and this has been fixed by the developers in recent versions.
According to the PostgreSQL documentation, name is a "non-reserved" keyword in PostgreSQL, SQL:2003, SQL:1999, and SQL-92.
SQL distinguishes between reserved and non-reserved key words. According to the standard, reserved key words are the only real key words; they are never allowed as identifiers. Non-reserved key words only have a special meaning in particular contexts and can be used as identifiers in other contexts. Most non-reserved key words are actually the names of built-in tables and functions specified by SQL. The concept of non-reserved key words essentially only exists to declare that some predefined meaning is attached to a word in some contexts.
The suggested fix when using keywords is:
As a general rule, if you get spurious parser errors for commands that contain any of the listed key words as an identifier you should try to quote the identifier to see if the problem goes away.
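For instance, if you really did have a column called name, quoting the identifier keeps the parser from treating it specially (a sketch on a throwaway table):
create table quoting_demo("name" text);
insert into quoting_demo values ('example');
select "name" from quoting_demo;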
I just inherited a project that has code similar to the following (rather simple) example:
DECLARE @Demo TABLE
(
    Quantity INT,
    Symbol NVARCHAR(10)
)
INSERT INTO @Demo (Quantity, Symbol)
SELECT 127, N'IBM'
My interest is with the N before the string literal.
I understand that the prefix N is to specify encoding (in this case, Unicode). But since the select is just for inserting into a field that is clearly already Unicode, wouldn't this value be automatically upcast?
I've run the code without the N and it appears to work, but am I missing something that the previous programmer intended? Or was the N an oversight on his/her part?
I expect behavior similar to when I pass an int to a decimal field (auto-upcast). Can I get rid of those Ns?
Your test is not really valid; try something like a Chinese character instead. I remember that if you don't prefix it, it will not insert the correct character.
For example, the first one shows a question mark while the bottom one shows a square:
select '作'
select N'作'
A better example; even here the output is not the same:
declare @v nvarchar(50), @v2 nvarchar(50)
select @v = '作', @v2 = N'作'
select @v, @v2
Since this looks like a stock table, why are you using Unicode at all? Are there even ticker symbols that need Unicode? I have never seen any, and that includes ISINs, CUSIPs and SEDOLs.
Yes, SQL Server will automatically convert (widen) varchar to nvarchar, so you can remove the N in this case. Of course, if you're specifying a string literal containing characters that aren't representable in the database's default collation, then you do need it.
It's like you can suffix a number with "L" in C et al to indicate it's a long literal instead of an int. Writing N'IBM' is either being precise or a slave to habit, depending on your point of view.
One trap for the unwary: nvarchar doesn't get automatically converted to varchar, and this can be an issue if your application is all Unicode and your database isn't. For example, we had this with the jTDS JDBC driver, which bound all parameter values as nvarchar, resulting in statements effectively like this:
select * from purchase where purchase_reference = N'AB1234'
(where purchase_reference was a varchar column)
Since the automatic conversions are only one way, that became:
select * from purchase where CONVERT(NVARCHAR, purchase_reference) = N'AB1234'
and therefore the index of purchase_reference wasn't used.
By contrast, the reverse is fine: if purchase_reference was an nvarchar, and an application passed in a varchar parameter, then the rewritten query:
select * from purchase where purchase_reference = CONVERT(NVARCHAR, 'AB1234')
would be fine. In the end we had to disable binding parameters as Unicode, which caused a raft of i18n problems that were considered less serious.
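For jTDS, if memory serves, that switch is the sendStringParametersAsUnicode connection property, set to false in the JDBC URL or connection properties; treat the exact name as something to confirm against the driver's documentation:
jdbc:jtds:sqlserver://dbhost:1433/mydb;sendStringParametersAsUnicode=false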