key size exceeds implementation restriction for index on expression with UDF call - firebird

Firebird has allowed indexing on expressions since version 2.0. That includes calls to user-defined functions (UDFs).
Currently, I am trying to add an expression index to this table:
CREATE TABLE M_ADSN_STRING_DATA (
    ID DMN_AUTOINC NOT NULL /* DMN_AUTOINC = INTEGER NOT NULL */,
    CLTREF DMN_REFID /* DMN_REFID = INTEGER NOT NULL */,
    ATTRIBUTEDATA DMN_AFT_STRING /* DMN_AFT_STRING = VARCHAR(320) NOT NULL */
);
/******************************************************************************/
/**** Unique constraints ****/
/******************************************************************************/
ALTER TABLE M_ADSN_STRING_DATA ADD CONSTRAINT UNQ_M_ADSN_STRING_DATA UNIQUE (CLTREF, ATTRIBUTEDATA);
/******************************************************************************/
/**** Primary keys ****/
/******************************************************************************/
ALTER TABLE M_ADSN_STRING_DATA ADD CONSTRAINT PK_M_ADSN_STRING_DATA PRIMARY KEY (ID);
/******************************************************************************/
/**** Foreign keys ****/
/******************************************************************************/
ALTER TABLE M_ADSN_STRING_DATA ADD CONSTRAINT FK_M_ADSN_STRING_DATA_CLT FOREIGN KEY (CLTREF) REFERENCES M_CLIENT (ID) ON DELETE CASCADE ON UPDATE CASCADE;
/******************************************************************************/
/**** Indices ****/
/******************************************************************************/
CREATE INDEX M_ADSN_STRING_DATA_AD_UC ON M_ADSN_STRING_DATA COMPUTED BY (UPPER(ATTRIBUTEDATA));
Note that it already has an expression index called M_ADSN_STRING_DATA_AD_UC.
The index I want to use should look like this:
CREATE INDEX M_ADSN_STRING_DATA_AD_DIG
ON M_ADSN_STRING_DATA
COMPUTED BY (F_DIGITS(ATTRIBUTEDATA));
Unfortunately, this gives me an error message.
Unsuccessful metadata update
key size exceeds implementation restriction for index "M_ADSN_STRING_DATA_AD_DIG"
I read Firebird FAQ entries #213 and #211, and this SO question as well.
F_DIGITS is a UDF from the FreeAdhocUDF library. Initially, it was declared as
DECLARE EXTERNAL FUNCTION F_DIGITS
    CSTRING(32760)
    RETURNS CSTRING(32760) FREE_IT
    ENTRY_POINT 'digits' MODULE_NAME 'FreeAdhocUDF';
As my maximum input and output length is only 320 chars, I changed it to
DECLARE EXTERNAL FUNCTION F_DIGITS
    CSTRING(320)
    RETURNS CSTRING(320) FREE_IT
    ENTRY_POINT 'digits' MODULE_NAME 'FreeAdhocUDF';
to fit the index size requirements. My database's page size is 16384, so I'd think my key could be up to 4096 bytes (a quarter of the page size).
The domain DMN_AFT_STRING of column ATTRIBUTEDATA is declared as
CREATE DOMAIN DMN_AFT_STRING AS
    VARCHAR(320) CHARACTER SET ISO8859_1
    NOT NULL
    COLLATE DE_DE_CS_SF;
Why does the key size exceed the restriction?

Long story short: Have you tried to turn it off and on again?
It looks like one has to disconnect and reconnect after changing the UDF declaration and before adding the expression index, presumably because the connection still uses the cached declaration with the old CSTRING(32760) result size.
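In isql, the whole workaround would look roughly like this (a sketch; the database path and credentials are placeholders):
DROP EXTERNAL FUNCTION F_DIGITS;
DECLARE EXTERNAL FUNCTION F_DIGITS
    CSTRING(320)
    RETURNS CSTRING(320) FREE_IT
    ENTRY_POINT 'digits' MODULE_NAME 'FreeAdhocUDF';
COMMIT;
/* disconnect and reconnect, then create the index */
CONNECT 'server:/path/to/database.fdb' USER 'SYSDBA' PASSWORD 'masterkey';
CREATE INDEX M_ADSN_STRING_DATA_AD_DIG
    ON M_ADSN_STRING_DATA
    COMPUTED BY (F_DIGITS(ATTRIBUTEDATA));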
Now it works properly. The key size error does not occur anymore.

Related

How to limit the length of array text objects in PostgreSQL?

Is there any way to add a constraint on an array column to limit the length of its text elements?
I know that I can do this without constraint:
colA varchar(100)[] not null
I tried to do it in the following way:
alter table "tableA" ADD CONSTRAINT "colA_text_size"
CHECK ((SELECT max(length(pc)) from unnest(colA) as pc) <= 100) NOT VALID;
alter table "tableA" VALIDATE CONSTRAINT colA_text_size;
But I got this error: cannot use subquery in check constraint (SQLSTATE 0A000)
Try the following definition for your check constraint:
check (length(replace(array_to_string(text_array, ','), ',', '')) <= 100)
What it does:
First, the function array_to_string(...) converts the array to a comma-separated string.
The replace() function then removes the commas, replacing them with the zero-length string ''.
The length() function counts the characters remaining in the string.
Finally, that number is compared to the limit value (100) and the check constraint either passes or fails.
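Applied to the question's table, this looks roughly as follows (a sketch; note that, as written, the check bounds the combined character count of all elements rather than each element individually):
create table "tableA" (
    "colA" varchar(100)[] not null
        constraint "colA_text_size"
        check (length(replace(array_to_string("colA", ','), ',', '')) <= 100)
);

insert into "tableA" values (array['abc', 'def']);
-- OK: 6 characters in total

insert into "tableA" values (array[repeat('a', 60), repeat('b', 60)]);
-- ERROR: new row for relation "tableA" violates check constraint "colA_text_size"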
References:
array_to_string(),
replace(), length()

Kafka/KsqlDb: Why are chars being prepended to the PRIMARY KEY?

I intend to create a TABLE called WEB_TICKETS where the PRIMARY KEY is equal to the key->ID value. For some reason, when I run the CREATE TABLE instruction, the PRIMARY KEY value is prefixed with the chars 'J0' - why is this happening?
KsqlDb Statements
These work as expected
CREATE STREAM STREAM_WEB_TICKETS (
    ID_TICKET STRUCT<ID STRING> KEY
)
WITH (KAFKA_TOPIC='web.mongodb.tickets', FORMAT='AVRO');
CREATE STREAM WEB_TICKETS_REKEYED
WITH (KAFKA_TOPIC='web_tickets_by_id') AS
SELECT *
FROM STREAM_WEB_TICKETS
PARTITION BY ID_TICKET->ID;
PRINT 'web_tickets_by_id' FROM BEGINNING LIMIT 1;
key: 5d0c2416b326fe00515408b8
The following successfully creates the table but the PRIMARY KEY value isn't what I expect:
CREATE TABLE web_tickets (
    id_pk STRING PRIMARY KEY
)
WITH (KAFKA_TOPIC = 'web_tickets_by_id', VALUE_FORMAT = 'AVRO');
select id_pk from web_tickets EMIT CHANGES LIMIT 1;
|ID_PK|
|J05d0c2416b326fe00515408b8|
As you can see, the ID_PK value has the characters J0 prefixed to it. Why is this?
It appears as though I wasn't properly setting the KEY FORMAT: with only VALUE_FORMAT = 'AVRO', the key falls back to the default KAFKA format, so the Avro wire-format prefix bytes of the key are read back as literal characters. FORMAT = 'AVRO' applies Avro to the key as well. The following command produces the expected result.
CREATE TABLE web_tickets_test_2 (
    id_pk VARCHAR PRIMARY KEY
)
WITH (KAFKA_TOPIC = 'web_tickets_by_id', FORMAT = 'AVRO');
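Querying the new table should return the key without the prefix bytes (expected output, based on the record printed earlier):
select id_pk from web_tickets_test_2 EMIT CHANGES LIMIT 1;
|ID_PK|
|5d0c2416b326fe00515408b8|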

Column name in error text: value too long for type character

We have a table with 2 columns (both have the same type and size) and 2 constraints for them:
create table colors
(
    color varchar(6)
        constraint color_check check
            ((color)::text ~ '^[0-9a-fA-F]{6}$'::text),
    color_secondary varchar(6)
        constraint color_secondary_check check
            ((color_secondary)::text ~ '^[0-9a-fA-F]{6}$'::text)
);
In case of inserts with long values:
insert into colors (color, color_secondary) values ('ccaabb', 'TOO_LONG_TEXT');
insert into colors (color, color_secondary) values ('TOO_LONG_TEXT', 'ccaabb');
we get the same error in both cases:
ERROR: value too long for type character varying(6) (SQLSTATE 22001)
PostgreSQL validates the length of those columns before performing the insert, so our checks never run. Is there a way to tell which column holds the invalid data?
The issue you are having is the order of evaluation for the intended values. You told Postgres not to allow a length over 6 (character varying(6)), and you also specified additional criteria those values have to satisfy. Postgres validates the length criterion first and throws an exception when the value fails it; since Postgres exits on the first failure, the check constraint is never performed in that case. The check constraint is processed only after the length check passes. Example:
create table test1( id integer generated always as identity
, color6 character varying (6)
constraint color6_check check (color6 ~ '^[0-9a-fA-F]{6}$')
, color60 character varying (60)
constraint color60_check check (color60 ~ '^[0-9a-fA-F]{6}$')
) ;
insert into test1( color6 ) values ('aabbccdd') ;
/* Result
SQL Error [22001]: ERROR: value too long for type character varying(6)
ERROR: value too long for type character varying(6)
*/
insert into test1( color60 ) values ('aabbccdd') ;
/* Result
SQL Error [23514]: ERROR: new row for relation "test1" violates check constraint "color60_check"
Detail: Failing row contains (3, null, aabbccdd).
ERROR: new row for relation "test1" violates check constraint "color60_check"
*/
Notice the only difference between them is the length specification for the column being inserted. Both fail, but for different reasons. Since the length specification and the check constraint both enforce the length, you need to decide how you want to handle the two conditions: a separate error for each condition, or a single error for both. (IMHO: separate messages.)
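If a single, column-identifying error for both conditions is acceptable, one option is to drop the varchar length limit and let the anchored regex enforce it, since '^[0-9a-fA-F]{6}$' already pins the value to exactly six characters. A sketch:
create table colors
(
    color text
        constraint color_check check (color ~ '^[0-9a-fA-F]{6}$'),
    color_secondary text
        constraint color_secondary_check check (color_secondary ~ '^[0-9a-fA-F]{6}$')
);

insert into colors (color, color_secondary) values ('ccaabb', 'TOO_LONG_TEXT');
-- ERROR: new row for relation "colors" violates check constraint "color_secondary_check"
Every violation now reports the constraint name, which identifies the column.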

How to reset the auto generated primary key in PostgreSQL

My class for the table topics is below. The primary key is an auto-generated serial key. While testing, I deleted rows from the table and tried to re-insert them. The uuid is not getting reset.
class Topics(db.Model):
    """ User Model for different topics """
    __tablename__ = 'topics'
    uuid = db.Column(db.Integer, primary_key=True)
    topics_name = db.Column(db.String(256), index=True)

    def __repr__(self):
        return '<Post %r>' % self.topics_name
I tried the below command to reset the key
ALTER SEQUENCE topics_uuid_seq RESTART WITH 1;
It did not work.
I would appreciate any form of suggestion!
If it's indeed a serial ID, you can reset the owned SEQUENCE with:
SELECT setval(pg_get_serial_sequence('topics', 'uuid'), max(uuid)) FROM topics;
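Note that if the table is empty, max(uuid) is NULL and setval() raises an error. A variant that also covers that case (a sketch):
SELECT setval(pg_get_serial_sequence('topics', 'uuid'),
              COALESCE(max(uuid) + 1, 1), false)
FROM topics;
With the third argument set to false, the next nextval() returns exactly that value, so an empty table starts again at 1.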
See:
How to reset postgres' primary key sequence when it falls out of sync?
But why would the column be named uuid? UUIDs are not integer numbers and not serial. Also, it's not entirely clear what's going wrong when you write:
The UUID is not getting reset.
About ALTER SEQUENCE ... RESTART:
Postgres manually alter sequence
To avoid duplicate id errors that may arise when resetting the sequence, try:
UPDATE table SET id = DEFAULT;
ALTER SEQUENCE seq RESTART;
UPDATE table SET id = DEFAULT;
For added context:
'table' = your table name
'id' = your id column name
'seq' = find the name of your sequence with:
SELECT pg_get_serial_sequence('table', 'id');
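Substituted with the names from the question (using the topics_uuid_seq sequence mentioned above):
UPDATE topics SET uuid = DEFAULT;
ALTER SEQUENCE topics_uuid_seq RESTART;
UPDATE topics SET uuid = DEFAULT;
The first UPDATE pulls fresh values from the current sequence position, the RESTART resets the sequence to 1, and the second UPDATE renumbers every row from 1 upward. Note that this rewrites the id of every existing row.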

Indexing over composite types in Postgres

Given the following, what can I expect of the resulting index?
CREATE TYPE instant AS (
    epoch_seconds timestamptz,
    nanos integer
);
CREATE TABLE event (
    label text,
    occurrence instant
);
CREATE INDEX idx_event_occurrence ON event (occurrence);
Will Postgres automatically create a composite index over all the fields in instant? Would this index then use the left field as the primary sort key and the right as the secondary? Would there be any reason to do the following instead?
CREATE INDEX idx_event_occurrence ON event (
(occurrence).epoch_seconds,
(occurrence).nanos
);