DB2 column constraint for inserting values within restricted length - db2

Is there a constraint available in DB2 such that, when a column is restricted to a particular length, values are trimmed to that length before insertion? For example, if a column is defined with length 5, inserting the value 'overflow' would store 'overf'.
Can a CHECK constraint be used here? My understanding of CHECK constraints is that they either allow an insertion or reject it, but they cannot modify values to satisfy the condition.

A constraint isn't going to be able to do this.
A before insert trigger is normally the mechanism you'd use to modify data during an insert before it is actually placed in the table.
However, I'm reasonably sure it won't work in this case: you'd get an SQLCODE -404 (SQLSTATE 22001), "The SQL statement specified contains a string that is too long", before the trigger is fired.
I see two possible options:
1) Create a view over the table where the column is cast to a larger size. Then create an INSTEAD OF trigger on the view to substring the data during write.
2) Create and use a stored procedure that accepts a larger size and substrings the data then inserts it.
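A minimal sketch of option 1, assuming a hypothetical table t with a VARCHAR(5) column (the table, view, trigger, and column names are invented for illustration):

```sql
-- Hypothetical base table with the restricted column
CREATE TABLE t (code VARCHAR(5));

-- View that exposes the column at a larger size
CREATE VIEW t_wide (code) AS
  SELECT CAST(code AS VARCHAR(100)) FROM t;

-- INSTEAD OF trigger that truncates before the real insert
CREATE TRIGGER t_wide_ins
  INSTEAD OF INSERT ON t_wide
  REFERENCING NEW AS n
  FOR EACH ROW
  INSERT INTO t (code) VALUES (SUBSTR(n.code, 1, 5));
```

Inserting 'overflow' through t_wide would then store 'overf' in t, while direct inserts into t keep their normal length check.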

Related

Know which index caused an error during a bulk INSERT or bulk UPDATE in PostgreSQL

When I INSERT or UPDATE a list of rows in PostgreSQL and one of them is causing an error, how can I know which one exactly (its index in the input list)?
For example, if I have a UNIQUE constraint on the name column, and if name two already exists, I want to know that the constraint violation is caused by the input row at index 1.
INSERT INTO table (id, name) VALUES ('0000', 'one'), ('0001', 'two');
I know PostgreSQL will stop at the first error encountered, and therefore that we can't know all of the problematic rows. That's fine; I just need the first problematic index (if any).
Inserting each row separately is not a possibility since we want to optimize for performance as well.
Postgres gives you almost exactly what you are asking for: it provides the constraint name, the column(s), and the value(s). However, much of this is in the secondary detail fields of the error message, so you need to extract the complete message, not just the primary line.
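As a sketch of the idea: the DETAIL line of a unique-violation error names the offending column and value, and you can match that back against your input list. The error text below follows PostgreSQL's standard wording for unique violations; the helper function and row layout are invented for illustration:

```python
import re

# Example error text as PostgreSQL reports it for a UNIQUE violation
# (the DETAIL line carries the offending column and value).
error_message = (
    'duplicate key value violates unique constraint "table_name_key"\n'
    'DETAIL:  Key (name)=(two) already exists.'
)

def failing_index(message, rows, column_order):
    """Find which input row triggered the violation by matching
    the value from the DETAIL line against the submitted rows."""
    m = re.search(r"Key \((?P<col>[^)]+)\)=\((?P<val>[^)]+)\)", message)
    if not m:
        return None
    col, val = m.group("col"), m.group("val")
    col_idx = column_order.index(col)
    for i, row in enumerate(rows):
        if str(row[col_idx]) == val:
            return i
    return None

rows = [("0000", "one"), ("0001", "two")]
print(failing_index(error_message, rows, ["id", "name"]))  # prints 1
```

With a driver such as psycopg2 you would read the same information from the caught exception's diagnostics rather than parsing the text by hand.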

How to add a column to a table on production PostgreSQL with zero downtime?

Here
https://stackoverflow.com/a/53016193/10894456
is an answer provided for Oracle 11g,
My question is the same:
What is the best approach to add a not null column with default value
in production oracle database when that table contain one million
records and it is live. Does it create any locks if we do the column
creation , adding default value and making it as not null in a single
statement?
but for PostgreSQL ?
This prior answer essentially answers your query.
Cross referencing the relevant PostgreSQL doc with the PostgreSQL sourcecode for AlterTableGetLockLevel mentioned in the above answer shows that ALTER TABLE ... ADD COLUMN will always obtain an ACCESS EXCLUSIVE table lock, precluding any other transaction from accessing the table for the duration of the ADD COLUMN operation.
This same exclusive lock is obtained for any ADD COLUMN variation; ie. it doesn't matter whether you add a NULL column (with or without DEFAULT) or have a NOT NULL with a default.
However, as mentioned in the linked answer above, adding a NULL column with no DEFAULT should be very quick as this operation simply updates the catalog.
In contrast, adding a column with a DEFAULT specifier necessitates a rewrite of the entire table in PostgreSQL 10 or earlier.
This operation is likely to take a considerable time on your 1M record table.
According to the linked answer, PostgreSQL >= 11 does not require such a rewrite for adding such a column, so should perform more similarly to the no-DEFAULT case.
I should add that for PostgreSQL 11 and above, the ALTER TABLE docs note that table rewrites are only avoided for non-volatile DEFAULT specifiers:
When a column is added with ADD COLUMN and a non-volatile DEFAULT is specified, the default is evaluated at the time of the statement and the result stored in the table's metadata. That value will be used for the column for all existing rows. If no DEFAULT is specified, NULL is used. In neither case is a rewrite of the table required.
Adding a column with a volatile DEFAULT [...] will require the entire table and its indexes to be rewritten. [...] Table and/or index rebuilds may take a significant amount of time for a large table; and will temporarily require as much as double the disk space.
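For example, on PostgreSQL 11 and above the two cases behave differently (the table and column names here are hypothetical):

```sql
-- Non-volatile DEFAULT: evaluated once, stored in the catalog, no table rewrite
ALTER TABLE clients ADD COLUMN status text NOT NULL DEFAULT 'active';

-- Volatile DEFAULT: evaluated per row, forces a full table (and index) rewrite
ALTER TABLE clients ADD COLUMN created_at timestamptz NOT NULL DEFAULT clock_timestamp();
```

On a live 1M-row table, the first form should complete almost instantly, while the second takes an ACCESS EXCLUSIVE lock for the duration of the rewrite.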

PostgreSQL - How to make an auto-increment function that follows the row number?

I'm having trouble finding an option to keep an auto-increment column that follows the row number/ID without a whole complicated process. Is there a data type like serial/identity that keeps track of the row ID, but also re-tracks it when rows are deleted?
Here's what happens when I delete values (20) from a table: the ID no longer matches the row number.

Appending array during Postgresql upsert & getting ambiguous column error

When trying to do a query like below:
INSERT INTO employee_channels (employee_id, channels)
VALUES ('46356699-bed1-4ec4-9ac1-76f124b32184', '{a159d680-2f2e-4ba7-9498-484271ad0834}')
ON CONFLICT (employee_id)
DO UPDATE SET channels = array_append(channels, 'a159d680-2f2e-4ba7-9498-484271ad0834')
WHERE employee_id = '46356699-bed1-4ec4-9ac1-76f124b32184'
AND NOT lower(channels::text)::text[] #> ARRAY['a159d680-2f2e-4ba7-9498-484271ad0834'];
I get the following error
[42702] ERROR: column reference "channels" is ambiguous Position: 245
The specific reference to channels it's referring to is the 'channels' inside array_append.
channels is a CITEXT[] data type
You may need to specify the EXCLUDED table in your set statement.
SET channels = array_append(EXCLUDED.channels, 'a159d680-2f2e-4ba7-9498-484271ad0834')
When using the ON CONFLICT DO UPDATE clause, the values that aren't inserted because of the conflict are stored in the EXCLUDED table. It's an ephemeral table you don't have to actually create, much like NEW and OLD in triggers.
From the PostgreSQL Manual:
conflict_action specifies an alternative ON CONFLICT action. It can be either DO NOTHING, or a DO UPDATE clause specifying the exact
details of the UPDATE action to be performed in case of a conflict.
The SET and WHERE clauses in ON CONFLICT DO UPDATE have access to the
existing row using the table's name (or an alias), and to rows
proposed for insertion using the special excluded table. SELECT
privilege is required on any column in the target table where
corresponding excluded columns are read.
Note that the effects of all per-row BEFORE INSERT triggers are reflected in excluded values, since those effects may have contributed
to the row being excluded from insertion.
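Applied to the original statement, the suggested fix would look like this (the UUID literals are the ones from the question; the WHERE references are also qualified, since they were equally ambiguous):

```sql
INSERT INTO employee_channels (employee_id, channels)
VALUES ('46356699-bed1-4ec4-9ac1-76f124b32184', '{a159d680-2f2e-4ba7-9498-484271ad0834}')
ON CONFLICT (employee_id)
DO UPDATE SET channels = array_append(EXCLUDED.channels, 'a159d680-2f2e-4ba7-9498-484271ad0834')
WHERE employee_channels.employee_id = '46356699-bed1-4ec4-9ac1-76f124b32184'
AND NOT lower(employee_channels.channels::text)::text[] #> ARRAY['a159d680-2f2e-4ba7-9498-484271ad0834'];
```

Note that per the manual text quoted above, EXCLUDED.channels refers to the row proposed for insertion; if you want to append to the array already stored in the table, qualify with the table name (employee_channels.channels) instead.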

Adding a new constraint in postgresql checks the rows added before?

Let's suppose I have a table called Clients(ID,Name,Phone) which has several rows in it, with some of them empty in the column «Phone».
If I decide to add a new NOT NULL constraint on the «Phone» column of that table, will PostgreSQL check the rows that are already in the table, or will the constraint only apply to rows added after its declaration?
I think the documentation is pretty clear:
SET/DROP NOT NULL
These forms change whether a column is marked to allow null values or
to reject null values. You can only use SET NOT NULL when the column
contains no null values.
So, using this form, you cannot add such a constraint without checking the previous values.
If you use ADD table_constraint, then you can do the same thing using a CHECK constraint:
ADD table_constraint [ NOT VALID ]
This form adds a new constraint to a table using the same syntax as
CREATE TABLE, plus the option NOT VALID, which is currently only
allowed for foreign key and CHECK constraints. If the constraint is
marked NOT VALID, the potentially-lengthy initial check to verify that
all rows in the table satisfy the constraint is skipped. The
constraint will still be enforced against subsequent inserts or
updates (that is, they'll fail unless there is a matching row in the
referenced table, in the case of foreign keys; and they'll fail unless
the new row matches the specified check constraints). But the database
will not assume that the constraint holds for all rows in the table,
until it is validated by using the VALIDATE CONSTRAINT option.
So, you cannot add a NOT NULL constraint with SET NOT NULL without the existing rows being checked. You can do essentially the same thing using CHECK, bypassing the initial check with NOT VALID; otherwise, the checking takes place.
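For the Clients example, the NOT VALID route would look like this (the constraint name is invented):

```sql
-- Skip the initial scan of existing rows
ALTER TABLE clients
  ADD CONSTRAINT phone_not_null CHECK (phone IS NOT NULL) NOT VALID;

-- Enforced for new inserts/updates immediately; check old rows later
ALTER TABLE clients VALIDATE CONSTRAINT phone_not_null;
```

The VALIDATE CONSTRAINT step will fail if any existing row still has a NULL «Phone», so those rows must be fixed before validation succeeds.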