Update the value for a DB2 VARCHAR FOR BIT DATA column

In a DB2 database, I have a table with a column of data type VARCHAR FOR BIT DATA. I want to update a record in this table and set a value for this column, but after the update I see a different value in the column than the one I set in the UPDATE statement. The value I want to store in this column is '895623'. Which value do I use in my UPDATE statement to achieve this, and what would the UPDATE statement look like in DB2?
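A FOR BIT DATA column bypasses code-page conversion, so the bytes stored (and how the client renders them) depend on the encoding in play; that is the usual reason a selected value looks different from the literal in the UPDATE. A hedged sketch, with hypothetical table and column names: the six ASCII characters '895623' correspond to the hex constant X'383935363233'.

-- Assumes a table my_table with a key column id and a
-- VARCHAR FOR BIT DATA column bit_col (names are illustrative)
UPDATE my_table
SET bit_col = X'383935363233'   -- ASCII bytes of '895623'
WHERE id = 1;

-- Alternatively, let DB2 convert a character literal:
UPDATE my_table
SET bit_col = CAST('895623' AS VARCHAR(6) FOR BIT DATA)
WHERE id = 1;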

Related

Is it possible to update a column (automatically) with "current_timestamp" in PostgreSQL using "Generated Columns"?

Is it possible to update a column (automatically) with "current_timestamp" in PostgreSQL using "Generated Columns", whenever the row gets updated?
At present, I am using a trigger to update the audit field last_update_date, but I am planning to switch to a generated column:
ALTER TABLE test ADD COLUMN last_update_date timestamp without time zone
GENERATED ALWAYS AS (current_timestamp) STORED;
Getting error while altering column
ERROR: generation expression is not immutable
No, that won't work, for the reason specified in the error.
Functions used in generated columns must be immutable: they must always return the same value for the same arguments, that is, depend on nothing but the current database row. current_timestamp is obviously not of that kind.
If PostgreSQL did allow such functions to be used in generated columns, then the value of the column would change if the database is restored from a pg_dump, for example.
Use a BEFORE INSERT OR UPDATE trigger for this purpose.
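A minimal sketch of such a trigger, assuming the test table and last_update_date column from the question (the function and trigger names are illustrative):

CREATE FUNCTION set_last_update_date() RETURNS trigger AS $$
BEGIN
    -- Stamp the row with the current time on every insert/update
    NEW.last_update_date := current_timestamp;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_set_last_update_date
BEFORE INSERT OR UPDATE ON test
FOR EACH ROW
EXECUTE FUNCTION set_last_update_date();

(EXECUTE FUNCTION requires PostgreSQL 11 or later; older versions use EXECUTE PROCEDURE.)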

How to get the inserted or updated object after insert and update sql in mybatis

Is there any way to get the newly inserted or updated object after an insert/update SQL query in MyBatis, or do I have to run a select query to retrieve it?
You can retrieve the generated key from an insert statement (via keyProperty, keyColumn, and useGeneratedKeys).
You already have the values for the properties/columns you inserted or updated.
What you cannot retrieve are values for columns you did not insert (column default values) or did not update (the row's current column values).
For those you have to run a select.
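A hedged sketch of a mapper entry using those attributes (the statement id, parameter type, and table/column names are illustrative):

<insert id="insertUser" parameterType="User"
        useGeneratedKeys="true" keyProperty="id" keyColumn="id">
    INSERT INTO users (name) VALUES (#{name})
</insert>

After the insert runs, MyBatis writes the generated key back into the id property of the passed-in User object, so no extra select is needed for the key itself.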

Alter a Column from INTEGER To BIGINT

In my database I have several fields of type INTEGER. I need to change some of them to BIGINT.
So my question is, can I just use the following command?
ALTER TABLE MyTable ALTER COLUMN MyIntegerColumn TYPE BIGINT;
Will the contained data be converted correctly? After the conversion, is this column a "real" BIGINT column?
I know this is not possible if there are constraints on this column (triggers, foreign keys, ...). But if there are no constraints, is it possible to do it this way?
Or is it better to convert it by a Help-Column:
MyIntegerColumn -> MyIntegerColumnBac -> MyBigIntColumn
When you execute
ALTER TABLE MyTable ALTER COLUMN MyIntegerColumn TYPE BIGINT;
Firebird will not convert the existing data from INTEGER to BIGINT; instead, it will create a new format version for the table.
When inserting new rows or updating existing rows, the value will be stored as a BIGINT, but when reading, Firebird will convert 'old' rows on the fly from INTEGER to BIGINT. This happens transparently to you as the user, and avoids having to rewrite all existing rows, which could be costly (I/O, garbage collection of old row versions, etc).
So please, do use ALTER TABLE .. ALTER COLUMN; do not go through MyIntegerColumn -> MyIntegerColumnBac -> MyBigIntColumn. There are some exceptions to this rule: (potentially) lossy character set transformations, for example, are better done that way to prevent transliteration errors on select if a character does not exist in the new character set, as is shortening a (var)char column (which can't be done with ALTER COLUMN).
To be a little more specific: when a row is written to the database, it contains a format version (aka version count). The format version points to a description of the row (data types, etc.) that tells Firebird how to read it. An ALTER TABLE will create a new format version, and that format will be applied when writing new rows or updating existing rows. When reading an old row, Firebird applies the necessary transformations to present that row in the new format (for example, adding new columns with their default values, or transforming the data type of a column).
These format versions are also a reason why the number of ALTER TABLEs is restricted: if you apply more than 255 ALTER TABLEs to a single table, you must back up and restore the database (the format version is a single byte) before further changes are allowed to that table.
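Since that 255-format limit can matter for long-lived schemas, it can be useful to check a table's current format version after such a change. A hedged sketch against Firebird's system tables (assuming the table name is stored in its default upper-case form):

ALTER TABLE MyTable ALTER COLUMN MyIntegerColumn TYPE BIGINT;

-- RDB$FORMAT increments each time the table's format changes
SELECT RDB$FORMAT
FROM RDB$RELATIONS
WHERE RDB$RELATION_NAME = 'MYTABLE';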

incorrect data update on Sybase trigger execution

I have a table test_123 with the columns:
int_1 (int),
datetime_1 (datetime),
tinyint_1 (tinyint),
datetime_2 (datetime)
So when column datetime_1 is updated and the value of column tinyint_1 = 1 for that row, I have to update column datetime_2 with the value of datetime_1.
I have created the trigger below for this, but it updates the datetime_2 value of every row where tinyint_1 = 1; I just want to update the particular row whose datetime_1 value has actually changed.
Below is the trigger:
CREATE TRIGGER test_trigger_upd
ON test_123
FOR UPDATE
AS
FOR EACH STATEMENT
IF UPDATE(datetime_1)
BEGIN
UPDATE test_123
SET test_123.datetime_2 = inserted.datetime_1
WHERE test_123.tinyint_1 = 1
END
ROW-level triggers are not supported in ASE. There are only after-statement triggers.
As commented earlier, the problem you're facing is that you need to be able to link the rows in the 'inserted' pseudo-table to the base table itself. You can only do that if there is a key -- meaning: a column that uniquely identifies a row, or a combination of columns that does so. Without that, you simply cannot identify the row that needs to be updated, since there may be multiple rows with identical column values if uniqueness is not guaranteed.
(and on a side note: not having a key in a table is bad design practice -- and this problem is one of the many reasons why).
A simple solution is to add an identity column to the table, e.g.
ALTER TABLE test_123 ADD idcol INT IDENTITY NOT NULL
You can then add a predicate 'test_123.idcol = inserted.idcol' to the trigger join.
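A hedged sketch of the corrected trigger, assuming the idcol identity column above has been added (ASE triggers fire per statement, so the join to the inserted pseudo-table is what restricts the update to the affected rows):

CREATE TRIGGER test_trigger_upd
ON test_123
FOR UPDATE
AS
IF UPDATE(datetime_1)
BEGIN
    UPDATE test_123
    SET test_123.datetime_2 = inserted.datetime_1
    FROM test_123, inserted
    WHERE test_123.idcol = inserted.idcol   -- link each base row to its updated image
      AND test_123.tinyint_1 = 1
END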

db2 removing generated always on timestamp columns

I have a column UPDATE_TIME defined with the expression
TIMESTAMP NOT NULL GENERATED ALWAYS FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP
How do I remove the GENERATED ALWAYS behavior from this timestamp column?
I also tried
db2 "alter table xxxx alter column UPDATE_TIME drop expression"
Since it is defined as a ROW CHANGE TIMESTAMP, you have to drop the column and re-add it.
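A hedged sketch of the drop-and-re-add (the table name is hypothetical; on DB2 for LUW, dropping a column typically leaves the table in reorg-pending state, hence the REORG in between):

ALTER TABLE my_table DROP COLUMN UPDATE_TIME;
-- Clear the reorg-pending state before further DDL/DML
CALL SYSPROC.ADMIN_CMD('REORG TABLE my_table');
-- Re-add the column as a plain timestamp, without any GENERATED clause
ALTER TABLE my_table ADD COLUMN UPDATE_TIME TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP;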
Why would you want to do this in the first place?