How to enforce column order with Liquibase? (PostgreSQL)

Setup:
liquibase 3.3.5
PostgreSQL 9.3
Windows 7 Ultimate
Having set the liquibase.properties file with
changeLogFile = C://temp/changeset.xml
I created a diff file with Liquibase (3.3.5):
liquibase.bat diffChangeLog
Examination of the changeset.xml file shows:
<addColumn tableName="view_dx">
    <column type="int8" name="counter" defaultValueNumeric="0" defaultValue="0"/>
</addColumn>
The problem is that when
liquibase.bat update
is run, the changed table does NOT have the same column order as the reference table. This causes issues with stored procedures that use SETOF to return table rows.
Without destroying the existing table on the target database, how can column order be enforced using Liquibase?
TIA

I don't think that you can, in general, get Liquibase to enforce a column ordering. You will probably need to change your stored procedures to use column names rather than relying on position, which is a good habit to get into anyway.
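For instance, instead of declaring a function as RETURNS SETOF view_dx, which ties the result shape to the table's physical column order, you can name the output columns explicitly. A minimal sketch, assuming view_dx has a bigint id column alongside the new counter column (both names hypothetical):
CREATE OR REPLACE FUNCTION get_view_dx()
RETURNS TABLE (id bigint, counter bigint) AS $$
    -- selecting columns by name makes the result independent of physical column order
    SELECT v.id, v.counter FROM view_dx v;
$$ LANGUAGE sql;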

Have you tried using the afterColumn attribute of the addColumn tag?
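Something like the following, where prev_column stands in for whichever existing column counter should follow (a hypothetical name). Note that PostgreSQL itself always appends new columns at the end of the table, so afterColumn is typically only honored on databases such as MySQL:
<addColumn tableName="view_dx">
    <column name="counter" type="int8" defaultValueNumeric="0" afterColumn="prev_column"/>
</addColumn>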

Is CLUSTER applied by pg_dump?

If CLUSTER is set on a table, then is it applied by pg_dump?
Specifically:
1. Is it used to order the rows in the dump? If not, is there a way to do this?
2. Is it set on the table when using pg_restore? If not, is there a way to do this?
The dump will contain the statement
ALTER TABLE mytable CLUSTER ON anindex;
Restoring the dump will execute that statement. As the documentation explains,
This form selects the default index for future CLUSTER operations. It does not actually re-cluster the table.
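Restoring the dump therefore only selects the default index for future CLUSTER operations; if you want the restored table physically re-ordered, run CLUSTER yourself afterwards:
CLUSTER mytable USING anindex;
-- or, since the restore already re-established the default index:
CLUSTER mytable;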

"Attributes specified for column are incompatible with existing column definition"

It's been a while.
Using DB2 10 for z/OS, I've been asked to change a specific column in a table from decimal(7,2) to decimal(7,4). Sounds easy, right?
alter table MySchema.MyTable
alter column myColumn
set data type decimal(7,4);
But, DB2 responds with this error: "Attributes specified for column 'MYCOLUMN' are incompatible with existing column definition."
I had thought that converting from decimal(7,2) to decimal(7,4) would be pretty straightforward, but DB2 disagrees.
Outside of dropping the table and recreating it from scratch, what alternatives do I have?
Thanks in advance!
Dave
The reason Db2 doesn't like that change is that you're going from 99999.99 to 999.9999, which loses two digits before the decimal point.
Is that really what you want? Going from (7,2) to (9,4) would just add two more decimal places without losing any data, and should be allowed by the DB.
Db2 for i gives a warning, but allows you to ignore the warning...
Create a new column of the right type with ALTER TABLE ... ADD COLUMN, use an UPDATE to populate it, drop the old column with ALTER TABLE ... DROP COLUMN, then use RENAME COLUMN to give the new column the original column's name.
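A minimal sketch of that sequence, reusing the names from the question (support for DROP COLUMN and RENAME COLUMN varies by Db2 platform and version, so check your platform's documentation first):
alter table MySchema.MyTable add column myColumn_new decimal(7,4);
-- note: values larger than 999.9999 will not fit into decimal(7,4)
update MySchema.MyTable set myColumn_new = myColumn;
alter table MySchema.MyTable drop column myColumn;
alter table MySchema.MyTable rename column myColumn_new to myColumn;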

Can we setup a table in postgres to always view the latest inserts first?

Right now when I create a table and do a
select * from table
I always see the first-inserted rows first. I'd like to have my latest inserts displayed first. Is it possible to achieve this with minimal performance impact?
I believe that Postgres uses an internal field called OID that can be sorted on. Try the following:
select *,OID from table order by OID desc;
There are some limitations to this approach as described in SQL, Postgres OIDs, What are they and why are they useful?
Apparently the OID sequence does wrap if it exceeds 4 billion, so in essence it's a global counter that can wrap. If it does wrap, some slowdown may start occurring when it's used and searched for unique values, etc.
See also https://wiki.postgresql.org/wiki/FAQ#What_is_an_OID.3F
NB: in more recent versions of Postgres this could be deprecated ( https://www.postgresql.org/docs/8.4/static/runtime-config-compatible.html#GUC-DEFAULT-WITH-OIDS ),
although you should still be able to create tables with OIDs even in recent versions if you request them explicitly at table creation, as per https://www.postgresql.org/docs/9.5/static/sql-createtable.html
Although the behaviour you are observing in the CLI appears consistent, it isn't guaranteed by any standard and cannot be depended on. If you regularly need to see the most recently added rows of a specific table, you could add a timestamp field or some other sortable field, and perhaps even wrap the query in a stored function; the right approach depends on your particular use case.
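A minimal sketch of the timestamp approach (table and column names hypothetical):
-- record the insertion time of every new row
ALTER TABLE mytable ADD COLUMN inserted_at timestamptz NOT NULL DEFAULT now();
-- newest rows first
SELECT * FROM mytable ORDER BY inserted_at DESC;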

Using IMPORT instead of LOAD in DB2

I wanted to prepare a load utility to load data into a DB2 table. The table has columns defined with the GENERATED ALWAYS attribute,
so I am not able to re-load data that was previously unloaded from that table.
Is it possible to use import for tables having columns with GENERATEDALWAYS set?
Steps I did:
1. db2 "export to tbl.txt of del modified by coldel| select * from <schema.table> where col=value"
2. db2 "delete from <schema.table> where col=value"
3. db2 "import from tbl.txt of del modified by coldel| allow write access warningcount1 insert into <schema.table>"
The columns defined as GENERATED ALWAYS have NEW values after the import. Is it possible to use import to populate the GENERATED ALWAYS columns with the old values?
Appreciate the assistance.
Thanks,
Mathew Liju
What you are asking is not possible. With IMPORT you can't override columns that have GENERATED ALWAYS. As @Peter Miehle suggests, you could alter the table to specify that the column is GENERATED BY DEFAULT, but this may break other applications.
Your question's title implies that you don't want to use the LOAD utility (but you don't mention anything about it in the actual question). However, LOAD is the only way to write data into the table and maintain the values for the generated column as they exist in the file:
db2 "load from tbl.txt of del modified by generatedoverride insert into schema.table"
If you do this, be aware that:
DB2 does not check if there are conflicts with existing rows in the table. You would need to define a unique index on the column(s) in question to resolve this; this would cause DB2 to delete the rows that you just loaded in the DELETE phase of the load.
If your generated column(s) use IDENTITY, make sure that you alter the column so that future generated values do not conflict with the rows that you just loaded (see the sketch below).
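For example, for an identity column you might restart the sequence above the highest value you loaded (the column name and value here are hypothetical):
db2 "alter table schema.table alter column id restart with 100000"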
Maybe you can drop the "generation" from the column and re-add it after importing, with the appropriate values.
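A sketch of that idea for an identity column, switching it to GENERATED BY DEFAULT around the import (identifiers hypothetical; check that your Db2 version supports these alterations):
db2 "alter table schema.table alter column id set generated by default"
db2 "import from tbl.txt of del modified by coldel| insert into schema.table"
db2 "alter table schema.table alter column id set generated always"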
@Ian Bjorhovde has given you the options.
IMPORT actually does INSERTs in the background: it first prepares an INSERT statement with parameter markers and then uses the values in the input file for those markers.
In your SQL snapshot you will see the INSERT statement that is used.
Anything that is not possible in an INSERT statement isn't possible with IMPORT (more or less).

PostgreSQL v7.4 ALTER TABLE to change column

I need to change the length of CHAR columns in tables in a PostgreSQL v7.4 database. This version does not support directly changing a column's type or size with the ALTER TABLE statement. So, directly altering a column from a CHAR(10) to a CHAR(20), for instance, isn't possible (yeah, I know, "use varchars", but that's not an option in my current circumstance). Anyone have any advice/tricks on how to best accomplish this? My initial thoughts:
-- Save the table's data in a new "save" table.
CREATE TABLE save_data AS SELECT * FROM table_to_change;
-- Drop the columns from the first column to be changed on down.
ALTER TABLE table_to_change DROP column_name1; -- for each column starting with the first one that needs to be modified
ALTER TABLE table_to_change DROP column_name2;
...
-- Add the columns back, using the new size for the CHAR column
ALTER TABLE table_to_change ADD column_name1 CHAR(new_size); -- for each column dropped above
ALTER TABLE table_to_change ADD column_name2...
-- Copy the data back from the "save" table
UPDATE table_to_change
SET column_name1=save_data.column_name1, -- for each column dropped/re-added above
column_name2=save_data.column_name2,
...
FROM save_data
WHERE table_to_change.primary_key=save_data.primary_key;
Yuck! Hopefully there's a better way? Any suggestions appreciated. Thanks!
Not PostgreSQL, but in Oracle I have changed a column's type by:
1. Adding a new column with a temporary name (e.g. TMP_COL) and the new data type (e.g. CHAR(20))
2. Running an update query: UPDATE TBL SET TMP_COL = OLD_COL;
3. Dropping OLD_COL
4. Renaming TMP_COL to OLD_COL
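The same sequence should work in PostgreSQL 7.4, since ADD COLUMN, DROP COLUMN, and RENAME COLUMN are all supported there; only altering the type in place is missing. A sketch with hypothetical names:
ALTER TABLE tbl ADD COLUMN tmp_col CHAR(20);
UPDATE tbl SET tmp_col = old_col;
ALTER TABLE tbl DROP COLUMN old_col;
-- note: the re-added column now sits at the end of the column list
ALTER TABLE tbl RENAME COLUMN tmp_col TO old_col;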
I would dump the table contents to a flat file with COPY, drop the table, recreate it with the correct column setup, and then reload (with COPY again).
http://www.postgresql.org/docs/7.4/static/sql-copy.html
Is it acceptable to have downtime while performing this operation? Obviously what I've just described requires making the table unusable for a period of time, how long depends on the data size and hardware you're working with.
Edit: But COPY is quite a bit faster than INSERTs and UPDATEs. According to the docs you can make it even faster by using BINARY mode. BINARY makes it less compatible with other PGSQL installs but you won't care about that because you only want to load the data to the same instance that you dumped it from.
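A sketch of the COPY round-trip (file path and column definitions hypothetical):
-- dump the data
COPY table_to_change TO '/tmp/table_to_change.dat';
-- recreate the table with the new column sizes
DROP TABLE table_to_change;
CREATE TABLE table_to_change (
    primary_key INTEGER PRIMARY KEY,
    column_name1 CHAR(20),
    column_name2 CHAR(20)
);
-- reload the data
COPY table_to_change FROM '/tmp/table_to_change.dat';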
The best approach to your problem is to upgrade pg to something less archaic :)
Seriously. 7.4 is going to be removed from "supported versions" pretty soon, so I wouldn't wait for it to happen with 7.4 in production.