How to convert a dynamic column to a static column in Apache Phoenix? - apache-phoenix

I have a table, MY_T, where I have been adding some data to a dynamic column, COL_D. This works fine and I even have a view for easy access to COL_D. After some time I realized that COL_D is generic enough to apply to all rows so I wanted to make it a fixed part of the table.
alter table MY_T add COL_D INTEGER
But this doesn't work.
ERROR 1010 (42M01): Not allowed to mutate table.
Cannot drop column referenced by VIEW columnName=MY_T
[SQL State=42M01, DB Errorcode=1010]
I tried dropping the column from the view (that worked) but it still can't be added to the table.
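For reference, the sequence I tried looked roughly like this (MY_V stands in for my actual view name):
alter view MY_V drop column COL_D -- this part succeeded
alter table MY_T add COL_D INTEGER -- still fails with ERROR 1010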
Is there a way to achieve this?
I'm using Phoenix 4.4.0 as part of HDP 2.4.2 (if that matters).

Related

Ambiguous column reference "ctid" in SELECT with more than one table

I'm using CRecordset to query one table, but I use a second table to filter the data. If, in my GetDefaultSQL override, I return a table list with more than one table, I get this ERROR: column reference "ctid" is ambiguous. I know what a "ctid" column is, but I don't use it in my code; it's inserted into the original SQL statement by the ODBC driver. How can I fix this? How can I tell the ODBC driver not to insert the "ctid" column?
I tried calling CRecordset::Open with the readOnly parameter, assuming that ODBC needs ctid to update rows and I don't need to update them. But the error remains.
I also tried adding a primary key to the second table, which was missing one, thinking that if a table has a primary key then ODBC could use that instead of "ctid", but again no luck. It makes sense, though, because I don't fetch any columns from that second table; it is used just for filtering.
If I make a DB view to work around the issue, I get ERROR: column "ctid" does not exist.
You have to call CRecordset::Open with two parameters changed:
m_pSet->Open(CRecordset::snapshot, NULL, CRecordset::readOnly);
Then you can fetch both the joined tables and the view without errors. No "ctid" then.

How can I know when a set of constraints were added to a specific table?

I am working with a table that has many constraints.
I am receiving errors while importing clean data into the table.
I want to know when these constraints were added to the table, so that I can tell whether it was before or after the bulk import into that table.
How can I find out when a constraint was added to a table (the date of creation of the constraint)?
I use PostgreSQL 10
PostgreSQL doesn't record this information in the metadata.
If you have log_statement = 'ddl', you might find the information in the log file.
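If the logs have since rotated away, that won't help retroactively, but you can enable it going forward. A minimal sketch (ALTER SYSTEM requires superuser and is available since PostgreSQL 9.4):
-- log every DDL statement from now on
ALTER SYSTEM SET log_statement = 'ddl';
-- reload the configuration without restarting the server
SELECT pg_reload_conf();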

"Attributes specified for column are incompatible with existing column definition"

It's been a while.
Using DB2 10 for z/OS, I've been asked to change a specific column in a table from decimal(7,2) to decimal(7,4). Sounds easy, right?
alter table MySchema.MyTable
alter column myColumn
set data type decimal(7,4);
But, DB2 responds with this error: "Attributes specified for column 'MYCOLUMN' are incompatible with existing column definition."
I had thought that converting from decimal(7,2) to decimal(7,4) would be pretty straightforward, but DB2 disagrees.
Outside of dropping the table and recreating it from scratch, what alternatives do I have?
Thanks in advance!
Dave
The reason Db2 doesn't like that change is that you're going from a maximum of 99999.99 to a maximum of 999.9999, i.e. losing two integer digits.
Is that really what you want? Going from (7,2) to (9,4) would just add two more decimal places without losing any data, and should be allowed by the DB.
Db2 for i gives a warning, but allows you to ignore the warning...
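If widening without data loss is what you actually want, the statement you already tried should work with the larger precision; a sketch, reusing the names from the question:
alter table MySchema.MyTable
alter column myColumn
set data type decimal(9,4); -- keeps 5 integer digits, adds 2 decimal places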
Create a new column of the right type with ALTER TABLE ... ADD COLUMN, use an UPDATE to populate it, drop the old column with ALTER TABLE ... DROP COLUMN, then use RENAME COLUMN to give the new column the original name.
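A sketch of that sequence, reusing the names from the question (myColumn_new is a placeholder, and exact ADD/DROP/RENAME COLUMN support varies across Db2 platforms and versions):
alter table MySchema.MyTable add column myColumn_new decimal(7,4);
update MySchema.MyTable set myColumn_new = myColumn;
alter table MySchema.MyTable drop column myColumn;
alter table MySchema.MyTable rename column myColumn_new to myColumn;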

Can we setup a table in postgres to always view the latest inserts first?

Right now when I create a table and do a
select * from table
I always see the rows that were inserted first at the top. I'd like to have my latest inserts displayed first. Is it possible to achieve this with minimal performance impact?
I believe that Postgres uses an internal field called OID that you can sort by. Try the following:
select *,OID from table order by OID desc;
There are some limitations to this approach as described in SQL, Postgres OIDs, What are they and why are they useful?
Apparently the OID sequence does wrap if it exceeds 4B, so in essence it's a global counter that can wrap. If it does wrap, some slowdown may start occurring when it's used and searched for unique values, etc.
See also https://wiki.postgresql.org/wiki/FAQ#What_is_an_OID.3F
NB: in more recent versions of Postgres this could be deprecated (https://www.postgresql.org/docs/8.4/static/runtime-config-compatible.html#GUC-DEFAULT-WITH-OIDS), although you should still be able to create tables with OIDs even in the most recent versions if you do so explicitly at table creation, as per https://www.postgresql.org/docs/9.5/static/sql-createtable.html
Although the behaviour you are observing in the CLI appears consistent, it isn't a standard and cannot be depended on. If you regularly need to see the most recently added rows in a specific table, you could add a timestamp field or some other sortable field, and perhaps even wrap the query in a stored function. The approach depends on your particular use case.
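For example, a minimal sketch with a timestamp column (my_table is a placeholder name):
-- record the insertion time of every new row
ALTER TABLE my_table ADD COLUMN created_at timestamptz NOT NULL DEFAULT now();
-- newest rows first
SELECT * FROM my_table ORDER BY created_at DESC;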

PostgreSQL v7.4 ALTER TABLE to change column

I need to change the length of CHAR columns in tables in a PostgreSQL v7.4 database. This version does not support directly changing a column's type or size with ALTER TABLE, so altering a column from CHAR(10) to CHAR(20), for instance, isn't possible (yeah, I know, "use varchars", but that's not an option in my current circumstance). Anyone have any advice/tricks on how best to accomplish this? My initial thoughts:
-- Save the table's data in a new "save" table.
CREATE TABLE save_data AS SELECT * FROM table_to_change;
-- Drop the columns from the first column to be changed on down.
ALTER TABLE table_to_change DROP column_name1; -- for each column starting with the first one that needs to be modified
ALTER TABLE table_to_change DROP column_name2;
...
-- Add the columns back, using the new size for the CHAR column
ALTER TABLE table_to_change ADD column_name1 CHAR(new_size); -- for each column dropped above
ALTER TABLE table_to_change ADD column_name2...
-- Copy the data back from the "save" table
UPDATE table_to_change
SET column_name1=save_data.column_name1, -- for each column dropped/readded above
column_name2=save_data.column_name2,
...
FROM save_data
WHERE table_to_change.primary_key=save_data.primary_key;
Yuck! Hopefully there's a better way? Any suggestions appreciated. Thanks!
Not PostgreSQL, but in Oracle I have changed a column's type by:
Add a new column with a temporary name (i.e., TMP_COL) and the new data type (i.e., CHAR(20))
Run an update query: UPDATE TBL SET TMP_COL = OLD_COL;
Drop OLD_COL
Rename TMP_COL to OLD_COL
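The same sequence should work on PostgreSQL 7.4, since ADD COLUMN, DROP COLUMN, and RENAME COLUMN are all supported there; a sketch with placeholder names:
ALTER TABLE table_to_change ADD COLUMN tmp_col CHAR(20);
UPDATE table_to_change SET tmp_col = column_name1;
ALTER TABLE table_to_change DROP COLUMN column_name1;
ALTER TABLE table_to_change RENAME COLUMN tmp_col TO column_name1;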
I would dump the table contents to a flat file with COPY, drop the table, recreate it with the correct column setup, and then reload (with COPY again).
http://www.postgresql.org/docs/7.4/static/sql-copy.html
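A sketch of that sequence (the file path and column layout are made up; COPY to/from a server-side file requires superuser):
-- dump the current contents to a flat file
COPY table_to_change TO '/tmp/table_to_change.dat';
-- recreate the table with the wider column
DROP TABLE table_to_change;
CREATE TABLE table_to_change (
    primary_key integer PRIMARY KEY,
    column_name1 char(20) -- widened from char(10)
);
-- reload the saved data
COPY table_to_change FROM '/tmp/table_to_change.dat';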
Is it acceptable to have downtime while performing this operation? Obviously what I've just described requires making the table unusable for a period of time; how long depends on the data size and the hardware you're working with.
Edit: But COPY is quite a bit faster than INSERTs and UPDATEs. According to the docs you can make it even faster by using BINARY mode. BINARY makes it less compatible with other PGSQL installs but you won't care about that because you only want to load the data to the same instance that you dumped it from.
The best approach to your problem is to upgrade pg to something less archaic :)
Seriously. 7.4 is going to be removed from "supported versions" pretty soon, so I wouldn't wait for it to happen with 7.4 in production.