I'm working with HBase 0.98.12-hadoop2 and Phoenix 4.7.0.
I created a table in Phoenix to map to an existing HBase table.
After index testing, dropping the table failed with the following error:
Error: ERROR 1010 (42M01): Not allowed to mutate table. tableName=my_table (state=42M01,code=1010)
To fix this, I tried altering the table's IMMUTABLE_ROWS property, but it didn't work:
0: jdbc:phoenix:localhost:2181:/hbase> alter table "my_table" set immutable_rows=false;
16/07/25 17:04:42 WARN query.ConnectionQueryServicesImpl: Attempt to cache older version of my_table: current= 3, new=3
No rows affected (0.041 seconds)
0: jdbc:phoenix:localhost:2181:/hbase> drop table "my_table";
Error: ERROR 1010 (42M01): Not allowed to mutate table. tableName=my_table (state=42M01,code=1010)
How can I drop it? Any advice would be appreciated.
I took a look into SYSTEM.CATALOG and found something strange: a stale row for the table.
I don't know why or when it was inserted there, though.
After deleting it I could finally drop the table.
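For anyone hitting the same thing, a minimal sketch of how one might inspect and then remove the stale metadata (the WHERE predicate assumes the table name from above; deleting from SYSTEM.CATALOG directly is unsupported and a last resort, so take a snapshot of it first):
-- Inspect what Phoenix still has recorded for the table.
SELECT TENANT_ID, TABLE_SCHEM, TABLE_NAME, COLUMN_NAME, TABLE_TYPE
FROM SYSTEM.CATALOG
WHERE TABLE_NAME = 'my_table';

-- Last resort: remove the stale metadata row(s), then retry DROP TABLE.
-- (Restart clients afterwards so cached metadata is refreshed.)
DELETE FROM SYSTEM.CATALOG WHERE TABLE_NAME = 'my_table';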
There has to be some kind of reference to the table you are trying to drop.
In my case there was a view referencing the table, so the first step was to drop the view to remove that reference; after that the DROP TABLE command worked.
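A minimal sketch of that order of operations; the view name "my_view" is hypothetical:
drop view "my_view";   -- remove the reference to the table first
drop table "my_table"; -- now succeeds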
I have to write an SQL script to modify the types of a lot of columns in my DB2 database.
Everything goes well except for one specific table (the script used is the same as for the other tables), and DB2 always returns an error I don't understand.
Here is my script:
ALTER TABLE "TEST"."CLIENT"
ALTER COLUMN C_CODE
SET DATA TYPE CHAR(16 OCTETS);
and the error:
SQL Error [42997]: Function not supported (Reason code = "21")..
SQLCODE=-270, SQLSTATE=42997, DRIVER=4.26.14
I tried to modify some other columns on the same table, but I always get the same error.
Do you, by any chance, have an idea?
Thanks in advance
The error SQL0270N (sqlcode = -270) has many possible causes, and the specific cause is indicated by the "reason code".
In this case the "reason code 21" means:
A column cannot be dropped or have its length, data type, security, nullability, or hidden attribute altered on a table that is a base table for a materialized query table.
The documentation for this sqlcode on Db2-LUW is at:
https://www.ibm.com/docs/en/db2/11.5?topic=messages-sql0250-sql0499#sql0270n
Search for SQL0270N on that page, and notice the suggested user response:
To drop or alter a column in a table that is a base table for a materialized query table, perform the following steps:
1. Drop the dependent materialized query table.
2. Drop the column of the base table, or alter the length, data type, nullability, or hidden attribute of this column.
3. Re-create the materialized query table.
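As a sketch of those steps in SQL (the MQT name "CLIENT_MQT" and its definition are assumptions for illustration; SYSCAT.TABDEP can be queried first to find the real dependent MQT):
-- Find the materialized query table(s) that depend on the base table (DTYPE 'S' = MQT).
SELECT TABSCHEMA, TABNAME
FROM SYSCAT.TABDEP
WHERE BSCHEMA = 'TEST' AND BNAME = 'CLIENT' AND DTYPE = 'S';

-- 1. Drop the dependent MQT (name is hypothetical).
DROP TABLE "TEST"."CLIENT_MQT";

-- 2. Alter the base table column.
ALTER TABLE "TEST"."CLIENT"
  ALTER COLUMN C_CODE
  SET DATA TYPE CHAR(16 OCTETS);

-- 3. Re-create the MQT with its original definition and repopulate it.
CREATE TABLE "TEST"."CLIENT_MQT" AS
  (SELECT C_CODE FROM "TEST"."CLIENT")
  DATA INITIALLY DEFERRED REFRESH DEFERRED;
REFRESH TABLE "TEST"."CLIENT_MQT";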
I'm getting an error when trying to update a table. The SQL statement is:
UPDATE dda_accounts SET TYPE_SK = TYPE_SK - 10 WHERE TYPE_SK > 9;
The error I get is:
SQL Error [57007]: Operation not allowed for reason code "7" on table
"BANK_0002_TEST.DDA_ACCOUNTS".. SQLCODE=-668, SQLSTATE=57007, DRIVER=4.27.25
SQLSTATE 57007 says that there's something incomplete after an ALTER TABLE was executed.
I found this resolution, but it's not clear if it can be fixed or the only way to recover the table is using a backup.
Running a SELECT statement works; only the UPDATE fails. How can I fix this table?
You need to REORG the table to recover; see this page for details.
When you get an error like this, look up the SQL0668N message with reason code "7".
This shows:
The table is in the reorg pending state. This can occur after an ALTER TABLE statement containing a REORG-recommended operation.
Be aware that the previous ALTER TABLE (the one that put the table into this reorg-pending state) might have happened quite some time ago, possibly without your knowledge.
If you lack the authorisation to perform REORG TABLE INPLACE on "BANK_0002_TEST.DDA_ACCOUNTS", then contact your DBA for assistance. The DBA may choose to also reorg the indexes at the same time, to perform runstats (docs) on the table after the reorg completes, and to check whether anything else needs rebinding.
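For reference, a minimal sketch of the recovery sequence run through ADMIN_CMD, assuming a classic offline reorg is acceptable (adapt the options to your environment):
-- Clear the reorg-pending state with a classic (offline) table reorg.
CALL SYSPROC.ADMIN_CMD('REORG TABLE BANK_0002_TEST.DDA_ACCOUNTS');

-- Refresh statistics once the reorg completes.
CALL SYSPROC.ADMIN_CMD('RUNSTATS ON TABLE BANK_0002_TEST.DDA_ACCOUNTS WITH DISTRIBUTION AND DETAILED INDEXES ALL');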
I'm using CRecordset to query one table, but I use a second table to filter the data. If, in my GetDefaultSQL override, I return a table list with more than one table, I get this error: ERROR: column reference "ctid" is ambiguous. I know what a "ctid" column is, but I don't use it in my code; it is inserted into the generated SQL statement by the ODBC driver. How can I fix this? How can I tell the ODBC driver not to insert the "ctid" column?
I tried calling CRecordset::Open with the readOnly parameter, since I assume the ODBC driver needs ctid to update rows, and I don't need to update them. But the error remains.
I also tried adding a primary key to the second table, which was missing one, thinking that if a table has a primary key the ODBC driver could use it instead of 'ctid', but again no luck. That makes sense, though, because I don't fetch any columns of the second table; it is used only for filtering.
If I make a DB view to work around the issue, I get ERROR: column "ctid" does not exist.
You have to call CRecordset::Open with two parameters changed:
m_pSet->Open(CRecordset::snapshot, NULL, CRecordset::readOnly);
Then you can fetch both the joined tables and the view without errors, and no "ctid" is injected.
I’m trying to drop temporary tables created by Redshift.
I use the following query to find all the temp tables in the cluster:
select name, count(distinct id)
from stv_tbl_perm
where temp = 1
group by 1
The table I'm trying to drop is called $stg_inappshourly.
I've tried to drop it in both of the following ways:
drop table $stg_inappshourly
drop table stg_inappshourly
The first one returns a syntax error. The second one drops the actual table.
Any ideas how to drop it?
Solved.
The reason this table kept existing is that its session hit an error and didn't close as expected.
The only way I found to remove this table was rebooting the Redshift cluster.
I need to change the length of CHAR columns in tables in a PostgreSQL v7.4 database. This version does not support directly changing a column's type or size with the ALTER TABLE statement, so altering a column from CHAR(10) to CHAR(20), for instance, isn't possible (yeah, I know, "use varchars", but that's not an option in my current circumstance). Does anyone have advice/tricks on how best to accomplish this? My initial thoughts:
-- Save the table's data in a new "save" table.
CREATE TABLE save_data AS SELECT * FROM table_to_change;
-- Drop the columns from the first column to be changed on down.
ALTER TABLE table_to_change DROP column_name1; -- for each column starting with the first one that needs to be modified
ALTER TABLE table_to_change DROP column_name2;
...
-- Add the columns back, using the new size for the CHAR column
ALTER TABLE table_to_change ADD column_name1 CHAR(new_size); -- for each column dropped above
ALTER TABLE table_to_change ADD column_name2...
-- Copy the data back from the "save" table
UPDATE table_to_change
SET column_name1=save_data.column_name1, -- for each column dropped/re-added above
    column_name2=save_data.column_name2,
    ...
FROM save_data
WHERE table_to_change.primary_key=save_data.primary_key;
Yuck! Hopefully there's a better way? Any suggestions appreciated. Thanks!
Not PostgreSQL, but in Oracle I have changed a column's type by:
1. Add a new column with a temporary name (e.g. TMP_COL) and the new data type (e.g. CHAR(20)).
2. Run an update query: UPDATE TBL SET TMP_COL = OLD_COL;
3. Drop OLD_COL.
4. Rename TMP_COL to OLD_COL.
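Translated to the asker's PostgreSQL 7.4 case as a sketch (TBL, OLD_COL, and TMP_COL are placeholders; 7.4 already supports ADD COLUMN, DROP COLUMN, and RENAME COLUMN, so the same sequence should carry over):
-- 1. Add a temporary column with the new size.
ALTER TABLE TBL ADD COLUMN TMP_COL CHAR(20);

-- 2. Copy the data across.
UPDATE TBL SET TMP_COL = OLD_COL;

-- 3. Drop the old column.
ALTER TABLE TBL DROP COLUMN OLD_COL;

-- 4. Rename the temporary column to the original name.
ALTER TABLE TBL RENAME COLUMN TMP_COL TO OLD_COL;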
I would dump the table contents to a flat file with COPY, drop the table, recreate it with the correct column setup, and then reload (with COPY again).
http://www.postgresql.org/docs/7.4/static/sql-copy.html
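A sketch of that dump-and-reload sequence (the file path and column definitions are placeholders borrowed from the question; server-side COPY to/from a file requires superuser privileges):
-- 1. Dump the current contents to a flat file on the server.
COPY table_to_change TO '/tmp/table_to_change.dat';

-- 2. Drop and recreate the table with the new CHAR sizes.
DROP TABLE table_to_change;
CREATE TABLE table_to_change (
    primary_key  INTEGER PRIMARY KEY,
    column_name1 CHAR(20),  -- new size
    column_name2 CHAR(20)   -- new size
);

-- 3. Reload the data.
COPY table_to_change FROM '/tmp/table_to_change.dat';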
Is it acceptable to have downtime while performing this operation? Obviously what I've just described requires making the table unusable for a period of time; how long depends on the data size and the hardware you're working with.
Edit: But COPY is quite a bit faster than INSERTs and UPDATEs. According to the docs, you can make it even faster by using BINARY mode. BINARY makes the dump less compatible with other PostgreSQL installs, but you won't care about that because you only want to load the data back into the same instance you dumped it from.
The best approach to your problem is to upgrade pg to something less archaic :)
Seriously. 7.4 is going to be removed from "supported versions" pretty soon, so I wouldn't wait for it to happen with 7.4 in production.