How to delete from apache phoenix querying by fields not in every index - apache-phoenix

I am trying to execute the statement
DELETE FROM statistics WHERE statistic_id IS NULL
and am getting the error:
java.sql.SQLException: ERROR 1027 (42Y86): All columns referenced in a WHERE clause must be available in every index for a table with immutable rows. tableName=STATISTICS
at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:386)
at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at org.apache.phoenix.compile.DeleteCompiler.compile(DeleteCompiler.java:389)
at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:553)
at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:541)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:296)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:294)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1254)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
My primary key is on the field ID, and I have a secondary index on STATISTIC_ID.

Phoenix requires [1] that when you delete from a table with immutable rows, the rows to delete must be filterable by the columns of every index on the table. One way to work around this is to disable the offending indexes:
ALTER INDEX index_name ON table_name DISABLE;
DELETE FROM table_name WHERE condition;
Afterwards, rebuild the disabled indexes:
ALTER INDEX index_name ON table_name REBUILD;
However, keep in mind that the rebuild takes a significant amount of time and resources.
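Applied to the table from the question, the full sequence would look like the sketch below. The index name STATISTIC_ID_IDX is an assumption; substitute the actual name of the secondary index on STATISTIC_ID.

```sql
-- Disable the secondary index that references STATISTIC_ID
-- (STATISTIC_ID_IDX is a placeholder for the real index name)
ALTER INDEX STATISTIC_ID_IDX ON STATISTICS DISABLE;

-- The delete now compiles, since the remaining indexes cover the WHERE clause
DELETE FROM STATISTICS WHERE STATISTIC_ID IS NULL;

-- Rebuild the index from the table data
ALTER INDEX STATISTIC_ID_IDX ON STATISTICS REBUILD;
```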
[1] https://phoenix.apache.org/secondary_indexing.html#Immutable_Tables

Related

Future index creation on partition tables postgres [duplicate]

I am using PostgreSQL 14.1, and I re-created my live database using partitions for some tables.
Since I did that, I could create indexes while the server wasn't live, but now that it is live I can only create them using CONCURRENTLY. Unfortunately, when I try to create an index concurrently, I get an error.
running this:
create index concurrently foo on foo_table (col1, col2, col3);
provides the error:
ERROR: cannot create index on partitioned table "foo_table" concurrently
It's a live server, so I cannot create indexes non-concurrently, and I need to create some indexes in order to improve performance. Any ideas how to do that?
Thanks
No problem. Create an invalid index on the partitioned table first, then use CREATE INDEX CONCURRENTLY to build a matching index on each partition and attach it to the parent. The concurrent builds are what take time, and they don't block writes; the indexes on the partitions become the partitions of the parent index.
Step 1: Create an index on the partitioned (parent) table
CREATE INDEX foo_idx ON ONLY foo (col1, col2, col3);
This step creates an invalid index. That way, none of the table partitions will get the index applied automatically.
Step 2: Create the index for each partition using CONCURRENTLY and attach to the parent index
CREATE INDEX CONCURRENTLY foo_idx_1
ON foo_1 (col1, col2, col3);
ALTER INDEX foo_idx
ATTACH PARTITION foo_idx_1;
Repeat this step for every partition index.
Step 3: Verify that the parent index created at the beginning (Step 1) is valid. Once indexes for all partitions are attached to the parent index, the parent index is marked valid automatically.
SELECT * FROM pg_index WHERE pg_index.indisvalid = false;
The query should return zero rows. If that's not the case, check your script for mistakes.
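Since pg_index identifies indexes only by OID, a join against pg_class makes any offending index easier to recognize by name; a sketch:

```sql
-- List invalid indexes by name rather than OID
SELECT c.relname AS index_name
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
WHERE NOT i.indisvalid;
```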

Unable to drop index on a db2 table

I have a table MAIN_SCHEMA.TEST on which I created an index on the column CHECK_ID.
CHECK_ID is also a foreign-key constraint in the TEST table.
The table contains only 50 records.
By mistake, the index got created in the default schema as DEFAULT_SCHEMA.CHECK_ID_IDX:
CREATE INDEX DEFAULT_SCHEMA.CHECK_ID_IDX ON MAIN_SCHEMA.TEST (CHECK_ID ASC);
So I am trying to drop this index, but the drop statement gets stuck for a long time:
DROP INDEX DEFAULT_SCHEMA.CHECK_ID_IDX;
There were no locks on the table when I checked.
Instead of dropping and recreating the index in the right schema, could you just try to RENAME the index? RENAME takes the existing SCHEMA.NAME pair together with the new name as input. It will not move any data; it just updates the metadata.
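A minimal sketch of what that could look like, assuming Db2 for LUW syntax; the new name CHECK_ID_IDX_FIXED is a placeholder, and the exact qualification rules for the target name may vary by Db2 version:

```sql
-- Rename the misplaced index; metadata-only operation, no data movement
RENAME INDEX DEFAULT_SCHEMA.CHECK_ID_IDX TO CHECK_ID_IDX_FIXED;
```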

How to reorg the indexes in DB2 database

I have to reorg all the indexes for a table, and I am getting the following error:
SQL Error [23505]: One or more values in the INSERT statement, UPDATE statement, or foreign key update caused by a DELETE statement are not valid because the primary key, unique constraint or unique index identified by "2" constrains table "GMS4.SMS_PHYSICAL_CUSTOMER_DATA" from having duplicate values for the index key.. SQLCODE=-803, SQLSTATE=23505, DRIVER=4.16.53
DB2 Version 10
Please help.
I'm assuming you're on DB2 for Linux/Unix/Windows, here.
Your problem is not that you need to reorg your tables. The problem is that you are trying to insert a row, but you have a unique index on that table, which is preventing the insert.
You can see the name of the index, and the columns it is unique over, by using this query:
SELECT
I.INDSCHEMA
,I.INDNAME
,C.COLNAME
FROM SYSCAT.INDEXES I
JOIN SYSCAT.INDEXCOLUSE C
ON I.INDSCHEMA = C.INDSCHEMA
AND I.INDNAME = C.INDNAME
WHERE I.IID = #indexID
AND I.TABSCHEMA = #tableSchema
AND I.TABNAME = #tableName
ORDER BY C.COLSEQ
;
You can get all of the parameters needed for this query from your error message. In this case, #indexId would be 2, #tableSchema would be GMS4, and #tableName would be SMS_PHYSICAL_CUSTOMER_DATA.
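With the values from the error message plugged in, the query becomes:

```sql
-- Identify index "2" on GMS4.SMS_PHYSICAL_CUSTOMER_DATA from the SQLCODE -803 message
SELECT
    I.INDSCHEMA
   ,I.INDNAME
   ,C.COLNAME
FROM SYSCAT.INDEXES I
JOIN SYSCAT.INDEXCOLUSE C
    ON  I.INDSCHEMA = C.INDSCHEMA
    AND I.INDNAME   = C.INDNAME
WHERE I.IID       = 2
  AND I.TABSCHEMA = 'GMS4'
  AND I.TABNAME   = 'SMS_PHYSICAL_CUSTOMER_DATA'
ORDER BY C.COLSEQ;
```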

PostgreSQL cannot drop index on partition

I have several partition tables with indexes on them. All indexes can be seen in response of
SELECT indexname FROM pg_catalog.pg_indexes;
But when I run DROP INDEX my_index_name; it returns an error saying that there is no index my_index_name.
How can I drop those indexes?
Could be related to your search_path. Try dropping the index prefixed by the schema.
E.g.:
SELECT schemaname, tablename, indexname FROM pg_indexes WHERE indexname = 'my_index_name';
Using the results of that query, drop the index:
DROP INDEX some_schema.your_index_name;

Sybase complains of duplicate insertion where none exists

I have moved some records from my SOURCE table in DB_1 into an ARCHIVE table in another database, DB_2 (i.e., inserted the records from SOURCE into ARCHIVE and then deleted the records from SOURCE).
My SOURCE table has the following index created as SOURCE_1:
CREATE UNIQUE NONCLUSTERED INDEX SOURCE_1
ON dbo.SOURCE(TRADE_SET_ID, ORDER_ID)
The problem is - when I try to insert the rows back into SOURCE from ARCHIVE, Sybase throws the following error:
Attempt to insert duplicate key row in object 'SOURCE' with unique index 'SOURCE_1'
And, of course, subsequently fails the insertions.
I confirmed that my SOURCE table does not have these duplicates because the following query returned empty:
select * from DB_1.dbo.SOURCE
join DB_2.dbo.ARCHIVE
on DB_1.dbo.SOURCE.TRADE_SET_ID = DB_2.dbo.ARCHIVE.TRADE_SET_ID
AND DB_1.dbo.SOURCE.ORDER_ID = DB_2.dbo.ARCHIVE.ORDER_ID
Since the above query returned nothing, I have not violated my unique index constraint on the two columns, yet Sybase complains that I have.
Does anyone have any ideas on why this is happening?
If Sybase is anything like SQL Server in this regard (which I'm more familiar with), I would suspect that the index is blocking the insert. Try disabling the index (along with any other indexes or autoincrement columns) on your archive version before copying over to it, then re-enabling. It's probable that Sybase would try to automatically create IDs for the insertions, which would interfere with the existing records.
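Another way to narrow this down: the join in the question only checks ARCHIVE against SOURCE, so it would not catch duplicates within the ARCHIVE rows themselves. A sketch of that check:

```sql
-- Look for key pairs that appear more than once inside ARCHIVE itself;
-- any row returned here would trip the unique index on re-insert
SELECT TRADE_SET_ID, ORDER_ID, COUNT(*) AS cnt
FROM DB_2.dbo.ARCHIVE
GROUP BY TRADE_SET_ID, ORDER_ID
HAVING COUNT(*) > 1
```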