What does "error in materialized view refresh path" mean? - oracle12c

I have a query, and when I run it directly I can insert its results into the target table without any problem.
But when we run it as an MV refresh:
dbms_mview.refresh(mv, atomic_refresh => transactional) ;
we get this error:
Error report -
ORA-12008: error in materialized view refresh path
ORA-01722: invalid number
ORA-06512: at "LINK_OD_IREPORT.R_MV_IN_PLACE", line 26
ORA-06512: at line 1
12008. 00000 - "error in materialized view refresh path"
*Cause: Table SNAP$_<mview_name> reads rows from the view
MVIEW$_<mview_name>, which is a view on the master table
(the master may be at a remote site). Any
error in this path will cause this error at refresh time.
For fast refreshes, the table <master_owner>.MLOG$_<master>
is also referenced.
*Action: Examine the other messages on the stack to find the problem.
See if the objects SNAP$_<mview_name>, MVIEW$_<mview_name>,
<mowner>.<master>#<dblink>, <mowner>.MLOG$_<master>#<dblink>
still exist.
Does anyone know what this means?
I've tried querying these and it says they don't exist:
select * from $SNAPSHOT_mv_name;
select * from MVIEW$_mv_name;
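For reference, ORA-01722 means a character value in the MV's defining query failed an implicit conversion to a number during the refresh. A diagnostic sketch (YOUR_MV is a placeholder name; note that atomic_refresh expects a BOOLEAN) is to pull the stored query and run it by hand to find the offending column, then retry a complete, non-atomic refresh:
-- Sketch only; YOUR_MV is a placeholder for the real MV name.
-- 1. Pull the MV's defining query and run it manually to find the column
--    that fails the implicit TO_NUMBER conversion (ORA-01722).
SELECT query FROM user_mviews WHERE mview_name = 'YOUR_MV';

-- 2. atomic_refresh is a BOOLEAN; method => 'C' forces a complete refresh.
BEGIN
  DBMS_MVIEW.REFRESH(list => 'YOUR_MV', method => 'C', atomic_refresh => FALSE);
END;
/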

Related

Postgres SQL ERROR: XX001: invalid page in block

This error has just started popping up when I run queries against TABLE_A....
ERROR: XX001: invalid page in block 38 of relation pg_tblspc/16402/PG_14_202107181/16404/125828
If I try a very simple query against the same table, for example SELECT * FROM TABLE_A, I get a similar error....
ERROR: invalid memory alloc request size 18446744073709551613
SQL state: XX000
Another similar query, select count(*) from TABLE_A, gives me....
ERROR: could not access status of transaction 917520
DETAIL: Could not open file "pg_xact/0000": No such file or directory.
SQL state: 58P01
Based on this thread I tried this fix....
SET zero_damaged_pages = on;
VACUUM FULL TABLE_A;
REINDEX TABLE TABLE_A;
The second command, VACUUM FULL TABLE_A, produced another related error....
ERROR: found xmax 16384 from before relfrozenxid 379279265
SQL state: XX001
I think all these problems boil down to a simple case of file corruption at the OS level. I do have the ability to drop and re-create this table, but before I start I'd like to know if there's a quicker/simpler solution, and if there's any way of stopping this from happening again.
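If dropping and re-creating turns out to be the only way, one common salvage pattern is to copy the still-readable rows into a fresh table, skipping the damaged block with a ctid range (PostgreSQL 14, which the tablespace path suggests you are on, can compare ctid values directly). This is only a sketch: it assumes block 38 of TABLE_A itself is the corrupted one rather than an index, table_a_rescue is a made-up name, and other rows may still raise the transaction-status errors shown above.
-- Sketch only: copy readable rows around the damaged block into a new table.
-- Assumes block 38 of TABLE_A itself is corrupted; adjust the tid bounds as needed.
CREATE TABLE table_a_rescue (LIKE table_a INCLUDING DEFAULTS);

INSERT INTO table_a_rescue
SELECT * FROM table_a WHERE ctid < '(38,0)'::tid;    -- rows before the bad block

INSERT INTO table_a_rescue
SELECT * FROM table_a WHERE ctid >= '(39,0)'::tid;   -- rows after the bad block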

Adding an index in mysql workbench corrupted the table?

Step-by-step:
Right clicked on tbl > Table Inspector > Clicked "Columns" tab > Right click > Create Index >
In that section I left the following defaults:
Algo: Default
Locking: Default (allow as much concurrency as possible)
It gave a timeout error
I then tried to run a simple "SELECT * ", but it's timing out every time now.
I didn't think that adding an index could corrupt a table, so I didn't make a backup, and now I'm in a bit of a panic... Is there anything that can be done to reverse this?
When I run SHOW FULL PROCESSLIST I see the following:
State: 'Waiting for table metadata lock'
Info: 'CREATE INDEX idx_all_mls_2_Centris_No ON mcgillim_matrix.all_mls_2 (Centris_No) COMMENT '''' ALGORITHM DEFAULT LOCK DEFAULT'
In the processlist it's clearly visible that your index creation is waiting for a metadata lock, which means your table is already locked by another query (something like select distinct t1.broker_name) that has been running for 3460 seconds.
You have two options here.
Let that SQL complete first; the index will then be created.
Alternatively, kill that SELECT; it will not harm your system and can be run again later.
To kill the query, find its ID in information_schema.processlist, then simply run:
kill ID;
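As a concrete sketch, finding the blocker and killing it could look like this (12345 is just an example id):
-- Find long-running statements that may be holding the metadata lock
SELECT id, user, time, state, info
FROM information_schema.processlist
WHERE command <> 'Sleep'
ORDER BY time DESC;

-- Kill the blocking SELECT by its id (example value only)
KILL 12345;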

Redshift COPY throws error but 'stl_load_errors' system table does not provide details

When I attempt to copy a CSV from S3 into a new table in Redshift (which normally works for other tables) I get this error
ERROR: Load into table 'table_name' failed. Check 'stl_load_errors'
system table for details.
But, when I run the standard query to investigate stl_load_errors
SELECT errors.tbl, info.table_id::integer, info.table_id, *
FROM stl_load_errors errors
INNER JOIN svv_table_info info
ON errors.tbl = info.table_id
I don't see any results related to this COPY. I see errors from previous failed COPY commands, but none related to the most recent one that I am interested in.
Please make sure you are querying the stl_load_errors table as the same user that ran the COPY command. You can also try avoiding the svv_table_info table in the query, or changing the INNER join to a LEFT join.
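As a minimal check, a join-free sketch run as the same user that issued the COPY might look like this:
-- Look at the most recent load errors without joining to svv_table_info
SELECT starttime, tbl, filename, line_number, colname, err_code, err_reason
FROM stl_load_errors
ORDER BY starttime DESC
LIMIT 20;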

Put request failed : INSERT INTO "PARTITION_PARAMS" when executing an insert..select query with hundreds of fields

Executing an insert..select query over Tez on a Hortonworks HDP 3 cluster with hive3, I get the following error:
java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. MetaException(message:
Put request failed : INSERT INTO "PARTITION_PARAMS" ("PARAM_VALUE","PART_ID","PARAM_KEY") VALUES (?,?,?) )
The destination table has 200 fields and is partitioned by two fields. In some testing, the error disappears when the destination table has 143 fields. If I rename the destination table's fields to shorter ones, I can get the query working without error with more fields, but I can't get it working with the 200 fields I need.
Hive Metastore is configured to use a PostgreSQL database
We were hitting HIVE-20221.
We can get the query to execute correctly by setting hive.stats.autogather=false.
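Roughly, the workaround looks like this (destination_table and source_table are placeholder names, not from the question):
-- Workaround from HIVE-20221: turn off automatic stats gathering for the session
SET hive.stats.autogather=false;

-- then run the original insert..select (placeholder names for illustration)
INSERT INTO destination_table
SELECT * FROM source_table;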

"No current record for fetch operation" for select insert

Can anyone see why I'm getting the "No current record for fetch operation" below?
I'm successfully skipping duplicate records by catching, and not re-throwing, the unique key violation exception below. However, there is another error.
FOR SELECT
    ...
  FROM
    P_SELECT_CLAIM_FILE cf
  ORDER BY
    cf.DATESBM, cf.TIMESBM, cf.TRANSCDE
  INTO
    ...variables
DO BEGIN
  INSERT INTO
    CLAIM_TABLE (...)
  VALUES (...variables);

  WHEN GDSCODE unique_key_violation DO
    TX_ID = 0;

  WHEN GDSCODE no_cur_rec DO
    /* Why does this happen?
       -508 335544348 no_cur_rec  No current record for fetch operation
    */
    EXCEPTION E 'no_cur_rec ' || TX_ID;
END
The procedure P_SELECT_CLAIM_FILE contains another FOR SELECT INTO with lots of trimming and finally a SUSPEND statement; it reads from a fixed-width text file.
I'm tempted to change this into a single INSERT ... SELECT ... WHERE NOT EXISTS statement, but I'd prefer to make a minimal fix instead; the holidays are already here.
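For reference, the single-statement alternative mentioned above might look roughly like this; the column names are partly taken from the snippet and partly invented, since the real lists are elided:
-- Sketch of the INSERT ... SELECT with a NOT EXISTS guard against duplicates.
-- TX_ID is assumed to be the unique key; replace the column lists with the real ones.
INSERT INTO CLAIM_TABLE (TX_ID, DATESBM, TIMESBM, TRANSCDE)
SELECT cf.TX_ID, cf.DATESBM, cf.TIMESBM, cf.TRANSCDE
FROM P_SELECT_CLAIM_FILE cf
WHERE NOT EXISTS (
  SELECT 1 FROM CLAIM_TABLE ct
  WHERE ct.TX_ID = cf.TX_ID
);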