Magento 1.7 cannot reindex product flat data... I get the following error when trying to reindex my database.
Product Flat Data index process unknown error:
exception 'PDOException' with message 'SQLSTATE[23000]: Integrity constraint violation: 1452 Cannot add or update a child row: a foreign key constraint fails (`d014505f`.<result 2 when explaining filename '#sql-1f6c_39a11d'>, CONSTRAINT `FK_CAT_PRD_FLAT_1_ENTT_ID_CAT_PRD_ENTT_ENTT_ID` FOREIGN KEY (`entity_id`) REFERENCES `catalog_product_entity` (`e)' in /www/htdocs/w00f5624/lib/Zend/Db/Statement/Pdo.php:228
Stack trace:#0 /www/htdocs/w00f5624/lib/Zend/Db/Statement/Pdo.php(228): PDOStatement->execute(Array)
#1 /www/htdocs/w00f5624/lib/Varien/Db/Statement/Pdo/Mysql.php(110): Zend_Db_Statement_Pdo->_execute(Array)
#2 /www/htdocs/w00f5624/lib/Zend/Db/Statement.php(300): Varien_Db_Statement_Pdo_Mysql->_execute(Array)
#3 /www/htdocs/w00f5624/lib/Zend/Db/Adapter/Abstract.php(479): Zend_Db_Statement->execute(Array)
#4 /www/htdocs/w00f5624/lib/Zend/Db/Adapter/Pdo/Abstract.php(238): Zend_Db_Adapter_Abstract->query('ALTER TABLE `ca...', Array)
#5 /www/htdocs/w00f5624/lib/Varien/Db/Adapter/Pdo/Mysql.php(419): Zend_Db_Adapter_Pdo_Abstract->query('ALTER TABLE `ca...', Array)
#6 /www/htdocs/w00f5624/lib/Varien/Db/Adapter/Pdo/Mysql.php(340): Varien_Db_Adapter_Pdo_Mysql->query('ALTER TABLE `ca...')
#7 /www/htdocs/w00f5624/lib/Varien/Db/Adapter/Pdo/Mysql.php(2569): Varien_Db_Adapter_Pdo_Mysql->raw_query('ALTER TABLE `ca...')
#8 /www/htdocs/w00f5624/app/code/core/Mage/Catalog/Model/Resource/Product/Flat/Indexer.php(816): Varien_Db_Adapter_Pdo_Mysql->addForeignKey('FK_CAT_PRD_FLAT...', 'catalog_product...', 'entity_id', 'catalog_product...', 'entity_id', 'CASCADE', 'CASCADE')
#9 /www/htdocs/w00f5624/app/code/core/Mage/Catalog/Model/Resource/Product/Flat/Indexer.php(1390): Mage_Catalog_Model_Resource_Product_Flat_Indexer->prepareFlatTable(1)
#10 /www/htdocs/w00f5624/app/code/core/Mage/Catalog/Model/Product/Flat/Indexer.php(296): Mage_Catalog_Model_Resource_Product_Flat_Indexer->reindexAll()
#11 /www/htdocs/w00f5624/app/code/core/Mage/Catalog/Model/Product/Indexer/Flat.php(336): Mage_Catalog_Model_Product_Flat_Indexer->reindexAll()
#12 /www/htdocs/w00f5624/app/code/core/Mage/Index/Model/Process.php(209): Mage_Catalog_Model_Product_Indexer_Flat->reindexAll()
#13 /www/htdocs/w00f5624/app/code/core/Mage/Index/Model/Process.php(255): Mage_Index_Model_Process->reindexAll()
#14 /www/htdocs/w00f5624/shell/indexer.php(158): Mage_Index_Model_Process->reindexEverything()
#15 /www/htdocs/w00f5624/shell/indexer.php(198): Mage_Shell_Compiler->run()
#16 {main}
It seems Magento did not clean the flat table when you deleted some information, so you need to clean it manually with this SQL query:
TRUNCATE TABLE `catalog_product_flat_1`;
Then run the reindex process.
It's okay to empty that table, since Magento rebuilds (reindexes) it from the EAV tables.
Magento: reindex programmatically
ID Code
1 catalog_product_attribute
2 catalog_product_price
3 catalog_url
4 catalog_product_flat
5 catalog_category_flat
6 catalog_category_product
7 catalogsearch_fulltext
8 cataloginventory_stock
9 tag_summary
require_once 'app/Mage.php';
Mage::app();

// Reindex all nine index processes by ID
for ($i = 1; $i <= 9; $i++) {
    $process = Mage::getModel('index/process')->load($i);
    $process->reindexAll();
}
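The ID-to-code mapping above lives in Magento's index_process table. As a sanity check (assuming a stock Magento 1.x schema), you can list the processes directly:

```sql
-- List Magento 1.x index processes with their codes and current status
SELECT process_id, indexer_code, status
FROM index_process
ORDER BY process_id;
```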
I experienced the same issue today. To fix this, just locate the corrupted products by running
SELECT cpf.entity_id FROM catalog_product_flat_1 AS cpf LEFT JOIN catalog_product_entity AS cpe ON cpf.entity_id = cpe.entity_id WHERE ISNULL(cpe.entity_id);
You'll get a result like
+-----------+
| entity_id |
+-----------+
| 14029 |
| 14111 |
+-----------+
2 rows in set (0.01 sec)
Now you can just delete these products by running
DELETE FROM catalog_product_flat_1 where entity_id IN (14029,14111);
Note: You might need to change the "catalog_product_flat_1" table - the error message tells you which table contains the corrupted products.
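If you'd rather not copy the IDs by hand, the SELECT and DELETE above can be combined into one statement (same tables as above; adjust the _1 suffix to match the table named in your error message):

```sql
-- Delete flat rows whose entity no longer exists in the EAV entity table
DELETE FROM catalog_product_flat_1
WHERE entity_id NOT IN (
    SELECT entity_id FROM catalog_product_entity
);
```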
I've got almost the same error:
(something like that) SQLSTATE[HY000]: General error: 1005 Can't create table 'databasename.#sql-4ebf-e07' (errno: 121)
Then I found that the same foreign key 'FK_CAT_PRD_FLAT_1_ENTT_ID_CAT_PRD_ENTT_ENTT_ID' was involved.
Even truncating the table like this:
SET FOREIGN_KEY_CHECKS=0;
TRUNCATE TABLE catalog_product_flat_1;
SET FOREIGN_KEY_CHECKS=1;
did not solve my problem.
Even when I tried to drop all tables in the DB, I could not drop three of them:
catalog_product_entity, and a couple of eav_* tables (I don't remember which).
Only one approach helped me:
Make a backup of the current DB (do this before any changes, even if all you've got is a message telling you to reindex)
Drop the DB (not the individual tables, but the DB itself)
Create the DB again (check that you still have privileges)
Restore the DB from the backup and check the admin panel.
SET FOREIGN_KEY_CHECKS=0;
TRUNCATE TABLE catalog_product_flat_1;
TRUNCATE TABLE catalog_product_flat_2;
SET FOREIGN_KEY_CHECKS=1;
Worked for me.
Afterwards I could reindex from CLI.
I'm running PostgreSQL 11 on a production server.
A stored procedure is rather slow, and one query (a piece of code in the procedure) looks something like this:
create temp table tmp_pos_source as
with ... (
...
)
, cte_emp as (
...
)
, cte_all as (
...
)
select ...
from cte_all;
analyze tmp_pos_source;
The query is slow and I want to create an index to improve speed.
create index idx_pos_obj_id on tmp_pos_source(pos_obj_id);
Where should I put it? After the ANALYZE command, or before?
It doesn't matter. The only time when it helps to ANALYZE a table after creating an index is when the index is on an expression rather than on a plain column. The reason is that PostgreSQL automatically collects statistics for each column, but statistics on expressions are only collected if there is an index or extended statistics on the expression.
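To illustrate the exception mentioned in the answer, here is a sketch with a hypothetical text column `name` (not in the original table definition): for a plain-column index the ANALYZE order is irrelevant, but for an expression index the expression statistics only get collected once the index exists.

```sql
-- Plain column index: ANALYZE before or after makes no difference
CREATE INDEX idx_pos_obj_id ON tmp_pos_source (pos_obj_id);

-- Expression index: statistics on lower(name) are only collected
-- once the index exists, so ANALYZE afterwards is useful here
CREATE INDEX idx_pos_name_lower ON tmp_pos_source (lower(name));
ANALYZE tmp_pos_source;
```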
I am loading a table with 41 million rows from an SQL dump.
PostgreSQL Server 14.3 is set up following best practices found on Google (work_mem, jobs, etc.).
The table has a lot of indexes. After loading the dump I saw the following in the console:
...
INSERT 0 250
INSERT 0 250
INSERT 0 141
setval
----------
41349316
ALTER TABLE
ALTER TABLE
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
Judging by the output it is still doing something. The process in the console has not finished; there are no new lines beyond what I showed.
I checked current activity with:
select * from pg_stat_activity where datname = 'dbname';
It shows idle in the state column; Google says the query column shows the last command that was run in the session. I checked again after a few hours and nothing had changed.
pg_stat_progress_create_index shows nothing.
So I don't know what to do. Could the indexing process be hung? Or is everything fine and I should just wait? If so, what is it doing now? What can/should I do?
UPD from the next morning: today I rechecked everything.
SELECT * FROM pg_stat_progress_create_index;
still shows nothing.
The console window printed two new lines:
CREATE INDEX
CREATE INDEX
I checked again:
select * from pg_stat_activity where datname = 'dbname';
It shows that the active query is:
CREATE INDEX index_insert_status ON public.xml_files USING btree (insert_status);
But why does pg_stat_progress_create_index not show anything?
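One way to cross-check this, a sketch reusing the placeholder database name 'dbname' from above: join pg_stat_activity with the progress view by pid, so you can see in one result whether the session running CREATE INDEX has a progress row at all.

```sql
-- Sessions in the target database, with index-build progress if any
SELECT a.pid, a.state, left(a.query, 40) AS query,
       p.phase, p.blocks_done, p.blocks_total
FROM pg_stat_activity a
LEFT JOIN pg_stat_progress_create_index p USING (pid)
WHERE a.datname = 'dbname';
```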
PostgreSQL DB: v 9.4.24
create table my_a_b_data ... // with a_uuid, b_uuid, and c columns
NOTE: my_a_b_data keeps references to the a and b tables, i.e. it stores the UUIDs of a and b.
The primary key is (a_uuid, b_uuid).
there is also an index:
create unique index my_a_b_data_pkey
on my_a_b_data (a_uuid, b_uuid);
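Side note: in PostgreSQL the primary key already creates a unique index on (a_uuid, b_uuid), so the explicit index above is redundant. You can list both with:

```sql
-- Shows the PK's implicit index plus the explicitly created one
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'my_a_b_data';
```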
In Java JDBC-style code, within the scope of a single transaction (start() -> [code (delete, insert)] -> commit()), using the org.postgresql:postgresql:42.2.5 driver:
delete from my_a_b_data where b_uuid = 'bbb';
insert into my_a_b_data (a_uuid, b_uuid, c) values ('aaa', 'bbb', null);
I found that the insert fails with a duplicate-key error, as if the delete had not taken effect yet.
Q: Is this some kind of limitation in PostgreSQL, i.e. that it can't do a delete and an insert in one transaction because it doesn't update its indexes until the delete is committed, so the insert fails since the id or key (whatever we may be using) already exists in the index?
What would be a possible solution? Splitting it into two transactions?
UPDATE: the order is exactly the same. When I test the SQL alone in the SQL console, it works fine. We use the JDBI library v 5.29.
There it looks like this:
@Transaction
@SqlUpdate("insert into my_a_b_data (...") // similar for the delete
public abstract void addB() ..
So in the code:
this.begin();
this.deleteByB(b_id);
this.addB(a_id, b_id);
this.commit();
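For reference, the equivalent plain-SQL sequence, which, as the UPDATE above notes, runs fine in a psql console:

```sql
BEGIN;
DELETE FROM my_a_b_data WHERE b_uuid = 'bbb';
INSERT INTO my_a_b_data (a_uuid, b_uuid, c) VALUES ('aaa', 'bbb', NULL);
COMMIT;
```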
I had a similar problem with inserting duplicate values, and I resolved it by using insert-and-update instead of delete. I built this process in Python, but you should be able to reproduce it:
First, create a temporary table shaped like the target table you want to insert values into; the difference is that this table is dropped at commit.
CREATE TEMP TABLE temp_my_a_b_data
(LIKE public.my_a_b_data INCLUDING DEFAULTS)
ON COMMIT DROP;
I created a CSV (I had to merge data from different sources) with the values I wanted to insert, and used COPY to load them into the temp table (temp_my_a_b_data).
I found this Java-related COPY code in the post PostgreSQL - \copy command:
String query ="COPY tmp from 'E://load.csv' delimiter ','";
Use INSERT INTO with the ON CONFLICT clause, which lets you decide what to do when the insert cannot proceed because of the specified constraint; in the case below we do an update:
INSERT INTO public.my_a_b_data
SELECT *
FROM temp_my_a_b_data
ON CONFLICT (a_uuid, b_uuid) DO UPDATE
SET c = EXCLUDED.c;
Considerations:
I am not sure, but you might be able to perform the third step without the previous ones (temp table and COPY): you could just loop over the values:
INSERT INTO public.my_a_b_data VALUES (value1, value2, null)
ON CONFLICT (a_uuid, b_uuid) DO UPDATE
SET c = EXCLUDED.c;
I am using COPY table_name FROM STDIN to import data. It is very efficient, but if there is any duplicate-key violation, the whole procedure stops. Is there any way around this?
Why doesn't PostgreSQL just give a warning and copy the rest of the data?
Here's the example :
select * from "Demo1";
Id | Name | Age
---+-------+-----
1 | abc | 20
2 | def | 22
COPY "Demo1" from STDIN;
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> 3 pqr 25
>> 4 xyz 26
>> 5 abc 21
>> \.
ERROR: duplicate key value violates unique constraint "Demo1_Name_key"
DETAIL: Key ("Name")=(abc) already exists.
CONTEXT: COPY Demo1, line 3
Here the "Name" field has a unique constraint. Since the string "abc" is already present in the table, the whole COPY is aborted.
You could use either of these two methods to import the data:
COPY FROM into a temporary table, weed out primary-key failures, and import only the valid rows.
Use an FDW (like this example). Foreign Data Wrappers are recommended for live feeds / very large data sets, since you don't need to create a temporary copy (for errors / skipping columns / skipping rows etc.) and can directly run a SELECT statement that skips any column/row and INSERT into the destination table.
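A minimal sketch of the first method, using the "Demo1" example above (the staging-table name is made up; ON CONFLICT requires PostgreSQL 9.5+ and the existing unique constraint on "Name"):

```sql
-- 1. Load everything into a staging table with no constraints
CREATE TEMP TABLE demo1_staging (LIKE "Demo1");
COPY demo1_staging FROM STDIN;

-- 2. Move only the rows that don't violate the unique constraint;
--    ON CONFLICT DO NOTHING silently skips the duplicates
INSERT INTO "Demo1"
SELECT * FROM demo1_staging
ON CONFLICT ("Name") DO NOTHING;
```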
I have two queries, an insert and an update. I benchmarked them through the postgres console with a large dataset and found that Postgres was not picking up the index. To work around this, I disabled seqscan for those two queries and got a huge performance boost; Postgres was then able to use the indexes instead of scanning the whole table.
Problem:
I am doing the same thing through jdbc
statement.executeUpdate("set enable_seqscan = off");
statement.executeUpdate("My_Insert_Query");
statement.executeUpdate("My_Update_Query");
statement.executeUpdate("set enable_seqscan = on");
But it seems Postgres is not turning enable_seqscan off, and the queries are taking far too long to execute.
Master Table
Master_Id auto-generated
child_unique integer
Child Table
child_unique integer
Master_id integer
INSERT INTO Master (child_unique) SELECT i.child_unique FROM Child AS i WHERE NOT EXISTS (SELECT * FROM Master WHERE Master.child_unique = i.child_unique);
UPDATE Child SET Master_id = Master.Master_id FROM Master WHERE Master.child_unique = Child.child_unique;
For every unique row in Child that is not present in Master, I insert it into the Master table, get the auto-generated Master_Id, and write it back to the Child table.
Both tables have index on child_unique.
The index is picked up on the Master table, but not on the Child table.
How did I find out? Using Postgres's pg_stat_all_indexes view.
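Independently of enable_seqscan, EXPLAIN shows directly whether the planner chooses the index for each query; a sketch against the tables above:

```sql
-- Check the plan for the update; look for an Index Scan on Child/Master
-- rather than a Seq Scan (ANALYZE actually executes the statement)
EXPLAIN ANALYZE
UPDATE Child
SET Master_id = Master.Master_id
FROM Master
WHERE Master.child_unique = Child.child_unique;
```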
Firstly, I agree with Frank above - fix the real problem.
However, if you really want to disable seq-scans, you haven't provided enough information for anyone to help you do so.
Are these statements all executed on the same connection? (turn your logging on/up in PostgreSQL's config file to find out)
Are there any other jdbc-generated bits being sent to the server? (logging again)
What does a "show enable_seqscan" return after the first statement?
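As a sketch of that last check: SHOW reports the setting as seen by the current connection, so running it on the same connection as the updates tells you whether the SET actually took effect. SET LOCAL scopes the change to the current transaction, which avoids leaving seq-scans disabled for later statements if the connection is pooled:

```sql
SHOW enable_seqscan;              -- "on" by default

BEGIN;
SET LOCAL enable_seqscan = off;   -- only for this transaction
SHOW enable_seqscan;              -- now reports "off" on this connection
-- ... run the insert and update here ...
COMMIT;                           -- setting reverts automatically
```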