SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry '1708391' for key 'PRIMARY' (Magento 1.7)

I'm getting this error and it has been keeping me busy all day. I get it when I log in on the admin page. The Magento version is 1.7.
Things I have tried so far (among those I can still remember):
clear browser cache
clear cache (db & files)
clear sessions
clear locks
but none of it works!

Related

TYPO3: clear big MySQL tables

On my TYPO3 6.2 website some SQL tables have become quite big:
tx_realurl_urlcache 557 MB
cf_cache_hash 15.5 MB
tx_kesearch_stat_search 15.4 MB
tx_kesearch_stat_word 19.6 MB
sys_refindex 18.1 MB
Please note that all the other tables (about 100 of them) combined are > 15 MB ... so my question is simple:
-> Which ones could I delete? Is it safe or not?
I have had bad experiences with TYPO3 database cleanup in the past, so I would rather ask you for advice :)
TL;DR: the only tables that can be cleared are cache tables, but clearing them costs you performance and they will fill up again soon.
You can clear these tables and they will probably build up again, but you will suffer in the meantime.
tx_realurl_urlcache - this is where RealURL stores the generated URLs; if you truncate it, URL decoding might break / some URLs might become unknown = your page breaks.
cf_cache_* - can be truncated but will be rebuilt; in the meantime your server has to regenerate the cached information, so it is slower.
tx_kesearch_stat_search / tx_kesearch_stat_word - these two belong to the ke_search extension and contain the index information of your site. Truncating them breaks the search until the tables are rebuilt.
sys_refindex - this is where TYPO3 stores the references that help you avoid deleting files or records that are still in use (normally this index is rebuilt with a scheduler task to keep the data consistent).
Do not delete the tables themselves! You can truncate some of them.
If you want to clean up caches, just flush all TYPO3 caches in the backend, or use the 'Clear all cache' button inside the TYPO3 Install Tool.
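If you do decide to clear a caching framework table directly in MySQL rather than via the backend, here is a minimal sketch (assuming the default cf_cache_hash table pair; each data table should be truncated together with its _tags table):
-- Truncate a caching-framework cache together with its tag table,
-- otherwise tag-based cache flushing can behave inconsistently.
TRUNCATE TABLE cf_cache_hash;
TRUNCATE TABLE cf_cache_hash_tags;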

What is the vcslog table in JediVCS used for?

Just wondering what the vcslog table is used for in JediVCS.
I received some consistency errors on this table (during my backup procedure these errors were flagged by the database backend) and there is a chance that after the repair some data went missing.
If the table is just acting as a log then this should be ok.
Some other info:
I was going to ask this question on the JediVCS newsgroup but it appears to be down.
I do have recent backups that I could restore, but I would rather not, as it means finding and re-committing any intervening work.
I diffed all other tables and their data between the pre-fix and post-fix versions of the VCS and they all match.
I tried to diff the vcslog table, but the tool I have crashed because the table has millions of records (I think the tool ran out of memory doing the diff).
Any info appreciated.
Peter Mayes
I ended up mailing the active JediVCS admin staff directly.
Thanks go to them for their prompt and helpful advice.
Basically the vcslog table holds information relating to actions taken by a user. As such its data is entirely optional, but recommended.
The least useful records are marked as type=g in the database. (This logs a 'get' operation by a user).
After deleting these records directly in the database, the vcslog table shrank by 97%. (I suspect the large number of 'get' logs is due to our nightly autobuilds.)
The database has been stable since the clear (some six weeks ago).
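A sketch of the cleanup query, assuming the action type is stored in a column literally named type (column names may differ between JediVCS versions, so check the schema first):
-- Remove only the 'get' log entries; all other action types are kept.
DELETE FROM vcslog WHERE type = 'g';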
Here are some other topics in the JediVCS help manual I was pointed at.
Just in case they prove helpful to someone. These detail the logging behaviour inside the JediVCS client:
"Project history"
"Application server options"
"Write VCS log"

Slony "duplicate key value violates unique constraint" error

I have a problem that has been going on for a long time. I use Slony to replicate a database from master to slave and from that slave to three other backup servers. Once every 2-3 weeks there is a key duplication problem that happens only on one specific table (big, but not the biggest in the database).
It started to occur about a year ago on Postgres 8.4 and Slony 1, and we switched to 2.0.1. Later we upgraded to 2.0.4, and we successfully upgraded Slony to 2.1.3, which is our current version. We started a fresh replication on the same machines and it was all going well until today, when we got the same duplicate key error on the same table (with different keys every time, of course).
The way to clean it up is just to delete the row with the invalid key on the slaves (it spreads across all nodes) and everything works again. The data is not corrupted, but the source of the problem remains unsolved.
Googling turned up nothing related to this problem (we did not use TRUNCATE on any table, and we did not change the table structure).
Any ideas what can be done about it?
When this problem occurred in our setup, it turned out that the schema of the master database was older than the slaves' and didn't have the UNIQUE constraint for this particular column. So, my advice would be:
- make sure the master table does in fact have the constraint
- if not: clean up the table, then add the constraint
- if it does: revoke write privileges on the replicated tables from all clients except Slony (see the sketch below)
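A minimal sketch of locking down a replicated table, assuming a hypothetical table replicated_table and application role app_user (adjust the names to your setup):
-- Leave the table writable only for the Slony replication role.
REVOKE INSERT, UPDATE, DELETE ON replicated_table FROM app_user;
GRANT SELECT ON replicated_table TO app_user;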
As Craig has said, usually this is caused by a write transaction on a replica. So the first thing to do is to verify permissions. If it keeps happening, you can start logging the connections of the readers of the replicas and keep those logs around, so that when the issue happens you can track down where the bad tuple came from. This can generate a LOT of logs, however, so you probably want to see how far you can narrow things down first. You presumably know which replica this starts on, so you can go from there.
A particular area of concern is what happens if you have a user-defined function that writes. A casual observer might not spot that in the query, and neither might a connection pooler.
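To illustrate the point, a hypothetical sketch (the function and audit table are invented for the example):
-- A query that looks read-only can still write if it calls a function
-- with side effects:
CREATE FUNCTION log_lookup(p_id integer) RETURNS integer AS $$
BEGIN
    INSERT INTO lookup_audit(row_id, looked_up_at) VALUES (p_id, now());
    RETURN p_id;
END;
$$ LANGUAGE plpgsql;

-- To a pooler or a casual reader this looks like a plain SELECT,
-- yet it inserts a row on whichever node it runs against:
SELECT log_lookup(42);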

strange data remains

Has anyone heard of or experienced the following phenomenon?
Using PostgreSQL 9.0.5 on Windows.
= table structure =
[parent] - [child] - [grandchild]
I found that a record strangely remained in the [child] table.
This record exists in violation of the foreign key constraint.
These tables store transaction data for my application.
All of the above tables have a numeric PRIMARY KEY.
All of these tables have FOREIGN KEY constraints (between parent and child, and child and grandchild).
My application updates each record's status as the transaction progresses.
My app copies these records to archive tables (same structure, same constraints)
once all of the statuses have changed to "normal_end",
and then deletes the records when it has finished copying them to the archive tables.
The status of the remaining record in the [child] table was NOT "normal_end" but "processing".
But the status of the copied data (same ID) in the archive table was "normal_end".
No errors were reported in pg_log.
I find this very strange...
I suspect that the deleted data might have come back to life!?
Can deleted data unexpectedly become active again?
There should never be data that violates a foreign key constraint (except during a transaction with deferred constraints).
A deleted row should stay deleted once the transaction is committed; that's one of the requirements of ACID. However, the correct working of PostgreSQL relies on the correct functioning of your OS and hardware. When PostgreSQL fsyncs a file, it should really be written to disk or to a non-volatile cache. Unfortunately it sometimes happens that disks or controllers tell the system the write has finished while it hasn't and is still sitting in a volatile cache. If you have a RAID controller with RAM but no battery, make sure the controller's cache is set to write-through.
Personally, I have seen PostgreSQL hold incorrect data once: it had a duplicate row (same primary key) after a crash on a Windows XP machine (most likely running 9.0.x). Windows XP machines are not very reliable for running PostgreSQL; they often give strange network errors.
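A minimal sketch of how to check for such orphaned rows, assuming hypothetical column names child.parent_id and parent.id:
-- List child rows whose referenced parent no longer exists;
-- with an enforced foreign key this should return zero rows.
SELECT c.*
FROM child c
LEFT JOIN parent p ON p.id = c.parent_id
WHERE p.id IS NULL;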

Best way to keep the TYPO3 sys_log nice & clean?

I have this MySQL query:
DELETE FROM sys_log
WHERE sys_log.tstamp < UNIX_TIMESTAMP(ADDDATE(NOW(), INTERVAL -2 MONTH))
ORDER BY sys_log.tstamp ASC
LIMIT 10000
Is this good for keeping the sys_log small, if I cronjob it?
Yes and No
It IS NOT if you care about your record history.
You can revert changes to records (content, pages etc.) using the sys_history table. The sys_history and sys_log tables are related. When you truncate sys_log, you also lose the ability to roll back any changes to the system. Your clients may not like that.
It IS if you only care about the sys_log size.
Truncating the table via cron is fine.
In TYPO3 4.6 and up you can use the Table garbage collection scheduler task, as pgampe says. For TYPO3 versions below 4.5 you can use the tablecleaner extension. If you remove all records from sys_log older than [N] days, you also retain your record history for [N] days. That seems to be the best solution to me.
And please try to fix what is filling your sys_log in the first place ;-)
There is a scheduler task for this.
It is called Table garbage collection (scheduler).
In TYPO3 4.7, it can only clean the sys_log table. Starting from TYPO3 6.0, it can also clean the sys_history table. You can configure the number of days and what tables to clean.
Extensions may register further tables to clean.
Yes, it is.
See also the other suggestions by Jochen Weiland about keeping a TYPO3 installation clean and small.
Since TYPO3 9, the history is no longer stored using sys_log.
You can safely delete records from sys_log.
See Breaking Change #55298.
For versions before TYPO3 v9, sys_history referenced sys_log, so:
if you delete records from sys_log, you should make sure sys_history is not referencing the records you want to delete, or delete those as well if that is intended (see the example DB queries below).
For versions before v9 (to delete only records in sys_log which are not referenced by sys_history):
DELETE FROM sys_log WHERE NOT EXISTS
(SELECT * FROM sys_history WHERE sys_history.sys_log_uid=sys_log.uid)
AND recuid=0 AND tstamp < $timestamp LIMIT $limit
Feel free to optimize this for your requirements.
What you can also do safely (without affecting sys_history) is deleting records with sys_log.error != 0.
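For example, a minimal sketch of that cleanup (the LIMIT is an arbitrary batch size to keep the statement short-running):
-- Error entries are not referenced by sys_history, so they are safe to remove.
DELETE FROM sys_log WHERE error != 0 LIMIT 100000;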
Some more recommendations:
Set your debugging level to verbose (Warnings) on development but errors-only in production
Regularly look at the sys log and eliminate problems. You can delete the specific error from sys_log once you have taken care of the problem (see sys_log.error != 0, sys_log.details). You can do this with a database command, or on newer TYPO3 versions use the "SYSTEM: log" module in the backend and its "Delete similar errors" button.
You can also consider truncating sys_log and sys_history together, along with using the lowlevel cleaner to delete records with deleted=1, on a major version upgrade. Be sure to talk to the editors (or someone close to them) first though, as this will remove the entire record history. Be sure that you really want to do that.
For the scheduler task "Table garbage collection" see the documentation: https://docs.typo3.org/c/typo3/cms-scheduler/master/en-us/Installation/BaseTasks/Index.html
Another common cause for large sys_log tables are issues/errors in one of the extensions used in the TYPO3 installation.
A common example when an old version of tx_solr is used:
Core: Error handler (FE): PHP Warning: Invalid argument supplied for foreach() in typo3conf/ext/solr/classes/class.tx_solr_util.php
Core: Error handler (FE): PHP Warning: array_reverse() expects parameter 1 to be array, null given in typo3conf/ext/solr/classes/class.tx_solr_util.php line 280
This set of records will pop up in sys_log every minute or so, which leads to millions of records in a short period of time.
Luckily, these kinds of records don't have any effect on the record history in sys_history and the associated rollback functionality, so it's safe to delete them.
If you have a large sys_log this will likely cause issues with LOCK timeouts, so you'll have to limit the delete query:
delete from sys_log where details LIKE 'Core:%' LIMIT 200000;