I am running offline compaction to reduce the AEM repository size, but it is throwing this error:
[05:28:37.939 [main] ERROR o.a.j.o.p.segment.SegmentTracker - Segment not found: 3ff5d2ae-2b7f-412b-bfff-1dcdf0613315. Creation date delta is 15 ms.
org.apache.jackrabbit.oak.plugins.segment.SegmentNotFoundException: Segment 3ff5d2ae-2b7f-412b-bfff-1dcdf0613315 not found]
Compaction was working fine earlier, when we were on AEM 6.1 with only Communities FP4 and Oak version 1.2.7.
The problem occurred after installing Communities FP5, FP6, Service Pack 2, and CFP3 on AEM 6.1. The Oak version is now 1.2.18, and we are using the oak-run 1.2.18 jar to perform the compaction.
When I googled this error, I found that our segments have been corrupted and we have to restore the repository to the last good revision.
We then found this command to locate the last good revision to restore to: [java -jar D:\aem\oakfile\oak-run-1.2.18.jar check -d1 --bin=-1 -p D:\aem\crx-quickstart\repository\segmentstore]. But when we run it, it keeps running indefinitely and never finishes.
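For reference, the compaction we run follows the standard oak-run 1.2.x sequence (our paths substituted in; shown here only as a sketch of the usual documented steps): list the checkpoints, remove them, then compact:

    java -jar D:\aem\oakfile\oak-run-1.2.18.jar checkpoints D:\aem\crx-quickstart\repository\segmentstore
    java -jar D:\aem\oakfile\oak-run-1.2.18.jar checkpoints D:\aem\crx-quickstart\repository\segmentstore rm-all
    java -jar D:\aem\oakfile\oak-run-1.2.18.jar compact D:\aem\crx-quickstart\repository\segmentstore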
Can anyone let me know how I can fix this?
Related
I have successfully upgraded my localhost copy of SAP Commerce 2005 to 2105, and I'm now in the process of importing the 2105 platform into my Eclipse IDE. This import process runs for a long time and eventually errors out with the following error:
java.lang.OutOfMemoryError: Java heap space
I've tried increasing heap size for Eclipse multiple times, but I still end up running out of memory. I'm using the Hybris/Eclipse plugin to do this.
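(For reference, I'm raising the heap via the -vmargs section of eclipse.ini; the sizes below are just the last values I tried:)

    -vmargs
    -Xms1g
    -Xmx8g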
It appears that the smartedit module has grown significantly in size and seems to be the culprit behind the failure to load the platform.
I've read that it's not possible to load the 2105 platform into Eclipse with the smartedit projects included, and I've also read that upgrading the version of the Hybris/Eclipse plugin does not help either.
What is everyone else doing to solve this problem? I've tried several times loading individual projects, making sure to exclude any project with the name 'smartedit' in it, but it still runs for a long time and then exits with the out of memory error.
Without a stack trace it's hard to say what is going on on your PC. It may be the same issue that was already raised on github.com in the plugin repository:
https://github.com/SAP/hybris-commerce-eclipse-plugin/issues/99
I tested that one and it is closed; there is a solution proposed for that particular case. Maybe it will work for yours as well.
I am looking for a version control tool for SQLite databases. While exploring, I came across Fossil, which is also recommended by the SQLite project.
I am using the latest version, 2.7, for Windows. The problem I am facing is that when using it in server mode and committing a few files, it crashes frequently with a 'database is locked' error.
At first I thought it might be crashing because I was running the server and the cloned copy on the same system. But when I started the server on another system and committed to it from a different system, the result was the same: it crashed again.
Here's the screenshot of the crashed fossil server
Can anyone point me in the right direction as to what I am doing wrong here?
Indeed, version 2.7 still had some wrinkles in the newly added backoffice functionality.
In general, backoffice processing can be turned off by running 'fossil set backoffice-disable true'; see the help on backoffice-disable.
This would most likely resolve the issue you experienced.
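For reference, the commands look like this (run from within a checkout of the repository; the second command just prints the setting back to confirm the new value took effect):

    fossil set backoffice-disable true
    fossil settings backoffice-disable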
Meanwhile, the recently released version 2.8 has those wrinkles resolved.
Recently I upgraded one of our replica sets from MongoDB version 2.4.3 to 2.6.3. It was a simple restart of the 3 MongoDB replicas after upgrading the binaries.
After a few days, the same queries that had been running for nearly a year on 2.4.3 started to build up, causing very high load on the server. The load average, which used to be less than 1 all the time, spiked to over 300. The quick fix was to fail over and restart the mongod process, which would bring the load down, but the new primary would behave the same way within a few hours and I would be forced to fail over and restart mongod again.
After a few such occurrences I downgraded the replicas to 2.4.10, and this seems to have resolved the load issue and the query build-up. Can anyone confirm this theory if you have experienced a similar problem?
I am using NetBeans as my IDE. After working for a few hours, I ran into the following problems:
1. It got stuck with "scan for external changes" suspended.
2. After this, auto-loading also failed; it shows "please wait..." only.
3. It also causes high CPU usage sometimes.
Because of this I am planning to change my IDE. Is there any way to overcome this? I thought it was due to my slow computer, so I formatted and upgraded it, but it still shows the same issue.
My NetBeans install is the small package with PHP and HTML only.
My operating system is Windows 8.1 with 4 GB of memory and an i3 processor.
You should close old projects that you are not working on. You can clear your NetBeans cache as well.
You can get help about clearing the NetBeans cache from here
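For example, on Windows the cache lives under %LOCALAPPDATA%\NetBeans\Cache; close NetBeans first, then delete the version subdirectory (8.2 below is an assumption, use whatever version you have installed):

    rmdir /s /q "%LOCALAPPDATA%\NetBeans\Cache\8.2"

NetBeans rebuilds the cache on the next start, so the first project scan afterwards will take a while.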
I am trying to set up clustering for Openfire 3.7.1 but have not succeeded yet, and I don't know what the problem is. Here are the steps:
First, I installed Clustering Plugin 1.2.0 from the Plugins menu.
Then I went to the Server -> Server Manager -> Clustering menu and got a java.lang.NoClassDefFoundError: com/tangosol/net/Invocable exception. Searching the forum, I found that the Clustering Plugin needs Oracle Coherence, so I downloaded Oracle Coherence v3.4.2 and copied all the jar files from its lib dir into the Openfire lib dir. Then I restarted Openfire, and now the Clustering menu seems OK; no exception occurred.
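(For reference, the copy step was roughly the following; /opt/openfire is an assumption for the install directory and ~/coherence for where I unpacked Coherence, so adjust both to your box:)

    cp ~/coherence/lib/*.jar /opt/openfire/lib/
    sudo /etc/init.d/openfire restart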
I tried to enable clustering from the Clustering menu. It states that enabling clustering may take 30 seconds, but after I clicked the Save Settings button the process wouldn't stop, even after 10 minutes, so I killed it and restarted Openfire.
I logged in again and everything seemed fine. Accessing the Clustering menu again, it showed that clustering was enabled and that there was 1 node listed and running. But when I click the node's link, it does nothing. Also, when I try to access the Users/Groups menu, it shows HTTP ERROR 500 with an org.jivesoftware.util.cache.DefaultCache cannot be cast to com.jivesoftware.util.cache.ClusteredCache exception.
My machine specs are:
Ubuntu 12.04
Openfire 3.7.1
Core i5 with 8 GB memory.
That seems to be a problem with the 1.2.0 plugin.
Look at this post for further help:
http://community.igniterealtime.org/message/218486#218486