In my application I'm loading nearly 10,000 rules into Drools (5.5.0.Final), but the application throws a java.lang.OutOfMemoryError. My JVM args:
-Xms1024m
-Xmx1024m
Can anyone help me resolve this? I also used BigMemory (http://terracotta.org/products/bigmemory) but still get the same error.
Thanks in advance!
Use a profiler like VisualVM (free and very easy to run) to watch the memory-over-time graph and take a heap snapshot just before it goes OutOfMemory.
That graph in particular can tell you some interesting things (by adding a few Thread.sleep calls in your code) which could give you, and us, a clue about what's causing it:
how much memory you consume before starting anything Drools-related (so just having your dataset in memory)
how much memory having the rules in memory consumes (so the KnowledgeBase)
how memory evolves once you start a Drools session from that base and insert your dataset
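If it helps, here is a minimal sketch of how those three checkpoints could be instrumented with the Drools 5.x knowledge API; the rules.drl resource and the loadDataset() helper are placeholders for your own code, and the Thread.sleep pauses just give you time to take heap snapshots:

```java
import java.util.ArrayList;
import java.util.List;

import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatefulKnowledgeSession;

public class MemoryCheckpoints {

    public static void main(String[] args) throws Exception {
        // Checkpoint 1: only the dataset in memory. Take a heap snapshot here.
        List<Object> dataset = loadDataset();
        Thread.sleep(30000);

        // Checkpoint 2: the rules compiled into the KnowledgeBase.
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        kbuilder.add(ResourceFactory.newClassPathResource("rules.drl"), ResourceType.DRL);
        if (kbuilder.hasErrors()) {
            throw new IllegalStateException(kbuilder.getErrors().toString());
        }
        KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
        kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());
        Thread.sleep(30000);

        // Checkpoint 3: a session created from that base with the dataset inserted.
        StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
        for (Object fact : dataset) {
            ksession.insert(fact);
        }
        Thread.sleep(30000);

        ksession.fireAllRules();
        ksession.dispose();
    }

    private static List<Object> loadDataset() {
        // Placeholder: replace with your own dataset loading code.
        return new ArrayList<Object>();
    }
}
```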
This may happen when you reuse the same StatefulKnowledgeSession for triggering the process each time. In that case each statefulKnowledgeSession.insert(fact) adds new facts without removing the previous ones. If that is your case, remove/retract the previously inserted facts before triggering a new process instance, using:
`ksession.retract(factHandle);`
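For illustration, here is a small sketch of that pattern with a hypothetical helper (not code from the question): keep the FactHandle returned by insert() so the whole batch can be retracted once the rules have fired.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import org.drools.runtime.StatefulKnowledgeSession;
import org.drools.runtime.rule.FactHandle;

public final class OneShotRun {

    // Inserts one batch of facts, fires the rules, then retracts the batch
    // so the facts do not pile up in working memory across runs.
    public static void runOnce(StatefulKnowledgeSession ksession, Collection<?> facts) {
        List<FactHandle> handles = new ArrayList<FactHandle>();
        for (Object fact : facts) {
            handles.add(ksession.insert(fact));
        }
        ksession.fireAllRules();
        for (FactHandle handle : handles) {
            ksession.retract(handle); // frees the fact from working memory
        }
    }
}
```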
Related
I have created a model to generate a product that will be cycled through a list of machines. Technically the product list is for a single-day run, but I run the model for long durations to stabilise the model output.
The model runs properly for simulated months, but at around 20 months it suddenly stops without any error message, as shown in the screenshot. I do not know how to debug this since I do not know where the error comes from.
Has anyone encountered something similar and could advise on how to approach this issue? Could it be a memory overload issue?
Without more details it's hard to pinpoint the exact reason, but this generally happens when the run is stuck in an infinite while loop or something similar. So check every loop where such a scenario is possible; it's likely that one (or more) of them is causing the issue.
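One practical way to locate such a loop is to add a temporary iteration cap so the run fails loudly instead of hanging silently. A generic Java sketch (the condition and body are placeholders, not taken from your model):

```java
// Temporary guard around a suspect loop: if the cap is ever hit,
// fail with a clear message instead of spinning forever.
int guard = 0;
final int MAX_ITERATIONS = 1000000; // far above any normal iteration count

while (!workIsDone()) {        // placeholder for the real loop condition
    processNextItem();         // placeholder for the real loop body
    if (++guard > MAX_ITERATIONS) {
        throw new IllegalStateException(
                "Loop exceeded " + MAX_ITERATIONS + " iterations; likely stuck");
    }
}
```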
I have a semi-slow memory leak in a Talend joblet. I am using a tHashOutput/tHashInput pair in the middle of a joblet because I need to find out how many rows are in the flow. Therefore, I push the rows into a tHashOutput and later reference tHashOutput_1_NB_LINE from the globalMap.
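For reference, the counter is read roughly like this in a tJava component later in the joblet (a simplified sketch, not the exact code):

```java
// Body of a tJava component placed after tHashOutput_1 has been filled;
// globalMap is the java.util.Map provided by the generated Talend job.
Integer nbLine = (Integer) globalMap.get("tHashOutput_1_NB_LINE");
int rowCount = (nbLine != null) ? nbLine.intValue() : 0;
System.out.println("rows in the flow: " + rowCount);
```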
I have what I think are the proper options:
allRows - "append" is FALSE
tHashInput_1 - "Clear after reading" is TRUE
Yet when I run this for a period of time and analyze it with the Eclipse Memory Analyzer, I see objects building up over time. This is what I get after 12 hours:
This usage (64 MB per 12 hours) increases steadily and is unrelated to what the job is doing (i.e. actively pumping data or just idling; this code is also invoked while idling). If I look at the memory references in MAT, I can see strings that point me to this place in the code, like
tHashFile_DAAgentProductAccountCDC_delete_BPpuaT_jsonToDataPump_1_tHashOutput_2
(jsonToDataPump being the name of the joblet). Am I doing something wrong in using these hash components?
I believe you should tune your garbage collector to run at a shorter interval so that it takes care of unused objects in the application.
I'm getting this error when I try to insert 17,000 vertices into the DB. The vertices are grouped as multiple trees, and the commit occurs when a tree has been fully read/stored. The first tree has 2,300 vertices, the second has 5,500 vertices, and it is at this point that it fails.
java.lang.IllegalStateException: Cannot begin a transaction while a hook is executing
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.begin(ODatabaseDocumentTx.java:2210)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.begin(ODatabaseDocumentTx.java:2192)
at com.tinkerpop.blueprints.impls.orient.OrientTransactionalGraph.ensureTransaction(OrientTransactionalGraph.java:229)
at com.tinkerpop.blueprints.impls.orient.OrientTransactionalGraph.commit(OrientTransactionalGraph.java:177)
at net.odbogm.SessionManager.commit(SessionManager.java:351)
at com.quiencotiza.utilities.SetupInicial.loadRubros(SetupInicial.java:180)
at com.quiencotiza.utilities.SetupInicial.initDatabase(SetupInicial.java:48)
at com.quiencotiza.utilities.SetupInicial.main(SetupInicial.java:41)
It's a single-threaded app. It loads the database with the initial records.
I have upgraded to 2.2.4 but I get the same error.
Thanks
Marcelo
Well, I solved the problem. It seems to be something related to activateOnCurrentThread(), but I don't know why it happened. What does that exception mean? Why is it thrown?
I know it's an old topic, but maybe this will help someone.
I had the same problem: a lot of threads with many queries and updates.
So I switched to working with one thread (a SingleThreadExecutor in Java) and that solved it.
I guess there is a bug in the locking of hooks.
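A minimal sketch of that single-thread workaround against the OrientDB 2.2 Blueprints API; the database URL, credentials and the createVertices step are placeholders, not taken from the original code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import com.tinkerpop.blueprints.impls.orient.OrientGraph;
import com.tinkerpop.blueprints.impls.orient.OrientGraphFactory;

public class SingleThreadLoader {

    public static void main(String[] args) throws Exception {
        final OrientGraphFactory factory =
                new OrientGraphFactory("plocal:/data/mydb", "admin", "admin");

        // Funnel all graph work through one thread so hooks and transactions
        // never overlap across threads.
        ExecutorService dbExecutor = Executors.newSingleThreadExecutor();
        dbExecutor.submit(new Runnable() {
            @Override
            public void run() {
                OrientGraph graph = factory.getTx();
                // If the database was opened on another thread, re-bind it first:
                graph.getRawGraph().activateOnCurrentThread();
                try {
                    // ... createVertices(graph) goes here ...
                    graph.commit();
                } finally {
                    graph.shutdown();
                }
            }
        }).get(); // wait for the load to finish and surface any exception

        dbExecutor.shutdown();
        factory.close();
    }
}
```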
I am getting an ASRA abend while trying to read from a TSQ. Will an ASRA occur if we try to read from a TSQ that has already been deleted? What are all the possible reasons?
ASRA is sort of a catch-all error that says CICS identified a program check state and ended your transaction for you. It could be anything. You can get more detail from the CICS started task and its logs, or from whatever ABEND reporting product your installation has installed.
However, if you are getting the ASRA while you are doing a READQ TS with the INTO(varname) option, make sure you own the storage of varname and that the length is enough to fit the largest possible record on the queue.
Also, if you use the LENGTH option, make sure you have it set correctly. If you ask for 32K bytes from the TS queue into a 100-byte area, you will get an ASRA.
But all of the above is only one possible reason; you really need to determine what sort of ASRA you are getting.
If the Temporary Storage Queue is already deleted, you shouldn't get an ASRA but a QIDERR condition, which if not handled will give you a different abend, an AEYH abend.
ASRAs are just the CICS code for an S0C* abend, which I am going to presume in this case would be an S0C4, or protection exception. A protection exception happens when you try to write to (or sometimes read from) storage that you don't have permission to access.
Windows Workflow Foundation has a problem: it is slow when persisting WF instances.
I'm planning a project whose business layer will be based on WF-exposed WCF services. The project will have 20,000 new workflow instances created each month, and each instance could take up to 2 months to finish.
I was led to believe that, given WF's slowness when doing persistence, my problem would be unattainable for performance reasons.
I have the following questions:
Is this true? Will my performance be crap with that load (given WF persistence speed limitations)?
How can I solve the problem?
We currently have two possible solutions:
1. Each new business process request (e.g. "give me a new driver's license") will be a new WF instance, and the number of persistence operations will be limited by forwarding all status request operations to saved state values in a separate database.
2. Have only a small number of workflow instances up at any given time, without any persistence whatsoever (only in case of system crashes etc.), by breaking each workflow step into a separate workflow, with that workflow handling every business process request instance in the system that is currently at that step (e.g. I'm submitting my driver's license request form, which is step one... we have 100 cases of that, and my step-one workflow will handle every case simultaneously).
I'm very interested in a solution to this problem. If you want to discuss it, please feel free to mail me at nstjelja#gmail.com
The number of hydrated, executing workflows will be determined by environmental factors: memory, server throughput, etc. Persistence issues really only come into play if you are loading and unloading workflows all the time, i.e. in (near) real time; in that case workflow may not be the best solution.
In my current project we also use WF with persistence. We don't have quite the same volume (perhaps ~2000 instances/month), and they are usually not as long to complete (they are normally done within 5 minutes, in some cases a few days). We did decide to split up the main workflow in two parts, where the normal waiting state would be. I can't say that I have noticed any performance difference in the system due to this, but it did simplify it, since our system sometimes had problems matching incoming signals to the correct workflow instance (that was an issue in our code; not in WF).
I think that if I were to start a new project based on WF I would rather go for smaller workflows that are invoked in sequence, than to have big workflows handling the full process.
To be honest I am still investigating the performance characteristics of workflow foundation.
However, if it helps, I have heard the WF team has made many performance improvements in the new release of WF 4.
Here are a couple of links that might help (if you haven't seen them already):
A Developer's Introduction to Windows Workflow Foundation (WF) in .NET 4 (discusses performance improvements)
Performance Characteristics of Windows Workflow Foundation (applies to WF 3.0)
WF on 3.5 had a performance problem. WF4 does not - 20000 WF instances per month is nothing. If you were talking per minute I'd be worried.