Why am I getting a Null Pointer Exception in AnyLogic?

I am trying to simulate vehicles parking in a company parking lot. I have planned and unplanned cars. I have a schedule set up for the planned cars, which is working fine. The planned cars, with their arrival and departure times and some other parameters, are stored in the AnyLogic database.
For the source "unplannedCars" I wanted to inject them via the inject() function call. The unplannedCar would be set on exit. But as soon as I try to inject some unplanned cars, the model gives me a NullPointerException. Does this have something to do with my model taking values from the database?
I tried to fix it by adding the cars manually to a population and using the Enter block. But there I had the problem of combining it with the Road Traffic Library.
Edit: I noticed the NullPointerException happens only if the "eOrV" block is used.
Edit 2: I also tried to set default values for the agent and for the database. Now I get the following error with a NullPointerException:
Help is appreciated.

The NPE in the SelectOutput is telling you that there is no field motortype in the incoming agent.
Most likely, your Car agent type does not have such a field, or the field is null, i.e. the String was never initialized.
Make sure that agents passing through the SelectOutput have a field motortype of type String and that it actually contains a String.
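A null-safe way to write such a SelectOutput condition is to test the field before comparing. The sketch below is a plain-Java illustration; the field name motortype and the value "electric" are assumptions taken from the question, so adapt them to your model:

```java
public class MotortypeCheck {
    // Null-safe condition, usable as a SelectOutput "true" condition in AnyLogic:
    //   agent.motortype != null && agent.motortype.equals("electric")
    static boolean isElectric(String motortype) {
        // Calling equals() on a possibly-null field throws an NPE,
        // so test for null first (or flip it: "electric".equals(motortype)).
        return motortype != null && motortype.equals("electric");
    }

    public static void main(String[] args) {
        System.out.println(isElectric("electric")); // true
        System.out.println(isElectric(null));       // false, no NPE
    }
}
```

Flipping the comparison to "electric".equals(agent.motortype) achieves the same null safety in a single expression, which is convenient inside a SelectOutput condition field.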

I managed to create a workaround for the problem. As I have a database and a schedule for my agents, I also wanted to control them manually, so I needed to use the schedule together with the inject() function. By using the schedule and setting the parameters from the DB, I was able to solve the NullPointerException. I still don't know what caused it, though, as I had default values set for my agents.

Related

AnyLogic: Changing ResourcePool based on programmatically created schedule

I am referring to a similar problem here.
I implemented a programmatically created schedule, the same as in the example model from AnyLogic Cloud, and then added the suggested code in the capacity field.
Still, I get the following runtime error: "The parameter capacitySchedule cannot be changed dynamically". Does this approach simply not work with a ResourcePool, as opposed to the TransporterFleet used in the similar problem? Unfortunately, it does not work with a fake schedule either.
Here are some screenshots from my model. Thanks in advance.
To work with a dynamic capacity of a ResourcePool, I use a schedule shift by plan; in some cases that meets the need.
The implementation is simple: in the schedule you give the values 1, 2, 3, etc., which actually point to a position in an array.
Example of the schedule
Inside the ResourcePool you then define it as follows:
Example of the ResourcePool by plan
In my example, at times when the schedule value is 4 the capacity of the ResourcePool is 0, and at other times it is as per my parameters.
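The mapping this answer describes can be sketched in plain Java. The array name capacities and the sentinel value 4 are assumptions chosen to match the example above, not anything AnyLogic prescribes:

```java
public class CapacityByPlan {
    // Hypothetical per-position capacities; schedule value n maps to index n - 1.
    static final int[] capacities = {5, 8, 3};

    // Returns the ResourcePool capacity for a given schedule value:
    // value 4 means "closed" (capacity 0), values 1..3 index into capacities.
    static int capacityFor(int scheduleValue) {
        return scheduleValue == 4 ? 0 : capacities[scheduleValue - 1];
    }

    public static void main(String[] args) {
        System.out.println(capacityFor(1)); // 5
        System.out.println(capacityFor(4)); // 0
    }
}
```

In the model itself, an expression of this shape would go into the ResourcePool's capacity field, with the schedule value supplied by the shift plan.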
Change "Kapazität definiert" (capacity defined) to "By schedule".
Create a fake schedule object fakeSchedule (normally, not programmatically). Make sure it always returns 0 as the value.
Then, use this call for "Kapazität" (capacity):
mySchedule == null ? fakeSchedule : mySchedule
This tells the pool to use your schedule if it exists, and the fake one otherwise.
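The expression above is an ordinary Java null-coalescing ternary. A minimal standalone sketch of the same fallback pattern (mySchedule and fakeSchedule here are plain stand-ins for the AnyLogic Schedule objects, modeled as Integers for brevity):

```java
public class ScheduleFallback {
    // Stand-ins for AnyLogic Schedule objects, modeled as capacity values.
    static Integer mySchedule = null;      // programmatic schedule; may not exist yet
    static final Integer fakeSchedule = 0; // fake schedule that always yields 0

    // The same shape of expression the answer puts into the "Kapazität" field:
    static Integer effectiveSchedule() {
        return mySchedule == null ? fakeSchedule : mySchedule;
    }

    public static void main(String[] args) {
        System.out.println(effectiveSchedule()); // 0 while mySchedule is still null
        mySchedule = 7;
        System.out.println(effectiveSchedule()); // 7 once the real schedule exists
    }
}
```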

How do I seize a subset of seized resources?

I have a pool of 25 agents (Operators). When an Order is generated, I seize a few Operators and move them to one of many different ProductionSuites as determined by a parameter in the Order.
Within the ProductionSuite, I have a variable of type ResourcePool that I would like to use to have these Operators perform tasks.
In the main window, I put this code in the "On seize unit:" code box:
agent.assignedSuite.suiteOperatorPool.addAgentToContents(unit);
but this triggers a NullPointerException error. Am I using the addAgentToContents method incorrectly?
You have not initialized your suiteOperatorPool variable; its "initial value" field is empty. Hence, it is just an empty shell of type ResourcePool that cannot do anything, including having agents added to it.
You would need to initialize it properly using the ResourcePool API, but I don't think that is possible.
Also, you cannot have resources be part of two resource pools, as you are trying to do. You should think of a different way to solve your problem. Maybe rephrase the issue so we can think of alternatives; you might not need a ResourcePool at all, just pure agent functionality.
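One "pure agent" alternative this answer hints at is to keep the seized Operators in a plain Java collection on the suite instead of a second ResourcePool. A minimal sketch, where Operator and the method names are hypothetical stand-ins for the model's agent types:

```java
import java.util.ArrayList;
import java.util.List;

public class ProductionSuite {
    // Hypothetical stand-in for the Operator agent type.
    static class Operator {
        final String name;
        Operator(String name) { this.name = name; }
    }

    // A plain collection instead of a second ResourcePool: operators remain
    // members of the one real pool; the suite merely tracks who is assigned.
    private final List<Operator> assignedOperators = new ArrayList<>();

    void assign(Operator op)  { assignedOperators.add(op); }
    void release(Operator op) { assignedOperators.remove(op); }
    int teamSize()            { return assignedOperators.size(); }

    public static void main(String[] args) {
        ProductionSuite suite = new ProductionSuite();
        suite.assign(new Operator("op1"));
        suite.assign(new Operator("op2"));
        System.out.println(suite.teamSize()); // 2
    }
}
```

Tasks inside the suite can then iterate over this list directly (e.g. sending messages to the agents) rather than seizing from a nested pool.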

How can I know what component failed?

When using the On SubJob Error trigger, I would like to know what component failed inside the subjob. I have read that you can check the error message of each component and select the one that is not null. But I feel this practice is bad. Is there any variable that stores the identity of the component that failed?
I may be wrong, but I'm afraid there isn't. This is because globalVar elements are component-scoped (i.e. they are get/set by the components themselves), not subjob-scoped (which would mean being set by Talend itself). When the subjobError signal is triggered, you lose any component-based data coming from tFileInputDelimited. For this design reason, I don't think you will be able to solve your problem without iterating over the globalMap, searching for the error strings here and there.
Alternatively, you can use tLogCatcher, which has an 'origin' column, to spot the offending component and eventually route to different recovery subjobs depending on which component threw the exception. This is not a design I trust too much, because tLogCatcher is job-scoped, while OnSubjobError is linked directly to one specific subjob. But it could work in simple cases.

Is it possible to change the correlation in the middle of the workflow?

I have put an InitializeCorrelation activity at the beginning of the workflow, and then I want to correlate on different keys, so I've put in another InitializeCorrelation activity with different keys. But I am getting this error:
The execution of an InstancePersistenceCommand was interrupted because the instance key 'a765c209-5adc-4f03-9dd2-1af5e33aab3b' was not associated to an instance. This can occur because the instance or key has been cleaned up, or because the key is invalid. The key may be invalid if the message it was generated from was sent at the wrong time or contained incorrect correlation data.
So, is it possible to change the correlation after the workflow started or not?
To answer the question explicitly: yes, you can change the data the correlation is based on. You can do it not only within a Sequence; you can even use different correlation data within each branch of a Parallel activity. Correlation can be initialized using an InitializeCorrelation or a SendReply activity, as described here: http://msdn.microsoft.com/en-us/library/ee358755(v=vs.100).aspx.
As the Workflow Designer is not the strongest part of Visual Studio (XPath queries are never checked, sometimes even build errors are not reflected on activities, etc.), it is not always obvious what the problem is. So, I suggest the following:
initialize a CorrelationHandle only once, with the correlation type Query correlation, for a specific piece of correlation data
initialize a new CorrelationHandle instance for different correlation data
once a CorrelationHandle is initialized, it can be used multiple times later for different Receive activities (Receive.CorrelatesOn, Receive.CorrelatesWith)
if correlation does not work, it can be because of wrong XPath queries; these are not refreshed automatically if the OperationName or parameter names change, so it is advisable to regenerate them after renaming
it can be a good idea to turn off workflow persistence and NLB while you are testing, to let yourself concentrate on correlation-related problems
Have a look into the Instances table in the database where you store persisted instances. One of the entries will probably be in the suspended state, and there is also a column with an error description. What caused this error? Have you made changes to the workflow and redeployed it?

Salesforce.com: UNABLE_TO_LOCK_ROW, unable to obtain exclusive access to this record

In our production org, we have a system of uploading sales data into Salesforce using command line data loader. This data is loaded into a temporary object Temp. We have created a formula field (which combines three fields) to form a unique key. The purpose of the object is to reduce user efforts for creating the key manually.
There is an after insert trigger on Temp which calls an asynchronous method that upserts the data into another object, SalesData, using the key. The insert/update trigger on SalesData checks the various fields and creates/updates records in another object, SalesRecords. After the insert/update is complete, all records in the temporary object Temp are deleted. The SalesRecords object does not have any trigger on it and is a child of another object, Sales. The Sales object has some roll-up fields which sum up fields from the SalesRecords object.
Lately, we are getting the below error for some of the records which are updated.
UNABLE_TO_LOCK_ROW, unable to obtain exclusive access to this record
Please provide some pointers to resolve this issue.
This could be caused either by conflicting DML operations in the various trigger executions or by some recursive trigger execution. I would assume that the async executions cause multiple subsequent updates on the same records, probably on the SalesRecords object. I would recommend trying to simplify the process to avoid too many related trigger executions.
I'm a little surprised you were able to get this to work in the first place. After triggers should be used with caution, and only when before triggers can't be. One reason is that in before triggers you don't need to perform additional DML to change records: you simply change the values, and the insert/update commit happens automatically. But recursive trigger firing is the main problem with after triggers.
One quick way to avoid trigger re-entry is to use a public static Boolean in a class that states whether you're already in this trigger from the same thread of execution.
Something like:
public static Boolean isExecuting = false;
Once set to true, any trigger code that is a re-fire can be avoided with:
if (Class.isExecuting == false)
{
    Class.isExecuting = true;
    // Perform trigger logic
    // ...
}
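The guard can be made runnable in plain Java, whose syntax is nearly identical to Apex here; the class and field names are placeholders, and the recursive call merely simulates a trigger re-fire:

```java
public class TriggerGuard {
    // Static flag: true while we are already inside the trigger logic in this
    // thread of execution (in Apex, statics are scoped to the transaction).
    public static boolean isExecuting = false;

    static int runs = 0; // counts how often the guarded logic actually fires

    static void onTrigger() {
        if (!isExecuting) {
            isExecuting = true;
            // Perform trigger logic once; any re-entrant call below this
            // point sees isExecuting == true and is skipped.
            runs++;
            onTrigger(); // simulated recursive re-fire: blocked by the guard
        }
    }

    public static void main(String[] args) {
        onTrigger();
        System.out.println(runs); // 1
    }
}
```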
Additionally, since the order of trigger execution cannot be determined up front, you might be seeing an issue with deletions or other data changes that depend on other parts of your flow to finish first.
Also, without knowing the details of your custom unique 3-part key, I'd wonder if there's a problem there too, such as whether it's truly unique. Case insensitivity is a common mistake, and it's the reason there are both 15- and 18-character Ids in Salesforce. For example, when people exported to Excel (a case-insensitive environment) and did VLOOKUPs, they would occasionally find the wrong record; the 3-character calculated suffix was added to disambiguate in case-insensitive environments.
Googling for this same error led me to this post:
http://boards.developerforce.com/t5/General-Development/Unable-to-obtain-exclusive-access-to-this-record/td-p/345319
which points out some common causes:
Sharing Rules are being calculated.
A picklist value has been replaced and replacement is in progress.
A custom index creation/removal is in progress.
Least likely: someone else is already editing the same record that you are trying to access at the same time.
Posting here in case somebody else needs it.
I got this error multiple times today. It turned out one of our vendors was updating their installed package in the same org at the time. All kinds of other things were going wrong too: some object validation exceptions were being thrown on DML, without any error message content.
Resolution
The error is shown when a field update, such as a roll-up summary field, is attempted on a parent object that already had a field update causing the roll-up summary field to recalculate. This can also occur if a trigger or another Apex job is running on the master object and is also attempting an update.
You can either reduce the batch size and try again or create separate smaller files to be imported if this issue occurs.