Error: root.rackStore.detach: The agent is not in a network - anylogic

There's a really weird issue happening in a model. I even noticed two unanswered questions about it on Stack Overflow. I tried to simplify the model as much as possible to understand the source of the problem, with no luck.
The model is as simple as follows:
Source --> Rack Store --> Sink
Resource Pool assigned to Rack Store
In addition to a pallet rack and a home node for the resource.
The error I get after the agent is picked and stored in the rack is:
root.rackStore.detach: The agent is not in a network
That's how simple the model is.
I appreciate any support. Thank you.

After experimenting, I found the answer. My scenario was a bit more complex than the attached image: it had a Combine element before the rack pick, and the agent location (combined) was specified as a node within the network. Apparently that was not enough for AnyLogic to understand that the combined agent is inside the network, so on exit I added agent.moveTo(node), where node is the same node specified as the agent location, and it worked.
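For reference, the fix amounts to a single line in the Combine block's "On exit" action, where node stands for whatever network node you specified as the agent location:

// On exit of the Combine block: send the combined agent to the node that was set
// as its location, so AnyLogic treats it as being inside the network
agent.moveTo(node);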

The whole rack system in the Process Modeling Library needs to be properly connected by paths; it can't be in free space. I was also facing a similar issue, and meticulously connecting everything by paths is the only thing that worked for me.
It's better to use transporters in such cases.

Related

Sync two offline masters when network available

I have a use case where I need to set up two physical stations at a venue. Each station will be running a couple of app servers and a mongodb server.
I can't rely on the venue's internet access so I need my app to be able to work offline and "sync" the dbs every once in a while.
I initially thought about having two masters that would somehow sync with a remote one, but TIL that master-master replication is not possible with MongoDB.
I've read about the active-active approach; however, that won't let me write to a different shard when offline.
I'm running out of ideas; any recommendation would be greatly appreciated.
------ Update on what I'm trying to achieve:
I'm working with a venue that has two entrances. The idea is to be able to capture some information from people attending the events (name, email, etc). After they register, we will print a name tag with some of the info.
Everything sounds pretty easy; however, if possible, I would like to not rely on the venue's network (internet). That's where I started struggling to figure out what the best approach is. I guess what I want is to have a remote mongo but, if the network goes down, somehow keep saving records locally and send them to the remote mongo instance when the network is available again.
Extra considerations:
- Events last a couple of days, and some people lose their name tag overnight; they should be able to go to either of the entrances and get it reprinted. So we should be able to find their info even if they registered at entrance A but are asking for a reprint at entrance B.
More questions:
- Am I overthinking it? Maybe the venue's network plus a 4G/LTE modem as a backup should be enough? I would prefer not to rely on it, though.
I believe you're overthinking things. Here's what I would do if faced with a similar situation:
From the description, it doesn't sound like the two sites need to be connected in real time at all. I would create a server at Entrance A and another at Entrance B, and consolidate their data each day after the day ends, if required. This is because:
It's unlikely that one person will register at both sites within a single day. If they lose their tag on that day, I'll just tell them to go back to where they registered earlier and get it reprinted there. Worst case, you'll create a duplicate entry (it should be obvious which one is the duplicate, since no one loses their tag within seconds of registering), but I would not anticipate hundreds of people losing their tags within a day.
If an attendee loses their tag overnight, both servers will have synced data and should be able to reprint it.
If you're concerned about the venue's Wi-Fi access, just run cables from the server to the printing stations.
Personally, I would argue that the overnight sync is not really needed at all (see the likelihood of people registering twice). I would just collect the data from both servers after the event ends, unless you have specific needs for the combined data from both entrances during the second day.
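If you do end up scripting that consolidation, a minimal sketch with the MongoDB Java driver could look like the following. The connection strings, database, and collection names are placeholders you would replace with your own, and upserting by _id keeps the job safe to re-run:

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.ReplaceOptions;
import org.bson.Document;

public class ConsolidateEntrances {
    public static void main(String[] args) {
        // Placeholder connection strings for the two entrance servers and the central one
        try (MongoClient entranceA = MongoClients.create("mongodb://entrance-a:27017");
             MongoClient entranceB = MongoClients.create("mongodb://entrance-b:27017");
             MongoClient central = MongoClients.create("mongodb://central:27017")) {

            MongoCollection<Document> target =
                    central.getDatabase("event").getCollection("attendees");

            copyAll(entranceA.getDatabase("event").getCollection("attendees"), target);
            copyAll(entranceB.getDatabase("event").getCollection("attendees"), target);
        }
    }

    // Upsert every attendee document by its _id so running the job twice does no harm
    private static void copyAll(MongoCollection<Document> source,
                                MongoCollection<Document> target) {
        for (Document doc : source.find()) {
            target.replaceOne(new Document("_id", doc.get("_id")), doc,
                    new ReplaceOptions().upsert(true));
        }
    }
}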
Note: please make sure you're running at minimum a 3-node replica set. Running a standalone instance in a prod environment is not recommended; hardware/disk corruption is a common event.

Route Costing in Anylogic

I am trying to simulate a manufacturing system that uses Automated Guided Vehicles (AGVs) to carry loads around the network to be processed. While the AGVs are travelling, it is ideal for them to pick the fastest route to the destination (not necessarily the shortest).
Here is my model
I am kind of stuck trying to implement a route costing algorithm, because I am not too familiar with the intricacies of this program yet. Can anyone kindly give me a rough idea of how it could be implemented, in pseudo code, for the following scenario:
The load needs to move from A to B and there are three possible paths. However, there is congestion in the red highlighted areas that will cause the load to take a longer time to reach point B.
How can I read the network to check for congestion and also calculate the various times needed to go to point B?
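One generic way to think about it, outside of anything AnyLogic-specific, is to treat the path network as a weighted graph whose edge weights are estimated travel times rather than distances, and run Dijkstra over it. The sketch below assumes you can attach a congestion factor to each path yourself (for example from the number of AGVs currently on it); every name in it is hypothetical:

import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class FastestRoute {

    // An edge of the AGV path network; congestionFactor >= 1.0 is a measured slowdown
    // on that path (hypothetical input, e.g. derived from how many AGVs occupy it)
    record Edge(int to, double lengthMeters, double congestionFactor) {}

    // Dijkstra over estimated travel times instead of plain distances
    static double[] fastestTimes(List<List<Edge>> graph, int source, double speedMps) {
        double[] best = new double[graph.size()];
        Arrays.fill(best, Double.POSITIVE_INFINITY);
        best[source] = 0.0;

        // Each queue entry is {node, timeSoFar}, ordered by timeSoFar
        PriorityQueue<double[]> queue =
                new PriorityQueue<>(Comparator.comparingDouble(e -> e[1]));
        queue.add(new double[]{source, 0.0});

        while (!queue.isEmpty()) {
            double[] current = queue.poll();
            int node = (int) current[0];
            double timeSoFar = current[1];
            if (timeSoFar > best[node]) continue;            // stale entry, skip

            for (Edge edge : graph.get(node)) {
                // travel time = nominal time scaled by the congestion on that path
                double cost = (edge.lengthMeters() / speedMps) * edge.congestionFactor();
                if (timeSoFar + cost < best[edge.to()]) {
                    best[edge.to()] = timeSoFar + cost;
                    queue.add(new double[]{edge.to(), best[edge.to()]});
                }
            }
        }
        return best;   // best[b] is the estimated fastest time from source to node b
    }
}

Re-running this whenever congestion changes (or whenever an AGV is about to depart) and picking the route with the smallest estimated time is the "fastest route, not shortest route" behaviour described above.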

Server Pool VM Missing Xenserver

I have been looking around the web for an answer.
I have set up three XenServers, each running VMs. I added the first two to a new pool without any problems by first detaching a shared SR, which only contains the ISO images for the OS, then assigning the server to the pool, and finally re-attaching the SR.
When it came to the third one, the VMs were gone after I moved it into the server pool. I have been searching the web high and low for an answer.
I know I haven't given a lot of information, but I am new to the Citrix scene and would greatly appreciate any help, or at least a direction to head in.
Cheers

Will ksoap services crash when accessed by many users at once?

I want to clear up a doubt. I am using ksoap services and they work successfully, but if many users access them at the same time, for example 1000+ users, will the service crash or not? In a three-tier architecture, if one of the layers crashes under server load, does that impact the others, the way sites like Flipkart or Amazon go down when too many users access them at once? How can I recover from this problem? If anybody has an idea, please clarify this for me and explain how to resolve it.
We have no idea whether it will crash or not under a 1k+ user load; we don't know your setup and architecture. You have to run load tests and check.
What happens if one layer crashes? It depends on what that means. If you mean "an exception will be thrown", then HTTP servers usually handle this without any problems: they catch the exception, return an error, and are ready to handle the next connection. But if there is not enough memory, or a segfault, the whole runtime goes down, and if the other layers are hosted by the same runtime, they go down as well.
How do you make your application more resilient? Depending on your budget and needs: separate, replicate, monitor, and auto-scale. For example: keep different layers on different machines; keep multiple copies of the same service on different machines, working at the same time; run software (like Mesos) that keeps the required number of your services running (if an instance goes down, it launches a new one on another machine); and use the cloud to automatically add machines when load is higher.
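To make the load-test suggestion concrete, here is a crude sketch in plain Java that fires a burst of concurrent requests at the service and counts failures. The endpoint URL and the numbers are placeholders, and a dedicated tool such as JMeter or Gatling will give you far better measurements:

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class SimpleLoadTest {
    public static void main(String[] args) throws Exception {
        String endpoint = "http://your-server/your-service";   // placeholder URL
        int users = 1000;                                       // simulated concurrent users

        ExecutorService pool = Executors.newFixedThreadPool(200);
        AtomicInteger failures = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(users);

        for (int i = 0; i < users; i++) {
            pool.submit(() -> {
                try {
                    HttpURLConnection conn =
                            (HttpURLConnection) new URL(endpoint).openConnection();
                    conn.setConnectTimeout(5000);
                    conn.setReadTimeout(5000);
                    if (conn.getResponseCode() >= 500) failures.incrementAndGet();
                    conn.disconnect();
                } catch (Exception e) {
                    failures.incrementAndGet();   // timeouts and refused connections count as failures
                } finally {
                    done.countDown();
                }
            });
        }
        done.await();
        pool.shutdown();
        System.out.println("Failed requests: " + failures.get() + " / " + users);
    }
}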

Continuation of a process after a system crash/restart - Drools Flow

I've been playing with examples I downloaded with the book Drools JBoss Rules 5.0. To my relief they work :) Drools Flow has been my point of interest as a possible workflow engine replacement.
As I'm trying to wrap my head around things, I've been wondering how a premature death of a ruleflow process gets restarted. What I mean is: say a process is bouncing from one node to another as expected, and then the containing process dies due to a crash, restart, or whatever. Is the current node/place of the ruleflow process retained, and can it just continue from that point on system restart? If so, how?
The group I work for is very Java EE centric with JBoss being our favorite application server. I see examples of Drools leveraging Spring's persistence and bean lookup support.
Are there examples of doing the same with JBoss?
If you persist the state of the process instances and tasks in the database, then even if the VM goes down and is restarted, you can retrieve the process instances.
Use the JPAKnowledgeService.
To create the session:
ksession = JPAKnowledgeService.newStatefulKnowledgeSession(kbase, null, env);
To load the session with the session id:
ksession = JPAKnowledgeService.loadStatefulKnowledgeSession(sessionId, kbase, null, env);
You only need to know the session id. Session information will be stored in the SessionInfo table. Download the example project below.
http://dl.dropbox.com/u/2634115/drools-test.zip
The example uses BTM with an H2 database; it also works well with mysql-connector-java-5.1.13 and BTM. Note that processes that are complete will be automatically deleted from the database.
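To put the two calls above in context, here is a rough sketch of the surrounding setup with the Drools 5 API and BTM, as in the example project; the persistence unit name and process id are placeholders you need to adapt to your own persistence.xml:

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.persistence.jpa.JPAKnowledgeService;
import org.drools.runtime.Environment;
import org.drools.runtime.EnvironmentName;
import org.drools.runtime.StatefulKnowledgeSession;
import bitronix.tm.TransactionManagerServices;

public class PersistentProcessExample {

    public static void run(KnowledgeBase kbase) {
        // Persistence unit name is a placeholder; it must match your persistence.xml
        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("org.drools.persistence.jpa");

        Environment env = KnowledgeBaseFactory.newEnvironment();
        env.set(EnvironmentName.ENTITY_MANAGER_FACTORY, emf);
        env.set(EnvironmentName.TRANSACTION_MANAGER,
                TransactionManagerServices.getTransactionManager());   // BTM

        // First run: create a persistent session and store its id somewhere durable
        StatefulKnowledgeSession ksession =
                JPAKnowledgeService.newStatefulKnowledgeSession(kbase, null, env);
        int sessionId = ksession.getId();
        ksession.startProcess("com.example.myProcess");   // placeholder process id

        // After a crash or restart: reload the session by id and the process instance
        // continues from the node state that was persisted in the SessionInfo table
        StatefulKnowledgeSession restored =
                JPAKnowledgeService.loadStatefulKnowledgeSession(sessionId, kbase, null, env);
    }
}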
You are looking at the basic concept of process migration. During what is known as strong migration, a process can be stopped on one machine and its entire state migrated to another machine (including the program counter and all existing stacks). Before you go thinking that this is completely insane, think about it from a JVM perspective: since your application is already running on virtual hardware, it isn't hard to stop the application and pick it back up where it left off, because it is completely virtualized.
If you would like another example, look at VMware: an entire machine can be paused, migrated to another machine, and started again. It's very interesting stuff and relates mainly to distributed computing, where you might have hundreds of agents that need to migrate from machine to machine as some go down for maintenance.
I realize that I didn't give an example of this through JBoss, but giving some background on exactly what you're looking for should give you much better insight into what to look for going forward.