So, the problem I'm facing is the following.
I have a simple architecture where I simulate an unloading process in which 2 cranes unload incoming docked ships. Using a seize-delay-release design, each incoming ship ties up one capacity unit from the "cranes" ResourcePool.
If only one ship arrives, how can I make both cranes work on that one ship and thereby cut the processing time in half?
If a second ship then docks, I want one of the two busy cranes to switch over and help unload the newly arrived ship.
Are there any elegant ways of solving this issue?
Another option would be to model the offloading process in a bit more detail. This removes the need to compute the remaining offloading time on a ship when the service time depends on the number of cranes serving it at any given moment.
You can model the berths or docks where ships park to be offloaded as waiting blocks or queues.
When a ship enters a dock or berth, it generates a number of containers (or whatever units need to be offloaded) in a separate logic stream, using source.inject(numberOfUnits).
You can set the number of cranes available at each berth programmatically by increasing or decreasing the number of resources in the resource pool using resourcePool.set_capacity(numberOfUnits).
Thus, if a ship arrives at dock A and there is no ship at dock B, you increase the capacity of the cranes at dock A to 2. If a new ship then arrives at dock B, you set the capacity of the resource pool at both docks to 1.
If the ship at dock A leaves, you set the crane capacity of the resource pool at dock B to 2 and at dock A to 0, thus simulating the movement of a crane from one dock to another (see the sketch below).
The only minor issue is that if you want to track the utilization of individual cranes, you will need to record their statistics separately, since you will be destroying and creating resource units the whole time.
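To make the reassignment concrete, here is a minimal sketch in AnyLogic-flavoured Java, assuming two resource pools cranesDockA and cranesDockB and occupancy flags (all names hypothetical); set_capacity() is the call mentioned above:

```java
// Hypothetical sketch: call this whenever a ship arrives at or leaves a dock.
// cranesDockA / cranesDockB are ResourcePools; the flags are model variables.
void reallocateCranes() {
    boolean shipAtA = dockAOccupied;  // hypothetical occupancy flags
    boolean shipAtB = dockBOccupied;

    if (shipAtA && !shipAtB) {
        // Only dock A is busy: both cranes work on that ship.
        cranesDockA.set_capacity(2);
        cranesDockB.set_capacity(0);
    } else if (!shipAtA && shipAtB) {
        cranesDockA.set_capacity(0);
        cranesDockB.set_capacity(2);
    } else if (shipAtA && shipAtB) {
        // Both docks busy: one crane each.
        cranesDockA.set_capacity(1);
        cranesDockB.set_capacity(1);
    }
}
```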
Yes, apply priorities. A new ship should have a higher priority, and your second crane (still busy with ship 1) can be seized by the higher-priority ship using task preemption. (Check how these work in the help and the example models.)
The only difficulty will be computing the remaining time on ship 1 if the service time depends on the number of cranes serving it at any given moment. Not impossible, but it will need some coding.
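As a rough illustration of that computation (a sketch only, with hypothetical names): if you track the ship's remaining work in crane-hours, the remaining wall-clock time rescales whenever the number of serving cranes changes:

```java
// Hypothetical sketch: remaining unload time as crane count changes.
// workRemaining is measured in crane-hours.
double remainingDelayHours(double workRemaining, int cranesServing) {
    // n cranes working in parallel divide the wall-clock time by n
    return workRemaining / cranesServing;
}

// On preemption: convert the old delay back into work, then re-enter the
// delay block with the new crane count:
// workRemaining = oldRemainingDelayHours * oldCraneCount;
```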
I am developing a factory logistics simulation in AnyLogic. It is a pickup-and-delivery problem, where AGVs need to pick up parcels and deliver them to target locations. All the AGVs travel along paths, and the paths have different speed limits.
My goal is to reduce the time lost to traffic jams and the waiting time before jobs are picked up.
I have the lead time: job delivered time minus job generated time.
From here, I want to isolate the time spent in traffic jams or waiting.
Is there a way to calculate the travel time from one spot to another, taking into account the different speed limits of the paths but excluding waiting time and traffic jams? Then I could subtract this from the lead time.
Let me know if I need to clarify something.
There is no built-in way to do this; you have to do it yourself. I have 3 ideas:
Compute this mathematically in the model yourself, i.e. write a function that computes the total path length; you already have the ideal speed, voilà (see the sketch after this list).
Run a separate experiment with all speed limits and other traffic turned off; record the time in that ideal case and use it for comparison.
Similarly, you could do this in the same experiment during a warm-up period: drive a fake transporter along the path and compute the ideal durations.
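For idea 1, a minimal sketch in plain Java, assuming you can extract each path segment's length and speed limit from your network (names hypothetical):

```java
// Minimal sketch: ideal (uncongested) travel time along a route.
class IdealTime {
    // segmentLengths[i]: length of path segment i in meters;
    // speedLimits[i]: its speed limit in m/s;
    // agvMaxSpeed: the AGV's own top speed in m/s.
    static double idealTravelTimeSeconds(double[] segmentLengths,
                                         double[] speedLimits,
                                         double agvMaxSpeed) {
        double total = 0.0;
        for (int i = 0; i < segmentLengths.length; i++) {
            // The AGV covers each segment at the lower of its own max speed
            // and the segment's limit.
            double v = Math.min(agvMaxSpeed, speedLimits[i]);
            total += segmentLengths[i] / v;
        }
        return total;
    }
}
```

Subtract this ideal time from the measured lead time, and what remains is the congestion/waiting component.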
I have an IoT application which is architected in the following way:
There are plants, each of which has its own set of devices.
The entire pipeline is deployed in Kubernetes and consists of the following units:
A job which wakes up every x seconds, reads data from all plants, and pushes the data to an MQTT broker.
An MQTT broker.
A subscriber service which receives data from all plants and pushes it to a timeseries database.
Jobs running at intervals of 5 min, 15 min, 1 hr, 4 hr, and 1 day that downsample the data of all the plants and push it to separate downsampled tables.
Jobs running every day that check whether there are holes/gaps in the data and try to fill them where possible.
Now, this works fine for a few plants, but when the number of plants increases it becomes difficult to perform data retrieval, pushing, and downsampling with a single service/job: it becomes too memory-intensive and chokes in multiple places. As a temporary fix, scaling vertically helps to some extent, but then I need to put all the pods on the single machine that I am scaling vertically, and scaling multiple nodes vertically is quite expensive.
Hence I am planning to break the system down in a way that lets me scale horizontally, and I am looking for suggestions on possible ways to achieve this.
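To make the horizontal-scaling idea concrete: one common pattern is to shard the plants deterministically across N identical replicas, so each job/service instance retrieves, pushes, and downsamples only its own subset. A minimal sketch, with all names hypothetical:

```java
// Hypothetical sketch: deterministic sharding of plants across N replicas.
import java.util.List;
import java.util.stream.Collectors;

class PlantSharder {
    // replicaIndex could come from the pod's ordinal in a StatefulSet,
    // or from an env var set per Kubernetes Job/Deployment replica.
    static List<String> plantsForReplica(List<String> allPlantIds,
                                         int replicaIndex, int replicaCount) {
        return allPlantIds.stream()
                // floorMod keeps the shard index non-negative for any hashCode
                .filter(id -> Math.floorMod(id.hashCode(), replicaCount) == replicaIndex)
                .collect(Collectors.toList());
    }
}
```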
I have a question about spacing between agents. In my model, agents are generated from a source and then enter a delay; after the delay, the agents go into a queue with a capacity of 1, but with a preemption option. The agents that go into the preemption are supposed to move along a circular path (I used a delay block for this), but there should always be a certain space between the agents, e.g. 100 meters. How can I incorporate this into my model to make sure my agents are not too close to each other?
One way to control the distance between your agents is to move them along a path using a dummy transporter instead of the moveTo block. Transporters allow you to define a minimum distance to obstacle, which prevents the agents from getting too close to each other.
Two options if you mean the static queue with agents actually waiting:
1) If your queue size is 500 meters, set the maximum number of agents allowed in that queue to 6 (so you have 100 meters of distance between each agent; see the helper sketch below).
2) Use the PML Settings block from the Process Modeling Library palette and set the initial capacity of an animation location to 6 (if your queues are 500 meters)... but this applies to the whole model, so it may not be good enough.
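The arithmetic behind option 1, as a tiny helper (a sketch; names hypothetical):

```java
// Hypothetical helper: maximum agents that fit in a queue of a given length
// while keeping a fixed spacing between neighbours.
// 500 m with 100 m spacing -> 6 agents (5 gaps of 100 m).
int maxAgentsInQueue(double queueLengthMeters, double spacingMeters) {
    return (int) Math.floor(queueLengthMeters / spacingMeters) + 1;
}
```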
If you want them to keep 100 meters of space while they are moving toward their objective along the path that represents the queue, then the answer depends heavily on your model and cannot be answered with the information provided... in that case you need to control the agent movements by adding some logic, but I don't know what logic would be suitable for you.
Hi,
In CDMA cellular networks, when an MS (Mobile Station) needs to change BS (Base Station), i.e. when a hand-off is necessary, I know that this is a soft hand-off (the MS makes a connection with the target BS before leaving the current BS). But since the MS stays connected to more than one BS for a period of time, I want to know: does the MS use the same CDMA code to communicate with all the BSs, or a different code for each BS?
Thanks in advance
For the benefit of everyone, I have touched on a few points before coming to the main point.
Soft handoff is also termed "make-before-break" handoff. This technique falls under the category of MAHO (Mobile Assisted Handover). The key idea is that the MS maintains simultaneous communication links with two or more BSs to ensure an uninterrupted call.
In the DL (downlink) direction, this is achieved by two or more BTSs using different transmission codes (transmitting the same bit stream) on different physical channels on the same frequency, while the CDMA phone receives the signals from these BTSs simultaneously. In the active set there can be more than one pilot, as up to three carriers can be involved in a soft hand-off. A rake receiver then performs maximal-ratio combining of the received signals.
In the UL (uplink) direction, the MS operates on a candidate set, which can contain more than one pilot with sufficient signal strength for use, as reported by the MS. Each BTS tags the user's data with a frame reliability indicator that tells the BSC about the transmission quality. So even though the signals (the MS's code channel) are received by both base stations, the streams are routed to the BSC along with this quality information, and the BSC examines the quality via the frame reliability indicator and chooses the best-quality stream, i.e. the best candidate.
I have a problem grokking the concept of real-time (IMO badly named; it means different things in different contexts). I understand real-time software as software where time is a key variable: events must occur at a given time. Say, a railway switch changes at 15:02 and the next one must change at 15:05, no matter what.
But how about this example: in a game, when the player's FPS drops below 16, the game exits and tells the user to upgrade their hardware or kill other applications. So when one iteration of the game loop takes more than 1/16 of a second, the output of the program is completely different.
Is it real-time(ish)? Can it be considered Real Time Computing?
Your question is hard to understand: are you referring to Real Time Computing, to simulating real time, or to something completely different?
Simulating real time: it is possible to simulate real time in a game by polling for events. Store the time of each event; then, when it comes time to render a frame, the game should repeatedly 'fast forward' by moving the current time to the time of the next event and handling that event. This should repeat until there are no more events or the time is 'current'.
This requires anything that is a function of time (such as velocity, position, or acceleration) to be calculated according to the current time. This means these attributes are not updated periodically, and it makes your game deterministic, since 'game time' no longer depends on real time. It also makes things like game speed and pausing very simple to implement.
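A minimal sketch of that fast-forward pattern (the event and handler names are hypothetical, not from any particular engine):

```java
import java.util.PriorityQueue;

// Hypothetical sketch of the "fast forward" pattern described above.
class EventClock {
    record Event(double time, Runnable handle) {}

    private final PriorityQueue<Event> events =
        new PriorityQueue<>((a, b) -> Double.compare(a.time(), b.time()));
    private double now = 0.0;

    void schedule(double time, Runnable handle) {
        events.add(new Event(time, handle));
    }

    // Called once per rendered frame with the real ("wall clock") time.
    void advanceTo(double realTime) {
        // Fast-forward: jump to each pending event in order and handle it.
        while (!events.isEmpty() && events.peek().time() <= realTime) {
            Event e = events.poll();
            now = e.time();       // game time jumps to the event's time
            e.handle().run();     // resolve the event at exactly that time
        }
        now = realTime;           // then catch up to 'current'
        // Positions, velocities, etc. are computed as functions of `now`
        // at render time, rather than being stepped periodically.
    }
}
```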
If you're referring to the concept of real-time systems, then I would say there is not enough information to determine whether that 'game loop' is 'real-time'. It depends on the operating environment of the game and the logic in the 'game loop'. According to Wikipedia, a real-time deadline must be met regardless of system load.
In the rapidly-becoming-canonical article Fix your Timestep!, Glenn Fiedler addresses numerous ways to handle this issue. While the article focuses primarily on physics, the key points are applicable to any system that represents a function of time, to wit, anything dealing with things that move.
The executive summary of that article (which is well worth reading) is this:
You can make your physics deterministic (well, as deterministic as can be achieved with imperfect input) by using discrete physics timesteps. It looks like this:
Render as fast as possible.
Pass in a time delta that represents how long the previous frame took.
Run as many fixed physics steps as fit into that delta (i.e. delta divided by the timestep, rounded down).
Store the remainder of the delta that you weren't able to process in an accumulator.
That accumulator gets added to the next frame's time delta. This requires some fine-tuning so that temporary lag spikes, caused by e.g. a rapidly spinning player (which necessitates a lot of visibility determination over time), don't put you in an inescapable time debt. If you wanted to guard intelligently against such an occurrence, you could have a sentry watch for dangerous levels of accumulated time and respond by, perhaps, dropping a video frame. (A sketch of this loop follows.)
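A minimal sketch of that loop, following the structure from Fix your Timestep! (stepPhysics() and render() are hypothetical stand-ins for your engine calls):

```java
// Hypothetical sketch of the fixed-timestep accumulator loop.
public class FixedTimestepLoop {
    static final double DT = 1.0 / 60.0;   // fixed physics timestep (s)
    static final double MAX_ACCUM = 0.25;  // clamp to avoid a "time debt" spiral

    public static void main(String[] args) {
        double accumulator = 0.0;
        long previous = System.nanoTime();

        while (true) {
            long now = System.nanoTime();
            double frameDelta = (now - previous) / 1e9;
            previous = now;

            // Guard against lag spikes putting us in inescapable time debt.
            accumulator = Math.min(accumulator + frameDelta, MAX_ACCUM);

            // Run as many whole physics steps as fit in the accumulated time;
            // the remainder stays in the accumulator for the next frame.
            while (accumulator >= DT) {
                stepPhysics(DT);
                accumulator -= DT;
            }
            render();  // render as fast as possible
        }
    }

    static void stepPhysics(double dt) { /* advance the simulation by dt */ }
    static void render() { /* draw the current state */ }
}
```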
Another advantage of using discrete timesteps is that they behave well in multiplayer games. If you have an authoritative server, or an authoritative node in a peer-to-peer configuration, the server can ensure that all clients' physics simulations run on the same physics timeline. Discrete time blocks also simplify things in rollback-based multiplayer.
Edit:
Disclaimer: I've never written real-time software myself; I've only worked at a company that did!
In response to really-real real-life Real Time software: it's unlikely that anyone has made a game that could qualify as such, at least in software. (I'm not sure how one would classify games on ROMs, or games that don't run under a host OS.) While your example would be an attempt at real-time software, most real-time software goes through a period of certification in which the maximum amount of time spent per instruction, or per logical block of operations, is determined. Games might come close to this in a sense when, for example, platform licensors have requirements (as I believe XBLA does) for a minimum of 30 fps or similar. However, these certifications are usually established through a period of testing rather than through mathematical proof.