I am working on a car wash model and running it for 100 hours, but I want to store the data on a daily basis (after every 10 hours). How can I do that? For example, each service block has a resource pool, and I want to see the utilization of that resource pool every 10 hours.
Create a cyclic event that triggers every 10 hours and writes the data you need.
One simple way is to print to the console: traceln(myResourcePool.utilization());
If you want to write to the database, check this help article.
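A minimal sketch of that event's action (assuming a cyclic Event with a recurrence of 10 hours, the ResourcePool named myResourcePool, and a Dataset named utilizationData on Main; these names are placeholders):

// Action of the cyclic Event (recurrence = 10 hours, assumed)
double u = myResourcePool.utilization();   // mean utilization since the last statistics reset
utilizationData.add(time(HOUR), u);        // store (model time in hours, utilization)
traceln("t = " + time(HOUR) + " h, utilization = " + u);
myResourcePool.resetStats();               // so the next sample covers only the next 10 hours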
Related
Part one:
How can I run a production line only for 15 hours per day?
My logic
Part two:
How to implement a machine malfunctioning 3 times in 10 days?
I achieved the malfunctioning of a machine using this logic:
if (countAssembler == 10) {
    self.suspend(agent);                    // suspend this agent in the current block
    create_MyDynamicEvent(2, HOUR, agent);  // schedule MyDynamicEvent in 2 hours, passing the suspended agent (e.g. to resume it)
}
But the malfunction is currently triggered by item count (i.e. countAssembler == 10). I want it to occur after 3 days instead.
part 1
To make the services work for only part of each day, the resources used in those services need to be governed by a schedule. The Schedule is an object you can find in the Process Modeling Library, and there is a ton of documentation and examples on how to use it. Review the help documentation.
part 2
There's a block in the Process Modeling Library called Downtime. Again, there is a ton of documentation and examples on how to use it in any model, and with it you don't need any Java code.
Edited Version:
I'm actually modelling an airport check-in terminal. It works fine so far, but in addition I'm still trying to implement a function that keeps my pedestrians from entering the service queue if it exceeds a preselected value (e.g. already 15 passengers in the queue), and instead has them walk to some kind of backup service that opens during these busy times.
Here is my approach:
The variable QueueSize permanently holds the current number of passengers in the queue.
Every time a ped enters the PedService block CheckInEco, the function waitingTime() is called:
QueueSize = CheckInEco.size();
if (QueueSize > 15) CheckInEco.cancel(ped);
So, as soon as there are more than 15 agents in the queue, number 16 should bypass it and move to an alternate Service block, which I would connect to the ccl port of the CheckInEco service. But when building the model, I get this message: "ped cannot be resolved to a variable".
According to the AnyLogic help it should be possible to use this cancel() call, but I'm not really experienced with it. Maybe someone can help me out?
You can simply use a SelectOutput block to prevent pedestrians from going into the service block if there are more than 16 pedestrians already in it.
Your original question had to do with waiting time; you should follow the exact same approach. But with waiting time it gets more complicated, since you don't want the average waiting time from the start of the simulation... so you need to decide whether to take the last 10 minutes, the last hour, etc., and whether to include the current waiting time of the agents in the queue. Since this is no longer the question, I am not going to add it here; perhaps ask a new question if this is still the case.
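For example (just a sketch, assuming a SelectOutput placed right before CheckInEco, with its false exit routed to the backup service), the condition could reuse the size() call from the question:

// Condition of the SelectOutput: true -> regular check-in, false -> backup service
CheckInEco.size() <= 15   // at most 15 peds already inside, matching the threshold in the question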
I'm running a simulation where I would like to know the total amount of time agents spend in a Delay block. I can access the data when running single simulations in the dataset log under flowchart_stats_time_in_state_log.
https://imgur.com/R5DG51a
However, I would like to write the data from block 5 (spraying) to an output in order to store the data when running multiple simulations.
https://imgur.com/MwPBvO8
I'm guessing that the value reference should look something like the expression below. It is not working, however, so I would appreciate it a lot if anybody could help me out or suggest an alternative solution for getting the data.
flowchart_stats_time_in_state_log.total_seconds.spraying;
By the way, time measures do not work for this situation, since I need to know the total amount of time spent in the block after a 12-hour shift. With time measures I do not get the data from the agents that are still in the block when the simulation ends.
Based on the goal of summing all processing times, you could solve it mathematically. Set the output equal to block.statsUtilization.mean() * capacity * time(), calculated at simulation end.
For example, if you have a capacity of 1 and a run length of 100 minutes, then a utilization of 50% means an agent was in the block for 50 minutes. Utilization = time busy / total time. Because of this relationship, we can calculate how long agents were actually in the block.
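A sketch of that calculation, assuming the Delay block is named spraying, has a capacity of 1, and the model time unit is minutes (evaluated at the end of the run, e.g. in On destroy of Main, or prefixed with root. in an experiment output expression):

// total agent-time spent in the block = mean utilization * capacity * elapsed model time
double totalTimeInSpraying = spraying.statsUtilization.mean() * 1 * time(MINUTE);   // capacity of 1 assumed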
Another alternative would be to have a variable that tracks time in the block, incremented when agents leave. At the end of the run, you would call a function that iterates over the agents still in the block to add their time. AnyLogic lets you loop quite easily over queues, delays, or anything else that holds agents:
for (MyAgent agent : delayBlockName) {
    // add the time spent so far by each agent still inside the block
    variable += time() - agent.enterBlockTime;
}
To implement this solution, you would need to create your own agent type (name it something better than MyAgent) with a variable for the time the agent enters the block, and then record that time whenever an agent enters.
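A rough sketch of that bookkeeping, assuming the block's agent type is set to MyAgent (with a double variable enterBlockTime) and Main has a double variable totalTimeInBlock:

// "On enter" action of the Delay block: remember when this agent went in
agent.enterBlockTime = time();

// "On exit" action of the Delay block: add the completed stay
totalTimeInBlock += time() - agent.enterBlockTime;

// At the end of the run, the loop shown above adds the time of agents still inside the block.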
I read this in the celery documentation for Task.rate_limit:
Note that this is a per worker instance rate limit, and not a global rate limit. To enforce a global rate limit (e.g., for an API with a maximum number of requests per second), you must restrict to a given queue.
How do I put a rate limit on a celery queue?
It turns out it can't be done at the queue level for multiple workers.
It can be done at the queue level for one worker, or at the queue level for each worker.
So if you set 10 jobs/minute on 5 workers, your workers will process up to 50 jobs per minute collectively.
So to have only 10 jobs per minute in total, you either choose one worker, or choose 5 workers with a limit of 2/minute.
Update: how exactly to put the limit in the settings/configuration:
task_annotations = {'tasks.<task_name>': {'rate_limit': '10/m'}}
or change the same for all tasks:
task_annotations = {'*': {'rate_limit': '10/m'}}
10/m means 10 tasks per minute, /s would mean per second. More details here: Task annotations setting
Hey, I am trying to find a way to rate-limit a queue, and I found out Celery can't do that. However, Celery can control the rate per task; see this:
http://docs.celeryproject.org/en/latest/userguide/workers.html#rate-limits
So as a workaround, maybe you can set up one task per queue (which makes sense in a lot of situations) and put the limit on the task.
You can set this limit in the Flower > worker pane; there is a dedicated field for entering your limit there.
The suggested format is as follows:
The rate limits can be specified in seconds, minutes or hours by appending “/s”, “/m” or “/h” to the value. Tasks will be evenly distributed over the specified time frame.
Example: “100/m” (hundred tasks a minute). This will enforce a minimum delay of 600ms between starting two tasks on the same worker instance.
I have a MarkLogic 7 database into which several documents are inserted, and every document has its own created-on and released-on. Say, for example, a document is inserted into the database at 1400 hrs and its released-on value is 1700 hrs; then I need to POST this document to an external REST service at 1700 hrs.
I have tried the following options:
Configure a CPF pipeline such that whenever a document is inserted, its released-on value is read and a Scheduled Task is created to trigger based on the timestamp read from released-on.
Following are the queries/ observations for this approach:
Since the admin configuration manipulation APIs are not transactionally protected operations, I need to force a lock on some URI in order to create Scheduled Tasks from within CPF action modules running in parallel.
For details read here
When I insert 1000 documents, it takes around 20 minutes for the CPF action modules to trigger and create 1000 scheduled tasks based on the released-on values read from the inserted documents.
How can I pass the URI of the document that triggered the CPF action module to the Scheduled Task that was created from within the CPF action module based on the released-on value read from the document?
Configure a CPF pipeline such that whenever a document is inserted, its released-on value is read and xdmp:sleep() is called with the milliseconds remaining between the current dateTime and the value of released-on in the document.
Following are the queries/ observations for this approach:
The Task Server threads on which the CPF action modules are triggered remain occupied and are not released when xdmp:sleep() is called from within them, so at any one time CPF action modules are running for at most 16 documents while the others remain queued.
Is there any way to make the sleeping thread go inactive so that other queued action modules can be triggered, and have it become active again once the sleep duration has elapsed?
Configure a multi-step CPF pipeline, as described here, in which the document keeps bouncing between two states until the released-on timestamp arrives.
Following are the queries/ observations for this approach:
Even when only 30 documents were inserted, CPU utilization was observed to be 100%.
In all of these attempts a lot of system resources (CPU and RAM) are consumed, even for as few as 1000 documents. I need an approach that can handle even 100K documents.
Please let me know if there are any improvements that can be made to the approaches above, or whether MarkLogic provides some other way to handle such scenarios efficiently.
Rather than CPF, you could set up a scheduled job that will run, say, every 10 minutes and look for documents that are ready to be published. That job would look for documents with released-on values between fn:current-dateTime() and the last time the job ran, which I would save in the database.
For each of those documents, you would spawn a task to POST the document, so that an error in one doesn't cause problems for the others. After looping through, save the current time in the database for the next time.
The 10-minute window can be as large or small as you like, depending on your tolerance for a little delay.