AnyLogic: change the capacity of a resource dynamically - simulation

I have a model with a queue and two machines, one of which is used only when the queue in front of these resources becomes overcrowded.
My model has a simple Queue and a Delay block, and I tried to change the Delay capacity based on the length of the preceding queue using code like this (written in the Delay block's capacity field):
if (queue.size() > 5)
    return 2;
else
    return 1;
But it doesn't seem to work... is it possible to change the number of resources dynamically based on a condition?

The capacity value in the Delay block is only evaluated at the beginning of the simulation, so it can only be treated as the initial value.
To change the capacity later, you can put some code in the "On enter" and "On exit" actions of the Queue block:
delay.set_capacity(queue.size() > 5 ? 2 : 1);
Something like that.
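For instance, you could wrap that rule in a small model function and call it from both actions, so the logic lives in one place. A minimal sketch, where the function name updateDelayCapacity is an assumption and queue and delay are the block names from the model:

// Function "updateDelayCapacity" (return type void),
// called from the Queue block's "On enter" and "On exit" actions.
// Opens the second machine only while more than 5 agents are waiting.
delay.set_capacity(queue.size() > 5 ? 2 : 1);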


AnyLogic: Queue timeout & condition

In the default settings it is possible to set a time in the queue after which an agent leaves the queue via outTimeOut. However, only a fixed time can be entered in the corresponding field, e.g. 12 hours. Is there a way to tie these 12 hours to a condition? In my case, the agent should only leave the queue after 12 hours via outTimeOut if a certain condition is also met, namely if a variable varIN == 1.
Collect the time-in-queue statistics for each agent: create a parameter called entryTime and, when an agent enters the block, set agent.entryTime = time();
You can then create an event that iterates through the queue every second and removes the agents that meet your conditions (using the remove(Agent agent) function). That is, if (time() - agent.entryTime > 12) && (agent.varIN == 1), you remove that agent.
The loop will look like this (iterating backwards so that removals don't skip elements):
for (int i = yourQueue.size() - 1; i >= 0; i--) {
    YourAgentType currentAgent = (YourAgentType) yourQueue.get(i);
    if ((time() - currentAgent.entryTime > 12) && (currentAgent.varIN == 1)) {
        yourQueue.remove(currentAgent);
    }
}
Alternatively, you can use a function (returning a double) to define whatever timeout logic you like, however complex. If you give it an argument of type Agent (or the specific agent type flowing through the blocks), it can even take the agent's characteristics into account.
In the Queue block's timeout field, simply call the function.
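For example, such a timeout function could look like this. A minimal sketch: the function name queueTimeout and the agent type MyAgent are assumptions, varIN is the variable from the question, and hour() and infinity are standard AnyLogic utilities:

// Function "queueTimeout" (return type double, argument: MyAgent agent),
// called in the Queue block's timeout field as queueTimeout(agent)
if (agent.varIN == 1)
    return 12 * hour(); // 12 hours in model time units (adjust if the field expects another unit)
else
    return infinity;    // otherwise, never time out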

AnyLogic - Assembler should stop working for 2 hours after 10 assemblies are done

The "Assembler" should stop working for 2 hours after 10 assemblies are done.
How can I achieve that?
There are many ways to do this, depending on what "stop working" means and what the implications are for the incoming parts, but here's one option.
Create a ResourcePool called Machine; it will be used along with the technicians.
In the "On exit" action of the assembler, do this (9 is used instead of 10 because out.count() is not incremented until the agent is completely out, so when it reads 9, the 10th assembly has just been produced):
if (self.out.count() == 9) {
    machine.set_capacity(0);        // remove the machine so the assembler stops
    create_MyDynamicEvent(2, HOUR); // schedule the restart in 2 hours
}
In your dynamic event (which you have to create), add the following code:
machine.set_capacity(1); // restore the machine so the assembler resumes
A second option is to have a variable countAssembler count the number of items produced. Then, in the "On exit" action you write countAssembler++;, and in the "On enter" action of the delay you write the following:
if (countAssembler == 10) {
    self.suspend(agent);                   // hold this agent in the delay
    create_MyDynamicEvent(2, HOUR, agent); // schedule its release in 2 hours
}
In the dynamic event you write (assuming the Delay block is named delay):
delay.resume(agent);
Don't forget to add the agent parameter needed in the dynamic event.
Another approach: create a variable called countAssembler of type int and increment it as agents pass through the assembler. Also create a variable called assemblerStopTime, and record the stop time with assemblerStopTime = time();
Place a selectOutputOut block before the assembler and let agents through if countAssembler is less than 10; otherwise, send them to a Wait block.
Now, to maintain the FIFO rule, the first selectOutputOut condition also needs to check whether there is any agent in the Wait block and whether time() - assemblerStopTime is greater than 2 hours. If there is, you free that agent and send it to the assembler using the Wait block's free() function, send the current agent to the Wait block instead, and reset countAssembler to zero.
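For reference, the core of that first selectOutputOut condition could look roughly like this. A sketch only: wait as the Wait block's name is an assumption, and the release logic (checking time() - assemblerStopTime, freeing the waiting agent, resetting countAssembler) still has to be layered on top as described above:

// Condition of the first selectOutputOut: go straight to the assembler only
// while fewer than 10 assemblies have been made and no earlier agent is
// still parked in the Wait block (preserving FIFO)
countAssembler < 10 && wait.size() == 0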

"While loop" not working in my Anylogic model

I have the model which I posted before on Stack. I am currently running the iterations through 5 flowchart blocks, each containing an Enter block and a Service block. When an agent fills service block 5 in flowchart 5, the Exit block should start to fill block one again, and so on. I have used an infinite while loop to cycle between the five flowchart blocks, but it isn't working.
while (true) {
    for (Curing_Drying currProcess : collection) {
        if (currProcess.allowedDay == (int) time(DAY)) {
            currProcess.enter.take(agent);
        }
    }
    if (queue10.size() <= Throughtput1) {
        break;
    }
}
Image for further illustration 1
Image for further illustration 2
Wondering if someone can tell me what is wrong in the code.
Based on the description and the pictures provided, it isn't clear why the while loop is necessary. The "On exit" action is executed for each agent arrival at the Exit block. It seems that the intention is to find the appropriate Curing_Drying block based on the number of days since the model start time? If so, then just iterating through the collection is enough.
Also, it is generally good practice to give collections more meaningful names. Simply calling one collection says nothing about its contents and can get pretty confusing later on.
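With the while loop removed, the whole "On exit" action reduces to the iteration itself. A sketch reusing the names from the question:

// "On exit" action of the Exit block - no while loop needed
for (Curing_Drying currProcess : collection) {
    if (currProcess.allowedDay == (int) time(DAY)) {
        currProcess.enter.take(agent);
        break; // the agent can only be taken by one Enter block
    }
}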

How to minimize latency when reading audio with ALSA?

When trying to acquire some signals in the frequency domain, I've encountered the issue of having snd_pcm_readi() take a wildly variable amount of time. This causes problems in the logic section of my code, which is time dependent.
Most of the time, snd_pcm_readi() returns after approximately 0.00003 to 0.00006 seconds. However, every fourth or fifth call to snd_pcm_readi() takes approximately 0.028 seconds. This is a huge difference, and it causes the logic part of my code to fail.
How can I get a consistent time for each call to snd_pcm_readi()?
I've tried experimenting with the period size, but it is unclear to me what exactly it does, even after re-reading the documentation multiple times. I don't use an interrupt-driven design; I simply call snd_pcm_readi() and it blocks until it returns with data.
I can only assume that the reason it blocks for a variable amount of time is that snd_pcm_readi() pulls data from the hardware buffer, which sometimes already has data readily available for transfer to the "application buffer" (which I'm maintaining), while at other times there is additional work to do in kernel space or on the hardware side, so the call takes longer to return.
What purpose does the "period size" serve when I'm not using an interrupt-driven design? Can my problem be fixed by manipulating the period size, or should I do something else?
I want each call to snd_pcm_readi() to take approximately the same amount of time. I'm not asking for a real-time-compliant API, which I don't imagine ALSA even attempts to be; however, a difference in call time on the order of 500x (which is what I'm seeing!) is a real problem.
What can be done about it, and what should I do about it?
I would present a minimal reproducible example, but this isn't easy in my case.
Typically, when reading and writing audio, the period size specifies how much data ALSA reserves in DMA silicon, and it normally determines your latency. So, for example, while you are filling one buffer for writing through DMA to the I2S silicon, another DMA buffer is already being written out.
If your period size is too small, the CPU doesn't have time to write the audio out in the scheduled execution slot it is given. Typically people aim for a minimum of 500 µs or 1 ms of latency (as a sanity check: at a 48 kHz sample rate, a 48-frame period corresponds to 1 ms). If you are doing heavy computation, you may want to choose 5 ms or 10 ms of latency, or even more on a less powerful embedded system.
If you want to push the limits of the system, you can request that the priority of the audio processing thread be increased. By increasing your thread's priority, you ask the scheduler to process your audio thread before all other threads of lower priority.
One method for increasing the priority, taken from the gtkIOStream ALSA C++ OO classes (the changeThreadPriority method), is as follows:
/** Set the current thread's priority
\param priority <0 implies maximum priority, otherwise must be between sched_get_priority_max and sched_get_priority_min
\return 0 on success, error code otherwise
*/
static int changeThreadPriority(int priority) {
    int ret;
    pthread_t thisThread = pthread_self(); // get the current thread
    struct sched_param origParams, params;
    int origPolicy, policy = SCHED_FIFO, newPolicy = 0;

    if ((ret = pthread_getschedparam(thisThread, &origPolicy, &origParams)) != 0)
        return ALSA::ALSADebug().evaluateError(ret, "when trying to pthread_getschedparam\n");
    printf("ALSA::Stream::changeThreadPriority : Current thread policy %d and priority %d\n", origPolicy, origParams.sched_priority);

    if (priority < 0) // maximum priority
        params.sched_priority = sched_get_priority_max(policy);
    else
        params.sched_priority = priority;

    if (params.sched_priority > sched_get_priority_max(policy))
        return ALSA::ALSADebug().evaluateError(ALSA_SCHED_PRIORITY_ERROR, "requested priority is too high\n");
    if (params.sched_priority < sched_get_priority_min(policy))
        return ALSA::ALSADebug().evaluateError(ALSA_SCHED_PRIORITY_ERROR, "requested priority is too low\n");

    if ((ret = pthread_setschedparam(thisThread, policy, &params)) != 0)
        return ALSA::ALSADebug().evaluateError(ret, "when trying to pthread_setschedparam - are you su or do you have permission to set this priority?\n");
    if ((ret = pthread_getschedparam(thisThread, &newPolicy, &params)) != 0)
        return ALSA::ALSADebug().evaluateError(ret, "when trying to pthread_getschedparam\n");
    if (policy != newPolicy)
        return ALSA::ALSADebug().evaluateError(ALSA_SCHED_POLICY_ERROR, "requested scheduler policy is not correctly set\n");

    printf("ALSA::Stream::changeThreadPriority : New thread priority changed to %d\n", params.sched_priority);
    return 0;
}

Variable Multithread Access - Corruption

In a nutshell:
I have one counter variable that is accessed from many threads. Although I've implemented multi-thread read/write protections, the variable still, inconsistently, gets written to simultaneously, leading to incorrect results from the counter.
Getting into the weeds:
I'm using a "for loop" that triggers roughly 100 URL requests in the background, each in its “DispatchQueue.global(qos: .userInitiated).async” queue.
These processes are async, once they finish they update a “counter” variable. This variable is supposed to be multi-thread protected, meaning it’s always accessed from one thread and it’s accessed syncronously. However, something is wrong, from time to time the variable will be accessed simultaneously by two threads leading to the counter not updating correctly. Here's an example, lets imagine we have 5 URLs to fetch:
We start with the Counter variable at 5.
1 URL Request Finishes -> Counter = 4
2 URL Request Finishes -> Counter = 3
3 URL Request Finishes -> Counter = 2
4 URL Request Finishes (and, I assume, the variable is accessed at the same time) -> Counter = 2
5 URL Request Finishes -> Counter = 1
As you can see, this leads to the counter being 1 instead of 0, which then affects other parts of the code. This error happens inconsistently.
Here is the multi-thread protection I use for the counter variable:
Dedicated Global Queue
// Background queue to synchronize data access
fileprivate let globalBackgroundSyncronizeDataQueue = DispatchQueue(label: "globalBackgroundSyncronizeSharedData")
The variable is always accessed via an accessor:
var numberOfFeedsToFetch_Value: Int = 0

var numberOfFeedsToFetch: Int {
    set(newValue) {
        globalBackgroundSyncronizeDataQueue.sync {
            self.numberOfFeedsToFetch_Value = newValue
        }
    }
    get {
        return globalBackgroundSyncronizeDataQueue.sync {
            numberOfFeedsToFetch_Value
        }
    }
}
I assume I may be missing something, but I've profiled the code and all seems fine; I've also checked the documentation, and I seem to be doing what it recommends. I'd really appreciate your help.
Thanks!!
Answer from the Apple Forums (https://forums.developer.apple.com/message/322332#322332):
The individual accessors are thread safe, but an increment operation isn't atomic given how you've written the code. That is, while one thread is getting or setting the value, no other threads can also be getting or setting the value. However, there's nothing preventing thread A from reading the current value (say, 2), thread B reading the same current value (2), each thread adding one to this value in their private temporary, and then each thread writing their incremented value (3 for both threads) to the property. So, two threads incremented but the property did not go from 2 to 4; it only went from 2 to 3. You need to do the whole increment operation (get, increment the private value, set) in an atomic way such that no other thread can read or write the property while it's in progress.
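In the poster's code, that means performing the whole decrement (read, subtract, write) inside a single globalBackgroundSyncronizeDataQueue.sync block, rather than as a synchronized get followed by a synchronized set. The same idea sketched in Java with an atomic counter (hypothetical names, for illustration only; the race itself is language-independent):

import java.util.concurrent.atomic.AtomicInteger;

public class FeedCounter {
    // decrementAndGet() performs the whole read-modify-write atomically,
    // so two threads can never both write back the same stale value
    private final AtomicInteger remaining;

    public FeedCounter(int numberOfFeeds) {
        remaining = new AtomicInteger(numberOfFeeds);
    }

    // Called from any completion thread; returns true only for the last feed.
    public boolean feedFinished() {
        return remaining.decrementAndGet() == 0;
    }
}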