Wind Turbine Maintenance tutorial model - simulation AnyLogic

I have now set up the model and all is working. I was just wondering if there is some way to prevent downtime by editing the scheduled event, or even by adding an extra scheduled event.
Here is the link to the tutorial: https://anylogic.help/tutorials/turbine-maintenance/1-different-types-of-agents.html
I guess I need to add an event and connect it to the failure rate somehow, because right now the failure rate is static at 1/MTTF, where MTTF = 50 days.
So, the question is: how do I add an event that prevents downtime and is connected with the failure rate in some way?

Related

PagerDuty: Allow Time to Self-Resolve Before Alerting

Is there an easy way to set up PagerDuty to hold alerts, to allow time for the cloud to self-resolve?
My team often gets woken up for alerts that self-resolve before we have a chance to address them. If PagerDuty could hold the alert for 5 minutes, we would avoid unactionable alerts.
You can use Event Orchestration or Event Rules at the service level to control this. What you are looking for is to create an alert but pause notifications. This will suspend the alert for a time period of your choice, allowing the alert to resolve itself. If the alert doesn't resolve within the time period, an incident will open as expected.
Event Orchestration (should be available on basic+ tiers)
Once you define the conditions for the Orchestration rule, set an incident action that pauses notifications for 300 seconds (5 minutes).
https://support.pagerduty.com/docs/event-orchestration#incident-actions
If you have the Event Intelligence package, you can also look at the auto-pause feature, which detects and pauses transient alerts for you.
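As a rough sketch, such an orchestration rule could look something like the following. The field names and the condition expression here are assumptions from memory of the Event Orchestration API (the UI's "pause notifications" corresponds, as far as I recall, to a "suspend" action with a value in seconds), so check the linked docs for the exact schema:

    {
      "label": "Give transient alerts 5 minutes to self-resolve",
      "conditions": [
        { "expression": "event.summary matches part 'connection timeout'" }
      ],
      "actions": { "suspend": 300 }
    }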

Azure WebJob - Limit Processing Time to Specific Hours

I have an MVC web site, a storage queue and a WebJob. Users can request the generation of a set of reports by clicking a button on the web page. This inserts a message into the storage queue. In the past, the WebJob ran continuously and processed those requests fine. But the demand and size of the reports has grown to the point where the WebJob is slowing down the web app. I would like to still place the request message in the queue, but delay processing of all requests until the evening, when the web app is mostly idle. This would allow me to continue using the WebJob code and QueueTrigger functionality without having to waste resources by moving to a dedicated Worker Role, etc. The reports don't need to be generated immediately, so a delay is acceptable.
I don't see a built-in way to set a time window on processing. The only thing I have found is a pair of PowerShell cmdlets for starting and stopping WebJobs (Start-AzureWebsiteJob / Stop-AzureWebsiteJob). So I was thinking that I could create a scheduled PowerShell job that runs at midnight, starts the WebJob, lets it run, and then runs again early in the AM and stops it.
Does anyone know of a better option than this? Anything more "official" that perhaps I could not find?
One possible solution would be to hide the messages in the queue for a certain amount of time when they are inserted.
If you're using the AddMessage method, you can specify this timespan value in the initialVisibilityDelay parameter.
This ensures that the messages are not immediately visible in the queue to be picked up by the WebJob; they become visible only when the timespan elapses.
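For illustration, here is a minimal sketch using the classic Microsoft.WindowsAzure.Storage SDK; the queue name, payload, and the 8 PM target are placeholders, not details from the question:

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Queue;

    class ReportEnqueuer
    {
        static void EnqueueForEvening(string connectionString, string payload)
        {
            CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
            CloudQueue queue = account.CreateCloudQueueClient()
                                      .GetQueueReference("report-requests");

            // Hide the message until the next 8 PM, when the web app is
            // mostly idle, so the continuously running WebJob only sees
            // it in the evening.
            DateTime now = DateTime.Now;
            DateTime tonight = now.Date.AddHours(20);
            if (tonight <= now) tonight = tonight.AddDays(1);

            queue.AddMessage(
                new CloudQueueMessage(payload),
                timeToLive: null,                       // keep the default TTL
                initialVisibilityDelay: tonight - now); // invisible until evening
        }
    }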
Will such a solution work for you?
Maybe I didn't fully understand your question, but couldn't you use a "Triggered" WebJob that is triggered by a CRON schedule? You can then limit it to specific hours:
0 * 20-22 * * *
This example will run every minute during hours 20-22, i.e., from 8:00 PM through 10:59 PM (the six fields are second, minute, hour, day, month, day of week).
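For a Triggered WebJob, that expression goes in a settings.job file at the root of the WebJob, along these lines:

    {
      "schedule": "0 * 20-22 * * *"
    }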

Sending Reminders for Tasks

I have recently been thinking about a possible architecture for a simple task reminder system. A user will schedule a task, and a reminder in the form of SMS/email/Android notification needs to be sent to all stakeholders some x minutes before the task is scheduled to be performed (much in the same way Google Calendar works). The problem here is to send the reminder at that precise point in time. Here are the two possible approaches I can think of:
Cron: I can set up a cron job to run every minute. This will scan the table for notifications which need to be sent in the next minute and simply send them. But precision is lost, as there is always the chance of a +/-1 minute error.
Work Queues: I can simply put a message with an appropriate delay in a queue at the time the task is scheduled. Workers will send the notification as and when they receive the message. I can add as many workers as I want in case my real-time behavior starts getting affected by load. There are still a few issues. How to choose the appropriate work queue? I have evaluated RabbitMQ and Beanstalk. While RabbitMQ follows the standard AMQP protocol and is widely suggested, it doesn't provide the delay functionality out of the box. There are ways to simulate this using dead-letter exchanges, but this will not work in my case because the delay needs to be variable. Beanstalk supports this, but the problem is that the Beanstalk queue resides entirely in memory, which I don't like (but can live with). Any possible alternatives?
Third Approach: ??????. I am sure a simple desktop notification tool uses neither of the two. What technology do such tools use to achieve the same thing?
We had the same scenario, and we use Redis for long schedules, even now with reminders up to 2 years out. You can use a Sorted Set where the timestamp is the score.
We use Beanstalkd delayed jobs for the kinds of reminders that we know are relatively short term (a couple of hours) and have no cancellations, because removing a delayed message from Beanstalkd requires retaining the job id in a database for later removal, and that is not viable.
Although you mention the memory limit, we use persistence on both Redis and Beanstalkd.
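A minimal sketch of the sorted-set scheme, assuming the StackExchange.Redis client; the key name and payload format are placeholders:

    using System;
    using StackExchange.Redis;

    class ReminderStore
    {
        static void Main()
        {
            IDatabase db = ConnectionMultiplexer.Connect("localhost").GetDatabase();

            // Schedule: the member is the reminder payload and the score is
            // the Unix timestamp at which it becomes due.
            long dueAt = DateTimeOffset.UtcNow.AddMinutes(30).ToUnixTimeSeconds();
            db.SortedSetAdd("reminders", "task:42:sms", dueAt);

            // Worker loop body: fetch everything whose due time has passed.
            long now = DateTimeOffset.UtcNow.ToUnixTimeSeconds();
            foreach (RedisValue reminder in
                     db.SortedSetRangeByScore("reminders", 0, now))
            {
                // ...send the notification here, then drop the entry so it
                // is not sent again.
                db.SortedSetRemove("reminders", reminder);
            }
        }
    }

With several workers, the fetch-and-remove step should be made atomic (for example with a Lua script) so that two workers don't pick up the same reminder.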

CQRS/EventStore: How are failures to deliver events handled?

I'm getting into CQRS, and I understand that you have commands (application layer) and events (from the domain).
In the simple case where events are used to update the read model, can read-model updates fail? If there is no "bug" then I cannot see them failing, and as I am using EventStore, I know there is a commit flag which will retry failures.
So my question is: do I have to do anything in addition to EventStore to handle failures?
Coming from a world where you do everything in one transaction, the fact that things are now done separately is worrying me.
Of course there may be cases where a published event will fail in the read models.
You have to make sure you can detect that and solve it.
The nice thing is that you can replay all the events again and again, so you have the chance not only to fix the error but also to test the fix by replaying every single event if you want.
I use NServiceBus as my publishing mechanism which allows me to use an error queue. Using my other logging tools together with the error queue I can easily determine what happened since I have the error log and the actual message that caused the error in the first place.
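One way to keep those replays safe is to make each read-model handler idempotent, so applying the same event twice leaves the same result. A minimal sketch, where the event, row, and store types are placeholders rather than EventStore or NServiceBus APIs:

    using System;

    public class CustomerRenamed
    {
        public Guid CustomerId;
        public string NewName;
    }

    public class CustomerRow
    {
        public Guid Id;
        public string Name;
    }

    public interface IReadModelStore
    {
        CustomerRow Load(Guid id);   // returns null when the row is missing
        void Save(CustomerRow row);  // insert-or-update keyed by Id
    }

    public class CustomerListProjection
    {
        private readonly IReadModelStore store;

        public CustomerListProjection(IReadModelStore store)
        {
            this.store = store;
        }

        public void Handle(CustomerRenamed evt)
        {
            // Upsert keyed by the aggregate id: replaying the same event
            // just overwrites the row with identical values, so no
            // transaction spanning the write and read models is needed.
            CustomerRow row = store.Load(evt.CustomerId)
                              ?? new CustomerRow { Id = evt.CustomerId };
            row.Name = evt.NewName;
            store.Save(row);
        }
    }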

Windows Workflow: Persistence and Polling

I'm currently learning the WF framework, so bear with me; mostly I'm looking for where to start looking, not necessarily a direct answer. I just can't seem to figure out how to begin researching what I'd like in The Google.
Let's say I have a simple one-step workflow (much more complicated than that, but for simplicity's sake). This workflow needs to watch a certain record in the database to see when it changes. I don't have the capability to "push" via a trigger from the database when the row changes, so I need to poll for it every so often.
This workflow needs to be persisted to the database to be durable against restarts and whatnot as this is a long-running workflow. I'm trying to figure out the best way to get it to check every 3 minutes or so and also persist to the database. Do the persistence capabilities of the framework allow for that? It seems to be time-based. And since the workflow won't be reawakened by an external event, how does it reload from the database and check the same step it did previously again? Does it attempt the last unfulfilled activity automatically upon reloading?
Do "while" activities with a delay attached to it work at all, or can it be handled solely through the persistence services?
I'm not sure what you mean by "handled solely through persistence services". Persistence refers only to the storing of an idle workflow.
You could have a Delay and a Code activity in a Sequence in a While loop. While in the Delay, the workflow will go idle and may be persisted if necessary. However, depending on how much state needs to be persisted and how many such workflows are running at any one time, a leaner approach may be necessary.
A leaner approach would be to externalize the DB watching and have some "DB watching" workflow service raise an event when the desired change has occurred. This service would be added to the workflow runtime.
To that end you need a service contract, which is defined by an interface with the [ExternalDataExchange] attribute. This interface in turn defines an event that the service will raise when the desired DB change is detected. It also defines a method that a workflow can call to specify what change this service should be looking for. The method should accept an instance GUID so that the requesting instance can be found when the DB change is detected.
In the workflow you use a CallExternalMethodActivity to call this service's method. You then flow to a HandleExternalEventActivity which listens for the event. At this point the workflow will go idle and can be persisted. It will remain there until the service raises the event.
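A minimal sketch of that service contract, assuming WF 3.x (System.Workflow.Activities); the names are placeholders:

    using System;
    using System.Workflow.Activities;

    [Serializable]
    public class RecordChangedEventArgs : ExternalDataEventArgs
    {
        public RecordChangedEventArgs(Guid instanceId) : base(instanceId) { }
    }

    [ExternalDataExchange]
    public interface IDbWatcherService
    {
        // Called from the workflow (via a CallExternalMethodActivity) to
        // say which record this instance is waiting on.
        void WatchRecord(Guid workflowInstanceId, int recordId);

        // Raised by the service when the change is detected; the workflow's
        // HandleExternalEventActivity listens for this, and the instance id
        // carried in the event args tells the runtime which persisted
        // instance to reload and resume.
        event EventHandler<RecordChangedEventArgs> RecordChanged;
    }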