In the blocking world, it is highly recommended to set aggressive timeouts in order to fail fast and release the underlying resources (Section 5.1 of https://pragprog.com/book/mnee/release-it).
In the async/non-blocking world, requests do not block the main thread and resources become available immediately for further processing. Timeouts are still necessary, but does it still make sense to set aggressive values?
In real-time software, network requests or control operations on machinery take a large amount of time in comparison to day-to-day software operations. For instance, telling a step motor to advance to a particular position may take seconds, while normal operations might take milliseconds. Let's say that a typical step motor advance takes n milliseconds, and one that goes the maximum distance takes m milliseconds.
An aggressive timeout would compute n and add a small fudge factor, perhaps 10%, and fail quickly if the goal wasn't reached in that time. As you stated, the aggressive timeout will allow you to release resources. A non-aggressive timeout of m plus epsilon would fail much more slowly, and tie up resources unnecessarily.
In the asynchronous software world, there are a number of other choices between success and failure. An asynchronous operation might also calculate n plus 10% and, if user feedback is desired, put up a progress bar showing progress toward that estimated completion time. When the timeout is reached, the progress bar would be full, but you might make it pulse or change color to indicate the operation is taking longer than expected. If the step motor still had not reached its goal after m milliseconds, you could then announce a failure.
In other cases, when feedback is not important, you could certainly use m plus epsilon as your timeout.
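For the feedback case, a minimal asyncio sketch of this soft/hard timeout idea might look like the following; advance_motor() and the values for n and m are made-up placeholders, not a real motor API.

```python
import asyncio

async def advance_motor():
    """Stand-in for the real motor operation (hypothetical)."""
    await asyncio.sleep(1.2)

async def advance_with_two_timeouts(n_ms=1000, m_ms=3000):
    soft = n_ms * 1.1 / 1000   # n plus a 10% fudge factor, in seconds
    hard = m_ms / 1000         # m, the physical maximum

    task = asyncio.create_task(advance_motor())
    try:
        # Wait only up to the aggressive soft timeout first.
        await asyncio.wait_for(asyncio.shield(task), timeout=soft)
        print("finished within the expected time")
    except asyncio.TimeoutError:
        # Soft timeout reached: e.g. pulse the progress bar, keep waiting.
        print("taking longer than expected...")
        try:
            await asyncio.wait_for(task, timeout=hard - soft)
            print("finished late, but within the hard limit")
        except asyncio.TimeoutError:
            print("failure: goal not reached within m milliseconds")

asyncio.run(advance_with_two_timeouts())
```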
Setup and Goal:
I want to solve a vehicle routing problem with time windows and optional stops. I am looking for a solution that serves as many stops as possible. There is no need to minimize arc costs; I am only interested in the number of serviced stops. For each stop I have defined a cost for not servicing it. I define the cost of an arc (i, j) to be the transit time in seconds, because my understanding is that this is useful in problems with time windows.
Problem:
The arc costs interfere with the penalties for dropping stops. The solver might drop a stop because getting there is too expensive. But I don't care about the cost of getting to a stop.
Possible Solutions:
1) Multiply the stop drop penalties by a large number so that it is much more expensive to drop a stop than to travel between stops.
2) Remove the arc cost from the objective function but keep the arc cost available for the first solution strategy.
Question:
Is 2) possible in or-tools? What's the best practice for guiding the first solution strategy without accumulating costs in doing so?
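For reference, a rough sketch of option 1) with or-tools' routing solver might look like the code below; the transit-time matrix, drop penalties, and scaling factor are made-up placeholders.

```python
from ortools.constraint_solver import pywrapcp, routing_enums_pb2

transit = [          # transit times in seconds; node 0 is the depot
    [0, 10, 20],
    [10, 0, 15],
    [20, 15, 0],
]
drop_penalty = [0, 100, 100]   # cost of not servicing each stop
SCALE = 10_000                 # make dropping far more expensive than any arc

manager = pywrapcp.RoutingIndexManager(len(transit), 1, 0)  # 1 vehicle, depot 0
routing = pywrapcp.RoutingModel(manager)

def time_cb(from_index, to_index):
    return transit[manager.IndexToNode(from_index)][manager.IndexToNode(to_index)]

cb = routing.RegisterTransitCallback(time_cb)
routing.SetArcCostEvaluatorOfAllVehicles(cb)   # arc cost = transit time

# A disjunction with a penalty makes each stop optional (droppable at that cost).
for node in range(1, len(transit)):
    routing.AddDisjunction([manager.NodeToIndex(node)], drop_penalty[node] * SCALE)

params = pywrapcp.DefaultRoutingSearchParameters()
params.first_solution_strategy = (
    routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC
)
solution = routing.SolveWithParameters(params)
print("objective:", solution.ObjectiveValue() if solution else None)
```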
I want my RL agent to reach the goal as quickly as possible and at the same time to minimize the number of times it uses a specific resource T (which sometimes though is necessary).
I thought of setting up the immediate rewards as -1 per step, an additional -1 if the agent uses T and 0 if it reaches the goal.
But the additional -1 is completely arbitrary. How do I decide how much punishment the agent should get for using T?
You should use a reward function which mimics your own values. If the resource is expensive (valuable to you), then the punishment for consuming it should be harsh. The same thing goes for time (which is also a resource if you think about it).
If the ratio between the two punishments (the one for time consumption and the one for resource consumption) is in accordance with how you value these resources, then the agent will act precisely in your interest. If you get it wrong (perhaps because you don't know the precise cost of the resource or the precise cost of slow learning), then it will strive for a pseudo-optimal solution rather than an optimal one, which in a lot of cases is okay.
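As a minimal sketch, the reward function could encode that ratio explicitly; the particular costs below are made-up placeholders for whatever you decide a time step and a use of T are actually worth to you.

```python
STEP_COST = 1.0       # punishment per time step
RESOURCE_COST = 5.0   # punishment per use of T (here: one use of T "costs" 5 steps)

def reward(reached_goal: bool, used_resource: bool) -> float:
    if reached_goal:
        return 0.0
    r = -STEP_COST
    if used_resource:
        r -= RESOURCE_COST
    return r
```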
I'm learning about the differences between Polling and Interrupts for I/O in my OS class and one of the things my teacher mentioned was that the speed of the I/O device can make a difference in which method would be better. He didn't follow up on it but I've been wracking my brain about it and I can't figure out why. I feel like using Interrupts is almost always better and I just don't see how the speed of the I/O device has anything to do with it.
The only advantage of polling comes when you don't care about every change that occurs.
Assume you have a real-time system that measures the temperature of a vat of molten plastic used for molding. Let's also say that your device can measure to a resolution of 1/1000 of a degree and can take a new temperature reading every 1/10,000 of a second.
However, you only need the temperature every second and you only need to know the temperature within 1/10 of a degree.
In that kind of environment, polling the device might be preferable. Make one polling request every second. If you used interrupts, you could get 10,000 interrupts a second as the temperature moved +/- 1/1000 of a degree.
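A minimal sketch of that once-per-second polling loop, with a hypothetical read_temperature() standing in for the real device access:

```python
import time

def read_temperature() -> float:
    """Hypothetical driver call; the real device reads to 1/1000 of a degree."""
    return 180.123

def poll_vat_temperature():
    while True:
        temp = read_temperature()
        # Only 1/10 of a degree matters, so round away the extra resolution.
        print(f"vat temperature: {temp:.1f}")
        time.sleep(1.0)   # one polling request per second
```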
Polling used to be common with certain I/O devices, such as joysticks and pointing devices.
That said, there is VERY little need for polling and it has pretty much gone away.
Generally you would want to use interrupts, because polling can waste a lot of CPU cycles. However, if the events are frequent and synchronous (and other factors apply, e.g. short polling times), polling can be a good alternative, especially because an interrupt creates more overhead than a polling cycle.
You might want to take a look at this thread as well for more detail:
Polling or Interrupt based method
Is there a way to calculate the electricity consumed to load and render a webpage (frontend)? I was thinking of a 'test' made with phantomjs for example:
load a web page
scroll to the bottom
And measure how much electricity was needed. I can perhaps extrapolate from CPU cycles. But PhantomJS is headless, and rendering in a real browser is certainly different. Perhaps it's impossible to do real measurements, but with an index it may be possible to compare websites.
Do you have other suggestions?
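For concreteness, a rough sketch of the proposed test, using Selenium with a real browser instead of the headless PhantomJS (the URL and timings are placeholders):

```python
import time
from selenium import webdriver

driver = webdriver.Chrome()
try:
    driver.get("https://example.com")   # load a web page
    time.sleep(5)                       # let rendering settle
    driver.execute_script(
        "window.scrollTo(0, document.body.scrollHeight);"  # scroll to the bottom
    )
    time.sleep(5)                       # keep the workload running long enough to measure
finally:
    driver.quit()
```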
It's pretty much impossible to measure this internally in modern processors (anything more recent than 286). By internally, I mean by counting cycles. This is because different parts of the processor consume different levels of energy per cycle depending upon the instruction.
That said, you can make your measurements. Stick a power meter between the wall and the processor. Here's a procedure:
Measure the baseline energy usage, i.e. nothing running except the OS and the browser, with the browser completely static (not doing anything). You need to make sure that everything is in steady state (SS), meaning you should start your measurements only after several minutes of idle.
Measure the usage while performing the operation you want. Again, you want to avoid any start-up and shutdown work, so make sure you start measuring at least 15 seconds after you start the operation. Stopping isn't an issue, since the browser will execute any termination code after you finish your measurement.
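As a rough sketch, turning those two measurements into an energy estimate might look like this; read_power_watts() is a hypothetical hook for whatever interface your power meter exposes.

```python
import time

def read_power_watts() -> float:
    """Hypothetical hook returning the meter's current reading in watts."""
    return 45.0

def average_power(duration_s: float, interval_s: float = 0.5) -> float:
    """Sample the meter for duration_s seconds and return the mean power in watts."""
    samples = []
    end = time.time() + duration_s
    while time.time() < end:
        samples.append(read_power_watts())
        time.sleep(interval_s)
    return sum(samples) / len(samples)

baseline_w = average_power(60)   # step 1: idle OS + static browser, in steady state

input("Start the page-load/scroll workload, wait ~15 s, then press Enter: ")
active_w = average_power(60)     # step 2: while the workload is running

extra_joules = (active_w - baseline_w) * 60   # watts * seconds = joules
print(f"extra energy over 60 s: {extra_joules:.1f} J")
```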
Sounds simple, right? Unfortunately, because of the nature of your measurements, there are some gotchas.
Do you recall your physics classes (or EE classes) that talked about signal-to-noise ratios? Well, a scroll-down uses very little energy, so the signal (scrolling) is well within the noise (normal background processes). This means you have to take a LOT of samples to get anything useful.
Your browser startup energy usage, or anything else that uses a decent amount of processing, is much easier to measure (better signal to noise ratio).
Also, make sure you understand the underlying electronics. For example, power is V*A (voltage times amperage) only when both V and A are in phase; otherwise you have to account for the power factor. I don't think this will be an issue, since I'm pretty sure they are in phase for computers, and any decent power meter understands the difference anyway.
I'm guessing you intend to do this for mobile devices. Your measurements will only be roughly the same from processor to processor. This is due to architectural differences from generation to generation, and from manufacturer to manufacturer.
Good luck.
When discussing criteria for operating systems, I keep hearing the terms interrupt latency and OS jitter, and now I ask myself: what is the difference between the two?
In my opinion, interrupt latency is the delay from the occurrence of an interrupt until the interrupt service routine (ISR) is entered.
Jitter, by contrast, is how much the moment of entering the ISR varies over time.
Is that your understanding as well?
Your understanding is basically correct.
Latency = Delay between an event happening in the real world and code responding to the event.
Jitter = Differences in Latencies between two or more events.
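As a tiny worked example of those two definitions, with made-up timestamps:

```python
# When each real-world event happened vs. when its handler actually ran (seconds).
event_times   = [0.000, 1.000, 2.000, 3.000]
handler_times = [0.010, 1.012, 2.009, 3.030]

latencies = [h - e for h, e in zip(handler_times, event_times)]  # ~10, 12, 9, 30 ms
jitter = max(latencies) - min(latencies)                         # ~21 ms spread

print("latencies (s):", latencies)
print("jitter (s):", jitter)
```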
In the realm of clustered computing, especially when dealing with massive scale out solutions, there are cases where work distributed across many systems (and many many processor cores) needs to complete in fairly predictable time-frames. An operating system, and the software stack being leveraged, can introduce some variability in the run-times of these "chunks" of work. This variability is often referred to as "OS Jitter". link
Interrupt latency, as you said, is the time between the interrupt signal and entry into the interrupt handler.
The two concepts are orthogonal to each other. In practice, however, more interrupts generally imply more OS jitter.