Using NUT to manage a UPS during coastal storms - raspberry-pi

I am trying to build a UPS power management system for a coastal property that not only protects against routine (short) power outages, but has specific behavior when put in 'storm mode' for a major event (e.g. hurricane).
The idea is to have a Raspberry Pi connected by USB to the UPS running the NUT monitoring services. In 'storm mode', upon line power loss, the Pi is to run the UPS for 5 minutes every hour - long enough to get weather data, capture some photos from exterior cameras and upload that via an LTE hotspot to a web server (or if there is no cell service, store it on disk for later uploading).
I had originally thought to use a UPS capable of running the equipment continuously for 24-48 hours, but even though the power draw is not much, the UPS itself will only run 4-5 hours with essentially no load. Running for 5 minutes per hour should easily stretch that to 2-3 days.
The question is: is it possible to turn the UPS output on/off via the NUT services? I can find scant information on exactly what NUT is capable of and what some of the variables actually mean (e.g. ups.timer.shutdown). Are common UPSes such as the CyberPower CP1500PFCLCD capable of this (e.g. responding to USB commands after being turned off, so an external controller can turn them back on even without line power)? Is this scheme feasible with common UPS equipment?

After some experimentation it appears this is feasible - the NUT documentation is very detailed on commands and parameters, but says little about how to accomplish anything other than automating computer shutdown sequences.
The Pi itself can be powered from the UPS, and with proper delays it can coordinate its own (and other equipment's) power-off and later power-on. Implementation could be in a shell script, Java, or another language of choice, but the key sequence to power up for 5 minutes every hour is something like:
1. Pi boots and stabilizes
2. Collect environment data (weather conditions, flood sensors, camera photos) for 5 minutes
3. If internet connection (LTE hotspot) is ready, post to hosted web server else save to disk
4. Issue UPS command "load.off.delay 70" which schedules the power to go off in 70 seconds (enough time to complete the following steps)
5. If UPS status is "LB" (low battery), issue UPS command "shutdown.return" and shut down the Pi. In 70 seconds the system is dead until line power comes back on.
6. Issue UPS command "load.on.delay 3300" to schedule the UPS to turn on in 55 minutes. (There are now 2 scheduled events stored in the UPS)
7. Run an orderly shutdown of the Pi.
8. The UPS will turn off in 70 seconds, then turn back on in 55 minutes, repeat from #1.
The key to all this is that the UPS can schedule power-on and power-off events independently, and when the load is turned off it takes very little power for the UPS to keep its scheduler running. It should be able to run this sequence for at least 24 hours, maybe much longer. A sketch of steps 4-7 is below.
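As a concrete illustration of steps 4-7, here is a minimal sketch in Python that shells out to NUT's upscmd/upsc utilities. The names are assumptions, not part of the original setup: a UPS called "ups" in ups.conf, an upsd user "admin"/"secret" granted instant commands in upsd.users, and a driver (e.g. usbhid-ups for the CyberPower) that actually exposes load.off.delay / load.on.delay - verify with "upscmd -l ups@localhost" first.

#!/usr/bin/env python3
import subprocess

UPS = "ups@localhost"                    # <upsname>@<host> as known to upsd (assumption)
AUTH = ["-u", "admin", "-p", "secret"]   # upsd.users credentials (assumption)

def upscmd(command, value=None):
    # Send an instant command to the UPS through NUT's upscmd utility
    args = ["upscmd"] + AUTH + [UPS, command]
    if value is not None:
        args.append(str(value))
    subprocess.run(args, check=True)

def ups_status():
    # Read ups.status (e.g. "OB LB") through upsc
    result = subprocess.run(["upsc", UPS, "ups.status"],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

# Step 4: schedule the UPS output to switch off in 70 seconds
upscmd("load.off.delay", 70)

if "LB" in ups_status():
    # Step 5: battery nearly exhausted - stay down until line power returns
    upscmd("shutdown.return")
else:
    # Step 6: schedule the output to come back on in 55 minutes
    upscmd("load.on.delay", 3300)

# Step 7: orderly shutdown of the Pi; the UPS cuts power ~70 seconds after step 4
subprocess.run(["sudo", "poweroff"], check=True)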

Related

Can a process ask for x amount of time but take y amount instead?

Suppose I am running a set of processes that ask for burst times of 3, 5, and 2 respectively, so the total expected execution time is 10 time units.
Is it possible for one of the processes to take more than what it asked for? For example, even though it asked for 3 it took 11 instead, because it was waiting on the user to enter some input, so the total execution time turns out to be 18.
This was all done with a non-preemptive CPU scheduler.
The reality is that software has no idea how long anything will take. My CPU runs at a different "nominal speed" to your CPU, both our CPUs keep changing their speed for power management reasons, and the speed of software executed by both our CPUs is affected by things like what other CPUs are doing (especially for SMT/hyper-threading) and what other devices happen to be doing at the time (their effect on caches, shared RAM bandwidth, etc.). Software also can't predict the future (e.g. guess when an IRQ will occur, take some time and upset the cache contents; guess when a read from memory will take 10 times longer because there was a single-bit error that ECC needed to correct; guess when the CPU will get hot and reduce its speed to avoid melting; etc.). It is possible to record things like "start time, burst time and end time" as they happen (to generate historical data that can be analysed later), but typically these things are only seen in fabricated academic exercises that have nothing to do with reality.
Note: I'm not saying fabricated academic exercises are bad - they're a useful tool to help learn basic theory before moving on to more advanced (and more realistic) theory.
Instead, for a non-preemptive scheduler, tasks don't try to tell the scheduler how much time they think they might take - the task can't know this information and the scheduler can't do anything with it (e.g. a non-preemptive scheduler can't preempt the task when it takes longer than it guessed it might). With a non-preemptive scheduler, a task simply runs until it calls a kernel function that waits for something (e.g. read() waiting for data from disk or network, sleep() waiting for time to pass, etc.). When that happens, the kernel function that was called ends up telling the scheduler that the task is waiting and doesn't need the CPU, and the scheduler finds a different task that can use the CPU; and if the task never calls a kernel function that waits for something, then the task runs "forever".
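To make that concrete, here is a small, purely illustrative sketch in Python (generators standing in for tasks, all names my own): each task keeps the CPU until it voluntarily "waits" by yielding, the scheduler cannot interrupt it, and the actual burst length is only known after the fact.

import time
from collections import deque

def scheduler(tasks):
    # Non-preemptive: run each ready task until it "waits" (yields) or finishes;
    # the scheduler cannot take the CPU away, it can only measure afterwards.
    ready = deque(tasks)
    while ready:
        name, task = ready.popleft()
        start = time.perf_counter()
        try:
            next(task)                       # runs until the task waits
            ready.append((name, task))       # not finished: back of the queue
        except StopIteration:
            pass                             # task is done
        burst = time.perf_counter() - start
        print(f"{name}: burst took {burst:.3f}s (only known after it happened)")

def cpu_bound(n):
    # Hogs the CPU; nothing can preempt this loop in a non-preemptive system
    sum(i * i for i in range(n))
    yield                                    # finally waits, giving the CPU back

def waits_on_user():
    # Its "burst" depends on the outside world, not on any estimate it gave
    time.sleep(0.5)                          # stand-in for waiting on user input
    yield

scheduler([("A", cpu_bound(2_000_000)),
           ("B", waits_on_user()),
           ("C", cpu_bound(500_000))])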
Of course "the task runs forever" can be bad (not just for malicious code that deliberately hogs all CPU time as a denial of service attack, but also for normal tasks that have bugs), which is why (almost?) nobody uses non-preemptive schedulers. For example; if one (lower priority) task is doing a lot of heavy processing (e.g. spending hours generating a photo-realistic picture using ray tracing techniques) and another (higher priority) task stops waiting (e.g. because it was waiting for the user to press a key and the user did press a key) then you want the higher priority task to preempt the lower priority task "immediately" (e.g. because most users don't like it when it takes hours for software to respond to their actions).

Interrupt scheduling in .NET Micro Framework

.NET MF doesn't support preemptive interrupts. Once a process is completed or the 20 msec timer assigned by the scheduler times out, the interrupts can be processed.
Is there any way to change this 20 msec to a shorter time, or to change the scheduler and make it behave like a real-time scheduler?
Alternatively, suppose the 20 msec delay before interrupt processing begins can be tolerated, but the exact time at which the interrupt occurred must be known. I think that with the time argument in an InterruptPort event handler, one can work backwards and determine the time at which the interrupt got queued.
However, what if a serial port is used and the time of data arrival at the port must be known? Is there any way to determine at what time data arrived at the serial port, or when its corresponding interrupt was queued by the framework? Thanks.
The .NET Micro Framework was designed to be used on memory-constrained microcontrollers without an MMU (Memory Management Unit). Even with its multithreading model and TCP/IP stack support, it does not run on a classical real-time operating system structure.
If you need strict real-time capability, please check out Windows CE 6.0 R3 and Windows Compact 7, which are hard real-time operating systems with native 32-bit real-time support.
Hope my answer helps you.

Basic client-server synchronization

Let's take a simple case: we have a cloud, which the client draws, and a server which sends commands to move the cloud. Assume Client 1 runs at 60 fps, Client 2 runs at 30 fps, and we want a reasonably smooth cloud transition.
First problem: the server runs at a different rate than the clients, and if it sends a move command every tick, it will spam commands much faster than the clients can draw.
Possible solution 1: the client sends an "I want an update" command after finishing each frame.
Possible solution 2: the server sends move-cloud commands every x ms, but then the cloud will not move smoothly. Can be combined with solution 3.
Possible solution 3: the server sends "start moving the cloud with speed x" and "change cloud direction" instead of "move cloud to x". But the problem again is that the check for changing the cloud's direction at the edge of the screen will trigger faster than the cloud is actually drawn on the client.
Also, Client 2 draws 2 times slower than Client 1 - how do I compensate for this?
How do I sync server logic with client drawing in a basic way?
Solution 3 sounds like the best one by far, if you can do it. All of your other solutions are much too chatty: they require extremely frequent communication between the client and server, much too frequent unless servers and clients have a very good network connection between them.
If your cloud movements are all simple enough that they can be sent to the clients as vectors such that the client can move the cloud along one vector for an extended period of time (many frames) before receiving new instructions (a new starting location and vector) from the server then you should definitely do that. If your cloud movements are not so easily representable as simple vectors then you can choose a more complex model (e.g. add instructions to transform the vector over time) and send the model's parameters to the clients.
If the cloud is part of a larger world and the clients track time in the world, then each of the sets of instructions coming from the server should include a timestamp representing the time when the initial conditions in the model are valid.
As for your question about how to compensate for client 2 drawing two times slower than client 1, you need to make your world clock tick at a consistent rate on both clients. This rate need not have any relationship with the screen refresh rate on either client.
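A hypothetical sketch of the client side of that idea in Python (all names mine): the server's message carries a world timestamp, a starting position and a velocity vector, and every client, whatever its frame rate, derives the cloud's position from its world clock rather than from a per-frame step.

import time

class CloudState:
    # Last instructions from the server: position, velocity vector, and the
    # world time at which those initial conditions were valid.
    def __init__(self, x, y, vx, vy, world_time):
        self.x, self.y = x, y
        self.vx, self.vy = vx, vy
        self.world_time = world_time

def world_clock():
    # Stand-in for a world clock; in a real game this would be kept in step
    # with the server, but it must tick at the same rate on every client.
    return time.monotonic()

def cloud_position(state, now):
    # Extrapolate along the vector since the server's timestamp.
    dt = now - state.world_time
    return state.x + state.vx * dt, state.y + state.vy * dt

# Both a 60 fps and a 30 fps client call cloud_position() once per frame and
# agree on where the cloud is, because position depends on time, not on frames.
state = CloudState(x=0.0, y=100.0, vx=12.0, vy=0.0, world_time=world_clock())
for _ in range(3):
    time.sleep(1 / 30)                       # pretend to be the 30 fps client
    print(cloud_position(state, world_clock()))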

Throttling in VBA

The Back Story
A little while back, I was asked if we could implement a mass email solution in house so that we would have better control over sensitive information. I proposed a two-step plan: develop a prototype in Excel/VBA/CDO for user familiarity, then phase in a .NET/SQL Server solution for speed and robustness.
What's Changed
3 months into the 2nd phase, management decides to go ahead and outsource email marketing to another company, which is fine. The 1st problem is that management has not made a move on a company to go through, so I am still implicitly obligated to make the current prototype work.
Still, the prototype works, or at least it did. The 2nd problem came when our Exchange 2003 relay server got switched to Exchange 2010. It turns out that more "safety" features, like throttling policies, are turned on by default, and I have been helping the sysadmin iron out a server config that works. What's happening is that after 100+ emails get sent, the server starts rejecting the send requests with the following error:
The message could not be sent to the SMTP server. The transport error code is 0x800ccc67.
The server response was 421 4.3.2 The maximum number of concurrent connections has exceeded a limit, closing transmission channel
Unfortunately, we only get to test the server configuration when Marketing has something to send out, which is about once per month.
What's Next?
I am looking at Excel's VBA Timer function to help throttle my main loop pump and slow down the send requests. The 3rd problem, from what I understand from reading, is that the best precision I can get from the timer is 1 second. 1 email per second would take considerably longer (about 4x-5x longer) than the 5 emails/sec we have been sending at. This turns a 3-hour process into an all-day process that runs past the hours of staff availability. I suppose I could invert the rate by sending 5 emails for every second that passes, but that creates more of a burst effect as opposed to the steady rate I could get with more precision on the timer. In my opinion, this creates a less controlled process, and I am not sure how the server will handle bursts as opposed to a steady rate. What are my options?
You can use the Windows Sleep API if you need finer timer control. Its units are milliseconds, so for example a 200 ms pause between sends gives a steady 5 emails per second rather than a burst every second:
'Win32 Sleep: pauses the calling thread for the given number of milliseconds
'(on 64-bit Office, declare it as Private Declare PtrSafe Sub Sleep ...)
Private Declare Sub Sleep Lib "kernel32" (ByVal dwMilliseconds As Long)

Public Sub Testing()
    'do something
    Sleep 1000   'sleep for 1000 ms (1 second)
    'continue doing something
End Sub
I'm not very familiar with Exchange, so I can't comment on the throttling policies in place.

Looking for ideas on how to slow down or speed up time in a single process

First, let me start with this: I'm interested in reverse engineering the in-game timing of both offline and online games, experimenting with the dependencies these games have on the local system clock and - if it's online - the online clock, and seeing what happens with these games should I speed up or slow down the clock on a Microsoft Windows based system.
I found this utility (https://www.nirsoft.net/utils/run_as_date.html) which intercepts GetSystemTime, GetLocalTime, GetSystemTimeAsFileTime, NtQuerySystemTime, and GetSystemTimePreciseAsFileTime calls made to Windows and returns a corresponding date and time that I set - which can be in the future or the past.
The problem with this utility is that the time runs in complete synchronization with my system clock: if I set the date and time for precisely one year in the future for this application, every second on my clock is still one second on the intercepted call's clock.
What I would LIKE to do is decouple time a little more: the same type of application, but when I set the start date and time I also set a 'relative speed', where 1/1 would be one second in the game world for every second in my real world, and 1/5 might mean five seconds in the game world for every one second in the real world.
Any advice, hints, or clues on how I can achieve the desired results?
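No answer was posted here, but the arithmetic such a tool needs is straightforward. Below is a hypothetical Python sketch of the scaled-clock calculation only; an actual tool would, like RunAsDate, have to inject this logic into the intercepted GetSystemTime/GetLocalTime-style calls, which the sketch does not attempt.

import time
from datetime import datetime, timedelta

class ScaledClock:
    # A fake clock decoupled from the real one: it starts at a chosen date/time
    # and advances `speed` fake seconds for every real second that passes.
    def __init__(self, fake_start, speed):
        self.fake_start = fake_start             # e.g. one year in the future
        self.speed = speed                       # 5.0 => five game seconds per real second
        self.real_start = time.monotonic()       # real-time reference point

    def now(self):
        real_elapsed = time.monotonic() - self.real_start
        return self.fake_start + timedelta(seconds=real_elapsed * self.speed)

# Example: start at an arbitrary future date and run game time 5x faster
clock = ScaledClock(datetime(2026, 6, 1, 12, 0, 0), speed=5.0)
time.sleep(2)
print(clock.now())   # roughly 10 fake seconds past the fake start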