I have a Windows PowerShell script that needs to run 24/7.
Occasionally Windows requires an automated update and reboot. Is there any way I can detect when this is about to happen? I'd like to ensure the script does an orderly shutdown rather than being abruptly terminated.
I'm not looking for a week's notice of a reboot, but five minutes' warning would be long enough to ensure the script closes various database tables and does some basic housekeeping rather than simply falling over in an ugly heap.
UPDATE - I know there are a lot of web articles describing how to detect when an update and/or reboot is pending, but none (that I can find) actually pin it down to a time. Some updates/reboots remain pending for hours or days. I'm looking for a flag or notification that 'this server will reboot within the next ten minutes' or something similar.
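For reference, this is the kind of pending-reboot check those articles suggest, as a minimal PowerShell sketch (the registry paths are the commonly cited ones; Invoke-Cleanup is just a placeholder for my own housekeeping). It only tells me a reboot is pending, not when it will happen, so it doesn't solve the timing problem by itself:

# Rough sketch: these keys indicate a reboot is *pending*, with no timing information.
$pendingPaths = @(
    'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\RebootPending',
    'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update\RebootRequired'
)
if ($pendingPaths | Where-Object { Test-Path $_ }) {
    Write-Warning 'A reboot is pending (no indication of when it will actually happen).'
    Invoke-Cleanup   # hypothetical placeholder: close database tables, do housekeeping, exit
}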
I want to open two command line prompts (I am using CMDer) from the same directory and run different commands at the same time.
Would those two commands interrupt each other?
One is for compiling a web application I am building (takes like 7 minutes to compile), and the other is to see the history of the commands I ran (this one should be done quickly).
Thank you!
Assuming that CMDer does nothing more than issue the same commands to the operating system that a standard cmd.exe console would, the answer is a clear "Yes, they do interfere, but it depends" :D
Breakdown:
The first part, "opening multiple consoles", is certainly possible. You can open N console windows and switch each of them to the same directory without any problems (except maybe RAM restrictions).
The second part, "run commands which do or do not interfere", is the tricky one. If your idea is that a console window gives you something like an isolated environment, where you can do things as you like and closing the window puts everything back to normal as if you had never touched anything (think of a virtual machine snapshot that is lost/reverted when you close the VM), then the answer is: that is not the case. There will be observable cross-console effects.
Think about deleting a file in one console window and then opening that file in a second console window: it would not be very intuitive if the file had not vanished in the second console window as well.
However, sometimes there are delays before changes to the file system become visible to another console window. It could be that you delete a file in one console, run dir on its directory in another console, and still see the file in the listing. But if you try to access it, the operating system will certainly respond with an error of the kind "File not found".
Generally, you should consider a console window to be a "View" on your system. If you do something in one window, the effect will be present in the other, because you changed the underlying system, which exists only once (the system is the "Model", as in the "Model-View-Controller" design pattern you may have heard of).
An exception to this is environment variables. These are copied from the current state when a console window is started, so if you change the value of such a variable in one window, the other console windows stay unaffected.
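A quick way to see this, assuming a plain cmd.exe session in each CMDer tab (the variable name is made up):

REM In console window 1:
set GREETING=hello
echo %GREETING%
REM ...prints: hello

REM In console window 2 (whether it was opened before or after window 1 set the variable):
echo %GREETING%
REM ...prints the literal text %GREETING%, because the variable is not defined in this window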
So, in your scenario, if the build/compile operation creates, reads (locks), alters or deletes files on your file system, and the other console window tries to access the same files, you have a potential conflict. This is a so-called "race condition": it is non-deterministic which state of a file the second console window will see (or both windows, if the second one also changes files the first one wants to work with).
If there is no interference at the file level (reading the same files is allowed, writing to the same file is not), then there should be no problem letting both tasks run at the same time.
However, at a very detailed level, both processes do interfere in that they compete for the same limited, though usually plentiful, CPU and RAM resources of your system. This should not pose any problem with today's PC computing power, given multiple cores, 16 GB of RAM, terabytes of hard drive storage, fast SSDs, and so on.
The exception would be a very demanding, highly parallel, high-priority task which eats up 98% of the CPU time, for example; then there may be a considerable slowdown of other processes.
Normally, the operating system's scheduler does a good job of giving each user process enough CPU time to finish as quickly as possible, while still presenting a responsive mouse cursor, playing some music in the background, letting Chrome run with more than 2 tabs ;) and uploading the newest telemetry data to some servers on the internet, all at the same time.
There are also techniques which make a file available as a snapshot from a given point in time; the keyword under Windows is "Shadow Copy". Without going into details, this technique allows, for example, defragmenting a file while it is being edited in some application, or a backup copying a (large) file while a delete operation runs on the same file. The operating system takes the access time into account when a process requests access to a file, so it could let the backup finish first and only then schedule the delete operation (since the delete was started after the backup in this example), or do even more sophisticated things to present a consistent file system state even while it is actually changing.
I have an MVC web site, a storage queue and a WebJob. Users can request the generation of a set of reports by clicking a button on the web page. This inserts a message into the storage queue. In the past, the WebJob ran continuously and processed those requests fine. But the demand and size of the reports has grown to the point where the WebJob is slowing down the web app. I would like to still place the request message in the queue, but delay processing of all requests until the evening, when the web app is mostly idle. This would allow me to continue using the WebJob code and QueueTrigger functionality without having to waste resources by moving to a dedicated Worker Role, etc. The reports don't need to be generated immediately, so a delay is acceptable.
I don't see a built-in way to set a time window on processing. The only thing I have found is a pair of PowerShell cmdlets for starting and stopping WebJobs (Start-AzureWebsiteJob / Stop-AzureWebsiteJob). So I was thinking I could create a scheduled PowerShell job that runs at midnight, starts the WebJob, lets it run, and then runs again early in the AM and stops it.
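Something like this is what I had in mind for the scheduled job (just a sketch; I'd still need to check the exact parameters of those cmdlets, and the site and job names here are made up):

# Run by Task Scheduler at midnight: start the continuous WebJob
Start-AzureWebsiteJob -Name 'my-web-app' -JobName 'ReportGenerator' -JobType Continuous

# Run by a second scheduled task early in the AM: stop it again
Stop-AzureWebsiteJob -Name 'my-web-app' -JobName 'ReportGenerator'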
Does anyone know of a better option than this? Anything more "official" that perhaps I could not find?
One possible solution would be to hide the messages in the queue for a certain amount of time when they are inserted.
If you're using the AddMessage method, you can specify this timespan in the initialVisibilityDelay parameter.
This ensures that the messages are not immediately visible in the queue for the WebJob to pick up, and become visible only once the timespan has elapsed.
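A rough sketch of what that could look like, assuming the classic Microsoft.WindowsAzure.Storage SDK (the queue name, payload and target hour here are made up):

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class ReportRequestEnqueuer
{
    static void EnqueueForTonight(string payload)
    {
        // AzureWebJobsStorage is the usual connection-string app setting for WebJobs
        var account = CloudStorageAccount.Parse(
            Environment.GetEnvironmentVariable("AzureWebJobsStorage"));
        var queue = account.CreateCloudQueueClient().GetQueueReference("report-requests");
        queue.CreateIfNotExists();

        // Hide the message until ~10pm tonight so the QueueTrigger won't see it earlier
        var delay = DateTime.Today.AddHours(22) - DateTime.Now;
        if (delay < TimeSpan.Zero) delay = TimeSpan.Zero;

        queue.AddMessage(new CloudQueueMessage(payload),
                         timeToLive: null,
                         initialVisibilityDelay: delay);
    }
}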
Will such a solution work for you?
Maybe I didn't fully understand your question, but couldn't you use a "Triggered" WebJob that is triggered by a CRON schedule? You can then limit it to specific hours:
0 * 20-22 * * *
This example will run at the start of every minute during hours 20-22, i.e. from 8pm until just before 11pm (the six fields are second, minute, hour, day, month, day of week)
I have a set of scripts that run and spit out various bits of output. Sometimes they'll just stop until I hit enter. I have nothing in my script that prompts for information from the user.
At first I thought maybe it just wasn't flushing the output, but I've sat and waited to see what would happen, and it doesn't act as if it had been processing in the background and simply not flushing output to the console (it would be further along by then).
The strange thing is that it happens at different points in the script.
Does anyone have any input on this? Anything I can look at specifically to identify this? This script will eventually be kicked off by another process and I can't have it randomly waiting and sitting.
Is it possible to control user crontab entries from a perl script that is run by that user? Let's say I want to read, write and delete entries.
I've looked at Schedule::Cron and can't quite understand it. Am I correct in assuming it has nothing to do with the actual crontab for each user?
Also, with regard to Schedule::Cron, is it correct that it is simply a program that must always be running on the system? So if the system is turned off and on again, it will not be running (unlike cron), unless, of course, the program is kicked off by a different system scheduler, like cron itself; in that case, what's the point of it?
Ideally, I'd like to do the same thing on Windows systems with task scheduler.
The key is that the script that controls scheduling behaviour (whether that is the crontab itself or something behaving like the crontab) needs to be able to exit, and the cron entries should remain. This is because the script will be called within an event loop that controls a GUI, so if the user exits the GUI, the program needs to exit, but the cron job that the user created needs to remain. Likewise, if the GUI restarts (and the event loop restarts), it should be possible to edit and delete scheduled tasks.
(EDIT: Schedule::At for one-off jobs looks the business on *nix systems. Still struggling with Windows, however: the modules Win32::AdminMisc and Win32::TaskScheduler no longer look to be maintained.)
The most promising option I can find is Config::Crontab.
Config::Crontab - Read/Write Vixie compatible crontab(5) files
Feel free to try searching yourself at the CPAN search site.
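From the module's synopsis, adding an entry looks roughly like this (a sketch only; the schedule and command are made up). Because it writes to the real crontab, the entry survives after the script exits, which matches your GUI/event-loop requirement:

use strict;
use warnings;
use Config::Crontab;

# Read the invoking user's crontab
my $ct = Config::Crontab->new;
$ct->read;

# Add an entry: run a (hypothetical) cleanup script nightly at 02:15
my $event = Config::Crontab::Event->new(
    -minute  => 15,
    -hour    => 2,
    -command => '/home/me/cleanup.pl',
);
my $block = Config::Crontab::Block->new;
$block->last($event);
$ct->last($block);

# Install the modified crontab; the entry persists after this script exits
$ct->write;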
There are solutions for Windows in the Win32 namespace (Win32::TaskScheduler). Off the top of my head I don't know of anything that would work cross-platform.
This is a shared hosting environment. I control the server, but not necessarily the content. I've got a client with a Perl script that seems to run out of control every now and then and suck down 50% of the processor until the process is killed.
With ASP scripts, I'm able to restrict the amount of time the script can run, and IIS will simply shut it down after, say, 90 seconds. This doesn't work for Perl scripts, since they run as CGI processes (IIS actually launches an external process to execute the script).
Similarly, techniques that look for excess resource consumption in a worker process will likely not see this, since the resource being consumed (the processor) is being chewed up by a child process rather than the worker process itself.
Is there a way to make IIS abort a Perl script (or other CGI-type process) that's running too long? How?
On a UNIX-style system, I would use a signal handler trapping ALRM events, then use the alarm function to start a timer before starting an action that I expected might time out. If the action completed, I'd use alarm(0) to turn off the alarm and exit normally; otherwise the signal handler should pick it up and close everything up gracefully.
I have not worked with Perl on Windows in a while, and while Windows is somewhat POSIXy, I cannot guarantee this will work; you'll have to check the Perl documentation to see whether, or to what extent, signals are supported on your platform.
More detailed information on signal handling and this sort of self-destruct programming using alarm() can be found in the Perl Cookbook. Here's a brief example lifted from another post and modified a little:
eval {
    # Create signal handler and make it local so it falls out of scope
    # outside the eval block
    local $SIG{ALRM} = sub {
        print "Print this if we time out, then die.\n";
        die "alarm\n";
    };

    # Set the alarm, take your chance running the routine, and turn off
    # the alarm if it completes.
    alarm(90);
    routine_that_might_take_a_while();
    alarm(0);
};
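The Cookbook pattern usually follows the eval with a check of $@ so you can tell a timeout apart from any other failure, roughly:

if ($@) {
    die $@ unless $@ eq "alarm\n";   # propagate unexpected errors
    # We timed out: close files/handles and exit gracefully here.
}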
The ASP script timeout applies to all scripting languages. If the script is running in an ASP page, the script timeout will close the offending page.
An update on this one...
It turns out that this particular script apparently is a little buggy, and that the Googlebot has the uncanny ability to "press its buttons" and drive it crazy. The script is an older, commercial application that does calendaring. Apparently, it displays links for "next month" and "previous month", and if you follow the "next month" link too many times, you fall off a cliff. The resulting page, however, still includes a "next month" link. Googlebot would continuously beat the script to death and chew up the processor.
Curiously, adding a robots.txt with Disallow: / didn't solve the problem. Either the Googlebot had already gotten hold of the script and wouldn't let go, or else it was simply disregarding the robots.txt.
Anyway, Microsoft's Process Explorer (http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx) was a huge help, as it allowed me to see the environment for the perl.exe process in more detail, and I was able to determine from it that it was the Googlebot causing my problems.
Once I knew that (and determined that robots.txt wouldn't solve the problem), I was able to use IIS directly to block all traffic to this site from *.googlebot.com, which worked well in this case, since we don't care if Google indexes this content.
Thanks much for the other ideas that everyone posted!
Googling for "iis cpu limit" gives these hits:
http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/38fb0130-b14b-48d5-a0a2-05ca131cf4f2.mspx?mfr=true
"The CPU monitoring feature monitors and automatically shuts down worker processes that consume large amounts of CPU time. CPU monitoring is enabled for individual application pools."
http://technet.microsoft.com/en-us/library/cc728189.aspx
"By using CPU monitoring, you can monitor worker processes for CPU usage and optionally shut down the worker processes that consume large amounts of CPU time. CPU monitoring is only available in worker process isolation mode."