CRUD cron entries from Perl script - perl

Is it possible to control user crontab entries from a Perl script that is run by that user? Let's say I want to read, write and delete entries.
I've looked at Schedule::Cron and can't quite understand it. Am I correct in assuming it has nothing to do with the actual crontab for each user?
Also, with regard to Schedule::Cron, is it correct that it is simply a program that must always be running on the system? So if the system is turned off and on again, it will not run (unlike cron) - unless, of course, the program is itself kicked off by a different system scheduler such as cron, in which case, what's the point of it?
Ideally, I'd like to do the same thing on Windows systems with task scheduler.
The key is that the script that controls scheduling behaviour (whether that is the crontab itself or something behaving like the crontab) needs to be able to exit, and the cron entries should remain. This is because the script will be called within an event loop that controls a GUI, so if the user exits the GUI, the program needs to exit, but the cron job that the user created needs to remain. Likewise, if the GUI restarts (and the event loop restarts), it should be possible to edit and delete scheduled tasks.
(EDIT: Schedule::At for one-off jobs looks the business on *nix systems. Still struggling with Windows, however - the modules Win32::AdminMisc and Win32::TaskScheduler no longer look to be maintained)
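For anyone following along, here is a minimal, untested sketch of what I have in mind with Schedule::At; the tag, time and command are made-up placeholders, and the calls follow my reading of the module's docs:

```perl
use strict;
use warnings;
use Schedule::At;

# Create a one-off job; TIME uses the YYYYMMDDHHmm format from the docs.
Schedule::At::add(
    TIME    => '202512312330',
    COMMAND => '/usr/bin/perl /home/me/nightly.pl',
    TAG     => 'my_gui_job',          # lets us find and remove it later
);

# Read back the queue of pending at(1) jobs (keyed by job id).
my %jobs = Schedule::At::getJobs();
print "$_ => $jobs{$_}{TIME}\n" for keys %jobs;

# Delete: remove the jobs whose TAG matches ours, by job id.
for my $id ( keys %jobs ) {
    Schedule::At::remove( JOBID => $id )
        if ( $jobs{$id}{TAG} // '' ) eq 'my_gui_job';
}
```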

The most promising option I can find is Config::Crontab.
Config::Crontab - Read/Write Vixie compatible crontab(5) files
Feel free to try searching yourself at the CPAN search site.
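As a rough idea of how the read/write/delete cycle looks with Config::Crontab (an untested sketch; the marker comment and script path are invented for the example):

```perl
use strict;
use warnings;
use Config::Crontab;

# Read the invoking user's crontab (uses `crontab -l` under the hood).
my $ct = Config::Crontab->new;
$ct->read;

# Create: append a block with a marker comment and one entry.
my $block = Config::Crontab::Block->new( -data => <<'END' );
## added-by-my-gui
15 3 * * * /usr/bin/perl /home/me/nightly.pl
END
$ct->last($block);

# Read: list every active event (i.e. scheduling) line.
print $_->dump, "\n" for $ct->select( -type => 'event' );

# Delete: drop any block that carries our marker comment.
for my $b ( $ct->blocks ) {
    my @marked = $b->select( -type => 'comment', -data_re => 'added-by-my-gui' );
    $ct->remove($b) if @marked;
}

# Install the modified crontab for this user; entries persist after we exit.
$ct->write;
```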

There are solutions for Windows in the Win32 namespace (Win32::TaskScheduler). Off the top of my head I don't know of anything that would work cross-platform.
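That said, purely as a sketch of a possible Windows-side workaround (not a cross-platform module; the task name, schedule and script path below are invented), the built-in schtasks.exe tool can be driven from Perl, and the task it creates persists after the calling script exits, much like a crontab entry:

```perl
use strict;
use warnings;

my $task = 'MyGuiNightlyJob';

# Create: run the script daily at 03:15 (/F overwrites an existing task).
system( 'schtasks', '/Create', '/F',
        '/SC', 'DAILY', '/ST', '03:15',
        '/TN', $task,
        '/TR', 'perl C:\scripts\nightly.pl' ) == 0
    or warn "schtasks /Create failed: $?";

# Read: query the task's current definition.
system( 'schtasks', '/Query', '/TN', $task );

# Delete: remove it again.
system( 'schtasks', '/Delete', '/F', '/TN', $task ) == 0
    or warn "schtasks /Delete failed: $?";
```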

Related

PowerShell - how to detect an imminent Windows reboot?

I have a Windows PowerShell script that needs to run 24/7.
Occasionally Windows requires an automated update and reboot. Is there any way I can detect when this is about to happen? I'd like to ensure the script does an orderly shutdown rather than being abruptly terminated.
I'm not looking for a week's notice of a reboot, but five minutes' warning would be long enough to ensure the script closes various database tables and does some basic housekeeping rather than simply falling over in an ugly heap.
UPDATE - I know there are a lot of web articles describing how to detect when an update and/or reboot is pending, but none (that I can find) actually pin it down to a time. Some updates/reboots remain pending for hours or days. I'm looking for a flag or notification that 'this server will reboot within the next ten minutes' or something similar.

Can I open and run from multiple command line prompts in the same directory?

I want to open two command line prompts (I am using CMDer) from the same directory and run different commands at the same time.
Would those two commands interrupt each other?
One is for compiling a web application I am building (takes like 7 minutes to compile), and the other is to see the history of the commands I ran (this one should be done quickly).
Thank you!
Assuming that CMDer does nothing more than issue the same commands to the operating system that a standard cmd.exe console would, the answer is a clear "Yes, they do interfere, but it depends" :D
Breakdown:
The first part "opening multiple consoles" is certainly possible. You can open up N console windows and in each of them switch to the same directory without any problems (Except maybe RAM restrictions).
The second part, "run commands which do or do not interfere", is the tricky one. If your idea is that a console window presents you with something like an isolated environment, where you can do things as you like and, once you close the window, everything is back to normal as if you had never touched anything (think of a virtual machine snapshot which is reverted when you close the VM), then the answer is: this is not the case. There will be observable cross-console effects.
Think about deleting a file in one console window and then opening that file in a second console window: it would not be very intuitive if the file had not vanished in the second console window as well.
However, sometimes there are delays until changes to the file system become visible to another console window. It could be that you delete the file in one console, run dir in the directory where the file was sitting from another console, and still see that file in the listing. But if you try to access it, the operating system will certainly quit with an error message of the kind "File not found".
Generally you should consider a console window to be a "View" on your system. If you do something in one window, the effect will be present in the other, because you changed the underlying system which exists only once (the system is the "Model" - as in "Model-View-Controller Design Pattern" you may have heard of).
An exception to this might be changes to the environment variables. These are copied from the current state when a console window is started. And if you change the value of such a variable, the other console windows will stay unaffected.
So, in your scenario, if you let a build/compile operation run and during this process some files on your file system are created, read (locked), altered or deleted, then there is a potential conflict if the other console window tries to access the same files. It would be a so-called "race condition", that is, a non-deterministic situation in which it is unclear which state of a file the second console window will see (or both windows, if the second one also changes files which the first one wants to work with).
If there is no interference on a file level (reading the same files is allowed, writing to the same file is not), then there should be no problem of letting both tasks run at the same time.
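If you ever do need two processes to write to the same file safely, the usual answer is an advisory lock. Here is a minimal Perl sketch of my own (the file name is invented), just to make the point concrete:

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

# Two consoles can run this at once; flock() serializes the appends so
# lines from the two processes never interleave mid-record.
open my $fh, '>>', 'build.log' or die "open: $!";
flock( $fh, LOCK_EX ) or die "flock: $!";   # waits for the other writer
print {$fh} "[$$] finished a step at " . localtime() . "\n";
flock( $fh, LOCK_UN );
close $fh;
```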
However, on a very detailed view, both processes do interfere in that they need the same limited (though usually ample) CPU and RAM resources of your system. This should not pose any problems with today's PC computing power, considering features like multiple cores, 16 GB of RAM, terabytes of hard drive storage or fast SSDs, and so on.
Unless, that is, there is a very demanding, highly parallelizable, high-priority task to consider which eats up 98% of the CPU time, for example. Then there might be a considerable slowdown of other processes.
Normally, the operating system's scheduler does a good job on giving each user-process enough CPU time to finish as quickly as possible, while still presenting a responsive mouse cursor, playing some music in the background, allowing a Chrome running with more than 2 tabs ;) and uploading the newest telemetry data to some servers on the internet, all at the same time.
There are techniques which make it possible for a file to be available as a snapshot at a given timestamp. The keyword under Windows is "Shadow Copy". Without going into details, this technique allows, for example, defragmenting a file while it is being edited in some application, or a backup copying a (large) file while a delete operation runs on the same file. The operating system takes the access time into account when a process requests access to a file. So the OS could let the backup finish first before it schedules the delete operation to run, since the delete was started after the backup (in this example), or it could do even more sophisticated things to present a synchronized file system state even while it is actually changing at that moment.

Perl Job controller

I have several Perl scripts for data download, validation, database upload, etc. I need to write a job controller that can run these scripts in a specified manner.
Is there any job controller module in Perl?
There are a bunch of options and elements to what you're looking for.
Here for instance is a "job persistence engine"
http://metacpan.org/pod/Garivini
What I think you want might be more comprehensive. You could go big with something like Bamboo, which is a continuous integration/build system. There are several of those if you want to go down that route:
http://en.wikipedia.org/wiki/Continuous_integration
Or you could start with something like RabbitMQ, which bills itself as a message queuing system but has the ability to restart failed jobs and execute things in order, so it has some resilience built in. However, the actual job control software (what watches the queue and executes events?) might need to be written by you, using the Net::RabbitMQ module. I'm not sure.
http://metacpan.org/pod/Net::RabbitMQ
Here is a (Ruby) example of using RabbitMQ to manage job queuing.
How do I trigger a job when another completes?
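And purely as a sketch of what the Perl side could look like (the queue name, credentials and the system() call are placeholders, and the error handling is the part you would really need to design), Net::RabbitMQ usage is along these lines:

```perl
use strict;
use warnings;
use Net::RabbitMQ;

my $mq = Net::RabbitMQ->new();
$mq->connect( 'localhost', { user => 'guest', password => 'guest' } );
$mq->channel_open(1);
$mq->queue_declare( 1, 'perl_jobs', { durable => 1 } );

# Producer: enqueue the name of the next script to run
# (exchange '' is the default exchange, which routes by queue name).
$mq->publish( 1, 'perl_jobs', 'validate_data.pl', { exchange => '' } );

# Consumer (the job controller you would still have to write):
# pull one job off the queue and run it.
if ( my $msg = $mq->get( 1, 'perl_jobs' ) ) {
    my $script = $msg->{body};
    system( 'perl', $script ) == 0
        or warn "job $script failed: $?";   # could re-queue or alert here
}

$mq->disconnect();
```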

How do you schedule execution of a Windows Workflow?

I'd like to move my scheduled tasks into workflows so I can better monitor their execution. Currently I'm using a Windows scheduled task to call a web service that starts the process. Is there a facility that you use to schedule execution of a sequence so that it occurs every N minutes?
My optimal solution would:
Be easy to configure
Provide useful feedback on errors
Be 'fire and forget'
PS - Trying out AppFabric for Windows Server if that adds any options.
The most straightforward way I know of would be to make an executable for each workflow (it could be a console or Windows app) and have it host the workflow through code.
This way you can continue to use scheduled tasks to manage the tasks; the main issue is feedback on, and monitoring of, the process. For this you could write to the console, write to the event log, or even build a more advanced visualisation with a Windows app - although you'd have to write this yourself (or google for something!). This MS Workflow Monitoring sample might be of interest; I haven't used it myself.
Similar deal with errors, although writing to the event log would be the normal course of action in this case.
I'm not aware of any other hosts for WF, aside from things like Dynamics CRM, but that won't help you with what you're trying to do.
You need to use a scheduler. Either roll your own, use AppFabric as mentioned, or use Quartz.NET:
http://quartznet.sourceforge.net/
If you use Quartz, it's either roll your own service host or use the ready-made one and configure it using XML. I rolled my own and it worked fine.
Autorun is another free option... http://autorun.codeplex.com/

So, who should daemonize? The script or the caller?

I'm always wondering who should do it. In Ruby, we have the Daemons library, which allows Ruby scripts to daemonize themselves. And then, looking at the page for God (a process monitoring tool, similar to monit), I see that God can daemonize processes.
Any definitive answer out there?
You probably cannot get a definitive answer, as we generally end up with both: the process has the ability to daemonize itself, and the process monitor has the ability to daemonize its children.
Personally I prefer to have the process monitor or script do it, for a few reasons:
1. If the process monitor wishes to follow its children closely so it can restart them if they die, it can choose not to daemonize them. A SIGCHLD will be delivered to the monitor when one of its child processes exits. In embedded systems we do this a lot.
2. Typically when daemonizing, you also set the euid and egid. I prefer not to encode into every child process a knowledge of system-level policy like uids to use.
3. It allows re-use of the same application as either a command line tool or a daemon (I freely admit that this rarely happens in practice).
I would say it is better for your script to do it. I don't know your process monitoring tool there, but I would think users could potentially use an alternative tool, which means that having the script do it would be preferable.
If you can envision the script being run in non-daemon fashion, I would add an option to the script to enable or disable daemonization.
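To make that concrete, here is a hand-rolled Perl sketch of such a switch, following the classic perlipc detach recipe (assumptions: a POSIX system and a made-up run_main_loop(); Proc::Daemon on CPAN does the same job more carefully):

```perl
use strict;
use warnings;
use Getopt::Long;
use POSIX qw(setsid);

GetOptions( 'daemon!' => \my $daemonize ) or die "usage: $0 [--daemon]\n";

daemonize() if $daemonize;   # otherwise stay attached to the terminal

run_main_loop();

sub daemonize {
    chdir '/'                     or die "chdir: $!";
    open STDIN,  '<', '/dev/null' or die "STDIN: $!";
    open STDOUT, '>', '/dev/null' or die "STDOUT: $!";
    defined( my $pid = fork )     or die "fork: $!";
    exit 0 if $pid;               # parent returns to the shell
    setsid()                      or die "setsid: $!";
    open STDERR, '>&', \*STDOUT   or die "STDERR: $!";
}

sub run_main_loop {
    sleep 60 while 1;             # placeholder for the real work
}
```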