I have some CMD/PowerShell scripts that may be run from the command line or as a scheduled task.
When running as a scheduled task, some options may not be available (e.g. a GUI) and some need to be used differently (e.g. error logging to a file or the event log instead of the screen).
Is there a way to find out whether the script is currently running as a scheduled task, and to get the name of the scheduled task (for logging purposes)?
If needed, I can use advanced programming tools such as .NET, C#, etc.
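For illustration, here is one way this could be approached in PowerShell. The explicit `-TaskName` parameter is a hypothetical convention you would add to the task's action yourself; the parent-process check is a heuristic sketch, not a documented contract.

```powershell
# Sketch: the most reliable approach is to pass the task name explicitly in the
# scheduled task's action, e.g.:
#   powershell.exe -File C:\Scripts\MyScript.ps1 -TaskName 'NightlyJob'
param([string]$TaskName)

if ($TaskName) {
    # Launched by the Task Scheduler: log to a file instead of the screen
    Add-Content -Path "$env:TEMP\MyScript.log" -Value "Running as task '$TaskName'"
} else {
    # Heuristic fallback (assumption): inspect the parent process. On current
    # Windows the Schedule service hosts tasks under svchost.exe; older
    # versions used taskeng.exe.
    $parentId = (Get-CimInstance Win32_Process -Filter "ProcessId = $PID").ParentProcessId
    $parent   = (Get-CimInstance Win32_Process -Filter "ProcessId = $parentId").Name
    if ($parent -in 'svchost.exe', 'taskeng.exe') {
        Write-Output 'Probably running as a scheduled task'
    } else {
        Write-Output 'Probably running interactively'
    }
}
```

The explicit-parameter route also answers the task-name part of the question for free, since the heuristic alone cannot recover which task launched the script.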
I am looking to run a scheduled pipeline that executes a PowerShell script against machines configured in a deployment group.
So far I've only seen that I can use a "Run PowerShell on Target Machines" task or write the PowerShell script itself to target specific machines.
I'd rather somehow use the deployment group configuration, because that is where we set up which machines apply to each environment. Is there a way to run a PowerShell script against the machines in a deployment group?
I have a task scheduled in ECS that is known to work both when run manually via the "Run New Task" option and when triggered by a cron expression.
However, the logs only appear in CloudWatch when I run the task manually (using the same task definition, settings, etc.). When the task runs on the schedule, the logs do not appear.
I've verified in CloudTrail that the task did run successfully when scheduled and there appear to be no errors, either.
Has anyone else come up against this? I've done some digging online, and while I have seen mention of it, no one has posted a resolution or answer as far as I can tell.
Thanks in advance.
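For anyone comparing setups: the CloudWatch side of an ECS task is controlled by the container definition's `logConfiguration` block, so it is worth confirming the scheduled rule references the same task definition revision that contains it. A typical `awslogs` fragment looks like this (group, region, and prefix are placeholders):

```json
"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/my-task",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "scheduled"
    }
}
```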
I developed a job in Talend, built it, and automated it to run via the generated Windows batch file.
When the batch file executes, it invokes the dimtableinsert job and then, after that finishes, fact_dim_combine. The whole process takes just minutes to run in Talend Open Studio, but when I invoke the batch file via the Task Scheduler it takes hours to finish.
Time taken:
Manual -- 5 minutes
Automation (invoking the Windows batch file) -- 4 hours
Can someone please tell me what is wrong with this automation process?
The delay in execution could be a latency issue. Talend might be installed on the same server as the database instance, so whenever you execute the job from Talend it completes as expected, whereas the scheduler might be installed on a different server; when you call the job through the scheduler, the inserts take longer over the network.
Make sure your scheduler and database instance are on the same server.
Execute the job directly in the Windows terminal and check whether you have the same issue.
The easiest way to know what is taking so much time is to add some logs to your job.
First, add some tWarn at the start and finish of each of the subjobs (dimtableinsert and fact_dim_combine) to know which one is the longest.
Then add more logs before/after the components inside the jobs.
This way you should have a better idea of what is responsible for the slowdown (DB access, writing of some files, etc.).
I have written a PowerShell script that I want to run on a daily basis, and I now want to set up a schedule for it on the system.
But since this needs to be done not only on my computer but also on my teammates' computers (around 10 machines), I was wondering whether it is possible to write a script that, when run, automatically schedules the task on the system.
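Such a self-scheduling step can be sketched with the built-in ScheduledTasks module (Windows 8 / Server 2012 and later); the script path, task name, and time below are placeholders:

```powershell
# Assumes the built-in ScheduledTasks module; run from an elevated session.
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\Daily.ps1"'
$trigger = New-ScheduledTaskTrigger -Daily -At 9am
Register-ScheduledTask -TaskName 'DailyScriptRun' -Action $action -Trigger $trigger `
    -Description 'Runs the daily PowerShell script'
```

Each teammate would run this once; alternatively, if PowerShell remoting is enabled, the same commands could be pushed to all 10 machines with Invoke-Command.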
I am writing an application to allow users to schedule one-time long-running tasks from a web application (Linux/Apache/CGI::Application). To do this I use the Schedule::At module which is the Perl interface to the "at" command. Since the scheduled tasks are not repeating, I am not considering "cron". I have two issues with "at" though:
Scheduling works fine when my CGI application runs under the suexec wrapper, but not when it runs as the owner of the Apache process. How can I get scheduling to work in both environments (suexec and non-suexec)?
It appears that the processes scheduled by "at" or Schedule::At have no failure reporting, and I sometimes find that scheduled tasks fail silently. Is there some way to log the fact that the scheduled task (not the scheduler itself) has failed to run?
I am not fixed on "at" and am open to using other, more robust, scheduling methods if there are any.
Thank you for your attention.
I've heard good things about The Schwartz. It doesn't have a delay-until feature, though; you'd submit the jobs via at, but that should solve both of the problems you list above, as long as your submit_job script is simple.
(As a caveat, I've only used Gearman. I think you'd want a reliable job queue for this (a "fire and forget" mechanism), so you can keep your submit_job script dumb.)
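Whichever queue you end up with, the silent-failure half of the question can also be tackled at the at level: instead of submitting the task directly, submit a small wrapper that records the command's exit status to a log file. A POSIX sh sketch (the `run_logged` function and log path are hypothetical names, not part of at itself):

```shell
#!/bin/sh
# run_logged LOGFILE COMMAND [ARGS...]
# Runs COMMAND, appending its output and an OK/FAILED status line to LOGFILE.
run_logged() {
    log="$1"; shift
    if "$@" >>"$log" 2>&1; then
        printf '%s OK: %s\n' "$(date '+%F %T')" "$*" >>"$log"
    else
        status=$?
        printf '%s FAILED (exit %s): %s\n' "$(date '+%F %T')" "$status" "$*" >>"$log"
    fi
}

: > /tmp/task.log                # start with an empty log for the demo
run_logged /tmp/task.log true    # records an OK line
run_logged /tmp/task.log false   # records a FAILED (exit 1) line
```

You would then submit `run_logged /path/to/task.log real_task args` via at or Schedule::At, and check the log (or mail it) to detect failures.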