I have to move an SAP background job (ABAP report for A/P) into Cronacle and can't figure out how to stop the job in SAP so I can start running it with the Cronacle schedule.
The job runs in SAP under user SAPSYS every morning at 7:15 AM, but if you look it up with SM37 there is no time scheduled for it and it is not triggered by an event; also, its status is SCHEDULED.
I had our Cronacle team search by job number, but they couldn't find any scripts pointing to that job. If you look at the finished job, it shows that it is scheduled daily for 7:15 AM. Also, there are no predecessor or successor jobs listed. Is it possible it's being started from another job? How do I find out without deleting this one?
Some suggestions:
If you don't want to delete the scheduled job, try renaming it and see if it still runs.
Make sure that the user you are using for SM37 has full authorization for background job administration.
A previous job can create, schedule, release, or otherwise manipulate a new job. Look at what is running before the problematic job.
Look closely at the dev traces; they sometimes hint at what is going on in the system.
In addition to a previous job creating the new job explicitly, it is also possible that the job is created programmatically by an ABAP program that is scheduled in another job. Doing a where-used on the function module JOB_OPEN and looking for Z* or Y* programs may give you a hint.
Another thing: is this scheduled job ever actually executed (i.e., are there any previous FINISHED jobs with the same name)? A scheduled job will not run unless it is first released, so if it never runs it may be obsolete.
Thanks for the responses! It turned out to be a case of "newbie ignorance." When using SM37 to view the job I neglected to extend the search date to the next day. I don't know why it doesn't show the released job for the current day, but extending it to the next day showed it. That's a lesson I won't forget!
I want to set up an ECS task to schedule various other application tasks.
The "tasks" this task will schedule will mostly involve calling RESTful endpoints in another load-balanced service.
I know there are other ways to do this, such as using CloudWatch to trigger a Lambda, etc. However, this seems overly complex for what I need.
I was planning to just make a very simple, lightweight Alpine-based image with a crontab to do the triggering of the RESTful calls.
This all seems easy enough. The only concern I have is that I would want to prevent, as far as possible, having multiple instances of this task running, even if only for a short period of time.
If my CI/CD pipeline triggers an update to this cron task, then there may be a short period of time where the old and new tasks are running simultaneously.
There may therefore be a small chance that a cron task could be triggered twice.
What I would like to do, is to have ECS stop the currently running task completely, before attempting to start the new one.
This seems to be contrary to the normal way it wants to work, where it will ensure the new task is up and healthy before stopping the old one.
Is this possible, and if so, how do I configure it?
It's not a problem if my crons don't run for a period of time, but it could be a problem if any get triggered more than once.
Instead of using an ECS service (which makes sure a particular number of tasks is always running and deploys via a rolling or blue/green strategy, which is not what you desire), how about using the StopTask and RunTask APIs to control when a task is stopped and started? That gives you complete control.
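For example, a rough sketch of a deployment step with boto3, assuming a Fargate task in an awsvpc network (the cluster, family, revision, and subnet names are placeholders, not anything from your setup):

    import boto3

    ecs = boto3.client("ecs")

    CLUSTER = "my-cluster"       # placeholder cluster name
    FAMILY = "cron-runner"       # placeholder task family
    TASK_DEF = "cron-runner:2"   # the new revision to start

    # Stop whatever is currently running for this family...
    running = ecs.list_tasks(cluster=CLUSTER, family=FAMILY)["taskArns"]
    for arn in running:
        ecs.stop_task(cluster=CLUSTER, task=arn, reason="Replacing with new revision")

    # ...wait until those tasks are actually gone...
    if running:
        ecs.get_waiter("tasks_stopped").wait(cluster=CLUSTER, tasks=running)

    # ...and only then start the new one, so two copies never overlap.
    ecs.run_task(
        cluster=CLUSTER,
        taskDefinition=TASK_DEF,
        count=1,
        launchType="FARGATE",
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "assignPublicIp": "DISABLED",
            }
        },
    )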
Instead of using scheduled tasks, you could create an ECS service and use scheduled scaling to scale the desired service count to 1 and back down to zero.
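If you go that route, the scale-up and scale-down can be driven by Application Auto Scaling scheduled actions. A rough boto3 sketch of that idea (the cluster, service name, and cron windows are placeholders):

    import boto3

    aas = boto3.client("application-autoscaling")
    RESOURCE = "service/my-cluster/cron-runner"   # placeholder cluster/service

    # Make the service's DesiredCount scalable between 0 and 1.
    aas.register_scalable_target(
        ServiceNamespace="ecs",
        ResourceId=RESOURCE,
        ScalableDimension="ecs:service:DesiredCount",
        MinCapacity=0,
        MaxCapacity=1,
    )

    # Scale up to one task just before the work window (times are UTC)...
    aas.put_scheduled_action(
        ServiceNamespace="ecs",
        ResourceId=RESOURCE,
        ScalableDimension="ecs:service:DesiredCount",
        ScheduledActionName="start-cron-runner",
        Schedule="cron(55 6 * * ? *)",
        ScalableTargetAction={"MinCapacity": 1, "MaxCapacity": 1},
    )

    # ...and back down to zero afterwards.
    aas.put_scheduled_action(
        ServiceNamespace="ecs",
        ResourceId=RESOURCE,
        ScalableDimension="ecs:service:DesiredCount",
        ScheduledActionName="stop-cron-runner",
        Schedule="cron(30 7 * * ? *)",
        ScalableTargetAction={"MinCapacity": 0, "MaxCapacity": 0},
    )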
I have an MVC web site, a storage queue and a WebJob. Users can request the generation of a set of reports by clicking a button on the web page. This inserts a message into the storage queue. In the past, the WebJob ran continuously and processed those requests fine. But the demand and size of the reports has grown to the point where the WebJob is slowing down the web app. I would like to still place the request message in the queue, but delay processing of all requests until the evening, when the web app is mostly idle. This would allow me to continue using the WebJob code and QueueTrigger functionality without having to waste resources by moving to a dedicated Worker Role, etc. The reports don't need to be generated immediately, so a delay is acceptable.
I don't see a built-in way to set a time window on processing. The only thing I have found is a PowerShell cmdlet for starting and stopping WebJobs (Start-AzureWebsiteJob / Stop-AzureWebsiteJob). So I was thinking that I could create a scheduled PowerShell job that runs at midnight, starts the WebJob, lets it run, and then runs again early in the morning and stops it.
Does anyone know of a better option than this? Anything more "official" that perhaps I could not find?
One possible solution would be to hide the messages in the queue for a certain amount of time when they are inserted.
If you're using the AddMessage method, you can specify this timespan in the initialVisibilityDelay parameter.
What this will do is ensure that the messages are not immediately visible in the queue to be picked up by the WebJob; they will become visible only when this timespan elapses.
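To illustrate the same idea with the Python storage SDK (just a sketch; the queue name, connection string, and 10 PM cutoff are made up, and in .NET you would pass initialVisibilityDelay to AddMessage instead):

    from datetime import datetime, timedelta, timezone
    from azure.storage.queue import QueueClient

    conn_str = "<your storage connection string>"   # placeholder
    queue = QueueClient.from_connection_string(conn_str, "report-requests")

    # Compute how long to hide the message so it surfaces at the next 22:00 UTC.
    now = datetime.now(timezone.utc)
    tonight = now.replace(hour=22, minute=0, second=0, microsecond=0)
    if tonight <= now:
        tonight += timedelta(days=1)
    delay_seconds = int((tonight - now).total_seconds())

    # The message lands in the queue immediately, but the WebJob's QueueTrigger
    # will not see it until the visibility delay has elapsed.
    queue.send_message("generate-report-set", visibility_timeout=delay_seconds)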
Will such a solution work for you?
Maybe I didn't fully understand your question, but couldn't you use a "Triggered" WebJob that is triggered by a CRON schedule? You can then limit it to specific hours:
0 * 20-22 * * *
This example will run at second 0 of every minute during hours 20-22, i.e., from 8:00 PM through 10:59 PM.
I have set up a couple of daily tasks that update a SQL table and then sends out an email with a CSV attached. 5 of the scheduled tasks only complete successfully if the first task runs successfully. How would I add an argument in Task Scheduler to run the sequential tasks only if the first task was completed successfully?
The reason for the request is that some days the first script finishes in a few minutes and other days it can take over an hour to complete. Any suggestions?
Thank you
It can be done! See here
http://blogs.msdn.com/b/davethompson/archive/2011/10/25/running-a-scheduled-task-after-another.aspx
In summary, though: say you have a task called Ping and you want a task called Pong to run after it.
Create a task called Pong.
Create an "On an event" trigger.
Select Custom and edit the XML to be something like this:
<QueryList>
  <Query Id="0" Path="Microsoft-Windows-TaskScheduler/Operational">
    <Select Path="Microsoft-Windows-TaskScheduler/Operational">*[EventData[@Name='TaskSuccessEvent'][Data[@Name='TaskName']='\Ping']]</Select>
  </Query>
</QueryList>
I don't think what you want is possible with the Windows Task Scheduler. I would propose that you start the scripts that depend on the first one running successfully from the first script itself. That way you can be sure it has finished its work.
Also, the title of your question is kind of misleading; something like "Creating dependencies in TaskScheduler" would fit better.
If the task that takes a varying amount of time writes a Windows Event Log entry with an Event ID specific to its successful completion, you should be able to have your other tasks use the Task Scheduler trigger type "On an event" with the associated Log, Source, and Event ID.
If it doesn't, the other proposals are probably the only options left.
We have run into this same need several times. The two ways we have created this 'dependency' type of functionality are:
Set the schedule to run, say, every 30 minutes. At the startup of your app, check whether the dependency has completed; if not, exit, otherwise do your processing (a minimal sketch of this check follows these two options).
When there were multiple dependencies, we created an app to manage them. Each process that needed to run depending on the others would be launched from this new Controller App (CA). The CA is scheduled to run every 30 minutes (or whatever makes sense for your process) and controls the other apps by checking the dependencies and running the next one. We don't leave the CA running; we spawn the process and exit. The next time the CA launches, it checks the dependencies again and either takes the needed action or exits until it is launched again.
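For illustration, a minimal Python sketch of the first option (the marker file and script paths are made up; any "has the dependency finished?" signal works the same way):

    import os
    import subprocess
    import sys
    from datetime import date

    # Marker that the first job writes when it completes successfully (made-up path).
    MARKER = r"C:\jobs\flags\sql_update_done_{}.flag".format(date.today().isoformat())

    def main():
        # Dependency not finished yet: exit quietly and let the next
        # 30-minute scheduled run try again.
        if not os.path.exists(MARKER):
            sys.exit(0)

        # Dependency is done: run the follow-up work (script name is illustrative).
        subprocess.run([sys.executable, r"C:\jobs\send_report_email.py"], check=True)

    if __name__ == "__main__":
        main()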
I'd like to move my scheduled tasks into workflows so I can better monitor their execution. Currently I'm using a Windows scheduled task to call a web service that starts the process. Is there a facility you can use to schedule execution of a sequence so that it occurs every N minutes?
My optimal solution would:
Be easy to configure
Provide useful feedback on errors
Be 'fire and forget'
PS - Trying out AppFabric for Windows Server if that adds any options.
The most straightforward way I know of would be to make an executable for each workflow (could be console or windows app), and have it host the workflow through code.
This way you can continue to use scheduled tasks to manage the tasks, the main issue is feedback/monitoring the process. For this you could output to console, write to the event log, or even have a more advanced visualisation with a windows app - although you'd have to write this yourself (or google for something!). This MS Workflow Monitoring sample might be of interest, haven't used it myself.
Similar deal with errors, although writing to the event log would be the normal course of action in this case.
I'm not aware of any other hosts for WF, aside from things like Dynamics CRM, but that won't help you with what you're trying to do.
You need to use a scheduler. Either roll your own, use AppFabric as mentioned, or use Quartz.NET:
http://quartznet.sourceforge.net/
If you use Quartz, it's either roll your own service host or use the ready-made one and configure it using XML. I rolled my own and it worked fine.
Autorun is another free option... http://autorun.codeplex.com/
I have a building block which sets up a Quartz job to send out emails every morning. The job is fired three times every morning instead of once. We have a hosted instance of Blackboard, which I am told runs on three virtual servers. I am guessing this is what is causing the problem, as the building block was previously working fine on a single server installation.
Does anyone have Quartz experience, or could suggest how one might prevent the job from firing multiple times?
Thanks,
You didn't describe in detail how your Quartz instance(s) are being instantiated and started, but be aware that undefined behavior will result if you run multiple Quartz instances against the same job store database at the same time, unless you enable clustering (see http://www.quartz-scheduler.org/docs/configuration/ConfigJDBCJobStoreClustering.html).
I guess I'm a little late responding to this, but we have a similar sort of scenario with our application. We have 4 servers running jobs, some of which can run on multiple servers concurrently, and some should only be run once. As Will's response said, you can look into the clustering features of Quartz.
Our approach was a bit different, as we had a home-grown solution in place before we switched to Quartz. Our jobs use a database table that stores the cron triggers and other job information, and then "lock" the entry for a job so that none of the other servers can execute it. This keeps jobs from running multiple times across the servers, and it has been fairly effective so far.
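The locking idea boils down to an atomic conditional UPDATE against the shared job table. A minimal sketch of that pattern (Python and SQLite just to keep it self-contained; the table, job name, and job body are made up, and our actual implementation differs in the details):

    import socket
    import sqlite3

    def send_morning_email():
        """Placeholder for the actual job body."""
        print("sending email...")

    # In a real multi-server setup this would be the shared job database, not a local file.
    conn = sqlite3.connect("jobs.db", isolation_level=None)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS job_locks (job_name TEXT PRIMARY KEY, locked_by TEXT)"
    )
    conn.execute(
        "INSERT OR IGNORE INTO job_locks (job_name, locked_by) VALUES ('morning_email', NULL)"
    )

    me = socket.gethostname()

    # Only one server's UPDATE can match the WHERE clause, so only one host
    # "wins" the lock and runs the job; the others simply skip it.
    won = conn.execute(
        "UPDATE job_locks SET locked_by = ? "
        "WHERE job_name = 'morning_email' AND locked_by IS NULL",
        (me,),
    ).rowcount == 1

    if won:
        try:
            send_morning_email()
        finally:
            # Release the lock so the job can run again in the next cycle.
            conn.execute(
                "UPDATE job_locks SET locked_by = NULL "
                "WHERE job_name = 'morning_email' AND locked_by = ?",
                (me,),
            )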
Hope that helps.
I had the same issue, but I discovered that I was calling scheduler.scheduleJob(job, trigger); to update the job data while the job was running, which randomly triggered the job 5-6 times each run. I had to use the following to update the job data without re-registering the trigger: scheduler.addJob(job, true);