AWS ECS Scheduled Task Not Logging to CloudWatch

I have a task scheduled in ECS that is known to work both when run manually via the "Run New Task" option and when left to run as a cron expression.
However, the logs only seem to appear in CloudWatch when I run the task manually (using the same task definition, settings, etc.). When run via a scheduled task, the logs do not appear.
I've verified in CloudTrail that the task did run successfully when scheduled and there appear to be no errors, either.
Has anyone else come up against this? I've done some digging online, and while I have seen mention of it, no one has posted a resolution or answer as far as I can tell.
Thanks in advance.
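One thing worth checking: a scheduled task is launched by an EventBridge/CloudWatch Events rule that pins a specific task definition revision, so the rule may be running an older revision that predates your awslogs configuration. A minimal sketch for comparing the two, assuming the AWS CLI is configured; the rule and task definition names below are placeholders:
# Which task definition revision (and overrides) does the schedule actually launch?
aws events list-targets-by-rule --rule my-scheduled-task-rule
# Compare its logConfiguration with the revision you run manually
aws ecs describe-task-definition --task-definition my-task:3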

Related

AWS ECS won't start tasks: http request timed out enforced after 4999ms

I have an ECS cluster (Fargate), task, and service that have been set up in Terraform for at least a year. I haven't touched them in a long while. My normal deployment for updating the code is to push a new container to the registry and then stop all tasks on the cluster with a script. Today, my service did not run a new task in response to those tasks being stopped. Its desired count is fixed, so it should have.
I have gone in and tried to run the task manually, and I'm seeing this error:
Unable to run task
Http request timed out enforced after 4999ms
When I try to do this, a new stopped task is added to my stopped tasks list. When I look into that task, the stopped reason is "Deployment restart", and two of them are now showing "Task provisioning failed.", which I think might be tasks the service tried to start. But these tasks do not show a started timestamp; the ones I start in the console do.
My site is now down and I can't get it back up. Does anyone know of a way to debug this? Is AWS ECS experiencing problems right now? I checked the health monitors and I see no issues.
This was an AWS outage affecting Fargate in us-east-1. It's fixed now.
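For similar situations in the future, the service event log and the stopped-task reason are the quickest places to look. A minimal sketch, assuming the AWS CLI is configured; the cluster, service, and task IDs are placeholders:
# Recent service events often explain why replacement tasks are not starting
aws ecs describe-services --cluster my-cluster --services my-service --query "services[].events[:10]"
# Each stopped task records why it stopped (e.g. provisioning failures)
aws ecs describe-tasks --cluster my-cluster --tasks <task-id> --query "tasks[].stoppedReason"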

Task Scheduler did not launch task "\abc" because instance "(GUID)" of the same task is already running

I have been scratching my head for the last two days because of this issue. The error is intermittent on the production server: sometimes the Task Scheduler works and sometimes it does not.
The same settings work on the development server.
I also checked the execution policy on both servers and it looks the same.
In your second screenshot, you can choose "Stop the existing instance" in the last dropdown list ("If the task is already running"). Then the retry option might trigger your task again correctly.
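If you want to confirm which policy a task currently has without opening the GUI, the task's XML includes a <MultipleInstancesPolicy> element (Parallel, Queue, IgnoreNew, or StopExisting). A quick check, using the task path from the error message:
schtasks /Query /TN "\abc" /XML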

Pause Scheduled tasks in SCDF

Hi, I'm running batch jobs via SCDF in an OpenShift environment. All the jobs have been scheduled through the scheduling option in SCDF. Is there a way to pause or hold those jobs from executing, instead of destroying the schedules? Since there are many jobs, we otherwise have to recreate the schedules for all of them every time.
Thanks.
We have an open issue, spring-cloud/spring-cloud-dataflow#3276, to add support for this.
Feel free to update the issue with your use-case requirements and acceptance criteria. Better yet, it would be great if you could contribute support for it in a PR; we would love to collaborate and release it.
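Until that support lands, one workaround is to script the destroy/recreate cycle from the SCDF shell, so "pausing" many schedules does not mean rebuilding them by hand. A minimal sketch; the schedule and task definition names are placeholders, and on Kubernetes/OpenShift the expression uses CronJob cron syntax:
task schedule destroy --name job1-schedule
task schedule create --name job1-schedule --definitionName job1 --expression "0 2 * * *"
Keeping each schedule's name, definition, and expression in a small script makes the "resume" step a one-liner per job.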

Azure Devops: Queue a build to run in the evening

We are trying to queue a build from code, but it should not run instantly; it should run in the evening, when our build pipeline is quite free, and this job does not need to run right away.
We are queuing around 20 of those builds on a daily basis, and right now they are unfortunately blocking other builds. I know that we can use build priorities, but that is not good enough: the build we want to "postpone" takes quite a long time and would block other builds if it were started before a high-importance build.
We also saw that it is possible to create a schedule, but that sounds more like a recurring build, whereas we need the build to run only once.
There is a workaround to run a build once at an appointed time using the Azure CLI and a CMD scheduled task. You can follow the steps below.
1. Install the Azure CLI. You can follow the steps in this blog to get started: https://devblogs.microsoft.com/devops/using-azure-devops-from-the-command-line/
2. Create a CMD script like the one below and save it to your local disk. For more information about the az pipelines commands, see https://learn.microsoft.com/en-us/cli/azure/ext/azure-devops/pipelines/build?view=azure-cli-latest#ext-azure-devops-az-pipelines-build-queue
az pipelines build queue --definition-name your-build-definition-name -o table
3. Create a scheduled task for that script using schtasks.exe, as in the example below (the /st start time uses 24-hour HH:MM format). For more information, see https://www.windowscentral.com/how-create-task-using-task-scheduler-command-prompt
schtasks /create /tn "give-your-task-a-name" /tr "the-location-of-the-script-file-you-created-in-the-previous-step" /sc ONCE /st HH:MM
You can save this script to your local disk too; the next time you want to schedule your build to run in the evening, just run it again.
Hope the steps above help. This workaround is a little tedious, but it is a once-and-for-all effort.
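For concreteness, here is what the whole chain can look like; the definition name, file path, task name, and 20:00 start time are all illustrative:
REM Contents of C:\scripts\queue-build.cmd (queues one run of the build)
az pipelines build queue --definition-name MyNightlyBuild -o table
REM One-shot schedule: run that script once at 20:00 (24-hour clock)
schtasks /create /tn "QueueEveningBuild" /tr "C:\scripts\queue-build.cmd" /sc ONCE /st 20:00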
Triggering a build only once is not available for now. As you saw, the schedule options only cover working days, time, and time zone.
There is a UserVoice suggestion, "Scheduled builds - More flexible timing configuration", which asks for more flexible timing configuration. You can vote for and follow up on this suggestion.
As a comment on that thread notes, we can use cron syntax to specify schedules in a YAML file, as in the sketch below. In testing, this gives a more detailed timing configuration, but we still cannot schedule the build to run only once.
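For reference, such a YAML schedule looks like this; the cron times are in UTC, and the branch name is illustrative. Setting always: true runs the build even if nothing has changed:
schedules:
- cron: "0 20 * * 1-5"
  displayName: Evening build
  branches:
    include:
    - main
  always: true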
As a workaround, we could schedule the build on a certain day of the week and, after the scheduled build completes, disable the schedule manually or with the Azure DevOps CLI.
Hope this helps.

Quartz scheduled jobs failing without showing anything in scheduler logs

I am using the Quartz scheduler to execute jobs. But when I schedule jobs for a future time, they get triggered at the right time and immediately go to a failed state without anything in the scheduler logs.
I could not find the root cause, but the issue was solved by pointing to a freshly created Quartz database.
The reason could be that the original database had become corrupted in some way.
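For anyone hitting the same thing: the Quartz distribution ships DDL scripts (under docs/dbTables) to create a fresh set of QRTZ_* tables, and the scheduler is pointed at the new database through its JDBC job-store properties. A minimal sketch, with the data source name and connection details as placeholders:
# quartz.properties: point the scheduler at the freshly created job store
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.dataSource = quartzDS
org.quartz.dataSource.quartzDS.driver = org.postgresql.Driver
org.quartz.dataSource.quartzDS.URL = jdbc:postgresql://localhost:5432/quartz_fresh
org.quartz.dataSource.quartzDS.user = quartz
org.quartz.dataSource.quartzDS.password = secret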