We are trying to queue a build from code, but it should not run instantly; it should run in the evening, since our build pipeline is quite free then and this job does not need to run right away.
We are queuing around 20 of those builds on a daily basis, and right now they are unfortunately blocking other builds. I know that we can use build priorities, but that is not good enough: the build we want to "postpone" takes quite a long time and would block other builds if it were started before a high-importance build.
We also saw that it is possible to create a schedule, but that sounds more like a build that should recur, whereas we need the build to run only once.
There is a workaround to run a build once at an appointed time using the Azure CLI and a Windows scheduled task. You can try the steps below.
1. Install the Azure CLI. You can follow the steps in this blog post to get started: https://devblogs.microsoft.com/devops/using-azure-devops-from-the-command-line/
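In recent versions of the Azure CLI, the Azure DevOps commands ship as an extension, so you may also need to add it and point it at your organization (the organization and project values below are placeholders):

az extension add --name azure-devops
az devops configure --defaults organization=https://dev.azure.com/your-org project=YourProject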
2. Create a CMD script like the one below and save it to your local disk. For more information about the az pipelines commands, see https://learn.microsoft.com/en-us/cli/azure/ext/azure-devops/pipelines/build?view=azure-cli-latest#ext-azure-devops-az-pipelines-build-queue
az pipelines build queue --definition-name your-build-definition-name -o table
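For example, a minimal queue-build.cmd might look like the sketch below (the definition name, organization, and project are placeholders, and az devops login assumes you have a personal access token to paste when prompted):

@echo off
REM Sign in with a personal access token
az devops login --organization https://dev.azure.com/your-org

REM Queue the build; --project is only needed if no default project is configured
az pipelines build queue --definition-name your-build-definition-name --project YourProject -o table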
3. Create a scheduled task for that script using schtasks.exe, as in the example below. For more information, visit https://www.windowscentral.com/how-create-task-using-task-scheduler-command-prompt
schtasks /create /tn "give-your-task-a-name" /tr "the-location-of-the-scripts-file-you-created-in-previous-step" /sc ONCE /st specify-the-time-to-run-your-build
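For instance, to queue the build at 7 PM (the task name, script path, and time are just example values; /st takes a 24-hour HH:mm time):

schtasks /create /tn "QueueEveningBuild" /tr "C:\scripts\queue-build.cmd" /sc ONCE /st 19:00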
You can save this script to your local disk too. Next time, you can just run it whenever you want to schedule your build to run in the evening.
Hope the steps above help. This workaround is a bit tedious and needs some effort, but it is a once-and-for-all setup.
Triggering a build only once is not available for now. As you saw, the schedule settings only offer working days, time, and time zone.
There is a UserVoice suggestion, "Scheduled builds - More flexible timing configuration", which asks for more flexible timing configuration. You can vote for and follow up on it.
As a comment on that thread points out, we can use cron syntax to specify schedules in a YAML file. In testing, this gives a more detailed timing configuration, but we still cannot schedule the build to run only once.
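For reference, a cron schedule in azure-pipelines.yml looks roughly like this (the branch name and time are placeholders, and the time is interpreted in UTC):

schedules:
- cron: "0 19 * * *"   # every day at 19:00 UTC
  displayName: Evening scheduled build
  branches:
    include:
    - main
  always: true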
As a workaround, we could schedule the build for a certain day of the week and then, after the scheduled build completes, disable the schedule manually or with the Azure DevOps CLI.
Hope this helps.
We have a cloud full of self-hosted Azure agents running on custom AMIs. In some cases, I have cleanup operations which I'd really like to run either before or after a job runs on the machine, but I don't want the developer who is waiting on the job to also wait at the beginning or the end of the job (which holds up other stages).
What I'd really like is to have the Azure Agent itself say "after this job finishes, I will run a set of custom scripts that will prepare for the next job, and I won't accept more work until that set of scripts is done".
In a pinch, maybe just a "cooldown" setting would work -- wait 30 seconds before accepting another job. (Then at least a job could trigger some background work before finishing.)
Has anyone had experience with this, or knows of a workable solution?
I suggest three solutions:
1. Create another pipeline to run the cleanup tasks on the agents. You can add a demand for a specific agent (see https://learn.microsoft.com/en-us/azure/devops/pipelines/process/demands?view=azure-devops&tabs=yaml) using Agent.Name -equals [Your Agent Name], and set the frequency to minutes, hours, or whatever you like using a cron pattern; a sketch follows after this list. While this pipeline is running and occupying an agent, the agent being cleaned will not be available for other jobs. Do note that you can trigger this pipeline from another pipeline, but if both use the same agents, they can deadlock.
2. Create a template containing script tasks with all the cleanup logic and use it at the end of every job (which you have discounted).
3. Rather than using static VMs to host agents, use Azure scale set agents for self-hosting: every time agents are scaled down they are gone, and when scaled up they start fresh. This saves a lot of money, since agents are not sitting idle when no one is working. We use this option and moved away from static agents. We have also used Packer to rebuild the VM image/VHD overnight to update it with patches, required software, and cached Docker images.
ref: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/scale-set-agents?view=azure-devops
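Here is a minimal sketch of what the cleanup pipeline from option 1 could look like (the pool name, agent name, schedule, and script path are all placeholders):

schedules:
- cron: "0 * * * *"   # every hour
  displayName: Hourly agent cleanup
  branches:
    include:
    - main
  always: true

pool:
  name: YourAgentPool
  demands:
  - Agent.Name -equals YourAgentName

steps:
- script: ./cleanup.sh
  displayName: Run cleanup on the agent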
For those discovering this question, there's a much better way: run your self-hosted agent with the --once flag, documented here.
You'll need to wrap it in your own bash script, but something like this works:
#!/bin/bash
# Loop forever: do setup, run the agent for exactly one job, then clean up.
while :
do
    echo "Performing pre-job setup..."
    echo "Waiting for job..."
    # --once makes the agent exit after completing a single job
    ./run.sh --once
    echo "Cleaning up..."
    sleep 2
done
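Assuming you save this next to the agent's run.sh as, say, run-once-loop.sh, you would start the agent with:

chmod +x run-once-loop.sh
./run-once-loop.sh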
Another option would be to use a ScaleSet VM setup which preps a new agent for each run and discards the VM when the run is done. It can prepare new VMs in the background while the job is running.
And I suspect you could implement your own IMaintenanceProvider:
https://github.com/microsoft/azure-pipelines-agent/blob/master/src/Agent.Worker/Maintenance/MaintenanceJobExtension.cs#L53
I'm working on a simple deployment pipeline with Azure DevOps. I created a deployment pipeline running on a self-hosted Ubuntu deployment group.
The pipeline looks like this:
Download artifacts from CI pipeline (created with dotnet publish)
Stop running deployment
Unzip the ASP.NET Core Web API to the deployment directory
Run new deployment with dotnet MyApp.dll
The first two steps work as expected. However, when the dotnet MyApp.dll command is run, the process runs for 10 seconds, with the following "error" message being printed at the end:
The STDIO streams did not close within 10 seconds of the exit event from process '/usr/bin/bash'. This may indicate a child process inherited the STDIO streams and has not yet exited.
The deployment task succeeds despite the message, but the app is not running. I tried to work around this behaviour by using nohup with & and redirecting the command output. After some research, I found that all processes started by a pipeline agent are stopped after the agent's work is done, meaning this behaviour is intended and my understanding of Azure deployments/agents was wrong.
How do I deploy and run my app in an automated way on my own ubuntu machine using azure devops pipelines?
How do I deploy and run my app in an automated way on my own ubuntu machine using azure devops pipelines?
You are already on the right track.
All processes launched in the pipeline are finished/cleaned up in the "Finalize Job" step when the pipeline is over.
If you don't want the process to be closed, try setting the variable Process.clean = false, which stops the "Finalize Job" step from killing all processes.
But the next time the pipeline runs, you will need to stop the old instance of the app before starting the new one.
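Putting it together, a hedged sketch of the final deployment step (the variable and app name come from this thread; the pid-file convention is an assumption):

# With Process.clean = false set as a pipeline variable, the run step could be:

# Stop the previous instance if one is recorded (app.pid is a hypothetical convention)
if [ -f app.pid ]; then kill "$(cat app.pid)" || true; fi

# Start the new instance detached from the agent's STDIO streams
nohup dotnet MyApp.dll > app.log 2>&1 &
echo $! > app.pid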
I developed a job in Talend, built the job, and automated it by running the generated Windows batch file from the build.
On execution, the job's Windows batch file first invokes the dimtableinsert job and then, after it finishes, invokes fact_dim_combine. This takes just minutes to run in Talend Open Studio, but when I invoke the batch file via the Task Scheduler, the process takes hours to finish.
Time taken:
Manual: 5 minutes
Automation (invoking the Windows batch file): 4 hours
Can someone please tell me what is wrong with this automation process?
The delay in execution could be a latency issue. Talend might be installed on the same server as the database instance, so whenever you execute the job from Talend, it completes as expected. But the scheduler might be installed on another server, so when you call the job through the scheduler, it takes more time to insert the data.
Make sure your scheduler and the database instance are on the same server.
Execute the job directly in a Windows terminal and check whether you have the same issue.
The easiest way to know what is taking so much time is to add some logs to your job.
First, add some tWarn components at the start and end of each of the subjobs (dimtableinsert and fact_dim_combine) to find out which one takes longest.
Then add more logs before/after the components inside the jobs.
This way you should have a better idea of what is responsible for the slowdown (DB access, writing of some files, etc.).
I'm setting up a Kubeflow cluster on AWS EKS. Is there a native way in Kubeflow to automatically schedule jobs (run the workflow every X hours, fetch data every X hours, etc.)?
I have looked at other tools like Airflow, but I'm not really sure they would integrate well with the Kubeflow environment.
That should be what a recurring run is for. It uses a run trigger, which has a cron field for specifying cron semantics for scheduling runs.
My requirement is:
The workflow should run daily at 2 PM, and it has been scheduled to run at 2 PM.
We have lookups on master tables; records with IDs that are not present in the master tables get rejected.
These new IDs have to be loaded into the master tables manually, and then the workflow has to be re-run.
The same thing happens daily.
My question is:
Is it possible to schedule a workflow to run twice every day (once for the first run, and once after the master tables are updated)?
If not, can I manually start a scheduled workflow? Will doing so make the workflow unscheduled?
Can anyone help me with this?
Informatica's scheduler is a weak spot. I guess using two copies of the same workflow with different schedules would be the easiest solution.
Got a solution for my problem.
Once a workflow is scheduled, even if a particular session has to be re-run manually, the whole workflow has to be run from the Workflow Manager.
If that particular session alone is run manually, the scheduling will be lost.
So always run the whole workflow instead of a single session, so that the schedule remains.
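If the second run needs to be kicked off from the command line after the master tables are updated, a hedged sketch using Informatica's pmcmd utility (the service, domain, user, folder, and workflow names are all placeholders):

pmcmd startworkflow -sv YourIntegrationService -d YourDomain -u your_user -p your_password -f YourFolder wf_YourWorkflow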