I see many examples of waiting for input to implement a manual "promotion" stage.
This leaves the impression of many jobs running (it is possible in our pipeline to promote any of the last builds to a major release).
Is there a better way to achieve this, or are there plans for a real manual step that doesn't leave Jenkins looking like it's running loads of jobs?
I'm trying to implement automatic backfills in Argo workflows, and one of the last pieces of the puzzle I'm missing is how to access the lastScheduledTime field from my workflow template.
I see that it's part of the template, and I see it getting updated each time a workflow is scheduled, but I can't find a way to access it from my template to calculate how many executions I might have missed since the last time the scheduler was online.
Is this possible? Or, alternatively, is there a better way to implement this functionality in Argo?
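For reference, the field does show up on the CronWorkflow's status, and I can read it from outside the template through the Kubernetes API. A minimal sketch using the official Python client; the namespace and CronWorkflow name below are just from my setup and illustrative here:

```python
# Sketch: read lastScheduledTime from a CronWorkflow's status via the
# Kubernetes API. Namespace and name are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
api = client.CustomObjectsApi()

cron_wf = api.get_namespaced_custom_object(
    group="argoproj.io", version="v1alpha1",
    namespace="argo", plural="cronworkflows",
    name="my-backfill-cron",  # placeholder: your CronWorkflow name
)
print(cron_wf["status"]["lastScheduledTime"])  # e.g. "2021-05-01T12:00:00Z"
```

What I haven't found is a way to reference that value from inside the template itself.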
We have a rather large deployment surface, say 10 apps that get deployed. For patch releases we sometimes deploy only one app, and I'd like to have a stage run either after all 10 are deployed or after only one is deployed. A simplified graph looks like the following. The "Do Something" step will only run if all three app stages run, and I don't want to have to duplicate it for each app, so I'm looking for a better way. I could live with it if it just ran once on any successful dependent stage (it doesn't need to wait for all of them).
So I think there are a couple of options for this. You will need to look at YAML multi-stage release pipelines, specifically deployment jobs.
First, depending on the complexity of the "Do Something" stage, it could be a job template loaded into each of the app stages. I realize you mentioned you don't want it to run every time, so this is just an option.
The second option is that if the pipeline is run ad hoc, you will have the ability to select which stages are run. Thus you could manually select the app stages to run and select the "Do Something" stage as well. The hard part will be working out the dependencies: I'd assume you'd want "Do Something" to be dependent on the success of one of the previous stages.
Run Azure Release Pipeline stage if one of many dependent stages runs
I am afraid there is no such out-of-box way to achieve this at the moment.
That is because there is no "OR" syntax for dependsOn, and we cannot add a condition to dependsOn.
You could submit this "OR" condition request to our UserVoice site (https://developercommunity.visualstudio.com/content/idea/post.html?space=21), which is our main forum for product suggestions. Thank you for helping us build a better Azure DevOps.
As a workaround:
The main idea of the solution is: you could try to set dependsOn for the stage Secure to [], then add an inline PowerShell task before the other tasks. This task calls the REST API (Definitions - Get) to check whether any stages in the current release pipeline are in an in-progress or queued state. If any are, it waits 30 seconds and loops again, until no other stages in the current release pipeline are in progress or queued; only then are the remaining tasks executed. A sketch of this polling logic follows below.
You could check my previous ticket for detailed info:
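For illustration only, here is the same polling idea sketched in Python instead of an inline PowerShell task, using the Release REST API. The organization, project, release id, PAT, and the "Do Something" stage name are placeholder assumptions:

```python
# Sketch: wait until no other stage of the current release is in progress
# or queued, then let the remaining tasks run.
import time
import requests

ORG = "myorg"          # assumption: your Azure DevOps organization
PROJECT = "myproject"  # assumption: your project name
RELEASE_ID = 123       # assumption: the current release id, e.g. $(Release.ReleaseId)
PAT = "..."            # assumption: a PAT with release (read) scope

url = (f"https://vsrm.dev.azure.com/{ORG}/{PROJECT}"
       f"/_apis/release/releases/{RELEASE_ID}?api-version=6.0")

while True:
    release = requests.get(url, auth=("", PAT)).json()
    # Collect every stage (environment) that is still busy...
    busy = [e["name"] for e in release["environments"]
            if e["status"] in ("inProgress", "queued")]
    # ...ignoring the stage this script itself runs in.
    busy = [name for name in busy if name != "Do Something"]
    if not busy:
        break
    time.sleep(30)  # wait 30 seconds, then poll again, as described above
```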
I am looking for a workflow system that supports the following needs:
1. dealing with a complex ETL pipeline with various kinds of APIs (file-based, REST, console, databases, ...)
2. offers automated scheduling/orchestration on different execution environments (AWS, Azure, on-premise clusters, local machine, ...)
3. has an option for "reactive" workflows, i.e. workflows that can be triggered and executed instantaneously without unnecessary delay, are executed with the highest priority, and where the same workflow can be started several times simultaneously
Especially the third requirement seems to be tricky to satisfy. The purpose of this requirement is that a user should be able to send a query to activate a (computationally non-heavy) workflow and get back a result immediately, instead of waiting some seconds or even minutes, and multiple users might want to use the same workflow simultaneously. The reason this is important is that the ETL workflows and the user ("reactive") workflows share a substantial overlap, and I intend to reuse parts of these workflows instead of maintaining two sets of workflows that are executed by different tools.
Apache Airflow appears to be the natural choice for requirements 1 and 2, but does not seem to support the third requirement, since it starts execution in (lengthy) fixed time slots and does not allow for the simultaneous execution of several instances of the same DAG (workflow).
Are there any tools out there that support all these requirements or do I have to use two different workflow management tools or even have to stick to a (Python) script for the user workflows?
You can trigger a DAG manually by using the CLI or the API. Have a look at this post: https://medium.com/@ntruong/airflow-externally-trigger-a-dag-when-a-condition-match-26cae67ecb1a
You'll have to test whether you can execute multiple DAG runs at the same time.
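As a sketch of the API route: with Airflow 2.x and the stable REST API enabled (plus, here, a basic-auth backend), a trigger looks like the following; the host, credentials, and DAG id are placeholder assumptions:

```python
# Sketch: trigger a DAG run on demand through Airflow's stable REST API.
import requests

AIRFLOW_HOST = "http://localhost:8080"  # assumption: your webserver address
DAG_ID = "reactive_etl"                 # assumption: the DAG to trigger

resp = requests.post(
    f"{AIRFLOW_HOST}/api/v1/dags/{DAG_ID}/dagRuns",
    json={"conf": {"query": "user-supplied input"}},  # payload handed to the run
    auth=("admin", "admin"),            # assumption: basic-auth is enabled
)
resp.raise_for_status()
print(resp.json()["dag_run_id"])
```

For several runs of the same DAG at once, the DAG's max_active_runs setting has to allow more than one active run.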
In general, I have a single workflow that I want to be able to monitor. The workflow should start whenever new files arrive or alternatively at certain scheduled times, i.e. I want to be able to insert new "jobs" to the workflow as they come, and process the files by going through multiple different tasks and steps. I want to be able to monitor each file going through the tasks.
The queues and the load distribution for each task might be managed by Celery, but that's not decided yet either.
I've looked at Apache Airflow, and as far as I understand at the moment, it is geared more towards monitoring many different workflows, where each workflow mostly runs from start to end, rather than adding new files to the beginning of the flow before the previous run has ended.
Cadence Workflow seems like it can do what I need, but it also seems to be a bit of an overkill.
I'm not expecting a specific final solution here, but I would appreciate suggestions to more such solutions that I can look into and can fit the above.
Luigi - https://luigi.readthedocs.io/en/stable/
Extremely light-weight and fast compared to Airflow.
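As a sketch of how the file-per-run pattern from the question could look in Luigi (the paths and the .done marker convention below are illustrative):

```python
# Sketch: one Luigi task instance per incoming file, so new files can enter
# the flow while earlier ones are still being processed and monitored.
import luigi

class ProcessFile(luigi.Task):
    # Each incoming file becomes its own task, identified by its path.
    path = luigi.Parameter()

    def output(self):
        # A marker file lets Luigi skip files that were already processed.
        return luigi.LocalTarget(f"{self.path}.done")

    def run(self):
        with open(self.path) as src, self.output().open("w") as done:
            # ... transform/process the file here ...
            done.write(f"processed {len(src.read())} bytes\n")

if __name__ == "__main__":
    # Trigger one run per new file, e.g. from a file watcher or a cron job.
    luigi.build([ProcessFile(path="incoming/example.csv")], local_scheduler=True)
```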
I'm about to write a workflow in CRM that calls itself every day, i.e. a recursive workflow.
It will run on half a million entities each day and deactivate each record that has not been updated in the past 3 days.
I'm worried about performance. Has anyone else done this?
I haven't personally implemented anything like this, but that's 500,000 records that are floating around in the DB that the async service has to keep track of, which is going to tax your hardware. In addition, CRM keeps track of recursive workflow instances. I don't have the exact specs in front of me, but if a workflow calls itself a set number of times within a certain timeframe, CRM will kill the workflow.
Could you just write a console app that asks the Crm Service for records that haven't been updated in three days, and then deactivate them? Run it as a scheduled task once a day, and then your CRM system doesn't have the burden of keeping track of all those running workflow instances.
EDIT: Ah, I see now you might have been thinking of one workflow that runs on all the records as opposed to workflows running on each record. benjynito's advice makes sense if you go this route, although I still think a scheduled task would be more appropriate than using workflow.
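Purely as a sketch of that scheduled-task idea, assuming a deployment where the Dynamics Web API is available (older on-premise installs would use the SDK's CrmService/QueryExpression instead); the org URL, token, and the choice of the accounts entity are illustrative:

```python
# Sketch: find records untouched for 3 days and deactivate them, run once a
# day from a scheduled task instead of half a million workflow instances.
from datetime import datetime, timedelta, timezone
import requests

ORG = "https://myorg.crm.dynamics.com"  # assumption: your org URL
TOKEN = "..."                           # assumption: an OAuth bearer token
headers = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

cutoff = (datetime.now(timezone.utc) - timedelta(days=3)).strftime("%Y-%m-%dT%H:%M:%SZ")
# Server-side filter: only bring back active records we will deactivate.
url = (f"{ORG}/api/data/v9.2/accounts"
       f"?$select=accountid&$filter=modifiedon lt {cutoff} and statecode eq 0")

while url:
    page = requests.get(url, headers=headers).json()
    for record in page["value"]:
        # statecode 1 / statuscode 2 = Inactive for accounts.
        requests.patch(f"{ORG}/api/data/v9.2/accounts({record['accountid']})",
                       headers=headers,
                       json={"statecode": 1, "statuscode": 2})
    url = page.get("@odata.nextLink")  # follow server-side paging
```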
You'll want to make sure your workflow is running in non-peak hours. Assuming you have an on-premise installation you should be able to get away with that. If you're using a hosted instance, you might be worried about one organization running the workflow while another organization is using the system. Use the timeout and maybe a custom workflow activity, if necessary, to force the start time to a certain period.
I'm assuming you'll be as efficient as possible in figuring out which records to deactivate (i.e. your QueryExpression would only bring back the records you'll actually be deactivating).
The built-in infinite loop-protection offered by CRM shouldn't kill your workflow instances. It stops after a call depth of 8, but it resets to 1 if no calls are made for an hour. So the fact that you're doing this once a day should make you OK on the recursive workflow front.