How to trigger a DevOps release pipeline based on a build pipeline's path filter - azure-devops

I have an Azure Function App that includes 3 functions:
FunctionApp
  FunctionA
  FunctionB
  FunctionC
I have a DevOps pipeline configured to build the function app whenever any of the contents of the FunctionApp change.
I'd like now to set up a release pipeline that invokes whichever functions were updated. For example, if, in my pull request, only FunctionA was modified, I'd like to invoke only FunctionA.
How can I do this with DevOps pipelines?

I don't want to say "impossible", but what you're asking for is difficult-verging-on-impossible.
Without building a full dependency graph of the application code and tying it back to changed source files, you can't tell which function changed. Here's an example:
Let's say Function A and Function B depend on Library C. You update code in Library C. This means that, from a functional perspective, both Function A and Function B have changed. How do you determine this at build time?
It starts to get even trickier when you start to consider not only build, but deployment -- because you'd really be invoking the function after deployment, not build. Here's a scenario: Function A and Function B both change, but something goes wrong and the deployment is only partially successful and only Function A has been invoked. Now you run the deployment again. How does it know to invoke only Function B, since A has already been run?
You haven't said anything about what the functions actually do, but I suspect that a better solution to this problem is to ensure that your functions are idempotent and can safely be run multiple times. If you do that, the entire premise of the question is moot: you can just deploy and invoke all of the functions every time.
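If you go the deploy-and-invoke-everything route, a minimal sketch of an azure-pipelines.yml is below. The branch, path filter, service connection, app name, and FUNCTION_KEY variable are placeholders, and a .NET function app packaged as a zip is assumed; treat it as an illustration of the idea, not a drop-in pipeline.

trigger:
  branches:
    include:
    - main
  paths:
    include:
    - FunctionApp/*

stages:
- stage: Build
  jobs:
  - job: Build
    pool:
      vmImage: ubuntu-latest
    steps:
    # Build the whole function app and package it as a single artifact
    - script: |
        dotnet publish FunctionApp -c Release -o publish_output
        cd publish_output
        zip -r $(Build.ArtifactStagingDirectory)/functionapp.zip .
    - publish: $(Build.ArtifactStagingDirectory)/functionapp.zip
      artifact: functionapp

- stage: Deploy
  dependsOn: Build
  jobs:
  - job: DeployAndInvoke
    pool:
      vmImage: ubuntu-latest
    steps:
    - download: current
      artifact: functionapp
    # Deploy the entire app; all three functions ship together
    - task: AzureFunctionApp@1
      inputs:
        azureSubscription: my-service-connection        # placeholder
        appName: my-function-app                        # placeholder
        package: $(Pipeline.Workspace)/functionapp/functionapp.zip
    # Invoke every function after deployment; this relies on the functions being idempotent.
    # The URL pattern and FUNCTION_KEY variable are placeholders.
    - script: |
        for f in FunctionA FunctionB FunctionC; do
          curl -fsS "https://my-function-app.azurewebsites.net/api/$f?code=$(FUNCTION_KEY)"
        done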

Related

Letting Concourse retry a build which failed because of a flaky issue

According to the Concourse documentation:
If any step in the build plan fails, the build will fail and subsequent steps will not be executed
It makes sense. However I'm wondering how I could deal with flaky steps.
For instance if I have a pipeline with
a get step with trigger: true
and then a task which performs several operations, including an HTTP call to an external service.
If the HTTP call fails because of a temporary network error, then it makes sense that Concourse fails the build. But I would also appreciate a way to tell Concourse that this type of error does not mean that the current version is corrupted, and that it should automatically retry the build after some time.
I've looked for it in the Concourse documentation but couldn't find such feature. Is it possible?
Check out the attempts step modifier, the example from the doc:
plan:
- get: foo
- task: unit
  file: foo/unit.yml
  attempts: 10
It will attempt to run the task up to 10 times before declaring the task failed.
Using attempts as explained in the other answer can be an option. But, before going that route, I would think more about the possible consequences and alternatives.
Attempts has two potential problems:
it cannot know whether the failure is due to a flake or to a real error. If it is due to a real error, it will keep retrying the task, say, 10 times, potentially wasting compute resources (depending on how heavy the task is).
it will work as expected only if the task is as focused as possible and idempotent. For example, if the flaky HTTP request you mention comes after other operations that change the external world, then you must ensure (when designing the task) that redoing those operations because of a flaky HTTP request is safe.
If you know that your task is not subject to these kinds of problems, then attempts can make sense.
On the other hand, this discussion suggests that maybe the pipeline can be restructured to be more idiomatic Concourse.
Since you mention an HTTP request, another option is to proxy that HTTP request via a Concourse resource (see https://concourse-ci.org/implementing-resource-types.html). Once done, the side-effect is visible in the pipeline (instead of being hidden in the task) and its success could be made optional with try or another hook modifier (see https://concourse-ci.org/try-step.html and https://concourse-ci.org/modifier-and-hook-steps.html).
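As a rough sketch of that idea (the resource type, resource name, and image below are all hypothetical; the real resource would be whichever one you write or find), the pipeline could look something like this:

resource_types:
- name: http-call                            # hypothetical custom resource type
  type: registry-image
  source:
    repository: example/http-call-resource   # hypothetical image implementing the resource interface

resources:
- name: foo
  type: git
  source:
    uri: https://example.com/foo.git         # placeholder
- name: notify-external-service
  type: http-call
  source:
    url: https://example.com/hook            # the endpoint previously called from inside the task

jobs:
- name: build
  plan:
  - get: foo
    trigger: true
  - task: unit
    file: foo/unit.yml
  # The HTTP side-effect is now a put step; wrapping it in try makes its failure non-fatal.
  - try:
      put: notify-external-service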
The trade-off in this case is the time to write your own Concourse resource (in case you don't find a community-provided one). Only you are in a position to make that decision. What I can say is that writing a resource is not that complicated once you get familiar with the concept. For some tricks on quick iteration during development that apply to any Concourse resource, you can have a look at https://github.com/Pix4D/cogito/blob/master/CONTRIBUTING.md#quick-iterations-during-development.

Passing values to onExit template in an argo workflow

I've stumbled upon a problem when using some more complex Argo workflows with initialization and clean-up logic. We run some initialization in one of the initial steps of the workflow (e.g. creation of some resources), and we'd like to perform a clean-up regardless of the status of the workflow. The onExit template seems to be an ideal solution (I think clean-up is even mentioned in the Argo documentation as a typical use case for the onExit template).
However, I haven't found a way yet to pass some values to it. For example - let's say that in the initialization phase we created some resource with id some-random-unique-id and we'd like to let the onExit container know what resources it needs to clean up.
We tried the outputs of some steps, but it seems that steps are unknown in the onExit template.
Is there a built-in argo mechanism to pass this kind of data? We'd like to avoid some external services (like key-value storage service that would hold the context).
You can mark output parameters as global using the globalName field. A global output parameter, assuming it has been set, can be accessed from anywhere in the Workflow, including in an exit handler.
The example file for writing and consuming global output parameters should contain all the information you need to use global output parameters in an exit handler.
https://github.com/argoproj/argo-workflows/blob/master/examples/global-outputs.yaml
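For reference, a trimmed-down sketch along the lines of that example (the image, template names, and resource id are invented here) might look like:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: init-cleanup-
spec:
  entrypoint: main
  onExit: cleanup                       # runs regardless of workflow status
  templates:
  - name: main
    steps:
    - - name: init
        template: init
  - name: init
    container:
      image: alpine:3.18
      command: [sh, -c]
      args: ["echo -n some-random-unique-id > /tmp/resource-id"]
    outputs:
      parameters:
      - name: resource-id
        valueFrom:
          path: /tmp/resource-id
        globalName: resource-id         # exposes the value workflow-wide
  - name: cleanup
    container:
      image: alpine:3.18
      command: [sh, -c]
      args: ["echo cleaning up {{workflow.outputs.parameters.resource-id}}"]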
Please share if you found a cleaner solution.
I see this workflow example for exit handler with parameters, which may resolve your issue: Exit Handler with Parameters
One method I found was to add a steps section to the template on which I want to call the exit template, and to call the exit template as the last step so that the steps variables are accessible. I changed the original template to an inline template so the structure does not change too much; it just involves adding a final step that calls the exit template. Unfortunately, this does not use the actual onExit mechanism (see the links and the sketch below).
Inline steps template example:
1. https://sourcegraph.com/github.com/argoproj/argo-workflows/-/blob/examples/exit-handler-with-param.yaml?L21
2. https://raw.githubusercontent.com/argoproj/argo-workflows/master/examples/steps-inline-workflow.yaml
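A rough sketch of that approach (names are invented here, and the inline-template detail from the second link is omitted for brevity): the cleanup call is simply the last step, so it can read the outputs of earlier steps, but it will not run if an earlier step fails.

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: cleanup-as-last-step-
spec:
  entrypoint: main
  templates:
  - name: main
    steps:
    - - name: init
        template: init
    # Last step: call the "exit" template explicitly, passing values from earlier steps
    - - name: cleanup
        template: cleanup
        arguments:
          parameters:
          - name: resource-id
            value: "{{steps.init.outputs.parameters.resource-id}}"
  - name: init
    container:
      image: alpine:3.18
      command: [sh, -c]
      args: ["echo -n some-random-unique-id > /tmp/resource-id"]
    outputs:
      parameters:
      - name: resource-id
        valueFrom:
          path: /tmp/resource-id
  - name: cleanup
    inputs:
      parameters:
      - name: resource-id
    container:
      image: alpine:3.18
      command: [sh, -c]
      args: ["echo cleaning up {{inputs.parameters.resource-id}}"]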

How can I pass the variable between builds in VSTS?

For example, I have two builds, A and B. When A builds successfully, build B should be triggered and receive the output variable from A.
How can I implement this?
For now, there is no way to persist variables between two builds.
There is also a UserVoice suggestion, Make it possible to change the value of a variable in a variable group, which asks for a similar feature; you can vote for it and follow up.
As a workaround to trigger build B after build A succeeds (variables cannot be passed this way), you can add a related task (such as the Queue Build(s) task) at the end of build definition A to queue build B.
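If you do need to hand a value over, one possibility (hedged: this is not a built-in feature, and the definition id and variable name below are made up) is to queue build B yourself from a script at the end of build A using the Builds - Queue REST API, which accepts a parameters field for variables that build B marks as settable at queue time:

POST https://{account}.visualstudio.com/{project}/_apis/build/builds?api-version=2.0
Content-Type: application/json
Authorization: Basic {base64-encoded PAT}

{
  "definition": { "id": 42 },
  "parameters": "{\"ValueFromA\": \"<output value from build A>\"}"
}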

Recursive Workflow in Powershell

I'm trying to automate a lengthy process that can be broken down into several steps. (say Steps 1-5)
I have written a script that separates these into functions and calls them sequentially.
However, we now have the additional requirement of making the script restartable. That is, if it fails in any one of the steps, rerunning the script would cause it to skip all completed steps and retry from the failed one.
Is this at all possible without referencing an external log file?
I've tried using workflows but it seems like recursion isn't supported.
Any ideas?
Some options, aside from using a log file:
Use the registry
You can set a registry value to a number representing the last step you completed. This removes the need for a log file, but it is a similar kind of "external" storage. A sketch of this approach is at the end of this answer.
Check the task status on each run
Depending on the tasks, the script could "test" whether, for example, step 3 has already been completed, then check steps 4, 5, etc. until it encounters one it still needs to run, and continue from there. This may be impossible, or require a lot of overhead code for not much payoff.
Allow the user to continue from within the script
This is probably the best way of doing it (aside from just using a log file): run the script in blocks, and when an error is encountered, prompt the user to fix the issue before pressing Enter to re-run the previous script block. This also makes it easy to report what failed.
The main thing here is that once a script quits, it needs an external source of information (or some other way of handling it) to know what happened on its last run.
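A minimal sketch of the registry option (the registry path, value name, and step function names are hypothetical):

$ErrorActionPreference = 'Stop'   # make a failing step terminate the run

# Hypothetical registry location used to remember progress
$regPath = 'HKCU:\Software\MyCompany\MyScript'
if (-not (Test-Path $regPath)) { New-Item -Path $regPath -Force | Out-Null }

# Read the index of the last completed step (0 if none recorded yet)
$lastCompleted = (Get-ItemProperty -Path $regPath -Name LastCompletedStep -ErrorAction SilentlyContinue).LastCompletedStep
if (-not $lastCompleted) { $lastCompleted = 0 }

# The existing step functions, in order (hypothetical names)
$steps = @(
    { Step1 },
    { Step2 },
    { Step3 },
    { Step4 },
    { Step5 }
)

for ($i = $lastCompleted; $i -lt $steps.Count; $i++) {
    & $steps[$i]                                            # run the step; a failure stops here
    Set-ItemProperty -Path $regPath -Name LastCompletedStep -Value ($i + 1)
}

# All steps done: clear the marker so the next run starts from the beginning
Remove-ItemProperty -Path $regPath -Name LastCompletedStep -ErrorAction SilentlyContinue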

How to change process variable value through remote rest api call for current human task in jbpm 6.5.0Final

I have many human tasks. After starting the process, I want to update some process variable values with a REST API call related to the current task. If anyone knows how to do that, please comment below.
I tried /execute, but that only starts the task. How do I update a process variable for an already started process instance?
Based on the documentation, here is how to update the process variables. However, this updates them at the process-instance level rather than only for that specific task.
server/containers/{id}/processes/instances/{pInstanceId}/variables - POST
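For example, a call that sets a variable named myVariable on process instance 42 in container my-container (all example values; the base path assumes the default kie-server context) could look like:

POST http://{localhost}:{port}/kie-server/services/rest/server/containers/my-container/processes/instances/42/variables
Content-Type: application/json

{
  "myVariable": "new value"
}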
If you want to update process variables from a task, you should do it during task completion. However, this requires the task to have output variables; otherwise it won't have any effect.
server/containers/{id}/tasks/{tInstanceId}/states/completed - PUT
Anyway, the full REST documentation can be viewed at
{localhost}:{port}/kie-server/docs