I have a main Rundeck job with an option hostname (a node name).
In this main job I have a Job Reference step, and I would like to execute a command on the hostname passed as an argument.
Is that possible, and how can I do it?
Thanks
Yes, it is possible. You can use the option passed into the job as part of the node filter. For example, JobA references JobB. JobB is node-oriented and accepts a parameter hostname. So in the Nodes section of JobB you can select "Dispatch to nodes" (radio button) and set the filter to something like name: ${option.hostname}
For example, I am using node filters like this:
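For illustration, here is a hedged sketch of what such a referenced JobB could look like in Rundeck's YAML job-definition format; the job name, option, and command are assumptions, not taken from the question:

```yaml
# Hypothetical JobB definition (Rundeck YAML export format); names are illustrative.
- name: JobB
  description: Runs a command on the node passed in via the hostname option
  options:
  - name: hostname
    required: true
  nodefilters:
    # Filter the target nodes using the option passed in by the parent job.
    filter: 'name: ${option.hostname}'
  dispatch:
    keepgoing: false
    threadcount: 1
  sequence:
    commands:
    - exec: uptime   # the command to execute on the selected node
```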
I'm looking at an example of the YAML pipeline with a services section. Here is a sample:
The YAML schema doesn't have services defined.
Where can I get information about the services section of the pipeline?
Update: Per Bowman's answer, the services section is part of the job. In this scenario, there is only one job, so the job keyword is omitted.
In the simplest case, a pipeline has a single job. In that case, you do not have to explicitly use the job keyword unless you are using a template. You can directly specify the steps in your YAML file.
Here is the reference in the official documentation:
https://learn.microsoft.com/en-us/azure/devops/pipelines/yaml-schema/jobs-job?view=azure-pipelines
services: # Container resources to run as a service container.
I think you checked for it at the top level, right? In fact, in this situation there is a hidden default job, and its definition is also hidden. The services section belongs under that hidden job's definition, not at the top level.
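For reference, a minimal sketch of where services sits once the job is written out explicitly (the container resource, job name, and image are illustrative):

```yaml
resources:
  containers:
  - container: my_redis      # container resource referenced below
    image: redis
    ports:
    - 6379:6379

jobs:
- job: test                  # with an explicit job, services belongs here
  pool:
    vmImage: ubuntu-latest
  services:
    redis: my_redis          # service name -> container resource
  steps:
  - script: echo "run tests against redis on localhost:6379"
```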
I am using the official Microsoft ServiceNow plugin in a gate to create tickets via Azure Pipelines.
Once the gate is finished processing, there is an output that I'd like to consume in an agent job. The problem is that this output is only available in agentless jobs (which is not very useful for my use case).
How can I make it so that I can pass that output value from an agentless job to an agent job?
It doesn't look like you can pass an output value out of an agentless job.
Passing variables between jobs requires running a script. See an example in the docs here: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#set-a-multi-job-output-variable
Agentless jobs do not support script, pwsh, or bash tasks, meaning you can't run a script and therefore can't set the output variable.
The easiest solution would be to use an agent.
See here for what tasks are supported by agentless jobs: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml#agentless-tasks
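For illustration, a minimal sketch of that multi-job output variable pattern using two agent jobs; the job, step, and variable names, and the ticket value, are placeholders:

```yaml
jobs:
- job: Produce
  pool:
    vmImage: ubuntu-latest
  steps:
  # isOutput=true makes the variable visible to other jobs via the step name.
  - bash: echo "##vso[task.setvariable variable=ticketId;isOutput=true]CHG0000001"
    name: setTicket

- job: Consume
  dependsOn: Produce
  pool:
    vmImage: ubuntu-latest
  variables:
    # Map the output variable from the Produce job into this job.
    ticketId: $[ dependencies.Produce.outputs['setTicket.ticketId'] ]
  steps:
  - bash: echo "Ticket is $(ticketId)"
```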
In this release pipeline, I have two tasks: one runs a kubectl command, and I need it to keep running while I run the second task. After researching for a while, I know that parallel tasks are not available in Azure DevOps, so I tried with multiple agents. However, I could not make it work.
May I know which part am I missing?
My current config looks like this:
And for each of the agent jobs, I selected "Multi-Agent" parallelism with a count of 2.
But that does not behave the way I want.
What I want is to run the first job with the kubectl port-forward command and keep it running while the second job runs. After the second job's Run script task finishes, the first job can end.
May I know in Azure DevOps is there a way to achieve this?
Thank you so much.
The easiest would actually be to use separate stages. But if you want to use a single stage, you can do it as follows:
Define a variable like this:
Configure parallelism on the job:
And then define a custom condition on the tasks:
One task should have eq(variables['Script'], 'one') and the other eq(variables['Script'], 'two')
You will get two agents running your job, but each agent will actually execute only one of the tasks.
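If you were doing this in a YAML pipeline instead of the classic editor, the same idea looks roughly like the sketch below, where the matrix strategy plays the role of the parallelism setting; the commands are placeholders:

```yaml
jobs:
- job: run_in_parallel
  pool:
    vmImage: ubuntu-latest
  strategy:
    matrix:       # spawns one copy of the job per entry, each on its own agent
      one:
        Script: one
      two:
        Script: two
  steps:
  # Runs only in the copy where Script == one (e.g. the kubectl port-forward).
  - script: echo "long-running command"
    condition: eq(variables['Script'], 'one')
  # Runs only in the copy where Script == two.
  - script: echo "second command"
    condition: eq(variables['Script'], 'two')
```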
I'm using Azure DevOps on a vs2017-win2016 build agent to provision some infrastructure using Terraform.
What I want to know is: is it possible to pass the Terraform output of a host's dynamically assigned IP address to a second job running on a different build agent?
I'm able to assign these to build variables in the first job:
BASTION_PRIV_IP=x.x.x.x
BASTION_PUB_IP=1.1.1.1
But I am unable to get these variables to be consumed by the second build agent, which runs ubuntu-16.04.
I am able to pass any statically defined parameters, like the Azure resource group name that I define before the job starts; it's just the dynamically assigned ones that don't come through.
This is pretty easily done when you are using the YAML based builds.
It's important to know that variables are only available within the scope of current job by default.
However you can set a variable as an output variable for your job.
This output variable can then be mapped to a variable within second job (do note that you need to set the first job as a dependency for the second job).
Please see the following link for an example of how to get this to work
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#set-a-multi-job-output-variable
It may also be doable in the visual designer type of build, but I couldn't get that to work in the quick test I did; maybe you can get something working inspired by the linked example.
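As a hedged sketch of how that could look for the Terraform case: the step and job names, the bastion_public_ip output name, and the pool images are assumptions, and terraform output -raw requires a reasonably recent Terraform version:

```yaml
jobs:
- job: Provision
  pool:
    vmImage: windows-2019          # illustrative; any agent that ran Terraform works
  steps:
  # Read the Terraform output and publish it as a job output variable.
  - bash: |
      BASTION_PUB_IP=$(terraform output -raw bastion_public_ip)
      echo "##vso[task.setvariable variable=BASTION_PUB_IP;isOutput=true]$BASTION_PUB_IP"
    name: tfOutputs

- job: Configure
  dependsOn: Provision
  pool:
    vmImage: ubuntu-16.04
  variables:
    # Map the output variable from the Provision job into this job.
    BASTION_PUB_IP: $[ dependencies.Provision.outputs['tfOutputs.BASTION_PUB_IP'] ]
  steps:
  - bash: echo "Bastion public IP is $(BASTION_PUB_IP)"
```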
I have a job in Rundeck with many tasks within, but when some task fails I have to duplicate the job, remove all the other tasks, save it, and then run this new reduced copy of my original job.
Is there a way to run only specific tasks without having to do all this workaround?
Thanks in advance.
AFAIK there is no way to do that.
As a workaround, you can simply add an option for every step in your Rundeck job. For instance, if you have 3 script steps in your job, you can add 3 options named skip_step_1, skip_step_2, and skip_step_3, then assign true to the ones that have finished successfully and false to the one that failed in the first execution. For every script step, you can then add a condition deciding whether to run it or not (see the sketch below).
A similar feature request has already been proposed to the Rundeck team:
Optionally execute workflow step based on job options
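A hedged sketch of that workaround in Rundeck's YAML job-definition format; the option names and step contents are illustrative, and each inline script skips itself by checking the corresponding RD_OPTION_* environment variable:

```yaml
# Hypothetical job excerpt (Rundeck YAML export format).
- name: my_job
  options:
  - name: skip_step_1
    value: 'false'
  - name: skip_step_2
    value: 'false'
  sequence:
    keepgoing: false
    commands:
    - script: |-
        #!/bin/bash
        # Skip this step when the first execution already completed it.
        [ "$RD_OPTION_SKIP_STEP_1" = "true" ] && exit 0
        echo "running step 1"
    - script: |-
        #!/bin/bash
        [ "$RD_OPTION_SKIP_STEP_2" = "true" ] && exit 0
        echo "running step 2"
```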