I'm trying to follow the instructions provided at https://stackoverflow.com/a/61802154 to pass output from one job as input into another job.
Job1 sets up the k/v data
- defaultTab: output
description: ''
executionEnabled: true
id: b6656d3b-2b32-4554-b224-52bd3702c305
loglevel: INFO
name: job1
nodeFilterEditable: false
nodefilters:
dispatch:
excludePrecedence: true
keepgoing: false
rankOrder: ascending
successOnEmptyNodeFilter: false
threadcount: '1'
filter: 'name: rdnode01'
nodesSelectedByDefault: true
plugins:
ExecutionLifecycle: null
scheduleEnabled: true
sequence:
commands:
- description: output k/v
exec: echo RUNDECK:DATA:MYNUM=123
- description: test k/v
exec: echo ${data.MYNUM}
keepgoing: false
pluginConfig:
LogFilter:
- config:
invalidKeyPattern: \s|\$|\{|\}|\\
logData: 'true'
regex: ^RUNDECK:DATA:\s*([^\s]+?)\s*=\s*(.+)$
replaceFilteredResult: 'false'
type: key-value-data
strategy: node-first
uuid: b6656d3b-2b32-4554-b224-52bd3702c305
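For reference, the key-value-data log filter above simply scans each step's log output for lines matching that regex; every captured pair then becomes ${data.KEY} for later steps on the same node. A minimal sketch of what a job1 step could emit (the MYHOST key is hypothetical):
#!/bin/bash
# Each matching line is captured by the key-value-data log filter
echo "RUNDECK:DATA:MYNUM=123"            # available afterwards as ${data.MYNUM}
echo "RUNDECK:DATA:MYHOST=$(hostname)"   # values may be computed at runtime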
Job2 will output that k/v data
- defaultTab: output
description: ''
executionEnabled: true
id: c069e7d3-2d1f-46f2-a4d8-15eb19761daf
loglevel: INFO
name: job2
nodeFilterEditable: false
nodefilters:
dispatch:
excludePrecedence: true
keepgoing: false
rankOrder: ascending
successOnEmptyNodeFilter: false
threadcount: '1'
filter: 'name: rdnode01'
nodesSelectedByDefault: true
options:
- name: option_for_receive
plugins:
ExecutionLifecycle: null
scheduleEnabled: true
sequence:
commands:
- exec: echo ${option.option_for_receive}
keepgoing: false
strategy: node-first
uuid: c069e7d3-2d1f-46f2-a4d8-15eb19761daf
Wrapper runs the job references as node steps and passes the data from job1 to job2
- defaultTab: output
description: ''
executionEnabled: true
id: 5a62cabf-ffc2-45d1-827b-156f4134a082
loglevel: INFO
name: wrapper job
nodeFilterEditable: false
nodefilters:
dispatch:
excludePrecedence: true
keepgoing: false
rankOrder: ascending
successOnEmptyNodeFilter: false
threadcount: '1'
filter: 'name: rdnode01'
nodesSelectedByDefault: true
plugins:
ExecutionLifecycle: null
scheduleEnabled: true
sequence:
commands:
- description: job1
jobref:
childNodes: true
group: ''
name: job1
nodeStep: 'true'
uuid: b6656d3b-2b32-4554-b224-52bd3702c305
- description: job2
jobref:
args: -option_for_receive ${data.MYNUM}
childNodes: true
group: ''
name: job2
nodeStep: 'true'
uuid: c069e7d3-2d1f-46f2-a4d8-15eb19761daf
keepgoing: false
strategy: node-first
uuid: 5a62cabf-ffc2-45d1-827b-156f4134a082
This is the formatted text from the execution log:
11:26:39 [rundeck#rdnode01 1#node=rdnode01/1][NORMAL] RUNDECK:DATA:MYNUM=123
11:26:40 [rundeck#rdnode01 1#node=rdnode01/1][NORMAL] {"MYNUM":"123"}
11:26:40 [rundeck#rdnode01 1#node=rdnode01/2][NORMAL] 123
11:26:41 [rundeck#rdnode01 2#node=rdnode01/1][NORMAL] '${data.MYNUM}'
This is what it looks like on the screen: job2 outputs the literal string '${data.MYNUM}' instead of the actual value. So I think there's a syntax issue somewhere.
The data values are generated inside the first job's own execution context, so the "Wrapper Job" (the parent job, in Rundeck terminology) never sees that data variable; it doesn't exist in the parent's context.
If you want to pass that data value to another job, call the second job from the first one in the following way (as a Workflow Node Step):
JobA:
- defaultTab: output
description: ''
executionEnabled: true
id: b6656d3b-2b32-4554-b224-52bd3702c305
loglevel: INFO
name: job1
nodeFilterEditable: false
nodefilters:
dispatch:
excludePrecedence: true
keepgoing: false
rankOrder: ascending
successOnEmptyNodeFilter: false
threadcount: '1'
filter: 'name: localhost '
nodesSelectedByDefault: true
plugins:
ExecutionLifecycle: null
scheduleEnabled: true
sequence:
commands:
- description: output k/v
exec: echo RUNDECK:DATA:MYNUM=123
- description: test k/v
exec: echo ${data.MYNUM}
- jobref:
args: -option_for_receive ${data.MYNUM}
childNodes: true
group: ''
name: job2
nodeStep: 'true'
uuid: c069e7d3-2d1f-46f2-a4d8-15eb19761daf
keepgoing: false
pluginConfig:
LogFilter:
- config:
invalidKeyPattern: \s|\$|\{|\}|\\
logData: 'true'
regex: ^RUNDECK:DATA:\s*([^\s]+?)\s*=\s*(.+)$
replaceFilteredResult: 'false'
type: key-value-data
strategy: node-first
uuid: b6656d3b-2b32-4554-b224-52bd3702c305
JobB:
- defaultTab: output
description: ''
executionEnabled: true
id: c069e7d3-2d1f-46f2-a4d8-15eb19761daf
loglevel: INFO
name: job2
nodeFilterEditable: false
nodefilters:
dispatch:
excludePrecedence: true
keepgoing: false
rankOrder: ascending
successOnEmptyNodeFilter: false
threadcount: '1'
filter: 'name: localhost '
nodesSelectedByDefault: true
options:
- name: option_for_receive
plugins:
ExecutionLifecycle: null
scheduleEnabled: true
sequence:
commands:
- exec: echo ${option.option_for_receive}
keepgoing: false
strategy: node-first
uuid: c069e7d3-2d1f-46f2-a4d8-15eb19761daf
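As a usage note, once job1 passes -option_for_receive ${data.MYNUM}, the value arrives in job2 as an ordinary job option, so a script step there can read it in either of the usual ways (a sketch; Rundeck exposes job options to script steps as RD_OPTION_* environment variables):
#!/bin/bash
# Both lines should print 123 when job2 is called from job1 as shown above
echo "via token:       @option.option_for_receive@"
echo "via environment: $RD_OPTION_OPTION_FOR_RECEIVE"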
I'm breaking my head over a problem with a referenced job in my workflow, and I'm not sure this is even possible with Rundeck:
I have a job that calls a second one. I want to run this second job for all node names, but only on one server.
Maybe this example makes it easier to understand:
Workflow: Select Nodes
Referenced job 1
NodeA > Website www.exempleA.com < restore DB with default value
NodeB > Website www.exempleB.com < restore DB with default value
NodeC > Website www.exempleC.com < restore DB with default value
NodeD > Website www.exempleD.com < restore DB with default value
This runs perfectly.
Referenced Job 2: Uses the Cypress server to test the websites. Its node filter contains only the Cypress server.
NodeE > Cypress -url https://${node.name} = NodeA > www.exempleA.com
NodeE > Cypress -url https://${node.name} = NodeB > www.exempleB.com
NodeE > Cypress -url https://${node.name} = NodeC > www.exempleC.com
NodeE > Cypress -url https://${node.name} = NodeD > www.exempleD.com
So I want to make a loop with a referenced job that executes on only one server, but once for every node name.
Does anyone know if this configuration is possible with Rundeck?
Thank you for your knowledge.
Erwan
An excellent way to do that is to use a parent job option in two ways: first, as the node filter of the first child job (to dispatch to the remote nodes), and second, as an argument to the second child job (to build an array and run the Cypress command in a bash loop).
Here is an example to test.
Parent Job. Contains an option that should be used for child jobs.
- defaultTab: nodes
description: ''
executionEnabled: true
id: db051872-7d5f-4506-bd49-17719af9785b
loglevel: INFO
name: ParentJob
nodeFilterEditable: false
options:
- name: nodes
value: node00 node01 node02
plugins:
ExecutionLifecycle: null
scheduleEnabled: true
sequence:
commands:
- jobref:
args: -myfilter ${option.nodes}
group: ''
name: FirstChildJob
nodeStep: 'true'
uuid: f7271fc4-3ccb-41a5-9de4-a12e65093a3d
- jobref:
args: -myarray ${option.nodes}
childNodes: true
group: ''
name: SecondChildJob
nodeStep: 'true'
uuid: 1b8b1d82-a8dc-4949-9245-e973a8c37f5a
keepgoing: false
strategy: sequential
uuid: db051872-7d5f-4506-bd49-17719af9785b
First Child Job. Takes the parent job's option and uses it as the node filter; the filter references the job's own option, ${option.myfilter}.
- defaultTab: nodes
description: ''
executionEnabled: true
id: f7271fc4-3ccb-41a5-9de4-a12e65093a3d
loglevel: INFO
name: FirstChildJob
nodeFilterEditable: false
nodefilters:
dispatch:
excludePrecedence: true
keepgoing: false
rankOrder: ascending
successOnEmptyNodeFilter: false
threadcount: '1'
filter: ${option.myfilter}
nodesSelectedByDefault: true
options:
- name: myfilter
plugins:
ExecutionLifecycle: null
scheduleEnabled: true
sequence:
commands:
- exec: echo "hi"
keepgoing: false
strategy: node-first
uuid: f7271fc4-3ccb-41a5-9de4-a12e65093a3d
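A side note on this first child job: because its dispatch targets are just the option value, the whole chain can be started with a different node list without editing any job definition (a sketch using the Rundeck CLI; the project name is a placeholder):
#!/bin/bash
# Override the parent's default "node00 node01 node02" node list at run time
rd run --project MyProject --job "ParentJob" -- -nodes "node00 node02"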
Second Child Job. Contains an inline-script step that takes the parent job's option as an array and iterates over it in a bash loop.
- defaultTab: nodes
description: ''
executionEnabled: true
id: 1b8b1d82-a8dc-4949-9245-e973a8c37f5a
loglevel: INFO
name: SecondChildJob
nodeFilterEditable: false
nodefilters:
dispatch:
excludePrecedence: true
keepgoing: false
rankOrder: ascending
successOnEmptyNodeFilter: false
threadcount: '1'
filter: 'name: node02'
nodesSelectedByDefault: true
options:
- name: myarray
plugins:
ExecutionLifecycle: null
scheduleEnabled: true
sequence:
commands:
- script: "#!/bin/bash\narray=(#option.myarray#)\nfor i in \"${array[#]}\"\ndo\n\
\techo \"execute $i\"\ndone"
keepgoing: false
strategy: node-first
uuid: 1b8b1d82-a8dc-4949-9245-e973a8c37f5a
Here is the loop script (inside the second child job as inline-script):
#!/bin/bash
array=(@option.myarray@)
for i in "${array[@]}"
do
echo "$i"
done
And here you can see the result.
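Mapping this back to the original question: NodeE stays the only node in the second child job's filter, and the loop body runs the Cypress test once per site (a sketch; the cypress invocation is illustrative, not a tested command line):
#!/bin/bash
# One Cypress run per website, all executed on the single Cypress server
array=(@option.myarray@)
for node in "${array[@]}"; do
    echo "testing https://${node}"
    # cypress run --env url="https://${node}"
done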
I have three jobs in the same project, each with its own node filter, and the matched nodes do not overlap between the jobs. I want to create a parent job that runs these three jobs instead of me running them individually. How do I configure the nodes on this parent job? Each step has its own list of nodes.
Nothing is needed in the parent job; just edit each Job Reference step and check the "Use referenced job's nodes." checkbox (childNodes: true in the YAML definition).
A basic example:
Parent Job:
- defaultTab: nodes
description: ''
executionEnabled: true
id: a0d5834d-4b62-44d9-bd1e-f00a6befb990
loglevel: INFO
name: ParentJob
nodeFilterEditable: false
plugins:
ExecutionLifecycle: null
scheduleEnabled: true
sequence:
commands:
- jobref:
childNodes: true
group: ''
name: JobA
uuid: 63fb953c-53e0-4233-ba28-eabd69a0e41c
- jobref:
childNodes: true
group: ''
name: JobB
uuid: 8936db73-9bd4-4912-ae07-c5fc8500ee9d
- jobref:
childNodes: true
group: ''
name: JobC
uuid: 16fa66d3-fbda-439a-9a2b-14f90e99f72b
keepgoing: false
strategy: node-first
uuid: a0d5834d-4b62-44d9-bd1e-f00a6befb990
JobA:
- defaultTab: nodes
description: ''
executionEnabled: true
id: 63fb953c-53e0-4233-ba28-eabd69a0e41c
loglevel: INFO
name: JobA
nodeFilterEditable: false
nodefilters:
dispatch:
excludePrecedence: true
keepgoing: false
rankOrder: ascending
successOnEmptyNodeFilter: false
threadcount: '1'
filter: 'name: node00 '
nodesSelectedByDefault: true
plugins:
ExecutionLifecycle: null
scheduleEnabled: true
sequence:
commands:
- exec: hostname
keepgoing: false
strategy: node-first
uuid: 63fb953c-53e0-4233-ba28-eabd69a0e41c
JobB:
- defaultTab: nodes
description: ''
executionEnabled: true
id: 8936db73-9bd4-4912-ae07-c5fc8500ee9d
loglevel: INFO
name: JobB
nodeFilterEditable: false
nodefilters:
dispatch:
excludePrecedence: true
keepgoing: false
rankOrder: ascending
successOnEmptyNodeFilter: false
threadcount: '1'
filter: 'name: node01'
nodesSelectedByDefault: true
plugins:
ExecutionLifecycle: null
scheduleEnabled: true
sequence:
commands:
- exec: hostname
keepgoing: false
strategy: node-first
uuid: 8936db73-9bd4-4912-ae07-c5fc8500ee9d
JobC:
- defaultTab: nodes
description: ''
executionEnabled: true
id: 16fa66d3-fbda-439a-9a2b-14f90e99f72b
loglevel: INFO
name: JobC
nodeFilterEditable: false
nodefilters:
dispatch:
excludePrecedence: true
keepgoing: false
rankOrder: ascending
successOnEmptyNodeFilter: false
threadcount: '1'
filter: 'name: node02'
nodesSelectedByDefault: true
plugins:
ExecutionLifecycle: null
scheduleEnabled: true
sequence:
commands:
- exec: hostname
keepgoing: false
strategy: node-first
uuid: 16fa66d3-fbda-439a-9a2b-14f90e99f72b
Check the Result.
I am trying to set my build number for my Azure DevOps Pipeline to my MajorMinorPatch version from gitversion. I have the following in my YAML for my pipeline:
- task: GitVersion@5
inputs:
preferBundledVersion: false
updateAssemblyInfo: true
- task: PowerShell@2
inputs:
targetType: 'inline'
script: |
$versionInfo = '$($GITVERSION_MAJORMINORPATCH)'
Write-Host("##vso[task.setvariable variable=Version;]$versionInfo")
- script: echo %Action%%BuildVersion%
displayName: 'Set build version'
env:
Action: '##vso[build.updatebuildnumber]'
BuildVersion: '$env:Version'
The problem is that when I run my pipeline, I get a pipeline name like: 0.1.0-alpha.70
I am not sure why I get the -alpha.70. I think I know what those parts mean, but I don't expect to see them in my Version string. When I run gitversion locally, my MajorMinorPatch string is 0.1.0, and that is all I want to see. Can anyone help me get just that information?
EDIT: For anyone who is curious, I am including my GitVersion.yml here; it is pretty much the standard config:
assembly-versioning-scheme: MajorMinorPatch
assembly-file-versioning-scheme: MajorMinorPatchTag
mode: ContinuousDeployment
tag-prefix: '[vV]'
continuous-delivery-fallback-tag: ''
major-version-bump-message: '\+semver:\s?(breaking|major)'
minor-version-bump-message: '\+semver:\s?(feature|minor)'
patch-version-bump-message: '\+semver:\s?(fix|patch)'
no-bump-message: '\+semver:\s?(none|skip)'
legacy-semver-padding: 4
build-metadata-padding: 4
commits-since-version-source-padding: 4
commit-message-incrementing: Enabled
branches:
develop:
mode: ContinuousDeployment
tag: alpha
increment: Minor
prevent-increment-of-merged-branch-version: false
track-merge-target: true
regex: ^dev(elop)?(ment)?$
source-branches: []
tracks-release-branches: true
is-release-branch: false
is-mainline: false
pre-release-weight: 0
master:
mode: ContinuousDeployment
tag: ''
increment: Patch
prevent-increment-of-merged-branch-version: true
track-merge-target: false
regex: ^master$
source-branches:
- develop
- release
tracks-release-branches: false
is-release-branch: false
is-mainline: true
pre-release-weight: 55000
release:
mode: ContinuousDeployment
tag: beta
increment: None
prevent-increment-of-merged-branch-version: true
track-merge-target: false
regex: ^releases?[/-]
source-branches:
- develop
- master
- support
- release
tracks-release-branches: false
is-release-branch: true
is-mainline: false
pre-release-weight: 30000
feature:
mode: ContinuousDeployment
tag: useBranchName
increment: Inherit
prevent-increment-of-merged-branch-version: false
track-merge-target: false
regex: ^features?[/-]
source-branches:
- develop
- master
- release
- feature
- support
- hotfix
tracks-release-branches: false
is-release-branch: false
is-mainline: false
pre-release-weight: 30000
pull-request:
mode: ContinuousDeployment
tag: PullRequest
increment: Inherit
prevent-increment-of-merged-branch-version: false
tag-number-pattern: '[/-](?<number>\d+)'
track-merge-target: false
regex: ^(pull|pull\-requests|pr)[/-]
source-branches:
- develop
- master
- release
- feature
- support
- hotfix
tracks-release-branches: false
is-release-branch: false
is-mainline: false
pre-release-weight: 30000
hotfix:
mode: ContinuousDeployment
tag: beta
increment: Patch
prevent-increment-of-merged-branch-version: false
track-merge-target: false
regex: ^hotfix(es)?[/-]
source-branches:
- develop
- master
- support
tracks-release-branches: false
is-release-branch: false
is-mainline: false
pre-release-weight: 30000
support:
mode: ContinuousDeployment
tag: ''
increment: Patch
prevent-increment-of-merged-branch-version: true
track-merge-target: false
regex: ^support[/-]
source-branches:
- master
tracks-release-branches: false
is-release-branch: false
is-mainline: true
pre-release-weight: 55000
ignore:
sha: []
commit-date-format: yyyy-MM-dd
merge-message-formats: {}
Hopefully that helps.
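As a side note, the value in question can also be checked locally, since the GitVersion CLI prints the same fields as a JSON object (a sketch, assuming gitversion and jq are on the PATH):
#!/bin/bash
# MajorMinorPatch excludes any pre-release suffix such as -alpha.70
gitversion | jq -r '.MajorMinorPatch'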
Apparently, what I was attempting to do really isn't the way to pass along the version number. Instead, I am now using a transform to update the value in my JSON configuration, and that file is published as the build artifact. Here is the current iteration of my azure-pipelines.yml:
name: $(date:yyyyMMdd)$(rev:.r)-$(Build.SourceBranchName)-$(GitVersion.SemVer)
trigger:
- master
- develop
stages:
- stage: DEV
displayName: 'DEV'
condition: and(always(), contains(variables['Build.SourceBranch'], 'develop'))
pool:
vmImage: 'ubuntu-latest'
variables:
contentVersion: $(GitVersion.AssemblySemVer)
parameters.semVer.value: $(GitVersion.AssemblySemVer)
parameters.resourceGroupName.value: 'rgName-DEV'
jobs:
- job: DevResourceGroup
steps:
- task: GitVersion@5
inputs:
preferBundledVersion: false
updateAssemblyInfo: true
configFilePath: './GitVersion.yml'
- script: echo %Action%%BuildVersion%
displayName: 'Set Build Number to Semantic Version'
env:
Action: '##vso[build.updatebuildnumber]'
BuildVersion: '$(GitVersion.SemVer)'
- task: FileTransform@2
inputs:
folderPath: '$(Build.SourcesDirectory)'
xmlTransformationRules:
jsonTargetFiles: './ResourceGroup/resourceGroup.parameters.json'
- task: AzureResourceManagerTemplateDeployment@3
inputs:
deploymentScope: 'Subscription'
azureResourceManagerConnection: 'ConnectionName'
subscriptionId: 'GUID'
location: 'East US'
templateLocation: 'Linked artifact'
csmFile: '$(Build.SourcesDirectory)/ResourceGroup/resourceGroup.json'
csmParametersFile: '$(Build.SourcesDirectory)/ResourceGroup/resourceGroup.parameters.json'
deploymentMode: 'Incremental'
- task: PublishBuildArtifacts@1
inputs:
PathtoPublish: '$(Build.SourcesDirectory)'
ArtifactName: 'develop'
publishLocation: 'Container'
- stage: PROD
displayName: 'PROD'
condition: and(always(), contains(variables['Build.SourceBranch'],'master'))
pool:
vmImage: 'ubuntu-latest'
variables:
contentVersion: $(GitVersion.AssemblySemVer)
parameters.semVer.value: $(GitVersion.AssemblySemVer)
jobs:
- job: ProdResourceGroup
steps:
- task: GitVersion@5
inputs:
preferBundledVersion: false
updateAssemblyInfo: true
configFilePath: './GitVersion.yml'
- script: echo %Action%%BuildVersion%
displayName: 'Set Build Number to Semantic Version'
env:
Action: '##vso[build.updatebuildnumber]'
BuildVersion: '$(GitVersion.SemVer)'
- task: FileTransform@2
inputs:
folderPath: '$(Build.SourcesDirectory)'
xmlTransformationRules:
jsonTargetFiles: './ResourceGroup/resourceGroup.parameters.json'
- task: AzureResourceManagerTemplateDeployment@3
inputs:
deploymentScope: 'Subscription'
azureResourceManagerConnection: 'ConnectionName'
subscriptionId: 'GUID'
location: 'East US'
templateLocation: 'Linked artifact'
csmFile: '$(Build.SourcesDirectory)/ResourceGroup/resourceGroup.json'
csmParametersFile: '$(Build.SourcesDirectory)/ResourceGroup/resourceGroup.parameters.json'
deploymentMode: 'Incremental'
- task: PublishBuildArtifacts@1
inputs:
PathtoPublish: '$(Build.SourcesDirectory)'
ArtifactName: 'master'
publishLocation: 'Container'
So, I take the version I want, write it to the JSON file, and it will be available in my release pipeline.
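To sanity-check the transform, the rewritten value can be inspected right after the FileTransform@2 step, since the pipeline variable name parameters.semVer.value maps onto the JSON path of the same name (a sketch using jq; the file path is the one from the pipeline above):
#!/bin/bash
# Should print the AssemblySemVer that FileTransform@2 wrote into the file
jq -r '.parameters.semVer.value' ./ResourceGroup/resourceGroup.parameters.json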
After testing on my test project, I found that $($GITVERSION_MAJORMINORPATCH) cannot get the version value, and $env:Version cannot refer to the variable Version. I made the changes below to your build YAML file, and then it worked just as expected.
- task: PowerShell@2
inputs:
targetType: 'inline'
script: |
$versionInfo = '$(GitVersion.MajorMinorPatch)'
Write-Host("##vso[task.setvariable variable=Version;]$versionInfo")
Write-Host($versionInfo)
- script: echo %Action%%BuildVersion%
displayName: 'Set build version'
env:
Action: '##vso[build.updatebuildnumber]'
BuildVersion: '$(Version)'
In the PowerShell task I used $(GitVersion.MajorMinorPatch) to reference the GitVersion output, and $(Version) to get the version string for BuildVersion.
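As a design note, the intermediate Version variable isn't strictly required: logging commands are just lines written to stdout, so a single script step can update the build number directly (a minimal sketch; the macro syntax is expanded by the agent before the script runs):
#!/bin/bash
# Azure Pipelines replaces $(GitVersion.MajorMinorPatch) before bash sees this line
echo "##vso[build.updatebuildnumber]$(GitVersion.MajorMinorPatch)"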
We have a Rundeck (3.1.2-20190927) job which triggers multiple other jobs in Rundeck. In /etc/rundeck/framework.properties, global variables are defined that are used in the jobs; they build the URL of our Icinga instance so Rundeck can submit the job result to the monitoring. The variable is used in the notifications tab (edit job -> notification) of every single job.
When the meta job has run, it submits the result successfully to the monitoring. The same applies to the 'sub' jobs if you trigger them manually. BUT if they are triggered by the meta job, they throw an error:
Error calling the endpoint: Illegal character in authority at index 8: https://icinga-master-${globals.environment}.some.domain.
It looks like the global variable is not resolved correctly when the jobs are triggered by the meta job. Strangely, other meta jobs don't have this problem, and I can't find any configuration differences. Does anybody have an idea what the cause could be?
Thanks for any help!
Update: Here's my job definition
- defaultTab: output
description: "Call Patchday Jobs in a row \n**Attention! This will reboot the servers**"
executionEnabled: true
group: CloudServices/SomeSoftWare
id: 5cf2966c-3e5f-4a32-8cce-b3e82b6fd036
loglevel: INFO
multipleExecutions: true
name: Patchday SomeSoftWare - Meta Job
nodeFilterEditable: false
nodefilters:
dispatch:
excludePrecedence: true
keepgoing: false
rankOrder: ascending
successOnEmptyNodeFilter: false
threadcount: '1'
filter: 'tags: role_SomeSoftWare'
nodesSelectedByDefault: false
notification:
onfailure:
plugin:
configuration:
_noSSLVerification: ''
_printResponseToFile: ''
_proxySettings: ''
authentication: Basic
body: |-
{
"type": "Service",
"filter": "service.name==\"Rundeck-Job - ${job.name}\"",
"exit_status": 2,
"plugin_output": "'${job.name}' failed"
}
contentType: application/json
file: ''
headers: 'Accept: application/json'
method: POST
noSSLVerification: 'true'
oauthTokenEndpoint: ''
oauthValidateEndpoint: ''
password: ******
proxyIP: ''
proxyPort: ''
remoteUrl: https://icinga-master-${globals.environment}.some.domain:5665/v1/actions/process-check-result
timeout: '30000'
username: rundeck_process-check-result
type: HttpNotification
onsuccess:
plugin:
configuration:
_noSSLVerification: ''
_printResponseToFile: ''
_proxySettings: ''
authentication: Basic
body: |-
{
"type": "Service",
"filter": "service.name==\"Rundeck-Job - ${job.name}\"",
"exit_status": 0,
"plugin_output": "'${job.name}' succeeded"
}
contentType: application/json
file: ''
headers: 'Accept: application/json'
method: POST
noSSLVerification: 'true'
oauthTokenEndpoint: ''
oauthValidateEndpoint: ''
password: ******
proxyIP: ''
proxyPort: ''
remoteUrl: https://icinga-master-${globals.environment}.some.domain:5665/v1/actions/process-check-result
timeout: '30000'
username: rundeck_process-check-result
type: HttpNotification
notifyAvgDurationThreshold: null
options:
- description: Addition paramater to give to the ansible-playbook call
name: AdditionalParameter
scheduleEnabled: true
sequence:
commands:
- jobref:
args: 'branch: ${globals.defaultbranch}'
group: Infrastructure
name: Icinga Service Downtime
nodefilters:
dispatch:
nodeIntersect: true
uuid: 6eec5749-ef35-481e-aea8-674f233c32ac
- description: Pause Bamboo Server
jobref:
args: -branch ${globals.defaultbranch} -bamboo_command pause
group: Infrastructure
name: Bamboo Control
nodefilters:
dispatch:
nodeIntersect: true
uuid: 87bc7f1c-d133-4d7e-9df9-2b40fb935fd4
- configuration:
ansible-become: 'false'
ansible-disable-limit: 'false'
ansible-playbook-inline: |
- name: +++ Stop Tomcat ++++
hosts: tag_role_SomeSoftWare
gather_facts: true
remote_user: ec2-user
become: yes
tasks:
- name: stop tomcat
service:
name: tomcat
state: stopped
nodeStep: false
type: com.batix.rundeck.plugins.AnsiblePlaybookInlineWorkflowStep
- description: SomeSoftWare Update
jobref:
args: -branch ${globals.defaultbranch}
group: Infrastructure
name: SomeSoftWare Server - Setup
nodefilters:
dispatch:
nodeIntersect: true
uuid: f01a4483-d8b2-43cf-99fd-6a610d25c3a4
- description: Install/Update fs-cli
jobref:
args: -branch ${globals.defaultbranch}
group: Infrastructure
name: firstspirit-cli - Setup
nodefilters:
dispatch:
nodeIntersect: true
uuid: c7c54433-be96-4d85-b2c1-32d0534b5c60
- description: Install/Update Modules
jobref:
args: -branch ${globals.defaultbranch}
group: Infrastructure
name: SomeSoftWare Modules - Setup
nodefilters:
dispatch:
nodeIntersect: true
uuid: f7a8929b-2bc3-4abe-8c69-e0d2acf62159
- description: restart SomeSoftWare
exec: sudo service SomeSoftWare restart
- description: Resume Bamboo Server
jobref:
args: -branch ${globals.defaultbranch} -bamboo_command resume
group: Infrastructure
name: Bamboo Control
nodefilters:
dispatch:
nodeIntersect: true
uuid: 87bc7f1c-d133-4d7e-9df9-2b40fb935fd4
keepgoing: false
strategy: node-first
timeZone: Europe/Berlin
uuid: 5cf2966c-3e5f-4a32-8cce-b3e82b6fd036
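One way to narrow this down is to echo the global from a step in both the meta job and a sub job; if expansion fails, the literal token shows up just like in the error above (a sketch of an inline-script step, using Rundeck's script token syntax):
#!/bin/bash
# Prints the resolved environment name, or the literal token if expansion failed
echo "globals.environment = @globals.environment@"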
I am trying to deploy Spinnaker locally with minikube and minio. I have everything set up; my kubernetes cluster is up and running with a composed app on it, details below:
| NAME | READY | UP-TO-DATE | AVAILABLE | AGE |
|---------------------------|-------|------------|-----------|-----|
| deployment.extensions/api | 1/1 | 1 | 1 | 18s |
| deployment.extensions/db | 1/1 | 1 | 1 | 18s |
I configured both my kubernetes provider and my storage in my hal config (pasted below as well). When I try to deploy using "sudo hal deploy apply", I get the following error:
WARNING You have not specified a Kubernetes context in your halconfig, Spinnaker will use "minikube" instead. ? We recommend
explicitly setting a context in your halconfig, to ensure changes to
your kubeconfig won't break your deployment.
! ERROR Unable to communicate with your Kubernetes cluster: An error
has occurred.. ? Unable to authenticate with your Kubernetes cluster.
Try using kubectl to verify your credentials.
Problems in default.security:
WARNING Your UI or API domain does not have override base URLs set even though your Spinnaker deployment is a Distributed deployment on a
remote cloud provider. As a result, you will need to open SSH tunnels
against that deployment to access Spinnaker. ? We recommend that you
instead configure an authentication mechanism (OAuth2, SAML2, or
x509) to make it easier to access Spinnaker securely, and then
register the intended Domain and IP addresses that your publicly
facing services will be using.
Failed to prep Spinnaker deployment
Here is my hal config:
currentDeployment: default
deploymentConfigurations:
- name: default
version: ''
providers:
appengine:
enabled: false
accounts: []
aws:
enabled: false
accounts: []
bakeryDefaults:
baseImages: []
defaultKeyPairTemplate: '{{name}}-keypair'
defaultRegions:
- name: us-west-2
defaults:
iamRole: BaseIAMRole
ecs:
enabled: false
accounts: []
azure:
enabled: false
accounts: []
bakeryDefaults:
templateFile: azure-linux.json
baseImages: []
dcos:
enabled: false
accounts: []
clusters: []
dockerRegistry:
enabled: true
accounts:
- name: my-docker-registry
requiredGroupMembership: []
providerVersion: V1
permissions: {}
address: https://index.docker.io
username: <sensitive> (this is my actual username)
password: <sensitive> (this is my actual password)
email: fake.email@spinnaker.io
cacheIntervalSeconds: 30
clientTimeoutMillis: 60000
cacheThreads: 1
paginateSize: 100
sortTagsByDate: false
trackDigests: false
insecureRegistry: false
repositories:
- ericstoppel1/atixlabs
primaryAccount: my-docker-registry
google:
enabled: false
accounts: []
bakeryDefaults:
templateFile: gce.json
baseImages: []
zone: us-central1-f
network: default
useInternalIp: false
kubernetes:
enabled: true
accounts:
- name: my-k8s-account
requiredGroupMembership: []
providerVersion: V1
permissions: {}
dockerRegistries:
- accountName: my-docker-registry
namespaces: []
configureImagePullSecrets: true
cacheThreads: 1
namespaces: []
omitNamespaces: []
kinds: []
omitKinds: []
customResources: []
cachingPolicies: []
kubeconfigFile: /home/osboxes/.kube/config
oAuthScopes: []
onlySpinnakerManaged: false
primaryAccount: my-k8s-account
oracle:
enabled: false
accounts: []
bakeryDefaults:
templateFile: oci.json
baseImages: []
cloudfoundry:
enabled: false
accounts: []
deploymentEnvironment:
size: SMALL
type: Distributed
accountName: my-k8s-account
updateVersions: true
consul:
enabled: false
vault:
enabled: false
customSizing: {}
sidecars: {}
initContainers: {}
hostAliases: {}
affinity: {}
nodeSelectors: {}
gitConfig:
upstreamUser: spinnaker
livenessProbeConfig:
enabled: false
haServices:
clouddriver:
enabled: false
disableClouddriverRoDeck: false
echo:
enabled: false
persistentStorage:
persistentStoreType: s3
azs: {}
gcs:
rootFolder: front50
redis: {}
s3:
bucket: spin-763f86d5-10ba-497e-9348-264fc353edec
rootFolder: front50
pathStyleAccess: false
endpoint: https://localhost:9001
accessKeyId: AKIAIOSFODNN7EXAMPLE
secretAccessKey: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
oracle: {}
features:
auth: false
fiat: false
chaos: false
entityTags: false
jobs: false
metricStores:
datadog:
enabled: false
tags: []
prometheus:
enabled: false
add_source_metalabels: true
stackdriver:
enabled: false
period: 30
enabled: false
notifications:
slack:
enabled: false
twilio:
enabled: false
baseUrl: https://api.twilio.com/
timezone: America/Los_Angeles
ci:
jenkins:
enabled: false
masters: []
travis:
enabled: false
masters: []
wercker:
enabled: false
masters: []
concourse:
enabled: false
masters: []
gcb:
enabled: false
accounts: []
repository:
artifactory:
enabled: false
searches: []
security:
apiSecurity:
ssl:
enabled: false
uiSecurity:
ssl:
enabled: false
authn:
oauth2:
enabled: false
client: {}
resource: {}
userInfoMapping: {}
saml:
enabled: false
userAttributeMapping: {}
ldap:
enabled: false
x509:
enabled: false
iap:
enabled: false
enabled: false
authz:
groupMembership:
service: EXTERNAL
google:
roleProviderType: GOOGLE
github:
roleProviderType: GITHUB
file:
roleProviderType: FILE
ldap:
roleProviderType: LDAP
enabled: false
artifacts:
bitbucket:
enabled: false
accounts: []
gcs:
enabled: false
accounts: []
oracle:
enabled: false
accounts: []
github:
enabled: false
accounts: []
gitlab:
enabled: false
accounts: []
http:
enabled: false
accounts: []
helm:
enabled: false
accounts: []
s3:
enabled: false
accounts: []
maven:
enabled: false
accounts: []
templates: []
pubsub:
enabled: false
google:
enabled: false
pubsubType: GOOGLE
subscriptions: []
publishers: []
canary:
enabled: false
serviceIntegrations:
- name: google
enabled: false
accounts: []
gcsEnabled: false
stackdriverEnabled: false
- name: prometheus
enabled: false
accounts: []
- name: datadog
enabled: false
accounts: []
- name: signalfx
enabled: false
accounts: []
- name: aws
enabled: false
accounts: []
s3Enabled: false
reduxLoggerEnabled: true
defaultJudge: NetflixACAJudge-v1.0
stagesEnabled: true
templatesEnabled: true
showAllConfigsEnabled: true
webhook:
trust:
enabled: false
I have my kubernetes config and can access the cluster with it, so separately it all seems to work. What may be wrong?
As per the reported issue:
WARNING You have not specified a Kubernetes context in your halconfig,
Spinnaker will use "minikube" instead.
I don't see any Kubernetes context entry defined in your hal config; see the dedicated chapter on this in the Spinnaker documentation.
Try adding the kubernetes details to the halyard config:
hal config provider kubernetes account add <ACCOUNT>
hal config provider kubernetes enable
This link can be used for reference: https://www.spinnaker.io/reference/halyard/commands/
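A slightly fuller sequence along those lines, pinning the kubeconfig context explicitly as the warning suggests (a sketch; the account name is a placeholder and the flags should be checked against your halyard version):
#!/bin/bash
# Register the account against the current kubectl context, then redeploy
hal config provider kubernetes enable
hal config provider kubernetes account add my-k8s-account \
  --provider-version v1 \
  --context "$(kubectl config current-context)"
hal deploy apply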