I have one project (parent) that triggers a number of child projects (child_1, child_2) that can be built in parallel, plus one more (sub_child) that must be built only after the previous child builds complete successfully.
I wrote the following CI scripts:
parent
variables:
  FLAGGER_REBUILD: "true"
  COMMON_ONLY: "true"

stages:
  - build_parent
  - build_child
  - build_sub_child

build parent:
  stage: build_parent
  retry: 2
  when: on_success
  only:
    - master
  script:
    - echo "PARENT"
    - date

.sub_build_template: &build_conf
  stage: build_child
  when: on_success
  only:
    refs:
      - master
    variables:
      - $COMMON_ONLY == "true"

rebuild child_1:
  <<: *build_conf
  trigger:
    project: ci_test/child_1

rebuild child_2:
  <<: *build_conf
  trigger:
    project: ci_test/child_2

rebuild sub_child:
  stage: build_sub_child
  when: on_success
  only:
    refs:
      - master
    variables:
      - $COMMON_ONLY == "true"
  trigger:
    project: ci_test/sub_child
child_1 and child_2
variables:
  FLAGGER_REBUILD: "false"

stages:
  - build_child_1
  - build_sub_child

build image:
  stage: build_child_1
  retry: 2
  when: on_success
  only:
    - master
  script:
    - echo "CHILD 1"
    - date

rebuild flagger:
  stage: build_sub_child
  when: on_success
  only:
    refs:
      - master
    variables:
      - $FLAGGER_REBUILD == "true"
  trigger:
    project: ci_test/sub_child
sub_child
variables:
  FLAGGER_REBUILD: "false"

stages:
  - build_sub_child

build flagger:
  stage: build_sub_child
  retry: 2
  when: on_success
  script:
    - echo "SUB CHILD"
    - date
GitLab generated the following launch scheme:
(img_1: generated pipeline graph)
But all the child and sub_child jobs launch at the same time, which is not what I want.
So how do I need to change the parent, child, or sub_child scripts to get the correct launch order of the builds?
(img_2, img_3)
I use GitLab version 11.10.8-ee and I am not allowed to upgrade it at the moment. Please help me solve this task.
Thank you.
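One approach that should be available on this version (multi-project trigger jobs arrived in GitLab Premium 11.8, and, if I remember correctly, so did strategy: depend — treat the version availability as an assumption to verify): by default a trigger job succeeds as soon as the downstream pipeline is created, which is why everything launches at once. With strategy: depend the trigger jobs wait for their downstream pipelines, so the build_sub_child stage is held back until both children finish. A minimal sketch against the parent config above:

rebuild child_1:
  <<: *build_conf
  trigger:
    project: ci_test/child_1
    strategy: depend   # job only succeeds when the child_1 pipeline completes

rebuild child_2:
  <<: *build_conf
  trigger:
    project: ci_test/child_2
    strategy: depend   # job only succeeds when the child_2 pipeline completes

rebuild sub_child:
  stage: build_sub_child   # now starts only after both build_child trigger jobs succeed
  trigger:
    project: ci_test/sub_child

With strategy: depend on the two child triggers, the build_child stage finishes only when both downstream pipelines succeed, so the stage ordering in the parent enforces the sequence.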
I tried to create the YAML pipeline config file below. Can I get some help on reducing the number of stages in the YAML pipeline? The goal is to reduce the number of stages per domain.
name: '$(Date:yyyy.MM.dd)$(Rev:.rr)'

trigger:
  branches:
    include:
      - master
      - features/*

pool:
  name: CustomPool

stages:
  - template: templates\build.yaml
  - template: templates\deploy.yaml
    parameters:
      Environment: 'dev'
      IsEnabled: true
      Domain: 'domainX'
      ServerList:
        - name: ServerX
          restartIIS: true
  - template: templates\deploy.yaml
    parameters:
      Environment: 'test'
      IsEnabled: true
      Domain: 'domainY'
      ServerList:
        - name: ServerY
          restartIIS: false
Scenario 1: If you don't want to use stages, you can use only jobs in the main YAML and your templates. Here is a demo you can refer to.
azure-pipelines.yml
trigger:
- none

pool:
  vmImage: ubuntu-latest

jobs:
  - template: JobTemplate1.yml
  - template: JobTemplate2.yml
  - template: JobTemplate3.yml
JobTemplate1.yml
jobs:
  - job: job1_1
    steps:
      - script: echo This is job1_1
  - job: job1_2
    steps:
      - script: echo This is job1_2
In this example, each template has two jobs, so you will see a total of six jobs running.
Scenario 2: If you want to use stages in your main YAML but want to decrease the number of stages in your templates, you can use a single stage with several jobs in each template.
azure-pipelines.yml
trigger:
- none

pool:
  vmImage: ubuntu-latest

stages:
  - template: template1.yml
  - template: template2.yml
  - template: template3.yml
template1.yml
stages:
  - stage: Template1_Stage
    jobs:
      - job: Template1_Stage_job1
        steps:
          - script: echo This is Template1_Stage_job1
      - job: Template1_Stage_job2
        steps:
          - script: echo This is Template1_Stage_job2
In this example, template1 has one stage and two jobs. Template2 has two stages, and each stage has one job. Template3 has one stage and one job. For your case, you can refer to template1.
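Since only template1.yml is shown, here is a sketch of what template2.yml might look like per that description (two stages, one job each); the stage and job names are illustrative:

template2.yml

stages:
  - stage: Template2_Stage1
    jobs:
      - job: Template2_Stage1_job1
        steps:
          - script: echo This is Template2_Stage1_job1
  - stage: Template2_Stage2
    jobs:
      - job: Template2_Stage2_job1
        steps:
          - script: echo This is Template2_Stage2_job1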
How can I ensure that all stages of my pipeline run in the same working directory?
I have a pipeline that looks like this:
resources:
  repositories:
    - repository: AzureRepoDatagovernance
      type: git
      name: DIF_data_governance
      ref: develop

trigger:
  branches:
    include:
      - main
  paths:
    include:
      - terraform/DIF

variables:
  - group: PRD_new_resources
  - name: initial_deployment
    value: false

pool: $(agent_pool_name)

stages:
  - stage: VariableCheck
    jobs:
      - job: VariableMerge
        steps:
          - checkout: self
          - checkout: AzureRepoDatagovernance
          - ${{ if eq(variables.initial_deployment, 'false') }}:
            - task: PythonScript@0
              inputs:
                scriptSource: filePath
                scriptPath: DIF-devops/config/dynamic_containers.py
                pythonInterpreter: /usr/bin/python3
                arguments: --automount-path $(System.DefaultWorkingDirectory)/DIF_data_governance/data_ingestion_framework/$(env)/AutoMount_Config.json --variables-path $(System.DefaultWorkingDirectory)/DIF-devops/terraform/DIF/DIF.tfvars.json
              displayName: "Adjust container names in variables.tf.json"
  - stage: Plan
    jobs:
      - job: Plan
        steps:
          - checkout: self
          - checkout: AzureRepoDatagovernance
          - script: |
              cd $(System.DefaultWorkingDirectory)$(terraform_folder_name) && ls -lah
              terraform init
              terraform plan -out=outfile -var-file=DIF.tfvars.json
            displayName: "Plan infrastructure changes to $(terraform_folder_name) environment"
  - stage: ManualCheck
    jobs:
      - job: ManualCheck
        pool: server
        steps:
          - task: ManualValidation@0
            timeoutInMinutes: 5
            displayName: "Validate the configuration changes"
  - stage: Apply
    jobs:
      - job: Apply
        steps:
          - checkout: self
          - checkout: AzureRepoDatagovernance
          - script: |
              cd $(System.DefaultWorkingDirectory)$(terraform_folder_name) && ls -lah
              terraform apply -auto-approve "outfile"
            displayName: "Apply infrastructure changes to $(terraform_folder_name) environment"
How can I make sure that all 4 stages use this same working directory, so I can check out just once and all stages have access to the work done by previous jobs? I know that my pipeline has some flaws that will need to be polished.
This is not possible. Each Azure DevOps stage runs as a separate agent job with its own working directory; only the steps inside the same job share a working directory.
If you need to pass code or artifacts between stages, you should use the native Publish Pipeline Artifacts and Download Pipeline Artifacts tasks.
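A minimal sketch of that hand-off between the Plan and Apply stages (PublishPipelineArtifact and DownloadPipelineArtifact are the standard tasks; the artifact name tfplan is illustrative):

stages:
  - stage: Plan
    jobs:
      - job: Plan
        steps:
          - script: terraform plan -out=outfile -var-file=DIF.tfvars.json
          - task: PublishPipelineArtifact@1
            inputs:
              targetPath: '$(System.DefaultWorkingDirectory)/outfile'  # the plan file produced above
              artifact: 'tfplan'                                       # illustrative artifact name
  - stage: Apply
    dependsOn: Plan
    jobs:
      - job: Apply
        steps:
          - task: DownloadPipelineArtifact@2
            inputs:
              artifact: 'tfplan'
              path: '$(System.DefaultWorkingDirectory)'  # restore the plan file into this job's workspace
          - script: terraform apply -auto-approve "outfile"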
I've looked at the instructions here https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops and set up an environment called test. However, when I put the environment: test line in the pipeline below, I get an error: "unexpected value". Where do I need to put environment: test?
pr:
  branches:
    include:
      - '*'

trigger:
  branches:
    include:
      - master

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Build
    jobs:
      - job: Build
        steps:
          - template: templates/build.yml
  - stage: Release
    condition: and(succeeded('Build'), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
    jobs:
      - job: DeployDev
        environment: test
        variables:
You need to change your ordinary job into a deployment job:
jobs:
  - deployment: DeployDev
    environment: test
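Note that a deployment job keeps its steps inside a deployment strategy; a minimal runOnce sketch of the same job (the echo step is just a placeholder):

jobs:
  - deployment: DeployDev
    environment: test
    strategy:
      runOnce:
        deploy:
          steps:
            # placeholder step; your real deployment steps go here
            - script: echo Deploying to the test environment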
How can I loop over an array or through an object to create stages?
Below is a YAML file that works. You can see the build stage loops over the environments parameter for its jobs. Is it possible to achieve the same thing for the publishing stages?
The publishing stages require manual approval, must run in order, and should run only when the previous stage completes successfully.
parameters:
  - name: 'environments'
    type: object
    default:
      - environment: development
        variableGroup: strata2-admin-spa-vg
        dependsOn: 'build'
      - environment: test
        variableGroup: strata2-test-admin-spa-vg
        dependsOn: 'development'
      - environment: production
        variableGroup: strata2-development-variables
        dependsOn: 'development'
  - name: 'buildTemplate'
    type: string
    default: buildTemplate.yml
  - name: 'publishTemplate'
    type: string
    default: publishTemplate.yml

trigger:
  - main

pool:
  vmImage: ubuntu-latest

stages:
  - stage: build
    displayName: Build stage
    jobs:
      # Can I do this for stages?
      - ${{ each build in parameters.environments }}:
        - template: ${{ parameters.buildTemplate }}
          parameters:
            environment: ${{ build.environment }}
            variableGroup: ${{ build.variableGroup }}
  # How to loop over parameters.environments to dynamically create stages
  - stage: Publish_Development
    displayName: Publish development environment
    dependsOn: build
    jobs:
      - template: ${{ parameters.publishTemplate }}
        parameters:
          environment: Development_websites
          variableGroup: strata2-admin-spa-vg
  - stage: Publish_Test
    displayName: Publish test environment
    dependsOn: Publish_Development
    jobs:
      - template: ${{ parameters.publishTemplate }}
        parameters:
          environment: Test_websites
          variableGroup: strata2-test-admin-spa-vg
  - stage: Publish_Production
    displayName: Publish production environment
    dependsOn: Publish_Test
    jobs:
      - template: ${{ parameters.publishTemplate }}
        parameters:
          environment: Production_websites
          variableGroup: strata2-development-variables
You can create a stages object the same way you created the environments object.
parameters:
  - name: 'stages'
    type: object
    default:
      - stage: Publish_Development
        displayName: Publish development environment
        dependsOn: ...
      - stage: Publish_Test
        ...
Then you can loop over the stages object like you did with environments.
- ${{ each stage in parameters.stages }}:
  - stage: ${{ stage.stage }}
    displayName: ${{ stage.displayName }}
    dependsOn: ${{ stage.dependsOn }}
    ...
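Putting the pieces together for the pipeline in the question, a sketch (I've added environment and variableGroup keys to the stages object so the loop can pass them through to publishTemplate; all names come from the question, and the production entry follows the same pattern):

parameters:
  - name: 'stages'
    type: object
    default:
      - stage: Publish_Development
        displayName: Publish development environment
        dependsOn: build
        environment: Development_websites
        variableGroup: strata2-admin-spa-vg
      - stage: Publish_Test
        displayName: Publish test environment
        dependsOn: Publish_Development
        environment: Test_websites
        variableGroup: strata2-test-admin-spa-vg

stages:
  - ${{ each s in parameters.stages }}:
    - stage: ${{ s.stage }}
      displayName: ${{ s.displayName }}
      dependsOn: ${{ s.dependsOn }}
      jobs:
        - template: ${{ parameters.publishTemplate }}
          parameters:
            environment: ${{ s.environment }}
            variableGroup: ${{ s.variableGroup }}

The dependsOn chain keeps the publishing stages strictly in order, and, assuming the template deploys to Azure DevOps environments, manual approval can be attached as an approval check on those environments.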
Managed to get this working for myself: stages are automatically generated from numerical batch numbers and run in parallel. Hope it helps someone out there.
parameters:
  - name: batches
    displayName: BATCH
    type: object
    default:
      - 1
      - 2
      - 3

stages:
  - ${{ each stage in parameters.batches }}:
    - stage: BATCH_${{ stage }}
      dependsOn: []
      jobs:
        - job: PREP
          steps:
            - template: install.yml
        - job: RUN
          dependsOn: PREP
          timeoutInMinutes: 300
          steps:
            - template: run.yml
              parameters:
                batch: ${{ stage }}
It would be nice if the batch numbers weren't displayed as an editable box when running the pipeline from Azure DevOps. I tried setting them as fixed values but couldn't get that to work, so this is what I went with in the end.
In an Azure DevOps multistage YAML pipeline we have multiple environments.
In the "stages to run" panel, we normally do a build and deploy only to QA, so we need to deselect each of the other stages manually. By default all stages are selected; is it possible to have the exact opposite, where all stages are deselected by default?
trigger: none
pr: none

stages:
  - stage: 'Build'
    jobs:
      - deployment: 'Build'
        pool:
          name: Default
        # testing
        environment: INT
        strategy:
          runOnce:
            deploy:
              steps:
                - checkout: none
                - powershell: |
                    echo "Hello Testing"
                    Start-Sleep -Seconds 10
  - stage: 'Sandbox'
    jobs:
      - job: 'Sandbox'
        pool:
          name: Default
        steps:
          - checkout: none
          # testing
          - powershell: |
              echo "Hello Testing"
  - stage: 'Test'
    jobs:
      - job: 'DEV'
        pool:
          name: Default
        steps:
          - checkout: none
          - powershell: |
              echo "Hello Testing"
  - stage: 'QA'
    dependsOn: ['Test','Test1','Test2']
    jobs:
      - job: 'QA'
        pool:
          name: Default
        steps:
          - checkout: none
          # Testing
          - powershell: |
              echo "Hello Testing"
I am afraid there is no UI method (like the "stages to run" panel) that can meet your needs.
You could try adding parameters to your YAML instead.
Here is an example:
trigger: none
pr: none

parameters:
  - name: stageTest
    displayName: Run Stage test
    type: boolean
    default: false
  - name: stageBuild
    displayName: Run Stage build
    type: boolean
    default: false

stages:
  - ${{ if eq(parameters.stageBuild, true) }}:
    - stage: 'Build'
      jobs:
        - deployment: 'Build'
          pool:
            name: Default
          environment: INT
          strategy:
            runOnce:
              deploy:
                steps:
                  - checkout: none
                  - powershell: |
                      echo "Hello Testing"
                      Start-Sleep -Seconds 10
  - ${{ if eq(parameters.stageTest, true) }}:
    - stage: Test
      dependsOn: []
      jobs:
        - job: B1
          steps:
            - script: echo "B1"
The parameters determine whether these stages run: the ${{ if }} expression in front of each stage checks whether the parameter value satisfies it.
The default value is false, which means the stages do not run by default.
When you run the pipeline, you can select the stages you need by ticking the corresponding checkboxes.
Update
The workaround has some limitations: when a selected stage has dependencies, you also need to select all the stages it depends on.
For example:
- stage: 'QA'
  dependsOn: ['Test','Test1','Test2']
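One way to soften that limitation is to make the dependency entries themselves conditional on the same parameters, so a deselected stage is also dropped from dependsOn; a sketch, assuming the stageTest parameter guards the Test stage (template expressions can conditionally insert list items):

- stage: 'QA'
  dependsOn:
    # the dependency is only inserted when the Test stage is actually included
    - ${{ if eq(parameters.stageTest, true) }}:
      - Test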
On the other hand, I have created a suggestion ticket to report this feature request: Pipeline Deselect Stages By Default. You can vote and comment on it.
I've used this solution to build a NuGet package, and:
- always push packages from master
- conditionally push packages from other branches
Using GitVersion ensures that packages from other branches get prerelease version numbers, e.g. 2.2.12-my-branch-name.3 or 2.2.12-PullRequest7803.4. The master branch simply gets 2.2.12, so it is recognized as a "regular" version.
The reason I'm repeating the answer above is that I chose to make the stage conditional instead of using an if:
trigger:
  - master

parameters:
  - name: pushPackage
    displayName: Push the NuGet package
    type: boolean
    default: false

stages:
  - stage: Build
    jobs:
      - job: DoBuild
        steps:
          - script: echo "I'm building a NuGet package (versioned with GitVersion)"
  - stage: Push
    condition: and(succeeded('build'), or(eq('${{ parameters.pushPackage }}', true), eq(variables['build.sourceBranch'], 'refs/heads/master')))
    jobs:
      - job: DoPush
        steps:
          - script: echo "I'm pushing the NuGet package"
Like the other answer, this results in a dialog with a checkbox when you run the pipeline.
But what's different from the (equally valid) solution with '${{ if }}' is that the stage is always shown in the pipeline run, even when it's skipped.