I am using Azure Data Factory in combination with Git.
During execution of a pipeline, can I get the current branch name as a pipeline parameter, maybe like this?
Desired output (an additional column containing the branch name):
Sources: ETL / ELT pipelines - Metainformation about the pipeline
The workaround to achieve your scenario is to follow the steps below:
1. Create a pipeline parameter of type string, with your branch name as the default value.
2. In the source of the Copy activity, add an additional column whose value is the parameter you created for the branch name (i.e. the dynamic content expression @pipeline().parameters.<your parameter name>).
Output
The only system variables available are the ones listed in the documentation; the current branch name is not among them.
I have a YAML file that processes the output of two different pipelines. We have that pipeline working successfully. Now I would like to create another pipeline using the same YAML file, but with variables specific to each pipeline. I am trying to run one pipeline using the output of the "develop" branch pipelines, and then a second pipeline that will use the output of the "master/main" branch pipelines. I saw parameters, but I do not want people to have to remember to select 'develop' or 'main' when these are run.
When I created the 'master' pipeline, I saw a button to create a variable in the ADO UI. I thought those would be specific to the pipeline and not the underlying YAML file. I have tried various formats for the variables: ${{ variables.branch }} and $(branch).
Something like:
resources:
  pipelines:
  - pipeline: xyzBuild # Name of the pipeline resource.
    source: $(xyzLinux_Pipeline) # The name of the pipeline referenced by this pipeline resource.
    trigger: true # Run this pipeline when any run of Build_xyz_MasterBranch completes
So, I would set xyzLinux_Pipeline to the name of the source pipeline in each specific pipeline.
Maybe I'm misunderstanding variables completely...
Sorry for the formatting... Cannot get the text to look like I want....
Thanks.
Edit: I see some Microsoft examples that have a branch trigger along the following lines.
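(A minimal reconstruction of such a trigger, assuming the standard Azure Pipelines branch-filter syntax:)

trigger:
  branches:
    include:
    - master
    - releases/*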
This looks like it will trigger for either 'master' or 'releases/*'; in my case I could use 'develop'. But how does the checkout 'know' which repo to pull the code from?
# Update the local develop branch from the head of the remote repo.
- script: |
    git checkout develop
    git pull
  displayName: 'Pull the develop branch from the remote repo.'
Our YAML files have the branch name hardcoded. Is there some special keyword I can use that will select the correct repo based on the trigger name? We have only started to use these pipelines and we're all just guessing. I have looked at the ADO website, but it seems too fragmented for me to piece the whole concept together.
So, I have the following situation:
20 git repositories with a microservice in each
A repo with a template pipeline for the standard build process
Each of the 20 repos defines its own pipeline that uses this template with some parameters
On a PR build for any of the 20 repos, it will run its own pipeline as a build validation.
That's all working fine.
But now I want to add an additional optional check to each of the 20 repos, which would run a code analysis tool (e.g. SonarQube) as part of the PR.
I don't want to add this to the main pipeline as I want it to appear in the PR as a separate optional check which can be skipped or toggled between optional/required.
The only way that I can find to achieve this is to add a CodeAnalysis.yml to each of the 20 repos and create 20 associated pipelines, which is an overhead I'd rather not deal with.
Is there a way that you can have a single pipeline that can be referenced as an optional check in all of these repos?
According to the docs, it should be possible for the shared pipeline to dynamically fetch the code from the right repo using something like this:
- checkout: git://ProjectName/$(Build.Repository.Name)#$(Build.SourceBranch)
But when I try this, the PR is unable to queue the pipeline (unhelpfully, it doesn't give a reason why).
Is there a solution to this?
You need to use templates to design a shared template that runs the code scanning. Essentially, templates are reusable YAML files that you can pass parameters to in order to customise options.
For example, for your use case you could have a template for the code scanning, add a job to your existing pipelines that extends this template, and pass any optional parameters you need (such as the repo to check out). You can also add conditions to decide when the code-scanning check should run, as sketched below.
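A minimal sketch of what that could look like, assuming a shared repo named PipelineTemplates and a placeholder script instead of the real SonarQube tasks (all names here are illustrative):

# code-analysis-template.yml in the shared PipelineTemplates repo (illustrative)
parameters:
- name: runAnalysis   # lets each consumer toggle the check
  type: boolean
  default: true

jobs:
- ${{ if eq(parameters.runAnalysis, true) }}:
  - job: CodeAnalysis
    steps:
    - checkout: self
    - script: echo "run your code analysis tool (e.g. SonarQube) here"
      displayName: 'Code analysis (placeholder)'

# azure-pipelines.yml in one of the 20 microservice repos (illustrative)
resources:
  repositories:
  - repository: templates
    type: git
    name: ProjectName/PipelineTemplates

jobs:
- template: code-analysis-template.yml@templates
  parameters:
    runAnalysis: true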
I know what you mean; this is not possible (at least not in the PR interface), because when you press the 'Queue' button in the PR build section, there is no popup to select parameters; it just uses the default values.
- checkout: git://ProjectName/$(Build.Repository.Name)#$(Build.SourceBranch)
This is also not possible, because runtime variables are not accepted here; they are read literally, as plain strings.
One suggestion is to specify the parameters manually on the pipeline page and then run the pipeline after setting them.
The reasons for this are:
1. The checkout section is expanded before anything else happens in the pipeline run.
2. The Queue button on the PR page does not provide a pop-out window for selecting parameters.
Your pipeline definition should be like this:
trigger:
- none

parameters:
- name: ProjectName
  default: xxx
- name: RepoName
  default: xxx
- name: BranchName
  default: xxx

pool:
  vmImage: windows-latest

steps:
- checkout: git://${{parameters.ProjectName}}/${{parameters.RepoName}}#${{parameters.BranchName}}
- script: |
    dir
  displayName: 'Run a multi-line script'
Manually select the parameters every time:
I have a requirement to pass data between 2 release pipelines (to trigger the 2nd pipeline on completion of the 1st).
Can we pass variables dynamically between Azure RELEASE pipelines using the 'Trigger an Azure DevOps pipeline' extension?
I read this blog, but I was unable to find out / understand whether we can use "output variables" to pass data between Azure release pipelines.
https://msftplayground.com/2019/02/trigger-a-pipeline-from-an-azure-devops-pipeline/
Thank you in advance!
Output variables are created by a task and referenced by other tasks in the pipeline; they are dynamic and refer to the result of a particular task.
They cannot be defined statically.
An output variable's value is only known after the task that produces it has run.
There are two different ways to create output variables :
By building support for the variable in the task itself
Setting the value ad-hoc in a script
The example below defines a task named SomeTask that natively creates an output variable called out.
In a task within that same job, you can reference that variable using $(SomeTask.out).
steps:
- task: MyTask@1
  name: SomeTask
- script: echo $(SomeTask.out)
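A minimal sketch of the second approach, setting the value ad hoc in a script with the task.setvariable logging command (the step and variable names here are illustrative):

steps:
- script: echo "##vso[task.setvariable variable=myOutput;isOutput=true]some value"
  name: SetVarStep   # the step name becomes the variable prefix
- script: echo $(SetVarStep.myOutput)
  displayName: 'Read the output variable in a later step'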
For detailed information on how to create output variables and pass them between pipelines, please refer to the Azure DevOps output variables documentation.
I have Azure DevOps YAML deployment pipelines that trigger when a build pipeline completes and publishes an artifact. According to the docs found here, artifacts will be downloaded to the $(PIPELINE.WORKSPACE)/pipeline-identifier/artifact-identifier folder.
My trigger is similar to:
resources:
  pipelines:
  - pipeline: SmartHotel-PipelineIdentifier
    project: DevOpsProject
    source: SmartHotel-CI
    trigger:
      branches:
        include:
        - main
How can I access the pipeline-identifier from a template? I need to be able to create
$(PIPELINE.WORKSPACE)/SmartHotel-PipelineIdentifier/artifact-identifier
based upon the pipeline definition above.
When the pipeline is triggered by a build, I'm able to use
$(Pipeline.Workspace)/$(resources.triggeringAlias)/artifact-identifier
to give me what I need, but that value is blank when the pipeline is triggered manually.
How can I access what appears to be called the pipeline-identifier for a specific pipeline, so I can use it within templates in the deployment jobs of my pipelines?
You can get various properties of the pipeline resource, but that all assumes you know the alias, and there's no way to get that alias if the pipeline wasn't triggered automatically. If you had multiple pipeline resources, how would you know which one to get?
In this case, you could:
pass the pipeline identifier (SmartHotel-PipelineIdentifier) as a parameter to the template (see the sketch after the code below).
assume a convention of well-known pipeline identifiers (e.g. always use artifact-pipeline instead of SmartHotel-PipelineIdentifier)
download (or move) the artifact to a well-known location before calling the template. That's assuming you already use the pipeline identifier for the download task anyway, so you could simply do mv $(Pipeline.Workspace)/SmartHotel-PipelineIdentifier/artifact-identifier $(Pipeline.Workspace)/artifact-identifier right after download.
You can also use the full DownloadPipelineArtifact task (which download is a shorthand for) and configure it to download to a different directory.
determine the directory name at runtime, once the artifact is downloaded, and set a pipeline variable dynamically:
# there will be directories in $(Pipeline.Workspace) like "a", "b", "s" for checked-out repositories - ignore them
- pwsh: |
    ls $(Pipeline.Workspace)
    $alias = Get-ChildItem $(Pipeline.Workspace) | ? { $_.Name.Length -gt 1 } | select -ExpandProperty Name -First 1
    echo "##vso[task.setvariable variable=ArtifactAlias]$alias"
  displayName: determine artifact pipeline alias
- pwsh: echo 'alias: $(ArtifactAlias)' # use $(ArtifactAlias) where you would use $(resources.triggeringAlias)
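For the first option, a minimal sketch of passing the alias into a step template (the file name and parameter name here are illustrative):

# use-artifact-steps.yml (illustrative template)
parameters:
- name: pipelineAlias
  type: string

steps:
- download: ${{ parameters.pipelineAlias }}   # downloads to $(Pipeline.Workspace)/<alias>
- script: ls $(Pipeline.Workspace)/${{ parameters.pipelineAlias }}
  displayName: 'Work with the downloaded artifact'

# in the consuming pipeline
steps:
- template: use-artifact-steps.yml
  parameters:
    pipelineAlias: SmartHotel-PipelineIdentifier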
I have a pipeline in Azure DevOps somewhat like this:
parameters:
- name: Scenario
  displayName: Scenario suite
  type: string
  default: 'Default'

variables:
  Scenario: ${{ parameters.Scenario }}

...

steps:
- script: echo Scenario is $(Scenario)
And I'm executing the pipeline via the VSTS CLI like this:
vsts build queue ... --variables Scenario=Test
When I run my pipeline, it seems that the parameter's default value overwrites my command-line-specified variable value, and I get the step output Scenario is Default. I tried something like Scenario: $[coalesce(variables['Scenario'], ${{ parameters.Scenario }})], but I think I got the syntax wrong, because that caused a parsing issue.
What would be the best way to only use the parameter value if the Scenario variable has not already been set?
What would be the best way to only use the parameter value if the Scenario variable has not already been set?
Sorry, but as far as I know your scenario is not supported by design. The note here states:
When you set a variable in the YAML file, don't define it in the web editor as settable at queue time. You can't currently change variables that are set in the YAML file at queue time. If you need a variable to be settable at queue time, don't set it in the YAML file.
The --variables switch can only be used to overwrite variables that are marked settable at queue time. Since YAML pipelines don't support settable variables by design, your --variables Scenario=Test won't actually be passed when queuing the YAML pipeline.
Here are several tests I ran to prove that:
1. A YAML pipeline, which doesn't support variables settable at queue time:
pool:
  vmImage: 'windows-latest'

variables:
  Scenario: Test

steps:
- script: echo Scenario is $(Scenario)
I ran the command vsts build queue ... --variables Scenario=Test123; the pipeline run started, but the output log was always Scenario is Test instead of the expected Scenario is Test123. This proves that the pipeline parameter is not overwriting the variable value; rather, --variables Scenario=xxx doesn't get passed at all, because YAML pipelines don't support settable variables.
2. Create a classic UI build pipeline with a pipeline variable Scenario:
Queuing it via the command az pipelines build queue ... --variables Scenario=Test12345 (which has the same function as vsts build queue ... --variables Scenario=Test) only gives this error:
Could not queue the build because there were validation errors or warnings.
3. Then enable the 'Settable at queue time' option for this variable:
Run the same command again: now queuing the build works, and the original pipeline variable is successfully overwritten by the new value set on the command line.
You can run similar tests to figure out the cause of the behavior you encountered.
In addition:
The VSTS CLI was deprecated and replaced by the Azure CLI with the Azure DevOps extension a long time ago, so it's now recommended to use az pipelines build queue instead.
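For example (the project and definition names here are placeholders):

az pipelines build queue --project MyProject --definition-name MyPipeline --branch main --variables Scenario=Test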
Lance had a great suggestion, but here is how I ended up solving it:
parameters:
- name: Scenario
  displayName: Scenario suite
  type: string
  default: 'Default'

variables:
  ScenarioFinal: $[coalesce(variables['Scenario'], '${{ parameters.Scenario }}')]

...

steps:
- script: echo Scenario is $(ScenarioFinal)
In this case we use the coalesce expression to assign the value of a new variable, ScenarioFinal. This way we can still use --variables Scenario=Test via the CLI or use the parameter via the pipeline UI. coalesce will take the first non-null value and effectively "reorder" the precedence Lance linked to here: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#expansion-of-variables
(Note that there need to be single quotes around the parameter reference, '${{...}}', because ${{...}} is simply converted to the value, and the coalesce expression doesn't know how to interpret the raw value unless it has single quotes around it to denote it as a string.)
Note that the ability to set parameters via the CLI is a current feature suggestion here: https://github.com/Azure/azure-devops-cli-extension/issues/972