Does skip_deploy_on_missing_secrets work in static web app pipeline? - azure-devops

I would like to only build my static web app and not deploy it. I saw there is an env setting "skip_deploy_on_missing_secrets", but after setting it in the pipeline it just gets ignored and the pipeline fails with an error saying the deployment token is not set. How exactly should I use this env setting? Does it actually work?

There's not much info on the internet about this parameter. However, at least Dapr docs suggest that it should work, and I doubt they'd put it in their docs if it didn't (here).
However, I had problems getting it working as well.
One thing to notice there is that the Dapr docs actually show a GitHub Action, and GitHub Actions work a little differently from Azure DevOps YAML pipelines, which is what I was using.
Finally I stumbled upon this comment on a similar GitHub issue, which hints that this magic undocumented parameter should be passed as an environment variable. I was passing it as an input. Maybe GitHub Actions forward these inputs to environment variables automatically?
So I tried setting it as ENV and it worked!
- task: AzureStaticWebApp@0
  inputs:
    app_location: ...blahblahblah
    ....
    #skip_deploy_on_missing_secrets: true
    # ABOVE: this one is documented in a few places, but it's expected to be an ENV var!
    # see https://github.com/Azure/static-web-apps/issues/679
  env:
    SKIP_DEPLOY_ON_MISSING_SECRETS: true
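For reference, the GitHub Actions workflow in the Dapr docs passes it as a regular input to the deploy action instead. Roughly like this (a sketch, not copied verbatim; the secret name and placeholder paths are assumptions):

- uses: Azure/static-web-apps-deploy@v1
  with:
    azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
    action: upload
    app_location: ...blahblahblah
    skip_deploy_on_missing_secrets: true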

Related

Access agent hostname for a build variable

I've got release pipelines defined that have worked. I've got a config transform that will write an API URL to a config file (currently with a hardcoded API URL).
What I'd like to do is have the config re-written based on the agent it's being deployed to.
E.g. if the machine being deployed to is TEST-1, I'd like to write https://TEST-1.somedomain.com/api into the config using that transform step.
The .somedomain.com/api part can be static.
I've tried modifying the pipeline variable's value to be https://${{Environment.Name}}.somedomain.com/api, but it just replaces API_URL in the config with that literal string (it does not populate the machine name in that variable).
Since variables are the source of the value being written to configs during the transform, I'm struggling to see another way to do this.
Some gotchas:
- I'm using non-YAML pipeline definitions (I know I've seen people put logic in variable definitions within YAML pipelines).
- I can't just use localhost, as the configuration is being read into a JavaScript-rich app, which would have JS trying to connect to localhost instead of to the server.
I'm interested in any ways I could solve this problem.
${{Environment.Name}} is not valid syntax for either YAML or classic pipelines.
In classic pipelines it would be $(Environment.Name).
In YAML, $(Environment.Name) or ${{ variables['Environment.Name'] }} would work.
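For example, composing the URL in the variable definition with macro syntax would look roughly like this (API_URL and somedomain.com come from the question; whether Environment.Name or the predefined Agent.MachineName identifies your target machine depends on your setup):

variables:
  # minimal sketch; swap in $(Agent.MachineName) if the agent host name is what
  # should appear in the URL rather than the environment name
  API_URL: https://$(Environment.Name).somedomain.com/api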

Is there a way in Terraform Enterprise to read the payload from VCS?

I have configured a webhook between GitHub and Terraform Enterprise correctly, so each time I push a commit, the Terraform module gets executed. What I want to achieve is to take part of the branch name where the push was made and pass it as a variable to the Terraform module.
I have read that the value of a variable can be HCL code, but I am unable to find the correct object to access the payload (or at least the branch name), so at this moment I think it is not possible to get that value directly from the workspace configuration.
If you have a workaround for this, it may also work for me.
At this point the only idea I have is to call the Terraform webhook using an API call.
Thanks in advance.
OK, after some trial and error I found out that it is not possible to get any of this information in the Terraform module if you are using VCS mode. So, in order to be able to get the branch, I have these options:
Use several workspaces
You can configure a workspace for each branch, so you can create a variable and select the matching branch in each workspace. The problem is that you will be repeating yourself with this option.
Use Terraform CLI and a GitHub Action
I used this fine tutorial from HashiCorp for creating a GitHub Action that uses Terraform Cloud. It gets 99% of the job done. For passing a variable, you must be aware that there are two methods: using a file or using an environment variable (check that information on the HashiCorp site here). So using:
terraform apply -var="branch=value"
won't work. In my case I used the tfvars approach, so in my GitHub Action I put this snippet:
- name: Setup Terraform variables
  id: vars
  run: |-
    cat > terraform.auto.tfvars <<EOF
    branch = "${GITHUB_REF#refs/*/}"
    EOF
With a variable called branch defined within Terraform, I was able to read and work with this value.
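On the Terraform side the only thing needed is a matching string variable; the output below is just a hypothetical way to show the value is available:

variable "branch" {
  type        = string
  description = "Branch name written into terraform.auto.tfvars by the GitHub Action"
}

output "deployed_branch" {
  value = var.branch
}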

Azure Function on Linux Breaks Node function requiring Node v12 when Deployed from Azure DevOps

I have a Node.js Azure Function using a timer trigger. It uses some modern JavaScript syntax (await, flatMap, etc.) that is supported in Node v12.
I've deployed my infrastructure with Terraform and specified the linuxFxVersion as "node|12". So far so good. When I deploy my code from Azure DevOps using the built-in AzureFunctionApp@1 task, it deploys a new image that is running Node v8. This breaks my function.
Here is the release definition:
steps:
- task: AzureFunctionApp@1
  displayName: 'Azure Function App Deploy: XXXXXXXXX'
  inputs:
    azureSubscription: 'XXXXXXXXX'
    appType: functionAppLinux
    appName: 'XXXXXXXXX'
    package: '$(System.DefaultWorkingDirectory)/_XXXXXXXXX/drop/out.zip'
    runtimeStack: 'DOCKER|microsoft/azure-functions-node8:2.0'
    configurationStrings: '-linuxFxVersion: node|12'
You can see I explicitly try to force the linuxFxVersion to remain 'node|12' in the release.
In the release logs, you can watch the release try to set the linuxFxVersion configuration twice: once to the wrong image, and a second time to "node|12".
After I release the code, the function will still run, but when I print the node version it shows version 8 and fails at runtime when it hits the unsupported syntax.
If I re-run my terraform script, it will show me that the linuxFxVersion for my function app is now set to 'DOCKER|microsoft/azure-functions-node8:2.0' and it sets it back to "node|12". After that runs, my function now works. If I update my code and deploy again, it breaks again in the same way.
What is even more baffling to me is that this is a v3 function app, which in theory does not support Node v8 at all.
Am I missing something obvious here or is the Function App release task just broken for Linux Functions?
After writing up this whole big question and proofreading it... I noticed this little snippet in the release task YAML (which I hadn't seen before today, as it's a release pipeline and uses the AzDO GUI for editing):
runtimeStack: 'DOCKER|microsoft/azure-functions-node8:2.0'
It turns out that if you specify the stack as 'JavaScript' (the options are .NET and JavaScript), the task sets runtimeStack to that Docker image string, which is what gets written to the linuxFxVersion setting on the Function App, even if you override that setting in the configuration settings.
The fix is to leave the Runtime field blank and then it will respect your settings. Awesome.
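For completeness, the working step ends up looking like my original one, just without the runtimeStack input (same placeholders as above):

- task: AzureFunctionApp@1
  displayName: 'Azure Function App Deploy: XXXXXXXXX'
  inputs:
    azureSubscription: 'XXXXXXXXX'
    appType: functionAppLinux
    appName: 'XXXXXXXXX'
    package: '$(System.DefaultWorkingDirectory)/_XXXXXXXXX/drop/out.zip'
    # runtimeStack intentionally omitted so the linuxFxVersion set by Terraform ("node|12") is respected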

How to pass all global credentials to Jenkins pipeline

This is my first question posted on Stack Overflow, so in case I did something incorrectly, please let me know.
Description
I am currently working on translating freestyle projects to declarative pipelines in Jenkins (Jenkinsfiles kept in a Git repo). The original freestyle job triggered a PowerShell script which needed access to global name/password pairs defined in the Mask Passwords plugin section in Configure System. The solution to this problem was an additional tick in the project itself (unfortunately I am not allowed to add screenshots to posts yet, so the editor uploaded the screenshot to imgur and pasted a link - please see Screenshot 1):
Screenshot 1
Therefore I started looking for a possible implementation of such a solution in the Jenkinsfile, however without luck.
My problem
When the script is triggered from the pipeline, it errors out stating that it cannot find the relevant passwords (PowerShell refers to those credentials as environment variables). This works fine when run from the freestyle project.
I reckon this is caused by the pipeline not being able to reach the previously mentioned credentials.
What I tried
Wrapping the step in the below block:
wrap([$class: 'MaskPasswordsBuildWrapper']) {
    bat(...)  // batch file launching the PS script
}
Then wrapping the above block, containing the relevant step, into:
script {
    wrap(...)
}
But none of them worked.
I have taken a look at other plugins like Credentials Binding Plugin or Credentials Plugin, but those only allow binding/passing one credential per step, and I need to pass all credentials specified in Jenkins (I am open to moving the saved credentials to any other location within Jenkins).
I have also looked at adding an environment variable:
credentials('Credentials-ID')
But the problem is the same as with the mentioned plugins.
By any chance, has anyone come across a similar situation and knows what can be done to allow the pipeline to access all the credentials specified in Jenkins, instead of binding/passing them one at a time?
All tips are very welcome!
You can do this, and the environment variable will then be available throughout your job. You could define multiple environment variables too.
pipeline {
    agent any
    environment {
        // Use credentials() to hide the environment variable's output
        MY_PERSONAL_TOKEN = credentials('Credentials-ID')
    }
    stages {
        stage('Test Stage') {
            steps {
                script {
                    // do what you need to
                }
            }
        }
    }
}
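If the bound credential is a username/password pair, Jenkins additionally exposes MY_PERSONAL_TOKEN_USR and MY_PERSONAL_TOKEN_PSW next to MY_PERSONAL_TOKEN, so a step like the one below can hand them to the PowerShell script through its environment (deploy.ps1 is just a placeholder for your script):

steps {
    // MY_PERSONAL_TOKEN, MY_PERSONAL_TOKEN_USR and MY_PERSONAL_TOKEN_PSW are
    // inherited by the process environment, the same way the freestyle job exposed them
    bat 'powershell -ExecutionPolicy Bypass -File deploy.ps1'
}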

How do I load values from a .json file into a Devops Yaml Pipeline Parameter

Microsoft documentation explains the use of parameters in YAML pipeline jobs as follows:
# File: azure-pipelines.yml
trigger:
- master

extends:
  template: simple-param.yml
  parameters:
    yesNo: false # set to a non-boolean value to have the build fail
But instead of statically specifying the value of yesNo, I'd prefer to load it from a completely separate JSON config file - preferably a JSON file that both my build job and my application could share, so that parameters specified for the application could also be used in the build job.
Thus the question:
How do I load values from a .json file into a Devops Yaml Pipeline Parameter?
I've been using this marketplace task:
https://marketplace.visualstudio.com/items?itemName=OneLuckiDev.json2variable
And it's been working great so far. I haven't tried it with builds yet, but I can't see why it wouldn't work with separate build pipelines/multi-stage builds. There are a few things you have to be aware of/stumble upon, like double-escaping slashes in directory paths, and you'll have to fetch secrets from someplace else, like traditional variable groups.
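If you'd rather not depend on a marketplace task, a small script step can read the JSON and set runtime variables itself. A minimal sketch (config.json and yesNo are assumed names from the question; note this yields a runtime variable, which cannot feed a compile-time template parameter):

- powershell: |
    $config = Get-Content '$(Build.SourcesDirectory)/config.json' -Raw | ConvertFrom-Json
    Write-Host "##vso[task.setvariable variable=yesNo]$($config.yesNo)"
  displayName: Load values from config.json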