Terraform Azure Pipeline is not getting the workspace - azure-devops

I have a pipeline created in Azure DevOps that uses Terraform. I have been able to run my pipeline, but I'm having issues detecting the created workspace.
The pipeline tasks are described below.
A bash script creates the workspace in case it doesn't exist; here is the script:
#!/bin/bash
echo "*************************************************************"
echo "* Create or select workspace *"
echo "*************************************************************"
if [ $(terraform workspace list | grep -c "$1") -eq 0 ]; then
  echo "** Create new workspace $1 **"
  terraform workspace new "$1" -no-color
else
  echo "** Switch to workspace $1 **"
  terraform workspace select "$1" -no-color
fi
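(As a side note, terraform workspace select exits non-zero when the workspace does not exist, so the same create-or-select logic is often written as a one-liner; this is just a sketch, not something the fix below depends on:)
terraform workspace select "$1" -no-color 2>/dev/null || terraform workspace new "$1" -no-color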
I'm certain that the workspace has been created, but the subsequent Terraform tasks are not picking it up.
You can see it is using default instead of development. This is the output of the terraform plan task:
2021-03-12T18:13:48.0424826Z   # azurerm_resource_group.k8s will be created
2021-03-12T18:13:48.0426216Z   + resource "azurerm_resource_group" "k8s" {
2021-03-12T18:13:48.0427763Z       + id       = (known after apply)
2021-03-12T18:13:48.0428525Z       + location = "eastus"
2021-03-12T18:13:48.0429278Z       + name     = "default-k8s"
2021-03-12T18:13:48.0430000Z       + tags     = {
2021-03-12T18:13:48.0430713Z           + "environment" = "default"
2021-03-12T18:13:48.0431181Z         }
2021-03-12T18:13:48.0431534Z     }
Has anyone faced this issue before? If so, any advice on getting the Terraform tasks to detect the workspace that was created in the bash script?

I was missing a key detail in the bash script task: the working directory where I wanted my scripts to be executed.
That setting is in the Advanced section of the task. Without that path, the scripts were running in the wrong place.
As a result I now get the development workspace and the resource group development-k8s.
2021-03-12T19:40:13.7898170Z   # azurerm_resource_group.k8s will be created
2021-03-12T19:40:13.7898875Z   + resource "azurerm_resource_group" "k8s" {
2021-03-12T19:40:13.7928911Z       + id       = (known after apply)
2021-03-12T19:40:13.7930291Z       + location = "eastus"
2021-03-12T19:40:13.7954850Z       + name     = "development-k8s"
2021-03-12T19:40:13.7955573Z       + tags     = {
2021-03-12T19:40:13.7956351Z           + "environment" = "development"
2021-03-12T19:40:13.7956951Z         }
2021-03-12T19:40:13.7957351Z     }
I hope it saves you the several hours I spent going back and forth through the process :)
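For anyone using YAML pipelines rather than the classic editor: the equivalent of that Advanced setting is the workingDirectory input of the Bash task. A minimal sketch with illustrative paths (the script is the one shown above):
steps:
  - task: Bash@3
    displayName: Create or select workspace
    inputs:
      filePath: scripts/workspace.sh
      arguments: development
      # The setting that was missing: run the script where the Terraform
      # configuration (and its .terraform directory) lives.
      workingDirectory: $(System.DefaultWorkingDirectory)/terraform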

Related

Running executables with arguments from a Jenkinsfile (using bat - under Windows)

I have read multiple threads and I still cannot figure out how to get my Jenkinsfile to run applications such as NuGet.exe or devenv.com.
Here is what I have so far as a Jenkinsfile:
pipeline {
    agent any
    options {
        timestamps()
        skipStagesAfterUnstable()
    }
    environment {
        solutionTarget = "${env.WORKSPACE}\\src\\MySolution.sln"
    }
    stages {
        stage('Build Solution') {
            steps {
                dir("${env.workspace}") {
                    script {
                        echo "Assumes nuget.exe was downloaded and placed under ${env.NUGET_EXE_PATH}"
                        echo 'Restore NuGet packages'
                        ""%NUGET_EXE_PATH%" restore %solutionTarget%"
                        echo 'Build solution'
                        ""%DEVENV_COM_PATH%" %solutionTarget% /build release|x86"
                    }
                }
            }
        }
    }
}
In this example, I get the following error:
hudson.remoting.ProxyException: groovy.lang.MissingMethodException: No signature of method: java.lang.String.mod() is applicable for argument types: (java.lang.String) values: [C:\Program Files (x86)\NuGet\nuget.exe]
Note that a declarative checkout is in place, although it is not visible from the Jenkinsfile.
I have also tried to use a function to run those cmds, but without success either:
def cmd_exec(command) {
    return bat(returnStdout: true, script: "${command}").trim()
}
Any tip would be highly appreciated.
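For what it's worth, the exception happens because those two lines are not commands at all: Groovy parses ""%NUGET_EXE_PATH%" restore %solutionTarget%" as the empty string "" followed by the % (mod) operator, which is exactly the String.mod error shown above. Windows commands have to go through the bat step. A hedged sketch of what the two calls could look like, relying on batch to expand the %...% environment variables:
bat "\"%NUGET_EXE_PATH%\" restore \"%solutionTarget%\""
bat "\"%DEVENV_COM_PATH%\" \"%solutionTarget%\" /build \"release|x86\""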

terraform plan recreates resources on every run with terraform cloud backend

I am running into an issue where terraform plan recreates resources that don't need to be recreated every run. This is an issue because some of the steps depend on those resources being available, and since they are recreated with each run, the script fails to complete.
My setup is Github Actions, Linode LKE, Terraform Cloud.
My main.tf file looks like this:
terraform {
  required_providers {
    linode = {
      source  = "linode/linode"
      version = "=1.16.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "=2.1.0"
    }
  }
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "MY-ORG-HERE"
    workspaces {
      name = "MY-WORKSPACE-HERE"
    }
  }
}

provider "linode" {
}

provider "helm" {
  debug = true
  kubernetes {
    config_path = "${local_file.kubeconfig.filename}"
  }
}

resource "linode_lke_cluster" "lke_cluster" {
  label       = "MY-LABEL-HERE"
  k8s_version = "1.21"
  region      = "us-central"
  pool {
    type  = "g6-standard-2"
    count = 3
  }
}
and my outputs.tf file:
resource "local_file" "kubeconfig" {
depends_on = [linode_lke_cluster.lke_cluster]
filename = "kube-config"
# filename = "${path.cwd}/kubeconfig"
content = base64decode(linode_lke_cluster.lke_cluster.kubeconfig)
}
resource "helm_release" "ingress-nginx" {
# depends_on = [local_file.kubeconfig]
depends_on = [linode_lke_cluster.lke_cluster, local_file.kubeconfig]
name = "ingress"
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
}
resource "null_resource" "custom" {
depends_on = [helm_release.ingress-nginx]
# change trigger to run every time
triggers = {
build_number = "${timestamp()}"
}
# download kubectl
provisioner "local-exec" {
command = "curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl"
}
# apply changes
provisioner "local-exec" {
command = "./kubectl apply -f ./k8s/ --kubeconfig ${local_file.kubeconfig.filename}"
}
}
In GitHub Actions, I'm running these steps:
jobs:
  init-terraform:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./terraform
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          ref: 'privatebeta-kubes'
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
        with:
          cli_config_credentials_token: ${{ secrets.TERRAFORM_API_TOKEN }}
      - name: Terraform Init
        run: terraform init
      - name: Terraform Format Check
        run: terraform fmt -check -v
      - name: List terraform state
        run: terraform state list
      - name: Terraform Plan
        run: terraform plan
        id: plan
        env:
          LINODE_TOKEN: ${{ secrets.LINODE_TOKEN }}
When I look at the results of terraform state list I can see my resources:
Run terraform state list
terraform state list
shell: /usr/bin/bash -e {0}
env:
TERRAFORM_CLI_PATH: /home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321
/home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321/terraform-bin state list
helm_release.ingress-nginx
linode_lke_cluster.lke_cluster
local_file.kubeconfig
null_resource.custom
But my terraform plan fails and the issue seems to stem from the fact that those resources try to get recreated.
Run terraform plan
terraform plan
shell: /usr/bin/bash -e {0}
env:
TERRAFORM_CLI_PATH: /home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321
LINODE_TOKEN: ***
/home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321/terraform-bin plan
Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.
Preparing the remote plan...
Waiting for the plan to start...
Terraform v1.0.2
on linux_amd64
Configuring remote state backend...
Initializing Terraform configuration...
linode_lke_cluster.lke_cluster: Refreshing state... [id=31946]
local_file.kubeconfig: Refreshing state... [id=fbb5520298c7c824a8069397ef179e1bc971adde]
helm_release.ingress-nginx: Refreshing state... [id=ingress]
╷
│ Error: Kubernetes cluster unreachable: stat kube-config: no such file or directory
│
│ with helm_release.ingress-nginx,
│ on outputs.tf line 8, in resource "helm_release" "ingress-nginx":
│ 8: resource "helm_release" "ingress-nginx" {
Is there a way to tell terraform it doesn't need to recreate those resources?
Regarding the actual error shown, Error: Kubernetes cluster unreachable: stat kube-config: no such file or directory, which references your outputs file: I found an issue that could help with your specific error: https://github.com/hashicorp/terraform-provider-helm/issues/418
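One direction that issue discusses, sketched here under the assumption that linode_lke_cluster.lke_cluster.kubeconfig is base64-encoded kubeconfig YAML (which your local_file resource implies), is to configure the helm provider from the cluster attributes instead of a file on disk, since the remote Terraform Cloud runner never has your kube-config file:
locals {
  # Assumption: a standard kubeconfig document, base64-encoded by the provider.
  kubeconfig = yamldecode(base64decode(linode_lke_cluster.lke_cluster.kubeconfig))
}

provider "helm" {
  kubernetes {
    host                   = local.kubeconfig.clusters[0].cluster.server
    token                  = local.kubeconfig.users[0].user.token
    cluster_ca_certificate = base64decode(local.kubeconfig.clusters[0].cluster["certificate-authority-data"])
  }
}
Configuring a provider from resource attributes has its own ordering caveats, so treat this as a direction to explore rather than a drop-in fix.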
One other thing looks strange to me: why does your outputs.tf declare resources and not outputs? Shouldn't your outputs.tf look like this?
output "local_file_kubeconfig" {
value = "reference.to.resource"
}
Also, your state file / backend config looks like it's properly configured.
I recommend logging into your terraform cloud account to verify that the workspace is indeed there, as expected. It's the state file that tells terraform not to re-create the resources it manages.
If the resources are already there and terraform is trying to re-create them, that could indicate that those resources were created prior to using terraform or possibly within another terraform cloud workspace or plan.
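If that is what happened, terraform import can adopt the existing resources into this workspace's state instead of re-creating them. A sketch, using the cluster ID that appears in your refresh log (verify the address and ID before running):
terraform import linode_lke_cluster.lke_cluster 31946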
Did you end up renaming your backend workspace at any point with this plan? I'm referring to your main.tf file, the part where it says MY-WORKSPACE-HERE:
terraform {
  required_providers {
    linode = {
      source  = "linode/linode"
      version = "=1.16.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "=2.1.0"
    }
  }
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "MY-ORG-HERE"
    workspaces {
      name = "MY-WORKSPACE-HERE"
    }
  }
}
Unfortunately I am not a Kubernetes expert, so you may need further help on that side.

Not able to run an exe in jenkins pipeline using powershell

I am trying to execute a process written in C# through a Jenkins pipeline during the build and deployment process.
It is a simple executable that takes 3 arguments. When it gets called from the Jenkins pipeline using a PowerShell function, it doesn't write any of the logs that are plentiful within the exe's code, and the pipeline log shows nothing about what happened to the process, even though the surrounding output is clean: "Started..." and "end." are both printed in the Jenkins build log.
When I run the same exe on a server directly with the same PowerShell script it runs perfectly fine. Could you please let me know how I can determine what's going wrong here, or how I can make the logs more verbose so I can figure out the root cause?
Here is the code snippet.
build-utils.ps1:
function disable-utility($workspace) {
    # The code here fetches the executable and its supporting libraries from
    # the Artifactory location and unzips them on the build agent server.
    # Below is the call to the executable.
    Type $xmlPath # prints the whole contents of the XML file that is used as input to the exe
    echo "disable exe path exists : $(Test-Path ""C:\Jenkins\workspace\utils\disable.exe"")" # output is TRUE
    echo "Started..."
    Start-Process -NoNewWindow -FilePath "C:\Jenkins\workspace\utils\disable.exe" -ArgumentList "-f $xmlPath 0" # $xmlPath is the path to an XML file
    echo "end."
}
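One hedged observation on the snippet above rather than a definitive fix: Start-Process returns immediately unless -Wait is given, so the PowerShell step (and the agent session with it) can finish before the exe runs or flushes its logs, and without redirection the child process's stdout never reaches the Jenkins log. A sketch that waits and captures both streams (log file names are illustrative):
$proc = Start-Process -NoNewWindow -Wait -PassThru `
    -FilePath "C:\Jenkins\workspace\utils\disable.exe" `
    -ArgumentList "-f $xmlPath 0" `
    -RedirectStandardOutput "disable-out.log" `
    -RedirectStandardError "disable-err.log"
Get-Content "disable-out.log", "disable-err.log" # surface the exe's own output in the Jenkins log
echo "disable.exe exit code: $($proc.ExitCode)"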
Jenkinsfile:
library identifier: 'jenkins-library@0.2.14',
    retriever: legacySCM([
        $class: 'GitSCM',
        userRemoteConfigs: [[
            credentialsId: 'BITBUCKET_RW',
            url: '<https://gitRepoUrl>'
        ]]
    ])
def executeStep(String stepName) {
    def butil = '.\\build\\build-utils.ps1'
    if (fileExists(butil)) {
        def status = powershell(returnStatus: true, script: "& { . '${butil}'; ${stepName}; }")
        echo "$status"
        if (status != 0) {
            currentBuild.result = 'FAILURE'
            error("${stepName} failed")
        }
    } else {
        error("failed to find the file")
    }
}
pipeline {
    agent {
        docker {
            image '<path to the docker image to pull a server with VS2017 build tools>'
            label '<image name>'
            reuseNode true
        }
    }
    environment {
        // environment variables are loaded here
    }
    stages {
        stage('Disable utility') {
            steps {
                executeStep("disable-utility ${env.WORKSPACE}")
            }
        }
    }
}
Thanks a lot in advance!
Have you changed the UAC setting? Go to Regedit, open HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System and set "EnableLUA" = 0.
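If you would rather make that change from a script than through Regedit, a sketch in PowerShell (note: this disables UAC machine-wide, requires elevation, and only takes effect after a reboot, so treat it as a diagnostic step):
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System' `
    -Name 'EnableLUA' -Value 0 -Type DWord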

Publish NUnit Test Results in Post Always Section

I'm trying to run a pipeline that does some Pester testing and publishes the NUnit results.
New tests were introduced, and for whatever reason Jenkins no longer publishes the test results; it errors out immediately after the PowerShell script, so it never gets to the NUnit publish piece. I receive this:
ERROR: script returned exit code 128
Finished: FAILURE
I've been trying to include the publish step in the always block of the post section of the Jenkinsfile; however, I'm running into problems making that NUnit test file available.
I've tried establishing an agent and unstashing the file (even though it probably won't stash if the PowerShell script cancels the whole pipeline). When I use agent I get the following exception:
java.lang.NoSuchMethodError: No such DSL method 'agent' found among steps
Here is the Jenkinsfile:
pipeline {
    agent none
    environment {
        svcpath = 'D:\\svc\\'
        unitTestFile = 'UnitTests.xml'
    }
    stages {
        stage('Checkout and Stash') {
            agent { label 'Agent1' }
            steps {
                stash name: 'Modules', includes: 'Modules/*/**'
                stash name: 'Tests', includes: 'Tests/*/**'
            }
        }
        stage('Unit Tests') {
            agent { label 'Agent1' }
            steps {
                dir(svcpath + 'Modules\\') { deleteDir() }
                dir(svcpath + 'Tests\\') { deleteDir() }
                dir(svcpath) {
                    unstash name: 'Modules'
                    unstash name: 'Tests'
                }
                dir(svcpath + 'Tests\\') {
                    powershell """
                        \$requiredCoverageThreshold = 0.90
                        \$modules = Get-ChildItem ../Modules/ -File -Recurse -Include *.psm1
                        \$result = Invoke-Pester -CodeCoverage \$modules -PassThru -OutputFile ${unitTestFile} -OutputFormat NUnitXml
                        \$codeCoverage = \$result.CodeCoverage.NumberOfCommandsExecuted / \$result.CodeCoverage.NumberOfCommandsAnalyzed
                        Write-Output \$codeCoverage
                        if (\$codeCoverage -lt \$requiredCoverageThreshold) {
                            Write-Output "Build failed: required code coverage threshold of \$(\$requiredCoverageThreshold * 100)% not met. Current coverage: \$(\$codeCoverage * 100)%."
                            exit 1
                        } else {
                            Write-Output "Required code coverage threshold of \$(\$requiredCoverageThreshold * 100)% met. Current coverage: \$(\$codeCoverage * 100)%."
                        }
                    """
                    stash name: 'TestResults', includes: unitTestFile
                    nunit testResultsPattern: unitTestFile
                }
            }
        }
    }
    post {
        always {
            echo 'This will always run'
            agent { label 'Agent1' }
            unstash name: 'TestResults'
            nunit testResultsPattern: unitTestFile
        }
        success {
            echo 'This will run only if successful'
        }
        failure {
            echo 'This will run only if failed'
        }
        unstable {
            echo 'This will run only if the run was marked as unstable'
        }
        changed {
            echo 'This will run only if the state of the Pipeline has changed'
            echo 'For example, if the Pipeline was previously failing but is now successful'
        }
    }
}
Any and all input is welcome! Thanks!
The exception you are getting is due to Jenkins' strict pipeline DSL. Documentation of the allowable uses of agent is here.
Currently agent {...} is not allowed in the post section. Maybe this will change in the future. If you require the whole job to run on the node that services label 'Agent1', the only way to do that currently is to:
1. put agent {label 'Agent1'} immediately under pipeline { to make it global,
2. remove all instances of agent {label 'Agent1'} from each stage, and
3. remove the agent {label 'Agent1'} from the post section.
The post section acts more like traditional scripted DSL than the pipeline declarative DSL. So you have to use node() instead of agent.
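Putting that together, a minimal sketch of what the post section could look like with node() (mirroring the stash/nunit steps from the question; untested against your setup):
post {
    always {
        node('Agent1') {
            unstash name: 'TestResults'
            nunit testResultsPattern: env.unitTestFile
        }
    }
}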
I believe I've had this same question myself, and this SO post has the answer and some good context.
This Jenkins issue isn't exactly the same thing but shows the node syntax in the post stage.

DSC Script resource does not read environment variable

I have a DSC configuration that installs nodejs, adds npm to the environment Path variable, and then installs an npm module.
xPackage InstallNodeJs {
    Name = 'Node.js'
    Path = "$env:SystemDrive\temp\node-v4.4.7-x64.msi"
    ProductId = '8434AEA1-1294-47E3-9137-848F546CD824'
    Arguments = "/quiet"
}
Environment AddEnvironmentPaths
{
    Name = "Path"
    Ensure = "Present"
    Path = $true
    Value = "$env:SystemDrive\ProgramData\npm"
}
Script UpgradeNpm {
    SetScript = {
        & npm install --global --production npm-windows-upgrade
        & npm-windows-upgrade --npm-version 3.10.6
    }
    TestScript = {
        $npmVersion = & npm -v
        return $npmVersion -eq "3.10.6"
    }
    GetScript = {
        return @{ Result = "UpgradeNpm" }
    }
}
Installing nodejs and adding npm to the Path variable both seem to be successful: the nodejs and npm locations are added to Path and I can use both in PowerShell and cmd.
However, the Script resource reports that 'npm' is not recognized as an internal or external command ...
The same goes for node, which is used inside the npm-windows-upgrade script file.
Do you know why the Script resource cannot read the newly added Path entries?
The Environment DSC resource implementation makes the change by updating the values stored in the registry (with the exception of variables targeting Process). Changes made to environment variables stored in the registry are not reflected in the current session (read once, on session start).
You can affect values stored in the current session by:
Using System.Environment.SetEnvironmentVariable ([System.Environment]::SetEnvironmentVariable)
Modifying $env:<VariableName>
Of those, only the first allows you to write a persistent change. The latter can be considered a volatile change.
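Applied to this case, the Script resource can re-read the registry values at the top of its SetScript before invoking npm. A minimal sketch of that (a volatile, per-session change, as described above):
SetScript = {
    # Re-read the machine and user Path from the registry so this session
    # sees the entry the Environment resource wrote.
    $machinePath = [System.Environment]::GetEnvironmentVariable('Path', 'Machine')
    $userPath = [System.Environment]::GetEnvironmentVariable('Path', 'User')
    $env:Path = "$machinePath;$userPath"
    & npm install --global --production npm-windows-upgrade
    & npm-windows-upgrade --npm-version 3.10.6
}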
It's an odd limitation of the resource, I've looked at this before and felt it a little lacking.
There's no dependency information in there, so you can't count on the Environment resource running before the Script resource. There's not enough information in your post to tell whether that's the case for sure, but you should consider controlling it anyway:
xPackage InstallNodeJs {
    Name = 'Node.js'
    Path = "$env:SystemDrive\temp\node-v4.4.7-x64.msi"
    ProductId = '8434AEA1-1294-47E3-9137-848F546CD824'
    Arguments = "/quiet"
}
Environment AddEnvironmentPaths
{
    Name = "Path"
    Ensure = "Present"
    Path = $true
    Value = "$env:SystemDrive\ProgramData\npm"
    DependsOn = '[xPackage]InstallNodeJs'
}
Script UpgradeNpm {
    SetScript = {
        & npm install --global --production npm-windows-upgrade
        & npm-windows-upgrade --npm-version 3.10.6
    }
    TestScript = {
        $npmVersion = & npm -v
        return $npmVersion -eq "3.10.6"
    }
    GetScript = {
        return @{ Result = "UpgradeNpm" }
    }
    DependsOn = '[Environment]AddEnvironmentPaths'
}
Can you share which version of DSC you are using? You can get this by running $PSVersionTable in the PowerShell console. I am able to add to the PATH variable and use it in a Script resource.
configuration NPMTest
{
    Environment AddEnvironmentPaths
    {
        Name = 'Path'
        Ensure = 'Present'
        Path = $true
        Value = "$env:SystemDrive\ProgramData\npm"
    }
    Script p
    {
        GetScript = { @{} }
        TestScript = { return $false }
        SetScript = { $a = & a.ps1; Write-Verbose $a -Verbose }
    }
}
The script a.ps1 executed fine even though I did not specify the full path to the script.
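For completeness, a standard way to compile and apply a configuration like the one above (output path illustrative):
NPMTest -OutputPath .\NPMTest          # compile the configuration to a MOF
Start-DscConfiguration -Path .\NPMTest -Wait -Verbose -Force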