How to set up the build number as the pipeline ID in fastlane?

This is what I currently have in Fastfile:
def build(target_name)
  cocoapods
  cert
  sigh

  if ENV['CI_PIPELINE_ID']
    increment_build_number(build_number: "#{ENV['CI_PIPELINE_ID']}")
  end

  build_app(
    scheme: target_name,
    workspace: WORKSPACE_FILE_PATH,
    clean: true,
    output_directory: OUTPUT_PATH,
    output_name: target_name + '.ipa',
    export_options: {
      provisioningProfiles: {
        BETA_BUNDLE_IDENTIFIER => BETA_PROVISIONING_PROFILE,
        DEMO_BUNDLE_IDENTIFIER => DEMO_PROVISIONING_PROFILE,
        DEV_BUNDLE_IDENTIFIER => DEV_PROVISIONING_PROFILE
      }
    }
  )
end
But this code results in an email from Fabric like this:
v3.3.21 (116)
instead of:
v3.3.21 (11741)
Why doesn't it assign the pipeline ID to the build number?
It looks like it never gets inside the if statement. Is it possible that the CI_PIPELINE_ID variable is not visible to the runner?
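One quick way to check that (a minimal sketch, assuming GitLab CI and the Fastfile above; the lane name debug_env is made up for illustration) is to log the variable from a lane and run it on the runner:

# Hypothetical debug lane: prints what the runner actually sees.
# If this prints nil, the variable is not reaching the job's environment.
lane :debug_env do
  UI.message("CI_PIPELINE_ID = #{ENV['CI_PIPELINE_ID'].inspect}")
end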

Related

Jenkins dynamic choice parameter to read an Ansible host file in GitHub

I have an Ansible host file that is stored in GitHub and was wondering if there is a way to list all the hosts in Jenkins with choice parameters? Right now, every time I update the host file in GitHub, I have to go into each Jenkins job and update the choice parameter manually. Thanks!
I'm assuming your host file has content similar to the below.
[client-app]
client-app-preprod-01.aws-xxxx
client-app-preprod-02.aws
client-app-preprod-03.aws
client-app-preprod-04.aws
[server-app]
server-app-preprod-01.aws
server-app-preprod-02.aws
server-app-preprod-03.aws
server-app-preprod-04.aws
Option 1
You can do something like the example below. Here you first check out the repo and then ask for the user input. I have implemented the function getHostList() to parse the host file and filter the host entries.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                git 'https://github.com/jglick/simple-maven-project-with-tests.git'
                script {
                    def selectedHost = input message: 'Please select the host', ok: 'Next',
                        parameters: [
                            choice(name: 'PRODUCT', choices: getHostList("client-app", "ansible/host/location"), description: 'Please select the host')
                        ]
                    echo "Host:::: $selectedHost"
                }
            }
        }
    }
}
def getHostList(def appName, def filePath) {
    def hosts = []
    def content = readFile(file: filePath)
    def startCollect = false
    for(def line : content.split('\n')) {
        if(line.contains("[" + appName + "]")) { // This is the starting point of the host entries
            startCollect = true
            continue
        } else if(startCollect) {
            if(!line.allWhitespace && !line.contains('[')) {
                hosts.add(line.trim())
            } else {
                break
            }
        }
    }
    return hosts
}
Option 2
If you want to do this without checking out the source, and with job parameters, you can do something like the example below using the Active Choices plugin. If your repository is private, you need to figure out a way to generate an access token to access the raw GitHub link.
properties([
    parameters([
        [$class: 'ChoiceParameter',
            choiceType: 'PT_SINGLE_SELECT',
            description: 'Select the Host',
            name: 'Host',
            script: [
                $class: 'GroovyScript',
                fallbackScript: [
                    classpath: [],
                    sandbox: false,
                    script:
                        'return [\'Could not get Host\']'
                ],
                script: [
                    classpath: [],
                    sandbox: false,
                    script:
                        '''
                        def appName = "client-app"
                        def content = new URL("https://raw.githubusercontent.com/xxx/sample/main/testdir/hosts").getText()
                        def hosts = []
                        def startCollect = false
                        for(def line : content.split("\\n")) {
                            if(line.contains("[" + appName + "]")) { // This is the starting point of the host entries
                                startCollect = true
                                continue
                            } else if(startCollect) {
                                if(!line.allWhitespace && !line.contains("[")) {
                                    hosts.add(line.trim())
                                } else {
                                    break
                                }
                            }
                        }
                        return hosts
                        '''
                ]
            ]
        ]
    ])
])
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    echo "Host:::: ${params.Host}"
                }
            }
        }
    }
}
Update
When you are calling a private repo, you need to send a Basic Auth header with the access token, so use the following Groovy script instead.
// Base64-encode the access token for the Basic Auth header
def accessToken = "ACCESS_TOKEN".bytes.encodeBase64().toString()
def get = new URL("https://raw.githubusercontent.com/xxxx/something/hosts").openConnection()
get.setRequestProperty("authorization", "Basic " + accessToken)
def content = get.getInputStream().getText()

Rundeck stop running steps based on global variable

I have a Rundeck job that executes multiple steps, each of which are Job References to other small jobs. The first step selects a server to upgrade, and sets a global variable with the server name. The remaining steps perform upgrade tasks. It is possible though for the first step to return NONE as the server name, and if that's the case I would like to halt execution right there without running the remaining steps, and I'd like the whole job to be marked as Successful.
I could just make that first job exit with an error code, but then the whole job looks failed, and it looks like there is something wrong with it, even though it successfully ran and found there was nothing to upgrade.
Any ideas? I'm finding "use a flow control step" everywhere, but I can't see how to make that work for my use case.
The best way to create complex workflows that depend on some output value is to use the Ruleset Strategy (Rundeck Enterprise). Take a look at this.
On the community version you can save the result of the first step in a key-value variable and do some "script-fu" in the following steps:
Step 1: print the status and save it in a data variable using the key-value data log filter.
Steps 2, 3, 4: capture the key-value data, and then each step can continue or not.
I made an example that is easy to import into your instance for testing:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 27de501a-8bb2-4c6e-a5f9-0676e80ca75a
  loglevel: INFO
  name: HelloWorld
  nodeFilterEditable: false
  options:
  - enforced: true
    name: opt1
    required: true
    value: 'true'
    values:
    - 'true'
    - 'false'
    valuesListDelimiter: ','
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: echo "url=${option.opt1}"
      plugins:
        LogFilter:
        - config:
            invalidKeyPattern: \s|\$|\{|\}|\\
            logData: 'true'
            name: result
            regex: .*=\s*(.+)$
          type: key-value-data
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        # data/value evaluation
        if [ "@data.result@" = "true" ]; then
          echo "step two"
        fi
      scriptInterpreter: /bin/bash
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        # data/value evaluation
        if [ "@data.result@" = "true" ]; then
          echo "step three"
        fi
      scriptInterpreter: /bin/bash
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        # data/value evaluation
        if [ "@data.result@" = "true" ]; then
          echo "step four"
        fi
      scriptInterpreter: /bin/bash
    keepgoing: false
    strategy: node-first
  uuid: 27de501a-8bb2-4c6e-a5f9-0676e80ca75a
MegaDrive68k's answer is the best you can do with the basic open source version, or if you have the Enterprise version.
But you can also create your own plugin, or fork an existing one.
That is what I did with the official flow control plugin, adding conditions.
You can fork this plugin and add two new @PluginProperty fields to the Java code (these add two new fields to the plugin's parameters in the Rundeck interface) and compare their values.
Example:
@PluginProperty(title = "First Value", description = "Compare this", required = true)
String value1;

@PluginProperty(title = "Second Value", description = "To this", required = true)
String value2;
Comparing String values (which is your case):
if (value1.equals(value2)) {...}
Comparing numeric values:
if (value1 == value2) {...}
If you want to stop the job with a successful status (this does not stop the parent job, just the current one):
context.getFlowControl().Halt(true);
If you want to stop the job with a failed status:
context.getFlowControl().Halt(false);
If you want to stop the job with a customized status:
context.getFlowControl().Halt("MY CUSTOM STATUS");
And finally, if you want to continue and not stop:
context.getFlowControl().Continue();
So a complete example (add this to your public class):
@PluginProperty(title = "First Value", description = "Compare this", required = true)
String value1;

@PluginProperty(title = "Second Value", description = "To this", required = true)
String value2;

@Override
public void executeStep(final PluginStepContext context, final Map<String, Object> configuration)
        throws StepException
{
    if (value1.equals(value2)) {
        // Halt the current job without marking it failed
        context.getFlowControl().Halt(true);
    } else {
        // Continue
        context.getFlowControl().Continue();
    }
}
Then create your jar file and place it in the libext folder.
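For reference, a typical build-and-install sequence looks like this (a sketch only; the Gradle wrapper and the jar name are assumptions that depend on your fork, and $RDECK_BASE is your Rundeck base directory):

./gradlew clean build
# The jar name below is hypothetical; copy whatever your build actually produces
cp build/libs/flow-control-fork.jar $RDECK_BASE/libext/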
Now you can add your custom step. Put your global variable in the first field and "NONE" in the second field.
If the global variable contains "NONE", the job stops successfully at this step.
If you call a job with this step from another (parent) job, the parent job continues.
If you want, you can use this fork of the plugin, which already includes these modifications.

Publish NUnit Test Results in Post Always Section

I'm trying to run a pipeline that does some Pester testing and publishes the NUnit results.
New tests were introduced and, for whatever reason, Jenkins no longer publishes the test results; it errors out immediately after the PowerShell script, so it never gets to the NUnit publish step. I receive this:
ERROR: script returned exit code 128
Finished: FAILURE
I've been trying to include the publish step in the always block of the post section of the Jenkinsfile; however, I'm running into problems with how to make that NUnit test file available.
I've tried establishing an agent and unstashing the file (even though it probably won't be stashed if the PowerShell script cancels the whole pipeline). When I use agent I get the following exception:
java.lang.NoSuchMethodError: No such DSL method 'agent' found among steps
Here is the Jenkinsfile:
pipeline {
    agent none
    environment {
        svcpath = 'D:\\svc\\'
        unitTestFile = 'UnitTests.xml'
    }
    stages {
        stage ('Checkout and Stash') {
            agent { label 'Agent1' }
            steps {
                stash name: 'Modules', includes: 'Modules/*/**'
                stash name: 'Tests', includes: 'Tests/*/**'
            }
        }
        stage ('Unit Tests') {
            agent { label 'Agent1' }
            steps {
                dir(svcpath + 'Modules\\') { deleteDir() }
                dir(svcpath + 'Tests\\') { deleteDir() }
                dir(svcpath) {
                    unstash name: 'Modules'
                    unstash name: 'Tests'
                }
                dir(svcpath + 'Tests\\') {
                    powershell """
                        \$requiredCoverageThreshold = 0.90
                        \$modules = Get-ChildItem ../Modules/ -File -Recurse -Include *.psm1
                        \$result = Invoke-Pester -CodeCoverage \$modules -PassThru -OutputFile ${unitTestFile} -OutputFormat NUnitXml
                        \$codeCoverage = \$result.CodeCoverage.NumberOfCommandsExecuted / \$result.CodeCoverage.NumberOfCommandsAnalyzed
                        Write-Output \$codeCoverage
                        if (\$codeCoverage -lt \$requiredCoverageThreshold) {
                            Write-Output "Build failed: required code coverage threshold of \$(\$requiredCoverageThreshold * 100)% not met. Current coverage: \$(\$codeCoverage * 100)%."
                            exit 1
                        } else {
                            Write-Output "Required code coverage threshold of \$(\$requiredCoverageThreshold * 100)% met. Current coverage: \$(\$codeCoverage * 100)%."
                        }
                    """
                    stash name: 'TestResults', includes: unitTestFile
                    nunit testResultsPattern: unitTestFile
                }
            }
        }
    }
    post {
        always {
            echo 'This will always run'
            agent { label 'Agent1' }
            unstash name: 'TestResults'
            nunit testResultsPattern: unitTestFile
        }
        success {
            echo 'This will run only if successful'
        }
        failure {
            echo 'This will run only if failed'
        }
        unstable {
            echo 'This will run only if the run was marked as unstable'
        }
        changed {
            echo 'This will run only if the state of the Pipeline has changed'
            echo 'For example, if the Pipeline was previously failing but is now successful'
        }
    }
}
Any and all input is welcome! Thanks!
The exception you are getting is due to Jenkins' strict pipeline DSL. Documentation of the allowable uses of agent is here.
Currently agent {...} is not allowed to be used in the post section. Maybe this will change in the future. If you require the whole job to run on the node that services label 'Agent1', the only way to currently do that is to:
1. Put agent {label 'Agent1'} immediately under pipeline { to make it global
2. Remove all instances of agent {label 'Agent1'} from each stage
3. Remove the agent {label 'Agent1'} from the post section
The post section acts more like the traditional scripted DSL than the declarative pipeline DSL, so you have to use node() instead of agent.
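For example, a minimal sketch of that approach (assuming the TestResults stash from the question was created before the failure; the node label and variable names are taken from the question):

post {
    always {
        script {
            node('Agent1') {
                unstash name: 'TestResults'
                nunit testResultsPattern: unitTestFile
            }
        }
    }
}

Note the unstash will still fail if the stage aborted before the stash step ran, so you may want to guard it (e.g. with a try/catch) as well.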
I believe I've had this same question myself, and this SO post has the answer and some good context.
This Jenkins issue isn't exactly the same thing but shows the node syntax in the post stage.

Handling attributes in InSpec

I was trying to create some basic InSpec tests to validate a set of HTTP URLs. The way I started is like this:
control 'http-url-checks' do
  impact 1.0
  title 'http-url-checks'
  desc '
    Specify the URLs which need to be up and working.
  '
  tag 'http-url-checks'

  describe http('http://example.com') do
    its('status') { should eq 200 }
    its('body') { should match /abc/ }
    its('headers.name') { should eq 'header' }
  end

  describe http('http://example.net') do
    its('status') { should eq 200 }
    its('body') { should match /abc/ }
    its('headers.name') { should eq 'header' }
  end
end
We notice that the URLs are hard-coded in the controls, which isn't a lot of fun. I'd like to move them to an 'attributes' file of some sort and loop through them in the control file.
My attempt was to use the 'files' folder structure inside the profile. I created a file - httpurls.yml - with the following content:
- url: http://example.com
- url: http://example.net
...and in my control file, I had this construct:
my_urls = yaml(content: inspec.profile.file('httpurls.yml')).params

my_urls.each do |s|
  describe http(s['url']) do
    its('status') { should eq 200 }
  end
end
However, when I execute the compliance profile, I get an error - 'httpurls.yml not found' (I'm not sure about the exact error message, though).
What am I doing wrong?
Is there a better way to achieve what I am trying to do?
The secret is to use profile attributes, as defined near the bottom of this page:
https://www.inspec.io/docs/reference/profiles/
First, create a profile attributes YML file. I name mine profile-attribute.yml.
Second, put your array of values in the YML file, like so:
urls:
- http://example.com
- http://example.net
Third, create an attribute at the top of your InSpec tests:
my_urls = attribute('urls', description: 'The URLs that I am validating.')
Fourth, use your attribute in your InSpec test:
my_urls.each do |url|
  # each element is a plain URL string from the YML list above
  describe http(url) do
    its('status') { should eq 200 }
  end
end
Finally, when you call your InSpec test, point to your YML file using --attrs:
inspec exec mytest.rb --reporter=cli --attrs profile-attribute.yml
There is another way to do this using files (instead of the profile attributes and the --attrs flag). You can use JSON or YAML.
First, create the JSON and/or YAML file and put them in the files directory. A simple example of the JSON file might look like this:
{
  "urls": ["https://www.google.com", "https://www.apple.com"]
}
And a simple example of the YAML file might look like this:
urls:
- https://www.google.com
- https://www.apple.com
Second, include code at the top of your InSpec file to read and parse the JSON and/or YAML, like so:
jsoncontent = inspec.profile.file("tmp.json")
jsonparams = JSON.parse(jsoncontent)
jsonurls = jsonparams['urls']
yamlcontent = inspec.profile.file("tmp.yaml")
yamlparams = YAML.load(yamlcontent)
yamlurls = yamlparams['urls']
Third, use the variables in your InSpec tests, like so:
jsonurls.each do |jsonurl|
  describe http(jsonurl) do
    puts "json url is " + jsonurl
    its('status') { should eq 200 }
  end
end

yamlurls.each do |yamlurl|
  describe http(yamlurl) do
    puts "yaml url is " + yamlurl
    its('status') { should eq 200 }
  end
end
(NOTE: the puts line is for debugging.)
The result is what you would expect:
json url is https://www.google.com
json url is https://www.apple.com
yaml url is https://www.google.com
yaml url is https://www.apple.com
Profile: InSpec Profile (inspec-file-test)
Version: 0.1.0
Target: local://
http GET on https://www.google.com
✔ status should eq 200
http GET on https://www.apple.com
✔ status should eq 200
http GET on https://www.google.com
✔ status should eq 200
http GET on https://www.apple.com
✔ status should eq 200

Overwrite it if it exists or create it if it does not

I am trying to rewrite a file using Puppet with the following function.
If the file exists, I still want the file to be rewritten from the source. Will this be achieved with the following method?
define setup_sysctl_conf( $dependence = File[$dummy_dependence_file] )
{
  file { $name:
    path    => '/etc/sysctl.conf',
    ensure  => present,
    mode    => 0777,
    source  => '/vagrant/files/sysctl.conf',
    require => $dependence,
  }
}
The file /etc/sysctl.conf will already be present on your host (created by the initscripts package).
I would recommend modifying existing files with Puppet using Augeas instead of replacing them.
Example (changes net.ipv4.ip_forward to 1):
class sysctl_augeas_example {
  augeas { "Set net.ipv4.ip_forward to 1":
    context => "/files",
    changes => [
      "set etc/sysctl.conf/net.ipv4.ip_forward 1",
    ],
  }
}

include sysctl_augeas_example
Save this example as test.pp and run it with puppet apply test.pp
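As for the original question: yes, a file resource with a source will create the file when it is missing and rewrite it on any run where the content differs from the source, so the approach in the question does work. A minimal sketch (the path and source are taken from the question; the mode shown is an assumption, pick whatever you actually need):

file { '/etc/sysctl.conf':
  ensure => file,   # more explicit than 'present' for plain files
  mode   => '0644', # assumption; the question used 0777
  source => '/vagrant/files/sysctl.conf',
}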