Jenkins dynamic choice parameter to read an Ansible host file in GitHub

I have an Ansible host file stored in GitHub and was wondering if there is a way to list out all the hosts in Jenkins with choice parameters. Right now, every time I update the host file in GitHub, I have to go into each Jenkins job and update the choice parameter manually. Thanks!

I'm assuming your host file has content similar to the following.
[client-app]
client-app-preprod-01.aws-xxxx
client-app-preprod-02.aws
client-app-preprod-03.aws
client-app-preprod-04.aws
[server-app]
server-app-preprod-01.aws
server-app-preprod-02.aws
server-app-preprod-03.aws
server-app-preprod-04.aws
Option 1
You can do something like the following. Here you first check out the repo and then ask for user input. The function getHostList() parses the host file and filters out the host entries.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                git 'https://github.com/jglick/simple-maven-project-with-tests.git'
                script {
                    def selectedHost = input message: 'Please select the host', ok: 'Next',
                        parameters: [
                            choice(name: 'PRODUCT', choices: getHostList("client-app", "ansible/host/location"), description: 'Please select the host')]
                    echo "Host:::: $selectedHost"
                }
            }
        }
    }
}
def getHostList(def appName, def filePath) {
    def hosts = []
    def content = readFile(file: filePath)
    def startCollect = false
    for(def line : content.split('\n')) {
        if(line.contains("[" + appName + "]")) { // This is the starting point of the host entries
            startCollect = true
            continue
        } else if(startCollect) {
            if(!line.allWhitespace && !line.contains('[')) {
                hosts.add(line.trim())
            } else {
                break
            }
        }
    }
    return hosts
}
Option 2
If you want to do this without checking out the source, and with job parameters, you can do something like the following using the Active Choices plugin. If your repository is private, you will need to generate an access token to reach the raw GitHub URL.
properties([
    parameters([
        [$class: 'ChoiceParameter',
            choiceType: 'PT_SINGLE_SELECT',
            description: 'Select the Host',
            name: 'Host',
            script: [
                $class: 'GroovyScript',
                fallbackScript: [
                    classpath: [],
                    sandbox: false,
                    script:
                        'return [\'Could not get Host\']'
                ],
                script: [
                    classpath: [],
                    sandbox: false,
                    script:
                        '''
                        def appName = "client-app"
                        def content = new URL("https://raw.githubusercontent.com/xxx/sample/main/testdir/hosts").getText()
                        def hosts = []
                        def startCollect = false
                        for(def line : content.split("\\n")) {
                            if(line.contains("[" + appName + "]")) { // This is the starting point of the host entries
                                startCollect = true
                                continue
                            } else if(startCollect) {
                                if(!line.allWhitespace && !line.contains("[")) {
                                    hosts.add(line.trim())
                                } else {
                                    break
                                }
                            }
                        }
                        return hosts
                        '''
                ]
            ]
        ]
    ])
])
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    echo "Host:::: ${params.Host}"
                }
            }
        }
    }
}
Update
When you are calling a private repo, you need to send a Basic Auth header with the access token, so use the following Groovy script instead.
def accessToken = "ACCESS_TOKEN".bytes.encodeBase64().toString()
def get = new URL("https://raw.githubusercontent.com/xxxx/something/hosts").openConnection();
get.setRequestProperty("authorization", "Basic " + accessToken)
def content = get.getInputStream().getText()
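Putting this together with Option 2, the URL call inside the Active Choices Groovy script can be swapped for the authenticated version. A minimal sketch, assuming the raw URL and the ACCESS_TOKEN placeholder above stand in for your real values:

def appName = "client-app"
// Placeholder token; ideally inject it from a Jenkins credential rather than hard-coding it
def accessToken = "ACCESS_TOKEN".bytes.encodeBase64().toString()
def connection = new URL("https://raw.githubusercontent.com/xxxx/something/hosts").openConnection()
connection.setRequestProperty("Authorization", "Basic " + accessToken)
def content = connection.getInputStream().getText()
// ...then parse 'content' with the same loop shown in Option 2 to build and return the hosts list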

Related

Terraform enable or disable a resource conditionally

My requirement is to create or delete a resource by specifying an enable flag of true or false (if false, the resource should be deleted; if true, it should be created).
Kindly refer to the code below - here I am creating a "confluent topic" resource and calling it dynamically using a for_each condition.
Confluent Topic creation
topic.tf file:
resource "confluent_kafka_topic" "topic" {
kafka_cluster {
id = confluent_kafka_cluster.dedicated.id
}
for_each = { for t in var.topic_name : t.topic_name => t }
topic_name = each.value["topic_name"]
partitions_count = each.value["partitions_count"]
rest_endpoint = confluent_kafka_cluster.dedicated.rest_endpoint
credentials {
key = confluent_api_key.app-manager-kafka-api-key.id
secret = confluent_api_key.app-manager-kafka-api-key.secret
}
}
Variable declared as:
variable "topic_name" {
type = list(map(string))
default = [{
"topic_name" = "default_topic"
}]
}
And finally executing it through DEV.tfvars file:
topic_name = [
  {
    topic_name       = "avro-topic-1"
    partitions_count = "6"
  },
  {
    topic_name       = "json-topic-1"
    partitions_count = "8"
  },
]
The above code works fine and I am able to create and delete multiple resources. I want to modify it further and add a flag/toggle to create or delete a resource, for example as shown below:
topic_name = [
  {
    topic_name       = "avro-topic-1"
    partitions_count = "6"
    enable           = true  # this flag will create the resource
  },
  {
    topic_name       = "json-topic-1"
    partitions_count = "8"
    enable           = false # this flag will delete the resource
  },
]
Kindly suggest how this can be achieved, and whether there is a different approach to follow.
As mentioned in my comment, I think this can be achieved with the following change:
resource "confluent_kafka_topic" "topic" {
for_each = { for t in var.topic_name : t.topic_name => t if t.enable }
kafka_cluster {
id = confluent_kafka_cluster.dedicated.id
}
topic_name = each.value["topic_name"]
partitions_count = each.value["partitions_count"]
rest_endpoint = confluent_kafka_cluster.dedicated.rest_endpoint
credentials {
key = confluent_api_key.app-manager-kafka-api-key.id
secret = confluent_api_key.app-manager-kafka-api-key.secret
}
}
Additionally, for_each should probably be at the top of the resource block so it is immediately visible to the reader. The if t.enable part makes sure that for_each creates a resource only for entries that have enable = true.
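Note that with the variable declared as list(map(string)), the enable value gets coerced to a string, and the for expression's if condition may then complain that it needs a bool. A minimal sketch of one way around that, assuming Terraform 1.3 or newer (for optional object attributes with defaults), is to declare the variable as a list of objects so enable is a real bool and defaults to true when omitted:

variable "topic_name" {
  type = list(object({
    topic_name       = string
    partitions_count = string
    enable           = optional(bool, true) # create the topic by default when the flag is omitted
  }))
  default = [{
    topic_name       = "default_topic"
    partitions_count = "6"
  }]
}

With that in place, the for_each expression from the answer above can stay exactly as written.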

Resource not found error with cross-subscription resources in Bicep

I am trying to create a private endpoint in one subscription (say xxxx) while my VNet is in another subscription (say YYYYY). Both are managed under a management group, so I am deploying at the management group level. But while creating the endpoint, I get an error saying the resource is not found. Please suggest how to solve this issue.
Below is my code for the main file:
targetScope = 'managementGroup'

param env string = 'xxxxx'
param appname string = 'abcd'
param tags object
param strgSKU string
param strgKind string

//variables
var envfullname = ((env == 'PrePrd') ? 'preprod' : ((env == 'Prd') ? 'prod' : ((env == 'SB') ? 'sb' : 'dev')))
var strgActName = toLower('${envfullname}${appname}sa1')
var saPrvtEndptName = '${envfullname}-${appname}-sa-pe1'

resource RG 'Microsoft.Resources/resourceGroups@2021-04-01' existing = {
  scope: subscription('xxxxxxxxxxxxxxxxxxxxx')
  name: '${env}-${appname}-RG'
}

resource vnet 'Microsoft.Network/virtualNetworks@2021-08-01' existing = {
  scope: resourceGroup('yyyyyyyyyyyyyyyyy', 'Networking_RG')
  name: 'Vnet1'
}

resource linkSubnet 'Microsoft.Network/virtualNetworks/subnets@2021-08-01' existing = {
  scope: resourceGroup('yyyyyyyyyyyyyyy', 'Networking_RG')
  name: 'Vnet1/subnet1'
}

var location = RG.location
var vnetid = vnet.id

//Deploy Resources
/////////////// STORAGE ACCOUNT///////////////////////////////////
//call storage Account bicep module to deploy the storage account
module storageAct './modules/storageAccount.bicep' = {
  scope: RG
  name: strgActName
  params: {
    strgActName: strgActName
    location: location
    tags: tags
    sku: strgSKU
    kind: strgKind
  }
}

// Create a private endpoint and link to storage Account
module saPrivateEndPoint './modules/privateEndpoint.bicep' = {
  scope: RG
  name: saPrvtEndptName
  params: {
    prvtEndpointName: saPrvtEndptName
    prvtLinkServiceId: storageAct.outputs.saId
    tags: tags
    location: location
    subnetId: linkSubnet.id
    //ipaddress: privateDNSip
    fqdn: '${strgActName}.blob.core.cloudapi.net'
    groupId: 'blob'
  }
  dependsOn: [
    storageAct
  ]
}
And my private endpoint module file looks like this:
param prvtEndpointName string
param prvtLinkServiceId string
param tags object
param location string
param subnetId string
//param ipaddress string
param fqdn string
param groupId string

resource privateEndpoint 'Microsoft.Network/privateEndpoints@2020-11-01' = {
  name: prvtEndpointName
  location: location
  tags: tags
  properties: {
    privateLinkServiceConnections: [
      {
        name: '${prvtEndpointName}_cef3fd7f-f1d3-4970-ae54-497245676050'
        properties: {
          privateLinkServiceId: prvtLinkServiceId
          groupIds: [
            groupId
          ]
          privateLinkServiceConnectionState: {
            status: 'Approved'
            description: 'Auto-Approved'
            actionsRequired: 'None'
          }
        }
      }
    ]
    manualPrivateLinkServiceConnections: []
    subnet: {
      id: subnetId
    }
    customDnsConfigs: [
      {
        fqdn: fqdn
        // ipAddresses: [
        //   ipaddress
        // ]
      }
    ]
  }
}
The command to execute the script is:
az deployment mg create --location 'USEast2' --name 'dev2' --management-group-id xt74yryuihfjdnv --template-file main.bicep --parameters main.parameters.json
It looks like the private endpoint should be created in the same subscription where the VNet resides; however, it can be used by resources across subscriptions. See https://learn.microsoft.com/en-us/azure/private-link/private-endpoint-overview#private-endpoint-properties for reference.
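A minimal sketch of that change, assuming the deployment identity has permissions in the networking subscription and reusing the subscription ID and resource group placeholders from the question, is to scope the private endpoint module to the VNet's resource group instead of the storage account's resource group:

// Sketch: deploy the private endpoint into the networking RG in the VNet's subscription
module saPrivateEndPoint './modules/privateEndpoint.bicep' = {
  scope: resourceGroup('yyyyyyyyyyyyyyy', 'Networking_RG')
  name: saPrvtEndptName
  params: {
    prvtEndpointName: saPrvtEndptName
    prvtLinkServiceId: storageAct.outputs.saId
    tags: tags
    location: location
    subnetId: linkSubnet.id
    fqdn: '${strgActName}.blob.core.cloudapi.net'
    groupId: 'blob'
  }
}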

Conditional environment variables in Jenkinsfile

My Jenkinsfiles use the environment{} directive. I've been trying to set a condition where I invoke different variables depending on the Git branch being built.
I've tried something like, e.g.:
switch(branch_name) {
    case 'dev':
        Var1 = x
        break
    case 'master':
        Var1 = y
        break
}
That would allow me to skip using different Jenkinsfiles for each branch of a repo, but Groovy syntax does not seem to work inside the environment{} directive.
Is there a way to tackle this? Or would you suggest another approach for handling different global variables for each branch?
You can use the when directive in your declarative pipeline; see the Pipeline syntax documentation for more info.
pipeline {
    agent any
    stages {
        stage('Example Build') {
            steps {
                echo 'Hello World'
            }
        }
        stage('Deploy to PROD') {
            when {
                branch 'production'
                environment name: 'DEPLOY_TO', value: 'production'
            }
            steps {
                echo 'Deploying'
            }
        }
        stage('Deploy to DEV') {
            when {
                branch 'dev'
                environment name: 'DEPLOY_TO', value: 'dev'
            }
            steps {
                echo 'Deploying'
            }
        }
    }
}
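If you specifically want a single environment{} block whose values depend on the branch, rather than separate stages, a minimal sketch (assuming a multibranch job where BRANCH_NAME is populated) is to use a Groovy ternary inside the value:

pipeline {
    agent any
    environment {
        // Pick the value based on the branch being built (assumes env.BRANCH_NAME is available)
        VAR1 = "${env.BRANCH_NAME == 'master' ? 'y' : 'x'}"
    }
    stages {
        stage('Show') {
            steps {
                echo "VAR1 is ${env.VAR1}"
            }
        }
    }
}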

Error in email template for Robot framework when using multiple script blocks in pipeline

I'm using Jenkins on Windows 10 with a pipeline script that runs a Robot Framework script and then sends an email with the results.
The template I'm using is this one.
I've tried different setups in the Jenkinsfile (including a scripted version), but basically this works:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    bat "robot -v store:${storeID} -v testType:${testType} -v site:${pageID} -v productName: previews.robot"
                }
            }
        }
    }
    post {
        always {
            script {
                step(
                    [
                        $class              : 'RobotPublisher',
                        outputPath          : '.',
                        outputFileName      : 'output.xml',
                        reportFileName      : 'report.html',
                        logFileName         : 'log.html',
                        disableArchiveOutput: false,
                        passThreshold       : 50,
                        unstableThreshold   : 40,
                        otherFiles          : "*.png",
                    ]
                )
                def emailBody = '''${SCRIPT, template="robot.template"}'''
                emailext body: emailBody,
                    subject: "[JENKINS BUILD]",
                    recipientProviders: [[$class: 'DevelopersRecipientProvider'], [$class: 'RequesterRecipientProvider']]
            }
            archiveArtifacts '*.html, *.xml, *.png'
        }
    }
}
But if I add another stage with some scripting, for example to change the build name, then I get a template error.
Pipeline that generates the error:
pipeline {
    agent any
    stages {
        stage('Init') {
            steps {
                script {
                    def name = ""
                    switch(testType) {
                        case "newProduct":
                            def tmpProduct = pageID.split("/")
                            productName = tmpProduct[tmpProduct.size()-1]
                            name = storeID + ": New Product (" + productName + ")"
                            break
                        case "newHomepage":
                            name = storeID + ": New Homepage"
                            break
                        case "newBlog":
                            name = storeID + ": New Blog Post"
                            break
                        case "cosmetics":
                            name = storeID + ": UI Changes"
                            break
                        default:
                            name = "Test type empty"
                            break
                    }
                    currentBuild.displayName = name
                }
            }
        }
        stage('Build') {
            steps {
                script {
                    bat "robot -v store:${storeID} -v testType:${testType} -v site:${pageID} -v productName: previews.robot"
                }
            }
        }
    }
    post {
        always {
            script {
                step(
                    [
                        $class              : 'RobotPublisher',
                        outputPath          : '.',
                        outputFileName      : 'output.xml',
                        reportFileName      : 'report.html',
                        logFileName         : 'log.html',
                        disableArchiveOutput: false,
                        passThreshold       : 50,
                        unstableThreshold   : 40,
                        otherFiles          : "*.png",
                    ]
                )
                def emailBody = '''${SCRIPT, template="robot.template"}'''
                emailext body: emailBody,
                    subject: "[JENKINS BUILD]",
                    recipientProviders: [[$class: 'DevelopersRecipientProvider'], [$class: 'RequesterRecipientProvider']]
            }
            archiveArtifacts '*.html, *.xml, *.png'
        }
    }
}
The name of the build is changed correctly and the script runs to the end without issues, but in the email and in the console output I can see a template error:
Exception raised during template rendering: Cannot get property 'simpleName' on null object
java.lang.NullPointerException: Cannot get property 'simpleName' on null object
Any pointers? I would really like to make the pipeline richer, but I will stay with the basic version if I can't solve this problem.
Modify robot.template as below, which solved this problem:
if( action && (action.class.simpleName.equals("RobotBuildAction") ) )

How does a Jenkins declarative pipeline define arrays?

Following the docs, I want to define volumes and envVars, but I get an error.
pipeline {
    agent {
        kubernetes {
            idleMinutes 5
            workspaceVolume nfsWorkspaceVolume(readOnly: false, serverAddress: '127.0.0.1', serverPath: '/data/nfs-data/kubernetes/jenkins/agent')
            containerTemplate {
                name 'maven'
                image 'maven:3.3.9-jdk-8-alpine'
                ttyEnabled true
                command 'cat'
                envVars envVar(key: 'ENV', value: 'test')
            }
        }
    }
    stages {
        stage('Clone') {
            steps {
                script {
                    sh 'pwd'
                }
            }
        }
    }
}
Running the pipeline gives the following error:
java.lang.UnsupportedOperationException: no known implementation of interface java.util.List is using symbol ‘envVar’
at org.jenkinsci.plugins.structs.describable.DescribableModel.resolveClass(DescribableModel.java:570)
at org.jenkinsci.plugins.structs.describable.UninstantiatedDescribable.instantiate(UninstantiatedDescribable.java:207)
at org.jenkinsci.plugins.structs.describable.DescribableModel.coerce(DescribableModel.java:466)
at org.jenkinsci.plugins.structs.describable.DescribableModel.injectSetters(DescribableModel.java:429)
at org.jenkinsci.plugins.structs.describable.DescribableModel.instantiate(DescribableModel.java:331)
Caused: java.lang.IllegalArgumentException: Could not instantiate {image=maven:3.3.9-jdk-8-alpine, ttyEnabled=true, name=maven, envVars=#envVar(key=127.0.0.1,value=/data/nfs-data/kubernetes/jenkins/agent), command=cat} for org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate
at org.jenkinsci.plugins.structs.describable.DescribableModel.instantiate(DescribableModel.java:334)
at org.jenkinsci.plugins.structs.describable.DescribableModel.coerce(DescribableModel.java:474)
at org.jenkinsci.plugins.structs.describable.DescribableModel.injectSetters(DescribableModel.java:429)
at org.jenkinsci.plugins.structs.describable.DescribableModel.instantiate(DescribableModel.java:331)
What should I do?
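The error message says the envVars property expects a java.util.List, so one hedged guess is that the single envVar entry needs to be wrapped in a list. A minimal sketch of that change follows (the declarative parser may still be picky about this block, so treat it as an assumption to verify rather than a confirmed fix):

pipeline {
    agent {
        kubernetes {
            idleMinutes 5
            workspaceVolume nfsWorkspaceVolume(readOnly: false, serverAddress: '127.0.0.1', serverPath: '/data/nfs-data/kubernetes/jenkins/agent')
            containerTemplate {
                name 'maven'
                image 'maven:3.3.9-jdk-8-alpine'
                ttyEnabled true
                command 'cat'
                // envVars is declared as a List, so pass a list of envVar entries
                envVars([envVar(key: 'ENV', value: 'test')])
            }
        }
    }
    stages {
        stage('Clone') {
            steps {
                sh 'pwd'
            }
        }
    }
}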