I have the following Groovy code:
void call(Closure closure) {
    pod_template_maven_image = ...
    pod_template_maven_m2 = ...
    pod_template_nodejs_image = ...
    pod_template_sonar_image = ...
    toleration_condition = true
    def yaml_config = ""
    if (toleration_condition) {
        yaml_config = """
spec:
  tolerations:
  - key: "my_toleration"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"
"""
    }
    podTemplate(containers: ...,
                volumes: ...,
                etc...,
                yaml: yaml_config,
                yamlMergeStrategy: merge()) {
        node(POD_LABEL) {
            closure()
        }
    }
}
At the moment, when I run the job in Jenkins nothing happens; the pod is created without error, but the YAML is not added to the pod.
We want to add the toleration (yaml_config) to the podTemplate depending on the value of toleration_condition.
I'm new to this area and don't know if it is possible.
Is it? What's the best way to do it?
Thanks.
Tip: I see you use the Kubernetes plugin. Have you seen the section on inheritance? Using the declarative pipeline syntax with inheritFrom might provide a much cleaner way to achieve the desired result. Here we also keep the yamlMergeStrategy at its default (override()).
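For illustration, a rough sketch of what that could look like; the parent template name base-template and the maven container are assumptions for the example, not taken from your code:
pipeline {
    agent {
        kubernetes {
            // 'base-template' is an assumed name for a pod template defined elsewhere
            inheritFrom 'base-template'
            // yamlMergeStrategy left at its default (override()), as mentioned above
            yaml '''
spec:
  tolerations:
  - key: "my_toleration"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"
'''
        }
    }
    stages {
        stage('build') {
            steps {
                // 'maven' is assumed to be a container provided by the inherited template
                container('maven') {
                    sh 'mvn --version'
                }
            }
        }
    }
}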
I have seen that the indentation of the "delta yaml" (yaml_config) provided to yaml: influences the final outcome (trailing and leading newlines can also have an impact). So I would recommend testing with a yaml_config like:
yaml_config = """
spec:
  tolerations:
  - key: "my_toleration"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule" """
I have Terraform code as given below:
locals {
  file_path   = format("%s-%s", var.test1, var.test2)
  test_decode = yamldecode((data.github_repository_file.test.content))
}

data "github_repository_file" "test" {
  repository = "test-repo"
  branch     = "develop"
  file       = "${local.file_path}/local/test.yaml"
}
test_encode = ${yamlencode(local.test_decode.spec.names)}
This works fine when a ".spec.names" attribute is present in the test.yaml file. Since we select the test.yaml based on local.file_path, sometimes the .spec.names attribute might not be present in the test.yaml, and the plan fails with "Error: Unsupported attribute". How can I check whether the ".spec.names" attribute is present in the test.yaml?
Updating the question to add YAML examples.
YAML with names attribute
apiVersion: helm.toolkit.gitops.io/v2beta1
kind: HelmRelease
metadata:
  name: "test"
  namespace: "test-system"
spec:
  chart:
    spec:
      chart: "test-environment"
      version: "0.1.10"
  names:
    key1: "value1"
    key2: "value2"
    key3: "value3"
    key4: "value4"
YAML without names attribute
apiVersion: helm.toolkit.gitops.io/v2beta1
kind: HelmRelease
metadata:
  name: "test"
  namespace: "test-system"
spec:
  chart:
    spec:
      chart: "test-environment"
      version: "0.1.10"
You can use try:
test_encode = yamlencode(try(local.test_decode.spec.names, "some_default_value"))
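If you want the fallback to still encode as an (empty) YAML mapping instead of a plain string, a small variation on the same idea is to default to an empty map (the {} default is my suggestion, not part of the original snippet):
test_encode = yamlencode(try(local.test_decode.spec.names, {}))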
I don't know if a more elegant way exists, but you can check if a property exists like this:
contains([for k, v in local.test_decode : k], "spec")
Chaining this to check whether both spec and names exist did not work for me, as the second condition was still evaluated even if the first condition failed:
contains([for k, v in local.test_decode : k], "spec") && contains([for k, v in local.test_decode.spec : k], "names")
this does the trick, but I don't like it, because I consider it to be quite hacky:
contains([for k, v in local.test_decode : k], "spec") ? contains([for k, v in local.test_decode.spec : k], "names") : false
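Putting the check and the lookup together in a locals block might look roughly like this; it is a sketch reusing the exact expressions above, and the empty-map fallback is just an illustrative assumption:
locals {
  has_names   = contains([for k, v in local.test_decode : k], "spec") ? contains([for k, v in local.test_decode.spec : k], "names") : false
  test_encode = yamlencode(local.has_names ? local.test_decode.spec.names : {})
}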
I am currently checking out Tanka + Jsonnet, but every time I think I understand it, something new trips me up. Can somebody help me understand how to do a loop reference? (Or is there a generally better solution?)
I am trying to create multiple deployments, each with a corresponding configMapVolumeMount, and I am not sure how to reference the corresponding configMap object here.
(Using a configVolumeMount it works, since it refers to the name, not the object.)
deployment: [
  deploy.new(
    name='demo-' + instance.name,
  )
  + deploy.configMapVolumeMount('config-' + instance.name, '/config.yml', k.core.v1.volumeMount.withSubPath('config.yml'))
  for instance in $._config.demo.instances
],
configMap: [
  configMap.new('config-' + instance.name, {
    'config.yml': (importstr 'files/config.yml') % {
      name: instance.name,
      ....
    },
  }),
  for instance in $._config.demo.instances
]
regards
Great to read that you're making progress with Tanka; it's an awesome tool (once you've learned how to ride it, heh).
Find below a possible answer; see the inline comments in the code, in particular how we (ab)use Tanka's layout flexibility to populate the deploys: [...] array with Jsonnet objects, each containing a paired deployment + configMap.
config.jsonnet
{
  demo: {
    instances: ['foo', 'bar'],
    image: 'nginx',  // just as example
  },
}
main.jsonnet
local config = import 'config.jsonnet';
local k = import 'github.com/grafana/jsonnet-libs/ksonnet-util/kausal.libsonnet';

{
  local deployment = k.apps.v1.deployment,
  local configMap = k.core.v1.configMap,

  _config:: import 'config.jsonnet',

  // my_deploy(name) will return a name-d deploy+configMap object
  my_deploy(name):: {
    local this = self,
    deployment:
      deployment.new(
        name='deploy-%s' % name,
        replicas=1,
        containers=[
          k.core.v1.container.new('demo-%s' % name, $._config.demo.image),
        ],
      )
      + deployment.configMapVolumeMount(
        this.configMap,
        '/config.yml',
        k.core.v1.volumeMount.withSubPath('config.yml')
      ),
    configMap:
      configMap.new('config-%s' % name)
      + configMap.withData({
        // NB: replacing `importstr 'files/config.yml';` by
        // a simple YAML multi-line string, just for the sake of having
        // a simple yet complete/usable example.
        'config.yml': |||
          name: %(name)s
          other: value
        ||| % { name: name },
      }),
  },

  // Tanka is pretty flexible with the "layout" of the Kubernetes objects
  // in the Environment (can be arrays, objects, etc), below using an array
  // for simplicity (built via a loop/comprehension)
  deploys: [$.my_deploy(name) for name in $._config.demo.instances],
}
output
$ tk init
[...]
## NOTE: using https://kind.sigs.k8s.io/ local Kubernetes cluster
$ tk env set --server-from-context kind-kind environments/default
[... save main.jsonnet, config.jsonnet to ./environments/default/]
$ tk apply --dry-run=server environments/default
[...]
configmap/config-bar created (server dry run)
configmap/config-foo created (server dry run)
deployment.apps/deploy-bar created (server dry run)
deployment.apps/deploy-foo created (server dry run)
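If you just want to inspect the rendered manifests without touching the cluster, tk show can be run first (assuming the same environments/default layout as above):
$ tk show environments/default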
I have a pod running dotnet that leverages an appsettings.json file. I have the following entry for RabbitMQ:
appsettings.json
{
  ...
  "RabbitMQ": {
    "HostName": "localhost",
    "UserName": "someuser",
    "Password": "somepassword"
  }
}
I am trying to update the RabbitMQ.HostName property within my deployment yaml like so:
env:
  - name: "RabbitMQ:HostName"
    value: "rabbitmq-cluster-deployment.rabbitmq.svc.cluster.local"
It doesn't work. I have tried different variations, but nothing seems to set it.
Does Kubernetes have a way of setting this kind of "nested property" or not? I am aware that the : character is not allowed. I have tried using . which didn't throw an error, but also didn't work. The reason I was trying : is that this is how you would do it with .NET.
Example: _configuration["RabbitMQ:HostName"]
Other "non-nested" environment variables are set just fine.
Remove the quotes from the name field and replace : with a double underscore (__).
Instead of
env:
  - name: "RabbitMQ:HostName"
    value: "rabbitmq-cluster-deployment.rabbitmq.svc.cluster.local"
use
env:
  - name: RabbitMQ__HostName
    value: "rabbitmq-cluster-deployment.rabbitmq.svc.cluster.local"
I have a Rundeck job that executes multiple steps, each of which are Job References to other small jobs. The first step selects a server to upgrade, and sets a global variable with the server name. The remaining steps perform upgrade tasks. It is possible though for the first step to return NONE as the server name, and if that's the case I would like to halt execution right there without running the remaining steps, and I'd like the whole job to be marked as Successful.
I could just make that first job exit with an error code, but then the whole job looks failed, and it looks like there is something wrong with it, even though it successfully ran and found there was nothing to upgrade.
Any ideas? I'm finding "use a flow control step" everywhere, but I can't see how to make that work for my use case.
The best way to create complex workflows depending on some output value is to use the Ruleset Strategy (Rundeck Enterprise). Take a look at this.
On the community version you can save the result of the first step on a key-value variable and do some "script-fu" in the following steps:
Step 1: print the status and save it on a data variable using the key-value data log filter.
Steps 2,3,4: capture the key-value data and then the step can continue or not.
I made an example easy to import to your instance for testing:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 27de501a-8bb2-4c6e-a5f9-0676e80ca75a
  loglevel: INFO
  name: HelloWorld
  nodeFilterEditable: false
  options:
  - enforced: true
    name: opt1
    required: true
    value: 'true'
    values:
    - 'true'
    - 'false'
    valuesListDelimiter: ','
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: echo "url=${option.opt1}"
      plugins:
        LogFilter:
        - config:
            invalidKeyPattern: \s|\$|\{|\}|\\
            logData: 'true'
            name: result
            regex: .*=\s*(.+)$
          type: key-value-data
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        # data/value evaluation
        if [ "@data.result@" = "true" ]; then
          echo "step two"
        fi
      scriptInterpreter: /bin/bash
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        # data/value evaluation
        if [ "@data.result@" = "true" ]; then
          echo "step three"
        fi
      scriptInterpreter: /bin/bash
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        # data/value evaluation
        if [ "@data.result@" = "true" ]; then
          echo "step four"
        fi
      scriptInterpreter: /bin/bash
    keepgoing: false
    strategy: node-first
  uuid: 27de501a-8bb2-4c6e-a5f9-0676e80ca75a
MegaDrive68k's answer is the best you can do with the basic open source version or if you have the Enterprise version.
But you can also create your own plugin or fork an existing one.
That is what I did with the official flow control plugin, adding conditions.
You can fork this plugin and add two new @PluginProperty fields in the Java code (these add two new fields to the plugin's parameters in the Rundeck interface), then compare their values.
Example:
@PluginProperty(title = "First Value", description = "Compare this", required = true)
String value1;
@PluginProperty(title = "Second Value", description = "To this", required = true)
String value2;
Comparison of String values (which is your case):
if (value1.equals(value2)) {...}
Comparison of numeric values (the plugin properties are Strings, so parse them first):
if (Integer.parseInt(value1) == Integer.parseInt(value2)) {...}
If you want to stop the job with a successful status (it does not stop the parent job, just the current one):
context.getFlowControl().Halt(true);
If you want to stop the job with a failed status:
context.getFlowControl().Halt(false);
If you want to stop the job with a customized status:
context.getFlowControl().Halt("MY CUSTOM STATUS");
And finally, if you want to continue and not stop:
context.getFlowControl().Continue();
So a complete example (add this to your public class):
@PluginProperty(title = "First Value", description = "Compare this", required = true)
String value1;

@PluginProperty(title = "Second Value", description = "To this", required = true)
String value2;

@Override
public void executeStep(final PluginStepContext context, final Map<String, Object> configuration)
        throws StepException {
    if (value1.equals(value2)) {
        // halt the current job without marking it failed
        context.getFlowControl().Halt(true);
    } else {
        // continue
        context.getFlowControl().Continue();
    }
}
Then create your jar file and place it in the libext folder.
Now you can add your custom step. Put your global var in the first field and "NONE" in the second field.
If the global var contains "NONE", the job stops successfully at this step.
If you call a job containing this step from another (parent) job, the parent job continues.
If you want, you can use this forked plugin, which already includes these modifications. It looks like this:
I have Node.js code running inside a pod. From inside the pod I want to find the zone of the node where this pod is running. What is the best way to do that? Do I need extra permissions?
I have not been able to find a library, but I post the code that does it below. The getContent function was slightly adapted from this post. This code should work inside a GKE pod or on a GCE host.
Use it as follows:
const gcp = require('./gcp.js')
gcp.zone().then(z => console.log('Zone is: ' + z))
Module: gcp.js
const getContent = function(lib, options) {
  // return new pending promise
  return new Promise((resolve, reject) => {
    // select http or https module, depending on requested url
    const request = lib.get(options, (response) => {
      // handle http errors
      if (response.statusCode < 200 || response.statusCode > 299) {
        reject(new Error('Failed to load page, status code: ' + response.statusCode));
      }
      // temporary data holder
      const body = [];
      // on every content chunk, push it to the data array
      response.on('data', (chunk) => body.push(chunk));
      // we are done, resolve promise with those joined chunks
      response.on('end', () => resolve(body.join('')));
    });
    // handle connection errors of the request
    request.on('error', (err) => reject(err))
  })
};

exports.zone = () => {
  return getContent(
    require('http'),
    {
      hostname: 'metadata.google.internal',
      path: '/computeMetadata/v1/instance/zone',
      headers: {
        'Metadata-Flavor': 'Google'
      },
      method: 'GET'
    })
}
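Note that the metadata server returns the zone as a full path like projects/<project-number>/zones/<zone>, so you may want to keep only the last segment; a small hedged addition to the usage shown above:
gcp.zone().then(z => console.log('Zone is: ' + z.split('/').pop()))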
You can use the failure-domain.beta.kubernetes.io/region and failure-domain.beta.kubernetes.io/zone labels of the pod to get its region and AZ.
But, please keep in mind, that:
Only GCE and AWS are currently supported automatically (though it is easy to add similar support for other clouds or even bare metal, by simply arranging for the appropriate labels to be added to nodes and volumes).
To get access to the labels, you can use the Downward API to attach a volume with the current labels and annotations of the pod. You don't need any extra permissions to use it; just mount them as a volume.
Here is an example from the documentation:
apiVersion: v1
kind: Pod
metadata:
  name: kubernetes-downwardapi-volume-example
  labels:
    zone: us-est-coast
    cluster: test-cluster1
    rack: rack-22
  annotations:
    build: two
    builder: john-doe
spec:
  containers:
    - name: client-container
      image: k8s.gcr.io/busybox
      command: ["sh", "-c"]
      args:
      - while true; do
          if [[ -e /etc/podinfo/labels ]]; then
            echo -en '\n\n'; cat /etc/podinfo/labels; fi;
          if [[ -e /etc/podinfo/annotations ]]; then
            echo -en '\n\n'; cat /etc/podinfo/annotations; fi;
          sleep 5;
        done;
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
          readOnly: false
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
          - path: "annotations"
            fieldRef:
              fieldPath: metadata.annotations
When you have the volume with labels mounted, you can read the file /etc/podinfo/labels, which will contain information about the AZ and region as key-value pairs, like this:
failure-domain.beta.kubernetes.io/region=us-east-1
failure-domain.beta.kubernetes.io/zone=us-east-1c
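For completeness, a small Node.js sketch that reads the mounted labels file and extracts the zone, assuming the downwardAPI volume above is mounted at /etc/podinfo and the zone label is present as described:
const fs = require('fs');

function zoneFromLabels(path = '/etc/podinfo/labels') {
  // each line looks like key=value (values may be quoted); find the zone label
  const line = fs.readFileSync(path, 'utf8')
    .split('\n')
    .find(l => l.startsWith('failure-domain.beta.kubernetes.io/zone='));
  // strip any surrounding quotes from the value, just in case
  return line ? line.split('=')[1].replace(/"/g, '') : undefined;
}

console.log('Zone is: ' + zoneFromLabels());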