Convert Str to Int from Yaml to Json - kubernetes

How to force integer value in kubernetes template (without helm)
In variables.yaml
- name: PORT
  value: 80
In template.yaml
ports:
  - containerPort: $(PORT) --> output '80', expected: 80 (int)
    protocol: TCP

In a Helm template you can use the int function to force the value of PORT to be an integer. You can make this change in your template.yaml:
Wrapping PORT in the int function ensures the template emits containerPort as an integer.
Note that Helm references values differently from plain Kubernetes manifests: Helm uses double curly braces ({{ }}), while Kubernetes container specs use $(VAR) substitution.
ports:
  - containerPort: {{ int .Values.PORT }}
    protocol: TCP
Can you try the snippet above and see if it helps?
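The underlying issue is YAML typing: an unquoted scalar like 80 loads as an integer, while a quoted one stays a string. A quick check (a sketch, assuming PyYAML is installed) illustrates this:

```python
import yaml  # PyYAML, assumed installed

# Unquoted scalars load as integers; quoted ones stay strings
unquoted = yaml.safe_load("containerPort: 80")
quoted = yaml.safe_load("containerPort: '80'")

print(type(unquoted["containerPort"]))  # <class 'int'>
print(type(quoted["containerPort"]))    # <class 'str'>
```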

To convert a string to an integer while converting from YAML to JSON, you can use a custom function to parse the string and convert it to an integer. Here is an example implementation in Python:
import yaml
import json

def parse_int(val):
    # Try to interpret the value as an integer; otherwise return it unchanged
    try:
        return int(val)
    except (TypeError, ValueError):
        return val

def convert_ints(obj):
    # Recursively apply parse_int to every string in dicts and lists
    if isinstance(obj, dict):
        return {key: convert_ints(value) for key, value in obj.items()}
    if isinstance(obj, list):
        return [convert_ints(item) for item in obj]
    if isinstance(obj, str):
        return parse_int(obj)
    return obj

# Load YAML data from file or string
with open('data.yaml', 'r') as file:
    yaml_data = yaml.safe_load(file)

# Parse string values as integers
parsed_data = convert_ints(yaml_data)

# Save JSON data to file or output to console
with open('data.json', 'w') as file:
    json.dump(parsed_data, file, indent=2)
In this code, the parse_int function tries to parse the given value as an integer using the int function; if the value is not a valid integer, it returns the original value. The convert_ints helper walks the loaded data recursively and applies parse_int to every string, so integer-like strings become real integers before the data is serialized to JSON. (Note that passing a default function to json.dumps would not achieve this: default is only invoked for values that are not JSON-serializable, and strings always are.)
Note that this implementation only works for integer values represented as strings in the YAML data. If you have other types of data that need to be converted, you will need to modify the 'parse_int' function or create additional functions to handle those cases.
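As a quick standalone check of the string-to-integer helper (redefined here so the snippet runs on its own):

```python
def parse_int(val):
    # Try to parse as int; fall back to the original value
    try:
        return int(val)
    except (TypeError, ValueError):
        return val

print(parse_int("80"))   # 80 (int)
print(parse_int("web"))  # returned unchanged
print(parse_int("8.5"))  # returned unchanged: int() rejects decimal strings
```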

Related

JSON path semantics different in kubectl and additional printer columns in custom resource definition

I use kubectl to list Kubernetes custom resources of a kind mykind with an additional table column LABEL that contains the value of a label a.b.c.com/key if present:
kubectl get mykind -o=custom-columns=LABEL:.metadata.labels.'a\.b\.c\.com/key'
This works, i.e., the label value is properly displayed.
Subsequently, I wanted to add a corresponding additional printer column to the custom resource definition of mykind:
- description: Label value
  jsonPath: .metadata.labels.'a\.b\.c\.com/key'
  name: LABEL
  type: string
Although the additional column is added to kubectl get mykind, it is empty and no label value is shown (in contrast to the kubectl command above). My only suspicion was a problem with escaping the special characters, but no variation helped.
Are you aware of any difference between the JSON path handling in kubectl and additional printer columns? I expected strongly that they are exactly the same.
mdaniel's comment works!
- description: Label value
  jsonPath: '.metadata.labels.a\.b\.c\.com/key'
  name: LABEL
  type: string
You need to use \. instead of . and wrap the path in single quotes (' '). It doesn't work with double quotes because YAML processes backslash escapes inside double-quoted strings, and \. is not a valid escape sequence; single-quoted strings keep the backslash literally.
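The quoting difference can be reproduced with any YAML parser; a PyYAML sketch (assuming PyYAML is installed):

```python
import yaml  # PyYAML, assumed installed

# Single quotes: the backslash survives as a literal character
doc = yaml.safe_load(r"jsonPath: '.metadata.labels.a\.b\.c\.com/key'")
print(doc["jsonPath"])  # .metadata.labels.a\.b\.c\.com/key

# Double quotes: \. is an unknown escape sequence and the parser rejects it
try:
    yaml.safe_load(r'jsonPath: ".metadata.labels.a\.b\.c\.com/key"')
    double_quoted_parsed = True
except yaml.YAMLError:
    double_quoted_parsed = False
print(double_quoted_parsed)  # False
```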

has function in the helm template is not returning true

I have a values.yaml file containing list variable like:
excludePort: [32104, 30119]
I am trying to use that in the helm template like:
{{ if has 32104 .Values.excludePort }}
But it seems not to be returning true (the block after the condition is not executing). Any reason for this?
This is a problem caused by a variable type mismatch.
{{ kindOf (first .Values.excludePort) }}
output:
float64
You need to understand that Helm renders templates by first parsing the values.yaml file into a Go map, and bare numbers are deserialized to float64 by default; this is determined by the underlying Go implementation.
See the Go encoding/json package's Decode documentation.
So the elements of the excludePort array are of type float64, while the literal 32104 is an int.
To get the desired result you need to write:
{{- if has 32104.0 .Values.excludePort }}
Of course, this is not a good solution, because floats introduce precision problems; it is better to use strings.
Like this:
values.yaml
excludePort: ["32104", "30119"]
template/xxx.yaml
{{- if has "32104" .Values.excludePort}}
...
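The precision concern behind preferring strings is easy to demonstrate in any language; a minimal Python illustration:

```python
# Classic float precision pitfall: exact comparisons can fail
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2)         # 0.30000000000000004

# Matching on strings avoids the problem entirely
ports = ["32104", "30119"]
print("32104" in ports)  # True
```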

How to provision a bunch of resources using the pyplate macro

In this learning exercise I want to use a PyPlate script to provision the BucketA, BucketB and BucketC buckets in addition to the TestBucket.
Imagine that the BucketNames parameter could be set by a user of this template who would specify a hundred bucket names using UUIDs for example.
AWSTemplateFormatVersion: "2010-09-09"
Transform: [PyPlate]
Description: A stack that provisions a bunch of s3 buckets based on param names
Parameters:
  BucketNames:
    Type: CommaDelimitedList
    Description: All bucket names that should be created
    Default: BucketA,BucketB,BucketC
Resources:
  TestBucket:
    Type: "AWS::S3::Bucket"
  #!PyPlate
  output = []
  bucket_names = params['BucketNames']
  for name in bucket_names:
    output.append('"' + name + '": {"Type": "AWS::S3::Bucket"}')
The above when deployed responds with a Template format error: YAML not well-formed. (line 15, column 3)
Although the accepted answer is functionally correct, there is a better way to approach this.
Essentially, PyPlate recursively walks all the key-value pairs of the template and replaces any value matching the #!PyPlate regex with its Python-computed result. So the PyPlate code needs a corresponding key.
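That recursive replacement can be sketched in a few lines (an illustration of the idea, not the macro's actual source; the walk function and names are hypothetical):

```python
def walk(node, params, template):
    # Recursively visit every value; run any "#!PyPlate" string as Python
    if isinstance(node, dict):
        return {k: walk(v, params, template) for k, v in node.items()}
    if isinstance(node, list):
        return [walk(v, params, template) for v in node]
    if isinstance(node, str) and node.startswith("#!PyPlate"):
        scope = {"params": params, "template": template, "output": None}
        exec(node, scope)  # the snippet must assign to `output`
        return scope["output"]
    return node

# Hypothetical mini-template exercising the walk
template = {"Resources": "#!PyPlate\noutput = {n: {'Type': 'AWS::S3::Bucket'} for n in params['BucketNames']}"}
result = walk(template, {"BucketNames": ["BucketA", "BucketB"]}, template)
print(result["Resources"])
```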
Here's how a combination of PyPlate and Explode would solve the above problem.
AWSTemplateFormatVersion: "2010-09-09"
Transform: [PyPlate, Explode]
Description: A stack that provisions a bunch of s3 buckets based on param names
Parameters:
  BucketNames:
    Type: CommaDelimitedList
    Description: All bucket names that should be created
    Default: BucketA,BucketB,BucketC
Mappings: {}
Resources:
  MacroWrapper:
    ExplodeMap: |
      #!PyPlate
      param = "BucketNames"
      mapNamespace = param + "Map"
      template["Mappings"][mapNamespace] = {}
      for bucket in params[param]:
        template["Mappings"][mapNamespace][bucket] = {
          "ResourceName": bucket
        }
      output = mapNamespace
    Type: "AWS::S3::Bucket"
  TestBucket:
    Type: "AWS::S3::Bucket"
This approach is powerful because:
You can append resources to an existing template, because you won't tamper with the whole Resources block
You don't need to rely on hardcoded Mappings, as required by Explode. You can drive dynamic logic in Cloudformation.
Most of the CloudFormation properties and key-value pairs remain in the YAML part, with a minimal Python part augmenting the template's functionality.
Please be aware of the macro order: PyPlate needs to execute before Explode, which is why the order is [PyPlate, Explode]. The execution is sequential.
If we walk through the source code of PyPlate, it gives us control over several template-related variables, namely:
params (stack parameters)
template (the entire template)
account_id
region
I utilised the template variable in this case.
Hope this helps
This works for me:
AWSTemplateFormatVersion: "2010-09-09"
Transform: [PyPlate]
Description: A stack that provisions a bunch of s3 buckets based on param names
Parameters:
  BucketNames:
    Type: CommaDelimitedList
    Description: All bucket names that should be created
    Default: BucketA,BucketB,BucketC
Resources: |
  #!PyPlate
  output = {}
  bucket_names = params['BucketNames']
  for name in bucket_names:
    output[name] = {"Type": "AWS::S3::Bucket"}
Explanation:
The python code outputs a dict object where the key is the bucket name and the value is its configuration:
{'BucketA': {'Type': 'AWS::S3::Bucket'}, 'BucketB': {'Type': 'AWS::S3::Bucket'}}
Prior to Macro execution, the YAML template is transformed to JSON format, and because the above is valid JSON data, I can plug it in as value of Resources.
(Note that having the hardcoded TestBucket won't work with this and I had to remove it)
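The snippet's effect can be checked outside CloudFormation by running the same loop against a stand-in params dict (hypothetical values):

```python
import json

# Stand-in for the stack parameters PyPlate would provide
params = {"BucketNames": ["BucketA", "BucketB", "BucketC"]}

output = {}
for name in params["BucketNames"]:
    output[name] = {"Type": "AWS::S3::Bucket"}

# The resulting dict is valid JSON and can be plugged in as Resources
print(json.dumps(output, indent=2))
```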

Using Grafana Variable in Prometheus Query

I'm trying to use a variable at the start of one of my PromQL queries so I can return data based on the variable. Not sure if this is possible or not.
$variable_totalaccuracy_total
Expected to return the totalaccuracy for the variable but get back
error:"parse error at char 1: unexpected character: '$'"
Use braces around the variable name:
${variable}_totalaccuracy_total

Grafana: use label as value

I am running the SNMP_Exporter package towards some SNMP enabled devices, the snmp.yml is generated via the generator tool.
Now this is a value that is stored in Prometheus:
RefclockOffset{instance="10.0.2.8",job="snmp",RefclockOffset="-0.001258"}
As you can see the SNMP_Exporter stores the float value inside a label.
How can I plot this in Grafana? I am no power user.
I can't find what MIB this is coming from, but this is due to the MIB exposing the value as a string.
You can add an override to your module in your generator.yml to extract the float from the string:
overrides:
  RefclockOffset:
    regex_extracts:
      '':
        - regex: '(.*)'
          value: '$1'
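What the override does can be mimicked in a few lines: apply the regex to the label string and convert the capture to a float (a sketch of the idea, not snmp_exporter's actual code):

```python
import re

# The value the device exposes as a string label
raw = "-0.001258"

# The override's regex '(.*)' captures the whole string as $1
match = re.match(r"(.*)", raw)
value = float(match.group(1))
print(value)  # -0.001258
```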