Helm pre-install yaml for config - kubernetes

I have a dependency on a priority class inside my k8s YAML config files, and I need it installed before any other YAML inside the templates folder.
The priority class:
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: ocritical
value: 1000
globalDefault: false
After reading the Helm docs it seems that I can use the pre-install hook.
I've changed my YAML and added an annotations section with the pre-install hook, but it still doesn't work. Any idea what I'm missing here?
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: ocritical
  annotations:
    "helm.sh/hook": pre-install
value: 1000
globalDefault: false
The YAML is located inside the templates folder.

You put quotation marks around the helm.sh/hook annotation key, which is incorrect - quotation marks can only be added around the annotation values.
You can also add a description field in your configuration file; remember that this field is an arbitrary string, meant to tell users of the cluster when they should use this PriorityClass.
Your PriorityClass should look like this:
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: ocritical
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
value: 1000
globalDefault: false
description: "This priority class should be used for XYZ service pods only."
You can find more information about proper configuration of a PriorityClass here: PriorityClass.
You can find more information about installing hooks here: helm-hooks.
I hope it helps.

Related

How to reuse variables in a kubernetes yaml?

I have a number of repeated values in my Kubernetes YAML file, and I'm wondering if there is a way I could store variables somewhere in the file, ideally at the top, that I can reuse further down, sort of like:
variables:
  - appName: &appname myapp
  - buildNumber: &buildno 1.0.23
that I can reuse further down like
labels:
  app: *appname
  tags.datadoghq.com/version: *buildno
containers:
  - name: *appname
    ...
    image: 123456.com:*buildno
if those are possible.
I know anchors are a thing in YAML, I just couldn't find anything on setting variables.
You can't do this in Kubernetes manifests, because you need a processor to manipulate the YAML files. You can, however, share anchors within the same YAML manifest, like this:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: &cmname myconfig
  namespace: &namespace default
  labels:
    name: *cmname
    deployedInNamespace: *namespace
data:
  config.yaml: |
    [myconfig]
    example_field=1
This will result in:
apiVersion: v1
data:
  config.yaml: |
    [myconfig]
    example_field=1
kind: ConfigMap
metadata:
  creationTimestamp: "2023-01-25T10:06:27Z"
  labels:
    deployedInNamespace: default
    name: myconfig
  name: myconfig
  namespace: default
  resourceVersion: "147712"
  uid: 4039cea4-1e64-4d1a-bdff-910d5ff2a485
As you can see, the labels name and deployedInNamespace have the values resulting from the anchor evaluation.
Based on your use case description, what you would need is to go the Helm chart route and template your manifests. You can then leverage helper functions and easily customize these fields whenever you want. From my experience, when you have a use case like this, Helm is the way to go, because it will help you customize everything within your manifests when you decide to change something else.
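For illustration, a minimal Helm-style sketch of what that could look like; the value names appName and buildNumber are assumptions taken from your example, not an existing chart:
# values.yaml (hypothetical)
appName: myapp
buildNumber: "1.0.23"

# templates/deployment.yaml (hypothetical excerpt)
metadata:
  labels:
    app: {{ .Values.appName }}
    tags.datadoghq.com/version: {{ .Values.buildNumber | quote }}
spec:
  template:
    spec:
      containers:
        - name: {{ .Values.appName }}
          image: "123456.com:{{ .Values.buildNumber }}"
With this layout, changing the build number once in values.yaml (or via --set buildNumber=...) updates every place that references it.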
I guess there is a similar question with an answer. Please check below:
How to reuse an environment variable in a YAML file?

Is it possible to set the name of an external metric for a HorizontalPodAutoscaler from a configmap? GKE

I am modifying a deployment which autoscales using a HorizontalPodAutoscaler (HPA). This deployment is part of a pipeline in which workers read messages from Pub/Sub subscriptions, do some work and publish to the next topic. Right now I use a configmap to define the pipeline for the deployments (the configmap contains the input subscription and output topics). The HPA autoscales based on the number of messages on the input subscription. I would like to be able to pull the subscription name for the HPA from a configmap if possible. Is there a way to do this?
example HPA:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: my-deployment-hpa
  namespace: default
  labels:
    name: my-deployment-hpa
spec:
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - external:
      metricName: pubsub.googleapis.com|subscription|num_undelivered_messages
      metricSelector:
        matchLabels:
          resource.labels.subscription_id: "$INPUT_SUBSCRIPTION"
      targetAverageValue: "2"
    type: External
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
The value in the HPA, currently $INPUT_SUBSCRIPTION, could ideally come from a configmap.
Posting this answer as a community wiki for better visibility, as the answer was provided in the comments.
Answering the question from the post:
I would like to be able to pull the subscription name for the HPA from a configmap if possible? Is there a way to do this?
As pointed out by user #Abdennour TOUMI, there is no possibility to set the metric used by HPA with a ConfigMap:
Unfortunately, you cannot... but you can with prometheus-adapter + HPA. Check this tutorial: itnext.io/...
As for a manual workaround, you could use a script that extracts the needed metric name from the ConfigMap and uses a template to replace the placeholder and apply a new HPA.
With a configMap like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: example
data:
  metric_name: "new_awesome_metric" # <-
  not_needed: "only for example"
And the following script:
#!/bin/bash
# variables
hpa_file_name="hpa.yaml"
configmap_name="example"
string_to_replace="PLACEHOLDER"
# extract the metric name used in a configmap
new_metric=$(kubectl get configmap $configmap_name -o json | jq '.data.metric_name')
# use the template to replace the $string_to_replace with your $new_metric and apply it
sed "s/$string_to_replace/$new_metric/g" $hpa_file_name | kubectl apply -f -
This script needs an hpa.yaml containing the template to apply as a resource (the example from the question can be used, with one change):
resource.labels.subscription_id: PLACEHOLDER
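For illustration, the full hpa.yaml template could then look like this - a sketch that simply reuses the HPA from the question with PLACEHOLDER in place of the subscription ID:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: my-deployment-hpa
  namespace: default
spec:
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metricName: pubsub.googleapis.com|subscription|num_undelivered_messages
      metricSelector:
        matchLabels:
          resource.labels.subscription_id: PLACEHOLDER
      targetAverageValue: "2"
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment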
For further reference, this HPA definition could be based on this guide:
Cloud.google.com: Kubernetes Engine: Tutorials: Autoscaling-metrics: PubSub

Kubernetes Dynamic Configmapping in StatefulSet

In Kubernetes you have the ability to dynamically grab the name of a pod and reference it in a YAML file (Pod field), like so:
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
and reference it later in the yaml file like so:
- name: FOO
  value: $(POD_NAME)-bar
In the case of a StatefulSet, the value of FOO may be something like "app_thing-0-bar", "app_thing-1-bar", etc. However, this doesn't seem to work for dynamically setting the name of a configmap. For example, the following configmap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app_thing-0-config
data:
  FOO: BAR
and this in the StatefulSet deployment yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app_thing
.
.
.
.
.
envFrom:
  - configMapRef:
      name: $(POD_NAME)-config
will not reference the configmap correctly, as it doesn't seem to like the $() syntax. Is there any way to do this without resorting to init containers and entrypoint scripting?
If I understand you correctly, there is a tool that can make it work. It's called Reloader:
Problem: We would like to watch if some change happens in ConfigMap and/or Secret; then perform a rolling upgrade on relevant DeploymentConfig, Deployment, Daemonset and Statefulset.
Solution: Reloader can watch changes in ConfigMap and Secret and do rolling upgrades on Pods with their associated DeploymentConfigs, Deployments, Daemonsets and Statefulsets.
You can find all the necessary info in the link above.
Also, if you need more details, you can check the documentation.
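For illustration, a minimal sketch of how Reloader is typically wired up, assuming the StatefulSet from your question (the reloader.stakater.com/auto annotation is the commonly documented one; double-check the Reloader docs for your version):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app_thing
  annotations:
    # ask Reloader to watch the ConfigMaps/Secrets referenced by this workload
    # and roll the StatefulSet when they change (assumed annotation name)
    reloader.stakater.com/auto: "true"
spec:
  # ... rest of the StatefulSet spec unchanged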
Please let me know if that helped.

Spinnaker - Reference ConfigMap versioned value inside manifest

I'm deploying a single yaml file containing two manifests using the Spinnaker Kubernetes Provider V2 (Manifest deployer). Inside the Deployment I have a custom annotation that references the ConfigMap:
# ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config-map
data:
  foo: bar
---
# Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-deployment
spec:
  template:
    metadata:
      annotations:
        my-config-map-reference: my-config-map
[...]
Upon deployment, Spinnaker applies versioning to the ConfigMap, which is then deployed as my-config-map-v000.
I'd like to be able to retrieve the full name inside my custom annotation, but since Spinnaker automatically replaces configMap references with the appropriate versioned values only in specific entrypoints ( https://github.com/spinnaker/clouddriver/blob/master/clouddriver-kubernetes/src/main/groovy/com/netflix/spinnaker/clouddriver/kubernetes/v2/artifact/ArtifactReplacerFactory.java ), this does not work in this case.
According to Spinnaker documentation ( https://www.spinnaker.io/reference/artifacts/in-kubernetes-v2/#why-not-pipeline-expressions ) I may be able to write a Pipeline Expression to retrieve the full name, but I wasn't able to do so.
How can I set the full ConfigMap name inside the annotation?
Spinnaker can inject artifacts from the currently executing pipeline into your manifests as they are deployed.
Refer to this guide for instructions on binding artifacts in manifests.
However, as mentioned here, there is no resource mapping for annotations, so the value has to be user-supplied, for example as a parameter to your manifest (see the sketch after the quote below).
In the future, certain relationships between resources will be recorded and annotated by Spinnaker
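For illustration only, a hedged sketch of what a user-supplied value could look like, using a pipeline parameter evaluated through a Spinnaker pipeline expression; the parameter name configMapVersionedName is an assumption, and expression support in manifests depends on your Spinnaker setup:
# Deployment excerpt (hypothetical)
spec:
  template:
    metadata:
      annotations:
        my-config-map-reference: '${ parameters.configMapVersionedName }'
You would then supply the versioned name (e.g. my-config-map-v000) as a pipeline parameter when triggering the pipeline, rather than having Spinnaker resolve it for you.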

Kubernetes HPA fails to detect a successfully published custom metric from Stackdriver

I'm trying to scale a Kubernetes Deployment using a HorizontalPodAutoscaler, which listens to a custom metric through Stackdriver.
I have a GKE cluster with the Stackdriver adapter enabled.
I'm able to publish the custom metric type to Stackdriver, and it is displayed in Stackdriver's Metrics Explorer.
This is how I have defined my HPA:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metricName: custom.googleapis.com|worker_pod_metrics|baz
      targetValue: 400
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test-app-group-1-1
After successfully creating example-hpa, executing kubectl get hpa example-hpa always shows TARGETS as <unknown>, and never detects the value from the custom metric.
NAME          REFERENCE                       TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
example-hpa   Deployment/test-app-group-1-1   <unknown>/400   1         10        1          18m
I'm using a Java client which runs locally to publish my custom metrics.
I have given the appropriate resource labels as mentioned here (hard-coded, so that it can run without a problem in a local environment). I have followed this document to create the Java client.
private static MonitoredResource prepareMonitoredResourceDescriptor() {
    Map<String, String> resourceLabels = new HashMap<>();
    resourceLabels.put("project_id", "<<<my-project-id>>>");
    resourceLabels.put("pod_id", "<my pod UID>");
    resourceLabels.put("container_name", "");
    resourceLabels.put("zone", "asia-southeast1-b");
    resourceLabels.put("cluster_name", "my-cluster");
    resourceLabels.put("namespace_id", "mynamespace");
    resourceLabels.put("instance_id", "");
    return MonitoredResource.newBuilder()
            .setType("gke_container")
            .putAllLabels(resourceLabels)
            .build();
}
What am I doing wrong in the above-mentioned steps please? Thank you in advance for any answers provided!
EDIT [RESOLVED]:
I think I had some misconfigurations, since kubectl describe hpa [NAME] --v=9 showed me some 403 status codes, and I was using type: External instead of type: Pods (thanks MWZ for your answer pointing out this mistake).
I managed to fix it by creating a new project, a new service account, and a new GKE cluster (basically everything from the beginning again). Then I changed my YAML file as follows, exactly as this document explains.
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: test-app-group-1-1
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: test-app-group-1-1
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods        # Earlier this was type: External
    pods:             # Earlier this was external:
      metricName: baz # metricName: custom.googleapis.com|worker_pod_metrics|baz
      targetAverageValue: 20
I'm now exporting as custom.googleapis.com/baz, and NOT as custom.googleapis.com/worker_pod_metrics/baz. Also, now I'm explicitly specifying the namespace for my HPA in the yaml.
Since you can see your custom metric in the Stackdriver GUI, I'm guessing the metrics are correctly exported. Based on Autoscaling Deployments with Custom Metrics, I believe you wrongly defined the metric to be used by the HPA to scale the deployment.
Please try using this YAML:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: baz
      targetAverageValue: 400
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test-app-group-1-1
Please have in mind that:
The HPA uses the metrics to compute an average and compare it to the target average value. In the application-to-Stackdriver export example, a Deployment contains Pods that export metrics. The following manifest file describes a HorizontalPodAutoscaler object that scales a Deployment based on the target average value for the metric.
Troubleshooting steps described on the page above can also be useful.
Side-note
Since the above HPA is using the beta API autoscaling/v2beta1, I got an error when running kubectl describe hpa [DEPLOYMENT_NAME]. I ran kubectl describe hpa [DEPLOYMENT_NAME] --v=9 and got the response in JSON.
It is a good practice to put some unique labels on your metrics so you can target them. Right now, based on the metric labels in your Java client, only pod_id looks unique, and that can't be used due to its stateless nature.
So, I would suggest you try introducing a deployment- or metrics-wide unique identifier.
resourceLabels.put("<identifier>", "<could-be-deployment-name>");
After this, you can try modifying your HPA with something similar to the following:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metricName: custom.googleapis.com|worker_pod_metrics|baz
      metricSelector:
        matchLabels:
          # define labels to target
          metric.labels.identifier: <deployment-name>
      # scale +1 whenever it crosses multiples of the mentioned value
      targetAverageValue: "400"
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test-app-group-1-1
Apart from this, the setup has no issues and should work smoothly.
A helper command to see what metrics are exposed to the HPA:
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/custom.googleapis.com|worker_pod_metrics|baz" | jq