I'm deploying a single YAML file containing two manifests using the Spinnaker Kubernetes Provider V2 (Manifest deployer). Inside the Deployment I have a custom annotation that references the ConfigMap:
# ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config-map
data:
  foo: bar
---
# Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-deployment
spec:
  template:
    metadata:
      annotations:
        my-config-map-reference: my-config-map
    [...]
Upon deployment, Spinnaker applies versioning to the ConfigMap, which is then deployed as my-config-map-v000.
I'd like to be able to retrieve the full versioned name inside my custom annotation, but Spinnaker automatically replaces ConfigMap references with the appropriate versioned values only at specific entry points ( https://github.com/spinnaker/clouddriver/blob/master/clouddriver-kubernetes/src/main/groovy/com/netflix/spinnaker/clouddriver/kubernetes/v2/artifact/ArtifactReplacerFactory.java ), so in this case the replacement does not happen.
According to the Spinnaker documentation ( https://www.spinnaker.io/reference/artifacts/in-kubernetes-v2/#why-not-pipeline-expressions ) I might be able to write a Pipeline Expression to retrieve the full name, but I wasn't able to do so.
How can I set the full ConfigMap name inside the annotation?
Spinnaker can inject artifacts from the currently executing pipeline into your manifests as they are deployed.
Refer to this guide for instructions on Binding artifacts in manifests.
However, as mentioned here, there is no resource mapping for annotations, so the value has to be user-supplied, e.g. as a parameter for your manifest.
In the future, certain relationships between resources will be recorded and annotated by Spinnaker.
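For contrast, here is a rough sketch of a ConfigMap reference that Spinnaker's artifact replacer does rewrite, since pod-spec fields such as envFrom are among the entry points listed in the linked ArtifactReplacerFactory (the container name and image below are placeholders, not from the question):
# Deployment fragment; Spinnaker rewrites this ConfigMap reference
# to the versioned name (e.g. my-config-map-v000) at deploy time.
spec:
  template:
    spec:
      containers:
      - name: app                    # placeholder name
        image: example/app:latest    # placeholder image
        envFrom:
        - configMapRef:
            name: my-config-map      # becomes my-config-map-v000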
I am modifying a deployment which autoscales using a HorizontalPodAutoscaler (HPA). This deployment is part of a pipeline in which workers read messages from Pub/Sub subscriptions, do some work and publish to the next topic. Right now I use a ConfigMap to define the pipeline for the deployments (the ConfigMap contains the input subscription and output topics). The HPA autoscales based on the number of messages on the input subscription. I would like to be able to pull the subscription name for the HPA from a ConfigMap, if possible. Is there a way to do this?
example HPA:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: my-deployment-hpa
  namespace: default
  labels:
    name: my-deployment-hpa
spec:
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - external:
      metricName: pubsub.googleapis.com|subscription|num_undelivered_messages
      metricSelector:
        matchLabels:
          resource.labels.subscription_id: "$INPUT_SUBSCRIPTION"
      targetAverageValue: "2"
    type: External
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
The value currently hard-coded as $INPUT_SUBSCRIPTION in the HPA would ideally come from a ConfigMap.
Posting this answer as a community wiki for better visibility, as the answer was provided in the comments.
Answering the question from the post:
I would like to be able to pull the subscription name for the HPA from a ConfigMap, if possible. Is there a way to do this?
As pointed out by user Abdennour TOUMI, there is no way to set the metric used by an HPA from a ConfigMap:
Unfortunately, you cannot.. but you can using prometheus-adapter + HPA . Check this tuto: itnext.io/...
As a manual workaround, you could use a script that extracts the needed metric name from the ConfigMap and uses a template to substitute it and apply a new HPA.
With a ConfigMap like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: example
data:
  metric_name: "new_awesome_metric" # <-
  not_needed: "only for example"
And the following script:
#!/bin/bash
# variables
hpa_file_name="hpa.yaml"
configmap_name="example"
string_to_replace="PLACEHOLDER"
# extract the metric name used in the ConfigMap (-r strips the JSON quotes)
new_metric=$(kubectl get configmap "$configmap_name" -o json | jq -r '.data.metric_name')
# use the template to replace $string_to_replace with $new_metric and apply it
sed "s/$string_to_replace/$new_metric/g" "$hpa_file_name" | kubectl apply -f -
The script needs an hpa.yaml containing the template to apply as a resource; the example from the question can be used with one change:
resource.labels.subscription_id: PLACEHOLDER
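A possible run, assuming the script is saved as update-hpa.sh (the script name and the jsonpath check below are illustrative, not part of the original answer):
chmod +x update-hpa.sh && ./update-hpa.sh
# check that the substituted label reached the live HPA object:
kubectl get hpa my-deployment-hpa \
  -o jsonpath='{.spec.metrics[0].external.metricSelector.matchLabels}'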
For further reference, this HPA definition can be based on this guide:
Cloud.google.com: Kubernetes Engine: Tutorials: Autoscaling-metrics: PubSub
I have a dependency on a PriorityClass inside my k8s YAML config files, and I need to install the PriorityClass before any of the other YAML inside the templates folder:
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: ocritical
value: 1000
globalDefault: false
After reading the Helm docs, it seems that I can use the pre-install hook.
I've changed my YAML and added an annotations section with the pre-install hook, but it still doesn't work. Any idea what I'm missing here?
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: ocritical
  annotations:
    "helm.sh/hook": pre-install
value: 1000
globalDefault: false
The YAML is located inside the templates folder.
You put quotation marks around the helm.sh/hook annotation key, which is incorrect - quotation marks belong only around the values.
You can also add a description field to your configuration file; remember that this field is an arbitrary string, meant to tell users of the cluster when they should use this PriorityClass.
Your PriorityClass should look like this:
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: ocritical
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
value: 1000
globalDefault: false
description: "This priority class should be used for XYZ service pods only."
More information about properly configuring a PriorityClass can be found here: PriorityClass.
More information about installation hooks can be found here: helm-hooks.
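To confirm that the hook fires before the rest of the chart is installed, a quick check along these lines may help (the chart path and release name are made up):
# Helm 2 syntax; with Helm 3 use "helm install myrelease ./mychart"
helm install ./mychart --name myrelease
# the PriorityClass should already exist when the templated workloads are created
kubectl get priorityclass ocritical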
I hope it helps.
I'm currently playing around with Knative and bootstrapped a simple installation using Gloo and glooctl. Everything worked fine out of the box. However, I asked myself whether it is possible to change the generated URL where the service is made available.
I already changed the domain, but I want to know if I can select a domain name that does not contain the namespace, so helloworld-go.namespace.mydomain.com would become helloworld-go.mydomain.com.
The current YAML-definition looks like this:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  labels:
    name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        env:
        - name: TARGET
          value: Go Sample v1
Thank you for your help!
This is configurable via the ConfigMap named config-network in the namespace knative-serving. See the ConfigMap in the deployment resources:
apiVersion: v1
data:
  _example: |
    ...
    # domainTemplate specifies the golang text template string to use
    # when constructing the Knative service's DNS name. The default
    # value is "{{.Name}}.{{.Namespace}}.{{.Domain}}". And those three
    # values (Name, Namespace, Domain) are the only variables defined.
    #
    # Changing this value might be necessary when the extra levels in
    # the domain name generated is problematic for wildcard certificates
    # that only support a single level of domain name added to the
    # certificate's domain. In those cases you might consider using a value
    # of "{{.Name}}-{{.Namespace}}.{{.Domain}}", or removing the Namespace
    # entirely from the template. When choosing a new value be thoughtful
    # of the potential for conflicts - for example, when users choose to use
    # characters such as `-` in their service, or namespace, names.
    # {{.Annotations}} can be used for any customization in the go template if needed.
    # We strongly recommend keeping namespace part of the template to avoid domain name clashes
    # Example '{{.Name}}-{{.Namespace}}.{{ index .Annotations "sub"}}.{{.Domain}}'
    # and you have an annotation {"sub":"foo"}, then the generated template would be {Name}-{Namespace}.foo.{Domain}
    domainTemplate: "{{.Name}}.{{.Namespace}}.{{.Domain}}"
    ...
kind: ConfigMap
metadata:
  labels:
    serving.knative.dev/release: "v0.8.0"
  name: config-network
  namespace: knative-serving
Therefore, your config-network should look like this:
apiVersion: v1
data:
  domainTemplate: "{{.Name}}.{{.Domain}}"
kind: ConfigMap
metadata:
  name: config-network
  namespace: knative-serving
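One way to apply this change to a running installation, as a sketch (you could equally kubectl edit the ConfigMap directly):
# merge-patch the live ConfigMap; the single quotes keep the Go template literal
kubectl patch configmap config-network -n knative-serving \
  --type merge -p '{"data":{"domainTemplate":"{{.Name}}.{{.Domain}}"}}'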
You can also have a look at the config-domain ConfigMap to customize the domain name that is appended to your services.
Assuming you're running Knative over an Istio service mesh, there's an example in the Knative docs of how to use an Istio VirtualService to accomplish this at the service level; see the sketch below.
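As a very rough sketch of that approach (all names below are assumptions, not taken from the Knative docs verbatim): a VirtualService can accept the vanity host and rewrite the authority to the generated Knative domain before handing the request back to the ingress gateway.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld-go-vanity       # hypothetical name
  namespace: default
spec:
  gateways:
  - knative-ingress-gateway.knative-serving.svc.cluster.local  # assumed gateway
  hosts:
  - helloworld-go.mydomain.com     # the short domain you want
  http:
  - rewrite:
      authority: helloworld-go.default.mydomain.com  # the generated domain
    route:
    - destination:
        host: istio-ingressgateway.istio-system.svc.cluster.local
        port:
          number: 80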
I have a situation where I have several YAML files, each of them containing things like a Deployment, a Service, an Ingress, etc. I have to create them concurrently. I tried Ansible to achieve this but failed, so I want to know whether Kubernetes has an API endpoint I can post a YAML file to in order to create the resources in it, just like the command kubectl create -f sample.yaml does. Other advice on how to achieve this is also welcome.
I would also accept a way to post my YAML file to the Kubernetes API and have all the resources in it created.
Then you can simply concatenate all the individual YAML files into one big YAML file, with each document separated by ---.
For example, to install kubernetes-dashboard you simply use kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml; examining this YAML file reveals the structure you need (excerpt below):
...
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
...
where each block between the comments could be a separate YAML file. Note that the comments are optional, and the separator between the contents of the individual YAML files is ---.
Just some side notes:
Although this is somewhat practical for final deployments, if you glue all of your individual YAML files into such a mega-all-in-one.yaml file, then everything you do to it (create/update/apply/delete...) you do to all the resources listed inside.
If it is not a "shared" file to be executed from some network resource, it may be easier to pass the --recursive switch to kubectl, as detailed in the official documentation, and run it against the folder that contains all the individual YAML files. This way you retain the ability to pick any individual YAML file, and can still deploy/delete/apply everything at once if you choose to.
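Regarding the raw API part of the question: each resource kind has its own endpoint, so a multi-document file has to be split client-side, but a single document can be POSTed directly, since the API server accepts Content-Type: application/yaml as well as JSON. A sketch, assuming a valid bearer token in $TOKEN and the API server address in $APISERVER (both placeholders):
# create one Deployment by POSTing its manifest to the deployments endpoint
curl -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/yaml" \
  --data-binary @deployment.yaml \
  "https://$APISERVER/apis/apps/v1/namespaces/default/deployments"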
I have a sample Node.js application which uses an envVar environment variable, and I have deployed it on a Kubernetes cluster. I am passing the environment variable through a ConfigMap.
Once it is deployed and the pods are all running, if I change my ConfigMap to a new value, does the deployment of my Node.js application need to be redone?
configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: app1-config
  namespace: default
data:
  envVal: '12345' # initial value
  apiUrl: http://a4235a7ee247011e8aa6f0213eb6eb14-1392003683.us-west-2.elb.amazonaws.com/myapp4
After updating the configmap.yaml:
configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: app1-config
  namespace: default
data:
  envVal: '56789' # changed value
  apiUrl: http://a4235a7ee247011e8aa6f0213eb6eb14-1392003683.us-west-2.elb.amazonaws.com/myapp4
When you mount the keys from the ConfigMap as environment variables, you need to restart your pod for the changes to take effect.
When you mount it as a volume, the files in the volume are updated automatically. The update is not immediate: the kubelet checks for changes on a configured sync interval, but it is normally quite quick. Even then, it still depends on your application how it loads the data from the file - whether it can update itself on the fly when the files change, or whether the data is loaded only once at startup.
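To make the two cases concrete, here is a sketch of a pod template consuming app1-config both ways (the container, image, and volume names are made up):
spec:
  template:
    spec:
      containers:
      - name: app1                   # hypothetical container name
        image: example/nodejs-app    # placeholder image
        env:
        - name: envVal               # env var: needs a pod restart on change
          valueFrom:
            configMapKeyRef:
              name: app1-config
              key: envVal
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config     # files here refresh after the kubelet sync
      volumes:
      - name: config-volume
        configMap:
          name: app1-config
For the environment-variable case, kubectl rollout restart deployment <name> (available since kubectl 1.15) is a simple way to recreate the pods so they pick up the new value.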