I am using Ansible version 2.7 for a Kubernetes deployment.
To send logs to Datadog from Kubernetes, one way is to configure annotations like below:
template:
  metadata:
    annotations:
      ad.datadoghq.com/nginx.logs: '[{"source":"nginx","service":"webapp"}]'
This works fine and I can see the logs in Datadog.
However, I would like to achieve the above configuration via an Ansible deployment on Kubernetes, for which I have used the code below:
template:
  metadata:
    annotations:
      ad.datadoghq.com/xxx.logs: "{{ lookup('template', './datadog.json.j2')}}"
and datadog.json.j2 looks like below (sourcea and serviceb are defined as vars):
'[{{ '{' }}"source":"{{ sourcea }}"{{ ',' }} "service":"{{ serviceb }}"{{ '}' }}]'
However, the resulting config after deployment is below:
template:
  metadata:
    annotations:
      ad.datadoghq.com/yps.logs: |
        '[{"source":"test", "service":"test"}]'
and this config does not allow the Datadog agent to parse logs, failing with the error below:
[ AGENT ] 2019-xx-xx xx10:50 UTC | ERROR | (kubelet.go:97 in parseKubeletPodlist) | Can't parse template for pod xxx-5645f7c66c-s9zj4: could not extract logs config: in logs: invalid character '\'' looking for beginning of value
If I use the Ansible code below (using replace):
template:
  metadata:
    annotations:
      ad.datadoghq.com/xxx.logs: "{{ lookup('template', './datadog.json.j2', convert_data=False) | string | replace('\n','')}}"
it generates the deployment config below:
template:
  metadata:
    annotations:
      ad.datadoghq.com/yps.logs: '''[{"source":"test", "service":"test"}]'''
    creationTimestamp: null
    labels:
This also fails.
To arrive at the working config with Ansible, I have to either remove the leading pipe (|) or the three quotes that appear when using replace.
I would like to keep Jinja variable substitution in place so that I can configure the deployment with the desired source and service at deployment time.
Kindly suggest.
By introducing a space at the start of the datadog.json.j2 template definition, i.e.
 [{"source":"{{ sourcea }}"{{ ',' }} "service":"{{ serviceb }}"}]
and running the deployment again, I got the working config below:
template:
  metadata:
    annotations:
      ad.datadoghq.com/yps.logs: ' [{"source":"test", "service":"test"}]'
However, I am not able to understand this behaviour; could anyone help me understand it?
The problem is that the YAML being produced is broken. The | character starts an indented scalar ("string" more or less), but the next line includes single-quotes - so the quotes end up being inside the annotation value.
The correct YAML should look like:
template:
  metadata:
    annotations:
      ad.datadoghq.com/yps.logs: |
        [{"source":"test", "service":"test"}]
This looks like a bug in how Ansible is generating the output YAML, and your fix must have worked around the bug.
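As an aside, one way to sidestep the quoting problem entirely is to define the logs config as a structured variable and let Ansible serialize it, instead of hand-writing the JSON in the Jinja template. This is only a sketch (the variable name datadog_logs is made up, and it assumes the manifest itself is rendered by Ansible/Jinja2):
# sketch: build the annotation from a structured var and serialize it with to_json;
# datadog_logs is an assumed name, sourcea/serviceb are the vars from the question
vars:
  datadog_logs:
    - source: "{{ sourcea }}"
      service: "{{ serviceb }}"

# in the rendered deployment manifest
template:
  metadata:
    annotations:
      ad.datadoghq.com/xxx.logs: '{{ datadog_logs | to_json }}'
Because to_json emits plain double-quoted JSON, the single-quoted YAML scalar stays valid and no manual brace or comma escaping is needed.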
Related
I am trying to add a git repo URL to a pod annotation in OpenShift. However, the deployment complains that a special character is not allowed in the value for app.openshift.io/vcs-uri.
Here is the error:
Deploy failed: The Deployment "test-app" is invalid: metadata.labels: Invalid value: "git://github.com/myrepo/testrepo.git": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue', or 'my_value', or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')
Here is my sample helm chart:
apiVersion: v1
kind: Service
metadata:
  name: test-app
  namespace: test-poc
  labels:
    helm.sh/chart: poc-0.0.1
    app.kubernetes.io/name: angular
    app.kubernetes.io/instance: test-app
    app.kubernetes.io/version: "2.4"
    app.kubernetes.io/managed-by: Helm
    app.openshift.io/runtime: angularjs
    app.openshift.io/vcs-uri: "git://github.com/myrepo/testrepo.git"
spec:
  type: ClusterIP
  ports:
    .......
You cannot have special characters like # or / in your label value or name. The error message clearly states that:
consist of alphanumeric characters, '-', '_' or '.'
There is even a regex in the error message:
'(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?'
You can plug that regular expression into a tool of your choice, such as regex101.com, and check whether your label value works.
However, annotations are a little less restricted, as described in the Kubernetes documentation.
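So in the chart from the question, a URL-shaped value can simply be moved from metadata.labels to metadata.annotations. A sketch based on the Service above (only the relevant fields are shown):
apiVersion: v1
kind: Service
metadata:
  name: test-app
  namespace: test-poc
  labels:
    app.kubernetes.io/name: angular
    app.openshift.io/runtime: angularjs
  annotations:
    # annotation values are free-form strings, so the git URL is allowed here
    app.openshift.io/vcs-uri: "git://github.com/myrepo/testrepo.git"
spec:
  type: ClusterIP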
apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana
  namespace: the-project
  labels:
    app: kibana
    env: dev
data:
  # kibana.yml is mounted into the Kibana container
  # see https://github.com/elastic/kibana/blob/master/config/kibana.yml
  # Kubernetes Ingress is used to route kib.the-project.d4ldev.txn2.com
  kibana.yml: |- server.name: kib.the-project.d4ldev.txn2.com server.host: "0" elasticsearch.url: http://elasticsearch:9200
This is my config.yml file. When I try to create this project, I get this error:
error: error parsing configmap.yml: error converting YAML to JSON: yaml: line 13: did not find expected comment or line break
I can't get rid of the error even after removing the space at line 13, column 17.
The YAML content can be put directly on multiple lines, formatted like real YAML; take a look at the following example:
data:
  # kibana.yml is mounted into the Kibana container
  # see https://github.com/elastic/kibana/blob/master/config/kibana.yml
  # Kubernetes Ingress is used to route kib.the-project.d4ldev.txn2.com
  kibana.yml: |-
    server:
      name: kib.the-project.d4ldev.txn2.com
      host: "0"
    elasticsearch.url: http://elasticsearch:9200
This works when put in a ConfigMap, and it should also work when provided to a Helm chart (depending on how the Helm templates are written).
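For reference, the block scalar itself only requires the content to start on the line after the |- indicator, so keeping the original flat keys from the question also parses (a sketch with the same values):
data:
  kibana.yml: |-
    # each setting on its own line below the |- indicator
    server.name: kib.the-project.d4ldev.txn2.com
    server.host: "0"
    elasticsearch.url: http://elasticsearch:9200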
I'm trying to assign pods to a specific node as part of the helm command, so in the end the deployment YAML should look like this:
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    node-name: dev-cpu-pool
I'm using this command as part of a Jenkinsfile deployment:
sh "helm upgrade -f charts/${job_name}/default.yaml --set nodeSelector.name=${deployNamespace}-cpu-pool --install ${deployNamespace}-${name} helm/${name} --namespace=${deployNamespace} --recreate-pods --version=${version}"
The deployment works fine and the pod is up and running, but for some reason I cannot see the nodeSelector key and value in the deployment YAML, and as a result pods are not assigned to the specific node I want. Any idea what is wrong? Should I put a placeholder in my chart template, or is that not a must?
The artifacts that Helm submits to the Kubernetes API are exactly the result of rendering the chart templates; nothing more, nothing less. If your templates don't include a nodeSelector: block then the resulting Deployment never will either. Even if you helm install --set ... things that could match Kubernetes API fields, nothing will implicitly fill them in.
If you want an option to specify rarely-used fields like nodeSelector: then your chart code needs to include them. You can make the presence of the field conditional on the value being set, but you do need to explicitly list it out:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      {{- if .Values.nodeSelector }}
      nodeSelector: {{- .Values.nodeSelector | toYaml | nindent 8 }}
      {{- end }}
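With a template block like that in place, the value can then come from values.yaml or from the command line. A sketch reusing the key from the desired output above (the release and chart names here are placeholders):
# values.yaml (sketch)
nodeSelector:
  node-name: dev-cpu-pool
or, on the helm command line:
helm upgrade --install my-release ./helm/my-chart --set nodeSelector.node-name=dev-cpu-pool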
I have tried to run Helm for the first time. I have deployment.yaml, service.yaml and ingress.yaml files along with values.yaml and Chart.yaml.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: abc
  namespace: xyz
  labels:
    app: abc
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: 3
  template:
    spec:
      containers:
        - name: abc
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          ports:
            -
              containerPort: 8080
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: abc
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service }}
  namespace: xyz
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: {{ .Values.service.sslCert }}
spec:
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 8080
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080
  type: ClusterIP
  selector:
    app: abc
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "haproxy-ingress"
  namespace: xyz
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service }}
  annotations:
    kubernetes.io/ingress.class: alb
From what I can see, I do not think I have missed setting app.kubernetes.io/managed-by, but I still keep getting an error:
rendered manifests contain a resource that already exists. Unable to
continue with install: Service "abc" in namespace "xyz" exists and
cannot be imported into the current release: invalid ownership
metadata; label validation error: missing key
"app.kubernetes.io/managed-by": must be set to "Helm"; annotation
validation error: missing key "meta.helm.sh/release-name": must be set
to "abc"; annotation validation error: missing key
"meta.helm.sh/release-namespace": must be set to "default"
It renders the file locally correctly.
helm list --all --all-namespaces returns nothing.
Please help.
You already have some resources, e.g. the Service abc in the given namespace xyz, that you're trying to install via a Helm chart.
Delete those and install them via helm install.
$ kubectl delete service -n <namespace> <service-name>
$ kubectl delete deployment -n <namespace> <deployment-name>
$ kubectl delete ingress -n <namespace> <ingress-name>
Once you have these resources deployed via Helm, you will be able to run helm upgrade to change properties.
Remove the "app.kubernetes.io/managed-by" label from your yaml's, this will be added by Helm.
The error below is quite common:
label validation error: missing key "app.kubernetes.io/managed-by":
must be set to "Helm"; annotation validation error: missing key
"meta.helm.sh/release-name": must be set to ..
So I'll provide a slightly longer explanation and also some context on the topic.
What happened?
It seems that you tried to create resources that already exist and were created outside of Helm (probably with kubectl).
Why does Helm throw the error?
Helm doesn't allow a resource to be owned by more than one
deployment.
It is the responsibility of the chart creator to ensure that the chart
produce unique resources only.
How can you solve this?
Option 1 - Follow the error message and add the meta.helm.sh annotations:
As described in this PR: Adopt resources into release with correct instance and managed-by labels
Helm will no longer error when attempting to create a resource that
already exists in the target cluster if the existing resource has the
correct meta.helm.sh/release-name and
meta.helm.sh/release-namespace annotations, and matches the label
selector app.kubernetes.io/managed-by=Helm. This facilitates
zero-downtime migrations to Helm 3 for managing existing deployments,
and allows Helm to "adopt" existing resources that it previously
created.
(*) I think that the meta.helm.sh scope is a less common approach today.
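If you go this route, adoption in practice just means adding those annotations and the label to the existing objects by hand (this requires a Helm version that includes the PR above). A sketch using the Service name from the question and the values quoted in the error message:
# mark the existing Service so Helm 3 can adopt it into the release
kubectl -n xyz annotate service abc meta.helm.sh/release-name=abc
kubectl -n xyz annotate service abc meta.helm.sh/release-namespace=default
kubectl -n xyz label service abc app.kubernetes.io/managed-by=Helm
After that, helm upgrade --install should take ownership of the existing Service instead of failing.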
Option 2 - Add the app.kubernetes.io/instance label:
As can be seen with different Helm chart providers (Bitnami, the NGINX ingress controller and ExternalDNS, for example), the common approach is the combination of the two labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
(*) Notice: some CD tools, like Argo CD, automatically set the app.kubernetes.io/instance label and use it to determine which resources form the app.
Option 3 - Delete old resources.
It might be relevant in your specific case where the old resources might not be relevant anymore.
For those who need some context
What are those labels?
Shared labels and annotations share a common prefix: app.kubernetes.io. Labels without a prefix are private to users. The shared prefix ensures that shared labels do not interfere with custom user labels.
In order to take full advantage of using these labels, they should be applied on every resource object.
The app.kubernetes.io/managed-by label is used to describe the tool being used to manage the operation of an application - for example: helm.
Read more on the Recommended Labels section.
Are they added by helm?
No.
First of all, as mentioned before, those labels are not specific to Helm and Helm itself never requires that a particular label be present.
On the other hand, the Helm docs recommend using the following Standard Labels. app.kubernetes.io/managed-by is one of them and should be set to {{ .Release.Service }} in order to find all resources managed by Helm.
So it is the role of the chart maintainer to add those labels.
What is the best way to add them?
Many Helm chart providers add them to the _helpers.tpl file and have all resources include it:
labels: {{ include "my-chart.labels" . | nindent 4 }}
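A typical helper definition looks roughly like this; a sketch, where the helper name my-chart.labels matches the include above:
{{/* _helpers.tpl (sketch): common labels shared by all chart resources */}}
{{- define "my-chart.labels" -}}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}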
The trick here is to chase the error message.
For example, in the case below the error message points at something wrong with the 'service' in namespace 'xyz':
Unable to
continue with install: Service "abc" in namespace "xyz" exists and
cannot be imported into the current release: invalid ownership
metadata; label validation error: missing key
"app.kubernetes.io/managed-by": must be set to "Helm"; annotation
validation error: missing key "meta.helm.sh/release-name": must be set
to "abc"; annotation validation error: missing key
"meta.helm.sh/release-namespace": must be set to "default"
Simply delete the same service from the mentioned namespace with below:
kubectl -n xyz delete svc abc
And then try the installation/deployment again. It might happen that a similar issue appears, but for a different resource, as shown in the example below:
Release "nok-sec-sip-tls-crd" does not exist. Installing it now.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Role "nok-sec-sip-tls-crd-role" in namespace "debu" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "nok-sec-sip-tls-crd": current value is "nok-sec-sip"
Again, use kubectl to delete the resource mentioned in the error message. For example, in the above case the offending resource should be deleted with the command below:
kubectl delete role nok-sec-sip-tls-crd-role -n debu
I was getting this error because I was trying to upgrade the Helm chart with the wrong release name, so it conflicted with the existing resources in the same namespace.
I was running this command with the wrong release name:
helm upgrade --install --namespace <namespace> wrong-releasename <chart-folder>
and got similar errors:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ConfigMap \"cmname\" in namespace \"namespace\" exists and cannot be imported into the current release
invalid ownership metadata; label validation error: missing key \"app.kubernetes.io/managed-by\": must be set to \"Helm\"; annotation validation error: missing key \"meta.helm.sh/release-name\": must be set to \"wrong-releasename\"; annotation validation error: missing key \"meta.helm.sh/release-namespace\": must be set to \"namespace\"
I checked the existing Helm releases in the same namespace and used the listed release name to upgrade my Helm chart:
helm ls -n <namespace>
helm upgrade --install --namespace <namespace> releasename <chart-folder>
Here's a faster and more thorough way to get rid of Argo CD so it can be reinstalled:
helm list -A # see argocd in namespace argocd
helm uninstall argocd -n argocd
kubectl delete namespace argocd
The last line gets rid of all secrets and other resources not cleaned up by uninstalling the Helm chart. It was needed in my environment; otherwise, I got the same sorts of errors about duplicate resources that you were seeing.
We use GitOps via Flux, and I was getting the same rendered manifests contain a resource that already exists error. For me the problem was I accidentally defined a resource with the same name in two different files, so it was trying to create it twice. I removed the duplicate resource definition from one of the files to fix it up.
I have a docker-compose.yml file we have been using to set up our development environment.
The file declares some services, all of them more or less following the same pattern:
services:
  service_1:
    image: some_image_1
    environment:
      - ENV_VAR_1
      - ENV_VAR_2
    depends_on:
      - another_service_of_the_same_compose_file
With a view to migrating to Kubernetes, running:
kompose convert -f docker-compose.yml
produces, for each service, a pair of deployment/service manifests.
Two questions about the deployment generated:
1.
the examples in the official documentation seem to hint that the selector field is needed for a Deployment to be aware of the pods to manage.
However, the generated deployment manifests do not include a selector field, and look as follows:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -f docker-compose.yml
    kompose.version: 1.6.0 (e4adfef)
  creationTimestamp: null
  labels:
    io.kompose.service: service_1
  name: service_1
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: service_1
    spec:
      containers:
      - image: my_image
        name: my_image_name
        resources: {}
      restartPolicy: Always
status: {}
2.
the apiVersion in the generated deployment manifest is extensions/v1beta1; however, the examples in the Deployments section of the official documentation default to apps/v1.
The recommendation seems to be
for versions before 1.9.0 use apps/v1beta2
Which is the correct version to use? (using kubernetes 1.8)
Let's begin by saying that Kubernetes and Kompose are two different, independent systems. Kompose tries to match all of the Compose dependencies with Kubernetes.
At the moment, all of the selector fields are generated by Kubernetes. In the future, it might be done by us.
If you would like to check your selector fields, use the following commands:
kubectl get deploy
kubectl describe deploy DEPLOY_NAME
As of Kubernetes 1.9, all of the long-running workload objects are part of the apps group.
We’re excited to announce General Availability (GA) of the apps/v1 Workloads API, which is now enabled by default. The Apps Workloads API groups the DaemonSet, Deployment, ReplicaSet, and StatefulSet APIs together to form the foundation for long-running stateless and stateful workloads in Kubernetes. Note that the Batch Workloads API (Job and CronJob) is not part of this effort and will have a separate path to GA stability.
I have attached a link for further reading:
kubernetes-19-workloads
As a selector field isn't required for deployments and Kompose doesn't know your cluster's nodes, it doesn't set a selector (which basically tells k8s on which nodes to run pods).
I wouldn't edit apiVersion, because Kompose assumes that version when defining the rest of the resource. Also, if you are using Kubernetes 1.8, read the 1.8 docs: https://v1-8.docs.kubernetes.io/docs/
In Kubernetes 1.16 the Deployment's spec.selector became required. Kompose (as of 1.20) does not yet add it automatically. You will have to add this to every *-deployment.yaml file it creates:
  selector:
    matchLabels:
      io.kompose.service: alignment-processor
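For orientation, here is roughly what the manifest generated for service_1 earlier in this question would look like after adding the selector and moving to apps/v1 (a sketch, not actual Kompose output):
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: service_1
  name: service_1
spec:
  replicas: 1
  selector:
    matchLabels:
      # must match the labels in spec.template.metadata.labels
      io.kompose.service: service_1
  template:
    metadata:
      labels:
        io.kompose.service: service_1
    spec:
      containers:
      - image: my_image
        name: my_image_name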
If you use an IDE like JetBrains, you can use the following search/replace patterns on the folder where you put the conversion results:
Search for this multiline regexp:
    io.kompose.service: (.*)
  name: \1
spec:
  replicas: 1
  template:
Replace with this pattern:
    io.kompose.service: $1
  name: $1
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: $1
  template:
The (.*) captures the name of the service, the \1 matches the (first and only) capture, and the $1 substitutes the capture in the replacement.
You will also have to substitute all extensions/v1beta1 with apps/v1 in all *-deployment.yaml files.
I also found that secrets have to be massaged a bit, but that goes beyond the scope of this question.