RabbitMQ Kubernetes RabbitmqCluster Operator not adding members to Quorum Queues - kubernetes

I am using the RabbitMQ Cluster Operator to deploy a RabbitMQ HA cluster on Kubernetes.
I am importing the definitions by referring to [this example](https://github.com/rabbitmq/cluster-operator/blob/main/docs/examples/import-definitions/rabbitmq.yaml "Import Definitions Example").
I have provided the configuration below (3 replicas).
After the cluster is up and I access the management console via a port-forward to the service, I see that the queue is declared with its type set to "quorum", but it only shows the first node of the cluster under leader, online and members.
The cluster is set up with 3 replicas, and the default quorum initial group size is 3 if I don't specify one (although I am specifying it explicitly in the definitions file).
It should show the other members of the cluster under the online and members sections, but it shows only the first node (rabbitmq-ha-0).
Am I missing any configuration?
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: import-definitions
spec:
  replicas: 3
  override:
    statefulSet:
      spec:
        template:
          spec:
            containers:
            - name: rabbitmq
              volumeMounts:
              - mountPath: /path/to/exported/ # filename left out intentionally
                name: definitions
            volumes:
            - name: definitions
              configMap:
                name: definitions # Name of the ConfigMap which contains definitions you wish to import
  rabbitmq:
    additionalConfig: |
      load_definitions = /path/to/exported/definitions.json # Path to the mounted definitions file
And my definitions file looks something like this:
{
"users": [
{
"name": "my-vhost",
"password": "my-vhost",
"tags": "",
"limits": {}
}
],
"vhosts": [
{
"name": "/"
},
{
"name": "my-vhost"
}
],
"permissions": [
{
"user": "my-vhost",
"vhost": "my-vhost",
"configure": ".*",
"write": ".*",
"read": ".*"
}
],
"topic_permissions": [
{
"user": "my-vhost",
"vhost": "my-vhost",
"exchange": "",
"write": ".*",
"read": ".*"
}
],
"parameters":[
{
"value":{
"max-connections":100,
"max-queues":15
},
"vhost":"my-vhost",
"component":"vhost-limits",
"name":"limits"
}
],
"policies":[
{
"vhost":"my-vhost",
"name":"Queue-Policy",
"pattern":"my-*",
"apply-to":"queues",
"definition":{
"delivery-limit":3
},
"priority":0
}
],
"queues":[
{
"name":"my-record-q",
"vhost": "my-vhost",
"durable":true,
"auto_delete":false,
"arguments":{
"x-queue-type":"quorum",
"x-quorum-initial-group-size":3
}
}
],
"exchanges":[
{
"name":"my.records.topic",
"vhost": "my-vhost",
"type":"topic",
"durable":true,
"auto_delete":false,
"internal":false,
"arguments":{
}
}
],
"bindings":[
{
"source":"my.records-changed.topic",
"vhost": "my-vhost",
"destination":"my-record-q",
"destination_type":"queue",
"routing_key":"#.record.#",
"arguments":{
}
}
]
}
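For reference, the commands below are what I would use to check whether all three nodes actually joined the RabbitMQ cluster and to inspect the quorum queue's members (a sketch; the pod name import-definitions-server-0 assumes the operator's default naming and may differ in your cluster):
kubectl exec import-definitions-server-0 -c rabbitmq -- rabbitmq-diagnostics cluster_status
kubectl exec import-definitions-server-0 -c rabbitmq -- rabbitmq-queues quorum_status "my-record-q" --vhost "my-vhost"
If cluster_status lists only one running node, the problem is cluster formation rather than the queue definition itself.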

Related

When updating the service target, I get 502 errors for 5 to 60 seconds

I want to manually reroute (when I need to) a web server (a Helm deployment on GKE) to another one.
To do that I have 3 Helm deployments:
Application X
Application Y
Ingress on application X
Everything works fine, but if I run a Helm upgrade of the Ingress chart changing only the selector of the service I target, I get 502 errors :(
Source of the service:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.Service.Name }}-https
  labels:
    app: {{ .Values.Service.Name }}
    type: svc
    name: {{ .Values.Service.Name }}
    environment: {{ .Values.Environment.Name }}
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    beta.cloud.google.com/backend-config: '{"ports": {"{{ .Values.Application.Port }}":"{{ .Values.Service.Name }}-https"}}'
spec:
  type: NodePort
  selector:
    name: {{ .Values.Application.Name }}
    environment: {{ .Values.Environment.Name }}
  ports:
  - protocol: TCP
    port: {{ .Values.Application.Port }}
    targetPort: {{ .Values.Application.Port }}
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: {{ .Values.Service.Name }}-https
spec:
  timeoutSec: 50
  connectionDraining:
    drainingTimeoutSec: 60
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
    affinityCookieTtlSec: 300
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.Service.Name }}-https
  labels:
    app: {{ .Values.Service.Name }}
    type: ingress
    name: {{ .Values.Service.Name }}
    environment: {{ .Values.Environment.Name }}
  annotations:
    kubernetes.io/ingress.global-static-ip-name: {{ .Values.Service.PublicIpName }}
    networking.gke.io/managed-certificates: "{{ join "," .Values.Service.DomainNames }}"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  backend:
    serviceName: {{ $.Values.Service.Name }}-https
    servicePort: 80
  rules:
  {{- range .Values.Service.DomainNames }}
  - host: {{ . | title | lower }}
    http:
      paths:
      - backend:
          serviceName: {{ $.Values.Service.Name }}-https
          servicePort: 80
  {{- end }}
The only thing that changes from one call to another is the value of "{{ .Values.Application.Name }}"; all other values are strictly the same.
The targeted pods are always up and running, and they all respond with 200 when tested through kubectl port-forward.
Here is the status of all my namespace objects:
NAME READY STATUS RESTARTS AGE
pod/drupal-dummy-404-v1-pod-744454b7ff-m4hjk 1/1 Running 0 2m32s
pod/drupal-dummy-404-v1-pod-744454b7ff-z5l29 1/1 Running 0 2m32s
pod/drupal-dummy-v1-pod-77f5bf55c6-9dq8n 1/1 Running 0 3m58s
pod/drupal-dummy-v1-pod-77f5bf55c6-njfl9 1/1 Running 0 3m57s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/drupal-dummy-v1-service-https NodePort 172.16.90.71 <none> 80:31391/TCP 3m49s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/drupal-dummy-404-v1-pod 2/2 2 2 2m32s
deployment.apps/drupal-dummy-v1-pod 2/2 2 2 3m58s
NAME DESIRED CURRENT READY AGE
replicaset.apps/drupal-dummy-404-v1-pod-744454b7ff 2 2 2 2m32s
replicaset.apps/drupal-dummy-v1-pod-77f5bf55c6 2 2 2 3m58s
NAME AGE
managedcertificate.networking.gke.io/d8.syspod.fr 161m
managedcertificate.networking.gke.io/d8gfi.syspod.fr 128m
managedcertificate.networking.gke.io/dummydrupald8.cnes.fr 162m
NAME HOSTS ADDRESS PORTS AGE
ingress.extensions/drupal-dummy-v1-service-https d8gfi.syspod.fr 34.120.106.136 80 3m50s
Another test was to pre-launch two services, one for each deployment, and only update the Ingress Helm deployment, this time changing "{{ $.Values.Service.Name }}". Same problem, and the site unavailability then lasts from 60s to 300s.
Here is the status of all my namespace objects (for this second test):
root@47475bc8c41f:/opt/bin# k get all,svc,ingress,managedcertificates
NAME READY STATUS RESTARTS AGE
pod/drupal-dummy-404-v1-pod-744454b7ff-8r5pm 1/1 Running 0 26m
pod/drupal-dummy-404-v1-pod-744454b7ff-9cplz 1/1 Running 0 26m
pod/drupal-dummy-v1-pod-77f5bf55c6-56dnr 1/1 Running 0 30m
pod/drupal-dummy-v1-pod-77f5bf55c6-mg95j 1/1 Running 0 30m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/drupal-dummy-404-v1-pod-https NodePort 172.16.106.121 <none> 80:31030/TCP 26m
service/drupal-dummy-v1-pod-https NodePort 172.16.245.251 <none> 80:31759/TCP 27m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/drupal-dummy-404-v1-pod 2/2 2 2 26m
deployment.apps/drupal-dummy-v1-pod 2/2 2 2 30m
NAME DESIRED CURRENT READY AGE
replicaset.apps/bastion-66bb77bfd5 1 1 1 148m
replicaset.apps/drupal-dummy-404-v1-pod-744454b7ff 2 2 2 26m
replicaset.apps/drupal-dummy-v1-pod-77f5bf55c6 2 2 2 30m
NAME HOSTS ADDRESS PORTS AGE
ingress.extensions/drupal-dummy-v1-service-https d8gfi.syspod.fr 34.120.106.136 80 14m
Does anybody have an explanation (and a solution)?
Added the deployment dump (something is surely missing, but I don't see it):
root@c55834fbdf1a:/# k get deployment.apps/drupal-dummy-v1-pod -o json
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {
"deployment.kubernetes.io/revision": "2",
"meta.helm.sh/release-name": "drupal-dummy-v1-pod",
"meta.helm.sh/release-namespace": "e1"
},
"creationTimestamp": "2020-06-23T18:49:59Z",
"generation": 2,
"labels": {
"app.kubernetes.io/managed-by": "Helm",
"environment": "e1",
"name": "drupal-dummy-v1-pod",
"type": "dep"
},
"name": "drupal-dummy-v1-pod",
"namespace": "e1",
"resourceVersion": "3977170",
"selfLink": "/apis/apps/v1/namespaces/e1/deployments/drupal-dummy-v1-pod",
"uid": "56f74fb9-b582-11ea-9df2-42010a000006"
},
"spec": {
"progressDeadlineSeconds": 600,
"replicas": 2,
"revisionHistoryLimit": 10,
"selector": {
"matchLabels": {
"environment": "e1",
"name": "drupal-dummy-v1-pod",
"type": "dep"
}
},
"strategy": {
"rollingUpdate": {
"maxSurge": "25%",
"maxUnavailable": "25%"
},
"type": "RollingUpdate"
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"environment": "e1",
"name": "drupal-dummy-v1-pod",
"type": "dep"
}
},
"spec": {
"containers": [
{
"env": [
{
"name": "APPLICATION",
"value": "drupal-dummy-v1-pod"
},
{
"name": "DB_PASS",
"valueFrom": {
"secretKeyRef": {
"key": "password",
"name": "dbpassword"
}
}
},
{
"name": "DB_FQDN",
"valueFrom": {
"configMapKeyRef": {
"key": "dbip",
"name": "gcpenv"
}
}
},
{
"name": "DB_PORT",
"valueFrom": {
"configMapKeyRef": {
"key": "dbport",
"name": "gcpenv"
}
}
},
{
"name": "DB_NAME",
"valueFrom": {
"configMapKeyRef": {
"key": "dbdatabase",
"name": "gcpenv"
}
}
},
{
"name": "DB_USER",
"valueFrom": {
"configMapKeyRef": {
"key": "dbuser",
"name": "gcpenv"
}
}
}
],
"image": "eu.gcr.io/gke-drupal-276313/drupal-dummy:1.0.0",
"imagePullPolicy": "Always",
"livenessProbe": {
"failureThreshold": 3,
"httpGet": {
"path": "/",
"port": 80,
"scheme": "HTTP"
},
"initialDelaySeconds": 60,
"periodSeconds": 10,
"successThreshold": 1,
"timeoutSeconds": 5
},
"name": "drupal-dummy-v1-pod",
"ports": [
{
"containerPort": 80,
"protocol": "TCP"
}
],
"readinessProbe": {
"failureThreshold": 3,
"httpGet": {
"path": "/",
"port": 80,
"scheme": "HTTP"
},
"initialDelaySeconds": 60,
"periodSeconds": 10,
"successThreshold": 1,
"timeoutSeconds": 5
},
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/var/www/html/sites/default",
"name": "drupal-dummy-v1-pod"
}
]
}
],
"dnsPolicy": "ClusterFirst",
"restartPolicy": "Always",
"schedulerName": "default-scheduler",
"securityContext": {},
"terminationGracePeriodSeconds": 30,
"volumes": [
{
"name": "drupal-dummy-v1-pod",
"persistentVolumeClaim": {
"claimName": "drupal-dummy-v1-pod"
}
}
]
}
}
},
"status": {
"availableReplicas": 2,
"conditions": [
{
"lastTransitionTime": "2020-06-23T18:56:05Z",
"lastUpdateTime": "2020-06-23T18:56:05Z",
"message": "Deployment has minimum availability.",
"reason": "MinimumReplicasAvailable",
"status": "True",
"type": "Available"
},
{
"lastTransitionTime": "2020-06-23T18:49:59Z",
"lastUpdateTime": "2020-06-23T18:56:05Z",
"message": "ReplicaSet \"drupal-dummy-v1-pod-6865d969cd\" has successfully progressed.",
"reason": "NewReplicaSetAvailable",
"status": "True",
"type": "Progressing"
}
],
"observedGeneration": 2,
"readyReplicas": 2,
"replicas": 2,
"updatedReplicas": 2
}
}
Here is the service dump too:
root@c55834fbdf1a:/# k get service/drupal-dummy-v1-service-https -o json
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"annotations": {
"beta.cloud.google.com/backend-config": "{\"ports\": {\"80\":\"drupal-dummy-v1-service-https\"}}",
"cloud.google.com/neg": "{\"ingress\": true}",
"cloud.google.com/neg-status": "{\"network_endpoint_groups\":{\"80\":\"k8s1-4846660e-e1-drupal-dummy-v1-service-https-80-36c11551\"},\"zones\":[\"europe-west3-a\",\"europe-west3-b\"]}",
"meta.helm.sh/release-name": "drupal-dummy-v1-service",
"meta.helm.sh/release-namespace": "e1"
},
"creationTimestamp": "2020-06-23T18:50:45Z",
"labels": {
"app": "drupal-dummy-v1-service",
"app.kubernetes.io/managed-by": "Helm",
"environment": "e1",
"name": "drupal-dummy-v1-service",
"type": "svc"
},
"name": "drupal-dummy-v1-service-https",
"namespace": "e1",
"resourceVersion": "3982781",
"selfLink": "/api/v1/namespaces/e1/services/drupal-dummy-v1-service-https",
"uid": "722d3a99-b582-11ea-9df2-42010a000006"
},
"spec": {
"clusterIP": "172.16.103.181",
"externalTrafficPolicy": "Cluster",
"ports": [
{
"nodePort": 32396,
"port": 80,
"protocol": "TCP",
"targetPort": 80
}
],
"selector": {
"environment": "e1",
"name": "drupal-dummy-v1-pod"
},
"sessionAffinity": "None",
"type": "NodePort"
},
"status": {
"loadBalancer": {}
}
}
And the ingress one:
root@c55834fbdf1a:/# k get ingress.extensions/drupal-dummy-v1-service-https -o json
{
"apiVersion": "extensions/v1beta1",
"kind": "Ingress",
"metadata": {
"annotations": {
"ingress.gcp.kubernetes.io/pre-shared-cert": "mcrt-a15e339b-6c3f-4f23-8f6b-688dc98b33a6,mcrt-f3a385de-0541-4b9c-8047-6dcfcbd4d74f",
"ingress.kubernetes.io/backends": "{\"k8s1-4846660e-e1-drupal-dummy-v1-service-https-80-36c11551\":\"HEALTHY\"}",
"ingress.kubernetes.io/forwarding-rule": "k8s-fw-e1-drupal-dummy-v1-service-https--4846660e8b9bd880",
"ingress.kubernetes.io/https-forwarding-rule": "k8s-fws-e1-drupal-dummy-v1-service-https--4846660e8b9bd880",
"ingress.kubernetes.io/https-target-proxy": "k8s-tps-e1-drupal-dummy-v1-service-https--4846660e8b9bd880",
"ingress.kubernetes.io/ssl-cert": "mcrt-a15e339b-6c3f-4f23-8f6b-688dc98b33a6,mcrt-f3a385de-0541-4b9c-8047-6dcfcbd4d74f",
"ingress.kubernetes.io/target-proxy": "k8s-tp-e1-drupal-dummy-v1-service-https--4846660e8b9bd880",
"ingress.kubernetes.io/url-map": "k8s-um-e1-drupal-dummy-v1-service-https--4846660e8b9bd880",
"kubernetes.io/ingress.global-static-ip-name": "gkxe-k1312-e1-drupal-dummy-v1",
"meta.helm.sh/release-name": "drupal-dummy-v1-service",
"meta.helm.sh/release-namespace": "e1",
"networking.gke.io/managed-certificates": "dummydrupald8.cnes.fr,d8.syspod.fr",
"nginx.ingress.kubernetes.io/rewrite-target": "/"
},
"creationTimestamp": "2020-06-23T18:50:45Z",
"generation": 1,
"labels": {
"app": "drupal-dummy-v1-service",
"app.kubernetes.io/managed-by": "Helm",
"environment": "e1",
"name": "drupal-dummy-v1-service",
"type": "ingress"
},
"name": "drupal-dummy-v1-service-https",
"namespace": "e1",
"resourceVersion": "3978178",
"selfLink": "/apis/extensions/v1beta1/namespaces/e1/ingresses/drupal-dummy-v1-service-https",
"uid": "7237fc51-b582-11ea-9df2-42010a000006"
},
"spec": {
"backend": {
"serviceName": "drupal-dummy-v1-service-https",
"servicePort": 80
},
"rules": [
{
"host": "dummydrupald8.cnes.fr",
"http": {
"paths": [
{
"backend": {
"serviceName": "drupal-dummy-v1-service-https",
"servicePort": 80
}
}
]
}
},
{
"host": "d8.syspod.fr",
"http": {
"paths": [
{
"backend": {
"serviceName": "drupal-dummy-v1-service-https",
"servicePort": 80
}
}
]
}
}
]
},
"status": {
"loadBalancer": {
"ingress": [
{
"ip": "34.98.97.102"
}
]
}
}
}
I have seen the following in the Kubernetes events (only when I reconfigure my service selector to target the first or second deployment).
Switching to the unavailability page (3 seconds of 502):
81s Normal Attach service/drupal-dummy-v1-service-https Attach 1 network endpoint(s) (NEG "k8s1-4846660e-e1-drupal-dummy-v1-service-https-80-36c11551" in zone "europe-west3-b")
78s Normal Attach service/drupal-dummy-v1-service-https Attach 1 network endpoint(s) (NEG "k8s1-4846660e-e1-drupal-dummy-v1-service-https-80-36c11551" in zone "europe-west3-a")
Switching back to the application (15 seconds of 502; never the same duration):
7s Normal Attach service/drupal-dummy-v1-service-https Attach 1 network endpoint(s) (NEG "k8s1-4846660e-e1-drupal-dummy-v1-service-https-80-36c11551" in zone "europe-west3-a")
7s Normal Attach service/drupal-dummy-v1-service-https Attach 1 network endpoint(s) (NEG "k8s1-4846660e-e1-drupal-dummy-v1-service-https-80-36c11551" in zone "europe-west3-b")
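For reference, these NEG attach/detach events can be followed live while switching the selector (a sketch; namespace and service name are taken from the dumps above):
kubectl get events -n e1 --field-selector involvedObject.name=drupal-dummy-v1-service-https -w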
I could check that the NEG events appear just before the 502 errors end. I suspect that when we change the service definition a new NEG is provisioned, but this is not immediate; while we wait for it the old one is already gone, so there is no backend at all during this time :(
Is there no "rolling update" of service definitions?
No solution at all, even with two services and only the Ingress being upgraded. Reviewed with GCP support: GKE will always destroy and then recreate the underlying resources, so it is not possible to do a rolling update of a Service or Ingress with no downtime. They suggested running two full silos and switching with DNS; we chose another solution: keep a single deployment and simply do a rolling update of that deployment, changing the referenced Docker image. Not really what we were aiming for, but it works...
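The rollout we ended up doing is essentially just an image change on the single deployment, along these lines (a sketch; the new tag 1.0.1 is illustrative):
kubectl set image deployment/drupal-dummy-v1-pod drupal-dummy-v1-pod=eu.gcr.io/gke-drupal-276313/drupal-dummy:1.0.1 -n e1
kubectl rollout status deployment/drupal-dummy-v1-pod -n e1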

How to add resource and limits on Kubernetes Engine on Google Cloud Platform

I am trying to add resource requests and limits to my deployment on Kubernetes Engine, since one of the containers in my deployment keeps getting evicted with the error message The node was low on resource: memory. Container model-run was using 1904944Ki, which exceeds its request of 0. I assume the issue can be resolved by adding resource requests.
When I add resource requests and deploy, the deployment succeeds, but when I go back and view detailed information about the pod with the command
kubectl get pod default-pod-name --output=yaml --namespace=default
it still says the pod has a request of cpu: 100m, with no mention of the memory that I allotted. I am guessing that the cpu request of 100m is a default. Please let me know how I can set the requests and limits; the command I am using to deploy is as follows:
kubectl run model-run --image-pull-policy=Always --overrides='
{
"apiVersion": "apps/v1beta1",
"kind": "Deployment",
"metadata": {
"name": "model-run",
"labels": {
"app": "model-run"
}
},
"spec": {
"selector": {
"matchLabels": {
"app": "model-run"
}
},
"template": {
"metadata": {
"labels": {
"app": "model-run"
}
},
"spec": {
"containers": [
{
"name": "model-run",
"image": "gcr.io/some-project/news/model-run:development",
"imagePullPolicy": "Always",
"resouces": {
"requests": [
{
"memory": "2048Mi",
"cpu": "500m"
}
],
"limits": [
{
"memory": "2500Mi",
"cpu": "750m"
}
]
},
"volumeMounts": [
{
"name": "credentials",
"readOnly": true,
"mountPath":"/path/collection/keys"
}
],
"env":[
{
"name":"GOOGLE_APPLICATION_CREDENTIALS",
"value":"/path/collection/keys/key.json"
}
]
}
],
"volumes": [
{
"name": "credentials",
"secret": {
"secretName": "credentials"
}
}
]
}
}
}
}
' --image=gcr.io/some-project/news/model-run:development
Any solution will be appreciated
The node was low on resource: memory. Container model-run was using 1904944Ki, which exceeds its request of 0.
At first the message looks like a lack of resources on the node itself, but the second part makes me believe you are correct in trying to raise the requests for the container.
Just keep in mind that if you still face errors after this change, you might need to add more powerful node pools to your cluster.
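If the node itself turns out to be too small, adding a larger node pool looks roughly like this (a sketch; cluster name, zone and machine type are placeholders):
gcloud container node-pools create high-mem-pool --cluster my-cluster --zone europe-west3-a --machine-type e2-highmem-4 --num-nodes 2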
I went through your command; there are a few issues I'd like to highlight:
kubectl run was deprecated in 1.12 for all resources except pods, and the other generators were removed in version 1.18.
apiVersion: apps/v1beta1 is deprecated and, as of v1.16, no longer supported; I replaced it with apps/v1.
In spec.template.spec.containers, "resouces" is written instead of "resources".
After fixing "resources", the next issue is that requests and limits are written as arrays, but they need to be maps (objects); otherwise you get this error:
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
error: v1beta1.Deployment.Spec: v1beta1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Resources: v1.ResourceRequirements.Limits: ReadMapCB: expect { or n, but found [, error found in #10 byte of ...|"limits":[{"cpu":"75|..., bigger context ...|Always","name":"model-run","resources":{"limits":[{"cpu":"750m","memory":"2500Mi"}],"requests":[{"cp|...
Here is the fixed format of your command:
kubectl run model-run --image-pull-policy=Always --overrides='{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"name": "model-run",
"labels": {
"app": "model-run"
}
},
"spec": {
"selector": {
"matchLabels": {
"app": "model-run"
}
},
"template": {
"metadata": {
"labels": {
"app": "model-run"
}
},
"spec": {
"containers": [
{
"name": "model-run",
"image": "nginx",
"imagePullPolicy": "Always",
"resources": {
"requests": {
"memory": "2048Mi",
"cpu": "500m"
},
"limits": {
"memory": "2500Mi",
"cpu": "750m"
}
},
"volumeMounts": [
{
"name": "credentials",
"readOnly": true,
"mountPath": "/path/collection/keys"
}
],
"env": [
{
"name": "GOOGLE_APPLICATION_CREDENTIALS",
"value": "/path/collection/keys/key.json"
}
]
}
],
"volumes": [
{
"name": "credentials",
"secret": {
"secretName": "credentials"
}
}
]
}
}
}
}' --image=gcr.io/some-project/news/model-run:development
Now, after applying it on my Kubernetes Engine cluster v1.15.11-gke.13, here is the output of kubectl get pod X -o yaml:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
model-run-7bd8d79c7d-brmrw 1/1 Running 0 17s
$ kubectl get pod model-run-7bd8d79c7d-brmrw -o yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: model-run
    pod-template-hash: 7bd8d79c7d
    run: model-run
  name: model-run-7bd8d79c7d-brmrw
  namespace: default
spec:
  containers:
  - env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /path/collection/keys/key.json
    image: nginx
    imagePullPolicy: Always
    name: model-run
    resources:
      limits:
        cpu: 750m
        memory: 2500Mi
      requests:
        cpu: 500m
        memory: 2Gi
    volumeMounts:
    - mountPath: /path/collection/keys
      name: credentials
      readOnly: true
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-tjn5t
      readOnly: true
  nodeName: gke-cluster-115-default-pool-abca4833-4jtx
  restartPolicy: Always
  volumes:
  - name: credentials
    secret:
      defaultMode: 420
      secretName: credentials
You can see that the resource limits and requests were set.
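To double-check just the resources stanza, a jsonpath query works as well (a sketch using the pod name from the output above):
kubectl get pod model-run-7bd8d79c7d-brmrw -o jsonpath='{.spec.containers[0].resources}'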
If you still have any question let me know in the comments!
It seems we cannot override limits through the --overrides flag.
What you can do is pass the requests and limits directly as flags on the kubectl run command:
kubectl run model-run --image-pull-policy=Always --requests='cpu=500m,memory=2048Mi' --limits='cpu=750m,memory=2500Mi' --overrides='
{
"apiVersion": "apps/v1beta1",
"kind": "Deployment",
"metadata": {
"name": "model-run",
"labels": {
"app": "model-run"
}
},
"spec": {
"selector": {
"matchLabels": {
"app": "model-run"
}
},
"template": {
"metadata": {
"labels": {
"app": "model-run"
}
},
"spec": {
"containers": [
{
"name": "model-run",
"image": "gcr.io/some-project/news/model-run:development",
"imagePullPolicy": "Always",
"resouces": {
"requests": [
{
"memory": "2048Mi",
"cpu": "500m"
}
],
"limits": [
{
"memory": "2500Mi",
"cpu": "750m"
}
]
},
"volumeMounts": [
{
"name": "credentials",
"readOnly": true,
"mountPath":"/path/collection/keys"
}
],
"env":[
{
"name":"GOOGLE_APPLICATION_CREDENTIALS",
"value":"/path/collection/keys/key.json"
}
]
}
],
"volumes": [
{
"name": "credentials",
"secret": {
"secretName": "credentials"
}
}
]
}
}
}
}
' --image=gcr.io/some-project/news/model-run:development

Kubernetes using secrets in pod

I have a Spring Boot app image which needs the following property:
server.ssl.keyStore=/certs/keystore.jks
I am loading the keystore file into a secret using the below command:
kubectl create secret generic ssl-keystore-cert --from-file=./server-ssl.jks
I use the below secret reference in my deployment.yaml:
{
"name": "SERVER_SSL_KEYSTORE",
"valueFrom": {
"secretKeyRef": {
"name": "ssl-keystore-cert",
"key": "server-ssl.jks"
}
}
}
With the above reference, I am getting the below error.
Error: failed to start container "app-service": Error response from
daemon: oci runtime error: container_linux.go:265: starting container
process caused "process_linux.go:368: container init caused \"setenv:
invalid argument\"" Back-off restarting failed container
If I go with the volume mount option:
"spec": {
"volumes": [
{
"name": "keystore-cert",
"secret": {
"secretName": "ssl-keystore-cert",
"items": [
{
"key": "server-ssl.jks",
"path": "keycerts"
}
]
}
}
],
"containers": [
{
"env": [
{
"name": "JAVA_OPTS",
"value": "-Dserver.ssl.keyStore=/certs/keystore/keycerts"
}
],
"name": "app-service",
"ports": [
{
"containerPort": 8080,
"protocol": "TCP"
}
],
"volumeMounts": [
{
"name": "keystore-cert",
"mountPath": "/certs/keystore"
}
],
"imagePullPolicy": "IfNotPresent"
}
]
I am getting the below error with the above approach.
Caused by: java.lang.IllegalArgumentException: Resource location must
not be null at
org.springframework.util.Assert.notNull(Assert.java:134)
~[spring-core-4.3.7.RELEASE.jar!/:4.3.7.RELEASE] at
org.springframework.util.ResourceUtils.getURL(ResourceUtils.java:131)
~[spring-core-4.3.7.RELEASE.jar!/:4.3.7.RELEASE] at
org.springframework.boot.context.embedded.jetty.JettyEmbeddedServletContainerFactory.configureSslKeyStore(JettyEmbeddedServletContainerFactory.java:301)
~[spring-boot-1.4.5.RELEASE.jar!/:1.4.5.RELEASE]
I also tried the below option, instead of JAVA_OPTS:
{
"name": "SERVER_SSL_KEYSTORE",
"value": "/certs/keystore/keycerts"
}
The error is still the same.
I am not sure what the right approach is.
I tried to reproduce the situation with your configuration. I created a secret using the command:
kubectl create secret generic ssl-keystore-cert --from-file=./server-ssl.jks
I used this YAML as a test environment:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
    env:
    - name: JAVA_OPTS
      value: "-Dserver.ssl.keyStore=/certs/keystore/server-ssl.jks"
    ports:
    - containerPort: 8080
      protocol: TCP
    volumeMounts:
    - name: secret-volume
      readOnly: true
      mountPath: "/certs/keystore"
  volumes:
  - name: secret-volume
    secret:
      secretName: ssl-keystore-cert
As you can see, I used the "server-ssl.jks" file name in the variable. If you create the secret from a file, Kubernetes stores that file in the secret under its original name, and when you mount the secret anywhere, that is the file you get. You tried to use /certs/keystore/keycerts, but it doesn't exist, which is what you see in the logs:
Resource location must not be null at org.springframework.util.Assert.notNull
because your mounted secret is at /certs/keystore/keycerts/server-ssl.jks.
It should work; just fix the paths.
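To confirm where the keystore file actually lands inside the container, listing the mount path is enough (a sketch using the busybox pod above):
kubectl exec busybox -- ls -l /certs/keystore
The full path of the file listed there is what the keyStore property needs to point at.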

Template PersistentVolumeSelector labels in StatefulSet's volumeClaimTemplate

So there is:
the StatefulSet to control several replicas of a Pod in an ordered manner.
the PersistentVolumeClaim to provide volume to a Pod.
the statefulset.spec.volumeClaimTemplates[] to bind the previous two together.
the PersistentVolumeSelector to control which PersistentVolume fulfills which PersistentVolumeClaim.
Suppose I have persistent volumes named pv0 and pv1, and a statefulset with 2 replicas called couchdb. Concretely, the statefulset is:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: couchdb
spec:
  ...
  replicas: 2
  template:
    ...
    spec:
      containers:
      - name: couchdb
        image: klaemo/couchdb:1.6
        volumeMounts:
        - name: db
          mountPath: /usr/local/var/lib/couchdb
      volumes:
      - name: db
        persistentVolumeClaim:
          claimName: db
  volumeClaimTemplates:
  - metadata:
      name: db
    spec:
      ...
This StatefulSet generates two PersistentVolumeClaims named db-couchdb-0 and db-couchdb-1. The problem is that it is not guaranteed that PVC db-couchdb-0 will always be bound to pv0.
The question is: how do you ensure controlled binding for PersistentVolumeClaims managed by a StatefulSet controller?
I tried adding a volume selector like this:
selector:
  matchLabels:
    name: couchdb
to the statefulset.spec.volumeClaimTemplates[0].spec, but the value of name doesn't get templated. Both claims end up looking for a PersistentVolume labeled name=couchdb.
What you're looking for is a claimRef inside the PersistentVolume, which holds the name and namespace of the PVC you want to bind your PV to. Please have a look at the following JSONs:
Pv-0.json
{
"kind": "PersistentVolume",
"apiVersion": "v1",
"metadata": {
"name": "pv-data-vol-0",
"labels": {
"type": "local"
}
},
"spec": {
"capacity": {
"storage": "10Gi"
},
"accessModes": [
"ReadWriteOnce"
],
"storageClassName": "local-storage",
"local": {
"path": "/prafull/data/pv-0"
},
"claimRef": {
"namespace": "default",
"name": "data-test-sf-0"
},
"nodeAffinity": {
"required": {
"nodeSelectorTerms": [
{
"matchExpressions": [
{
"key": "kubernetes.io/hostname",
"operator": "In",
"values": [
"ip-10-0-1-46.ec2.internal"
]
}
]
}
]
}
}
}
}
Pv-1.json
{
"kind": "PersistentVolume",
"apiVersion": "v1",
"metadata": {
"name": "pv-data-vol-1",
"labels": {
"type": "local"
}
},
"spec": {
"capacity": {
"storage": "10Gi"
},
"accessModes": [
"ReadWriteOnce"
],
"storageClassName": "local-storage",
"local": {
"path": "/prafull/data/pv-1"
},
"claimRef": {
"namespace": "default",
"name": "data-test-sf-1"
},
"nodeAffinity": {
"required": {
"nodeSelectorTerms": [
{
"matchExpressions": [
{
"key": "kubernetes.io/hostname",
"operator": "In",
"values": [
"ip-10-0-1-46.ec2.internal"
]
}
]
}
]
}
}
}
}
Statefulset.json
{
"kind": "StatefulSet",
"apiVersion": "apps/v1beta1",
"metadata": {
"name": "test-sf",
"labels": {
"state": "test-sf"
}
},
"spec": {
"replicas": 2,
"template": {
"metadata": {
"labels": {
"app": "test-sf"
},
"annotations": {
"pod.alpha.kubernetes.io/initialized": "true"
}
}
...
...
},
"volumeClaimTemplates": [
{
"metadata": {
"name": "data"
},
"spec": {
"accessModes": [
"ReadWriteOnce"
],
"storageClassName": "local-storage",
"resources": {
"requests": {
"storage": "10Gi"
}
}
}
}
]
}
}
The volumeClaimTemplate will create two PVCs, data-test-sf-0 and data-test-sf-1. The two PV definitions contain the claimRef section, which holds the namespace and name of the PVC that the PV should bind to. Note that providing the namespace is mandatory, because PVs are cluster-scoped while PVCs are namespaced, and there might be two PVCs with the same name in two different namespaces; without the namespace, the controller manager cannot know which PVC the PV should bind to.
Hope this answers your question.
If you are using dynamic provisioning, the answer is no, you cannot, because a dynamically provisioned volume is always deleted after release.
If you are not using dynamic provisioning, you need to reclaim the PV manually.
Check the reclaiming section of the Kubernetes docs.
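If a manually provisioned PV is stuck in the Released state after its claim was deleted, one common way to make it Available again is to clear its claimRef (a sketch; pv-data-vol-0 is the PV name from the example above):
kubectl patch pv pv-data-vol-0 -p '{"spec":{"claimRef": null}}'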

Getting IP into env variable via DNS in kubernetes

I'm trying to get my head around Kubernetes coming from Docker Compose. I would like to set up my first pod with two containers which I pushed to a registry. My question:
How do I get the IP via DNS into an environment variable, so that registrator can connect to Consul? See the registrator container's args: consul://consul:8500. The consul host needs to be replaced using the env.
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "service-discovery",
"labels": {
"name": "service-discovery"
}
},
"spec": {
"containers": [
{
"name": "consul",
"image": "eu.gcr.io/{myproject}/consul",
"args": [
"-server",
"-bootstrap",
"-advertise=$(MY_POD_IP)"
],
"env": [{
"name": "MY_POD_IP",
"valueFrom": {
"fieldRef": {
"fieldPath": "status.podIP"
}
}
}],
"imagePullPolicy": "IfNotPresent",
"ports": [
{
"containerPort": 8300,
"name": "server"
},
{
"containerPort": 8400,
"name": "alt-port"
},
{
"containerPort": 8500,
"name": "ui-port"
},
{
"containerPort": 53,
"name": "udp-port"
},
{
"containerPort": 8443,
"name": "https-port"
}
]
},
{
"name": "registrator",
"image": "eu.gcr.io/{myproject}/registrator",
"args": [
"-internal",
"-ip=$(MY_POD_IP)",
"consul://consul:8500"
],
"env": [{
"name": "MY_POD_IP",
"valueFrom": {
"fieldRef": {
"fieldPath": "status.podIP"
}
}
}],
"imagePullPolicy": "Always"
}
]
}
}
Exposing pods to other applications is done with a Service in Kubernetes. Once you've defined a service, you can use environment variables related to that service within your pods. Exposing the pod directly is not a good idea, as pods might get rescheduled.
When using a service like this, for example:
apiVersion: v1
kind: Service
metadata:
  name: consul
  namespace: kube-system
  labels:
    name: consul
spec:
  ports:
  - name: http
    port: 8500
  - name: rpc
    port: 8400
  - name: serflan
    port: 8301
  - name: serfwan
    port: 8302
  - name: server
    port: 8300
  - name: consuldns
    port: 8600
  selector:
    app: consul
The related environment variable will be CONSUL_SERVICE_IP.
Anyway, it seems others have actually stopped using those environment variables for various reasons, as described here.
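As an alternative to the service environment variables, the Service can also be reached through cluster DNS, so the registrator args could point at the fully qualified service name instead (a sketch, assuming the consul Service above lives in the kube-system namespace while the registrator pod runs elsewhere):
"args": [
  "-internal",
  "-ip=$(MY_POD_IP)",
  "consul://consul.kube-system.svc.cluster.local:8500"
]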