I am trying to deploy a pod with a second interface using multus-cni. However, when I deploy my pod I see only one interface, the main one. The secondary interface is not created.
I followed the steps in the quick start guide to install multus.
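For reference, the quick start installation boils down to applying the Multus DaemonSet manifest from the multus-cni repository, roughly like this (the manifest path differs between releases, so treat it as illustrative):
git clone https://github.com/k8snetworkplumbingwg/multus-cni.git
cd multus-cni
kubectl apply -f ./deployments/multus-daemonset.yml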
Environment:
minikube v1.12.1 on Microsoft Windows 10 Enterprise
Kubernetes v1.18.3 on Docker 19.03.12
Multus version
--cni-version=0.3.1
$00-multus.conf
{ "cniVersion": "0.3.1", "name": "multus-cni-network", "type": "multus", "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig", "delegates": [ { "cniVersion": "0.3.1", "name":
"bridge", "type": "bridge", "bridge": "bridge", "addIf": "true", "isDefaultGateway": true, "forceAddress": false, "ipMasq": true, "hairpinMode": true, "ipam": { "type": "host-local
", "subnet": "10.244.0.0/16" } } ] }
$1-k8s.conf
{
"cniVersion": "0.3.1",
"name": "bridge",
"type": "bridge",
"bridge": "bridge",
"addIf": "true",
"isDefaultGateway": true,
"forceAddress": false,
"ipMasq": true,
"hairpinMode": true,
"ipam": {
"type": "host-local",
"subnet": "10.244.0.0/16"
}
}
$87-podman-bridge.conflist
{
"cniVersion": "0.4.0",
"name": "podman",
"plugins": [
{
"type": "bridge",
"bridge": "cni-podman0",
"isGateway": true,
"ipMasq": true,
"hairpinMode": true,
"ipam": {
"type": "host-local",
"routes": [{ "dst": "0.0.0.0/0" }],
"ranges": [
[
{
"subnet": "10.88.0.0/16",
"gateway": "10.88.0.1"
}
]
]
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
},
{
"type": "firewall"
},
{
"type": "tuning"
}
]
}
$multus.kubeconfig
apiVersion: v1
kind: Config
clusters:
- name: local
cluster:
server: https://[10.96.0.1]:443
certificate-authority-data: .....
users:
- name: multus
user:
token: .....
contexts:
- name: multus-context
context:
cluster: local
user: multus
current-context: multus-context
The above files are from '/etc/cni/multus/net.d'.
**NetworkAttachment info:**
cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: macvlan-conf
spec:
config: '{
"cniVersion": "0.3.1",
"type": "macvlan",
"master": "eth1",
"mode": "bridge",
"ipam": {
"type": "host-local",
"subnet": "192.168.1.0/24",
"rangeStart": "192.168.1.200",
"rangeEnd": "192.168.1.216",
"routes": [
{ "dst": "0.0.0.0/0" }
],
"gateway": "192.168.1.1"
}
}'
EOF
Pod yaml info:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: samplepod
annotations:
k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  containers:
  - name: samplepod
    command: ["/bin/ash", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: alpine
EOF
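To check for the second interface I rely on the usual Multus convention that the extra attachment shows up as net1 inside the pod (commands below assume the sample pod from above):
kubectl exec -it samplepod -- ip a
# expected: lo, eth0 (cluster network) and net1 (macvlan-conf); net1 never shows up in my case
kubectl describe pod samplepod
# the k8s.v1.cni.cncf.io/networks-status annotation should list both networks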
I installed a new minikube version, and now the secondary interface is added correctly.
Related
I am trying to deploy a CustomResourceDefinition with the following template:
"apiVersion": "apiextensions.k8s.io/v1",
"kind": "CustomResourceDefinition",
"metadata": {
"name": "azureassignedidentities.aadpodidentity.k8s.io"
},
"spec":{
"conversion": {
"strategy": None
},
"group": "aadpodidentity.k8s.io",
"names": {
"kind": "AzureAssignedIdentity",
"listKind": "AzureAssignedIdentityList",
"plural": "azureassignedidentities",
"singular": "azureassignedidentity"
},
"preserveUnknownFields": true,
"scope": "Namespaced",
"versions":[
"name": "v1",
"served": true,
"storage": true,
]
},
"status": {
"acceptedNames":{
"kind": ""
"listKind": ""
"plural": ""
"singular": ""
},
"conditions": [],
"storedVersions": []
}
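I applied this manifest in the usual way, along the lines of the following (the file name is just illustrative):
kubectl apply -f azureassignedidentities-crd.json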
When I run
kubectl get AzureAssignedIdentities -A -o yaml
I get an empty response, as below.
apiVersion: v1
items: []
kind: List
metadata:
resourceVersion: ""
selfLink: ""
Can anyone please tell me what's wrong here?
Thanks in advance!
With the below yml file:
apiVersion: v1
kind: Pod
metadata:
name: my-nginx
spec:
containers:
- name: my-nginx
image: nginx:alpine
On running kubectl create -f nginx.pod.yml --save-config, the documentation says: "If true, the configuration of current object will be saved in its annotation."
Where exactly is this annotation saved? How can I view it?
The command below prints all the annotations present on the pod my-nginx:
kubectl get pod my-nginx -o jsonpath='{.metadata.annotations}'
Your configuration is stored under the kubectl.kubernetes.io/last-applied-configuration key of that output.
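As a shortcut, kubectl can print the same annotation directly; this only works for objects that actually carry it, i.e. created with kubectl apply or --save-config:
kubectl apply view-last-applied pod my-nginx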
Here is an example showing the usage:
Original manifest for my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: my-deploy
name: my-deploy
spec:
replicas: 1
selector:
matchLabels:
app: my-deploy
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: my-deploy
spec:
containers:
- image: nginx
name: nginx
resources: {}
status: {}
Created the deployment as follows:
k create -f x.yml --save-config
deployment.apps/my-deploy created
kubectl get deployments.apps my-deploy -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io\/last-applied-configuration}' |jq .
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {},
"creationTimestamp": null,
"labels": {
"app": "my-deploy"
},
"name": "my-deploy",
"namespace": "default"
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app": "my-deploy"
}
},
"strategy": {},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "my-deploy"
}
},
"spec": {
"containers": [
{
"image": "nginx",
"name": "nginx",
"resources": {}
}
]
}
}
},
"status": {}
}
kubectl get deployments.apps my-deploy -o jsonpath='{.spec.template.spec.containers[*].image}'
nginx
Now some user comes along and changes the image of the nginx container from nginx to httpd, using an imperative command.
k set image deployment/my-deploy nginx=httpd --record
deployment.apps/my-deploy image updated
kubectl get deployments.apps my-deploy -o jsonpath='{.spec.template.spec.containers[*].image}'
httpd
However, we can check that the last-applied declarative configuration was not updated.
kubectl get deployments.apps my-deploy -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io\/last-applied-configuration}' |jq .
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {},
"creationTimestamp": null,
"labels": {
"app": "my-deploy"
},
"name": "my-deploy",
"namespace": "default"
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app": "my-deploy"
}
},
"strategy": {},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "my-deploy"
}
},
"spec": {
"containers": [
{
"image": "nginx",
"name": "nginx",
"resources": {}
}
]
}
}
},
"status": {}
}
Now, change the image name in the original manifest file from nginx to flask, then run kubectl apply (a declarative command):
kubectl apply -f orig.yml
deployment.apps/my-deploy configured
kubectl get deployments.apps my-deploy -o jsonpath='{.spec.template.spec.containers[*].image}'
flask
Now check the last-applied-configuration annotation; it has flask in it. Remember, the change was missing when the kubectl set image command was used.
kubectl get deployments.apps my-deploy -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io\/last-applied-configuration}' |jq .
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {},
"creationTimestamp": null,
"labels": {
"app": "my-deploy"
},
"name": "my-deploy",
"namespace": "default"
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app": "my-deploy"
}
},
"strategy": {},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "my-deploy"
}
},
"spec": {
"containers": [
{
"image": "flask",
"name": "nginx",
"resources": {}
}
]
}
}
},
"status": {}
}
Where is the "last-applied" annotation saved:
Just like everything else, Its saved in etcd , created the pod using the manifest provided in the question and ran raw etcd command to print the content. (in this dev environment, etcd was not encrypted).
ETCDCTL_API=3 etcdctl --cert /etc/kubernetes/pki/apiserver-etcd-client.crt --key /etc/kubernetes/pki/apiserver-etcd-client.key --cacert /etc/kubernetes/pki/etcd/ca.crt get /registry/pods/default/my-nginx
/registry/pods/default/my-nginx
k8s
v1Pod⚌
⚌
my-nginxdefault"*$a3s4b729-c96a-40f7-8de9-5d5f4ag21gfa2⚌⚌⚌b⚌
0kubectl.kubernetes.io/last-applied-configuration⚌{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"my-nginx","namespace":"default"},"spec":{"containers":[{"image":"nginx:alpine","name":"my-nginx"}]}}
I want to manually reroute (when the need arises) a web server (a Helm deployment on GKE) to another one.
To do that I have 3 Helm deployments:
Application X
Application Y
Ingress on application X
Everything works fine, but if I launch a Helm upgrade of the Ingress chart that changes only the selector of the service I target, I get 502 errors :(
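The switch itself is just a Helm upgrade of the Ingress chart overriding that single value; the release and chart names below are illustrative:
helm upgrade ingress-release ./ingress-chart --reuse-values --set Application.Name=drupal-dummy-404-v1-pod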
Source of the service:
apiVersion: v1
kind: Service
metadata:
name: {{ .Values.Service.Name }}-https
labels:
app: {{ .Values.Service.Name }}
type: svc
name: {{ .Values.Service.Name }}
environment: {{ .Values.Environment.Name }}
annotations:
cloud.google.com/neg: '{"ingress": true}'
beta.cloud.google.com/backend-config: '{"ports": {"{{ .Values.Application.Port }}":"{{ .Values.Service.Name }}-https"}}'
spec:
type: NodePort
selector:
name: {{ .Values.Application.Name }}
environment: {{ .Values.Environment.Name }}
ports:
- protocol: TCP
port: {{ .Values.Application.Port }}
targetPort: {{ .Values.Application.Port }}
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
name: {{ .Values.Service.Name }}-https
spec:
timeoutSec: 50
connectionDraining:
drainingTimeoutSec: 60
sessionAffinity:
affinityType: "GENERATED_COOKIE"
affinityCookieTtlSec: 300
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: {{ .Values.Service.Name }}-https
labels:
app: {{ .Values.Service.Name }}
type: ingress
name: {{ .Values.Service.Name }}
environment: {{ .Values.Environment.Name }}
annotations:
kubernetes.io/ingress.global-static-ip-name: {{ .Values.Service.PublicIpName }}
networking.gke.io/managed-certificates: "{{ join "," .Values.Service.DomainNames }}"
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
backend:
serviceName: {{ $.Values.Service.Name }}-https
servicePort: 80
rules:
{{- range .Values.Service.DomainNames }}
- host: {{ . | title | lower }}
http:
paths:
- backend:
serviceName: {{ $.Values.Service.Name }}-https
servicePort: 80
{{- end }}
The only thing that changes from one call to another is the value of "{{ .Values.Application.Name }}"; all other values are strictly the same.
The targeted pods are always up and running, and they all respond 200 when tested with kubectl port forwarding.
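The port-forward check is essentially this (the local port is arbitrary), and it returns 200 for both deployments:
kubectl port-forward deployment/drupal-dummy-v1-pod 8080:80
curl -I http://localhost:8080/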
Here is the status of all my namespace objects:
NAME READY STATUS RESTARTS AGE
pod/drupal-dummy-404-v1-pod-744454b7ff-m4hjk 1/1 Running 0 2m32s
pod/drupal-dummy-404-v1-pod-744454b7ff-z5l29 1/1 Running 0 2m32s
pod/drupal-dummy-v1-pod-77f5bf55c6-9dq8n 1/1 Running 0 3m58s
pod/drupal-dummy-v1-pod-77f5bf55c6-njfl9 1/1 Running 0 3m57s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/drupal-dummy-v1-service-https NodePort 172.16.90.71 <none> 80:31391/TCP 3m49s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/drupal-dummy-404-v1-pod 2/2 2 2 2m32s
deployment.apps/drupal-dummy-v1-pod 2/2 2 2 3m58s
NAME DESIRED CURRENT READY AGE
replicaset.apps/drupal-dummy-404-v1-pod-744454b7ff 2 2 2 2m32s
replicaset.apps/drupal-dummy-v1-pod-77f5bf55c6 2 2 2 3m58s
NAME AGE
managedcertificate.networking.gke.io/d8.syspod.fr 161m
managedcertificate.networking.gke.io/d8gfi.syspod.fr 128m
managedcertificate.networking.gke.io/dummydrupald8.cnes.fr 162m
NAME HOSTS ADDRESS PORTS AGE
ingress.extensions/drupal-dummy-v1-service-https d8gfi.syspod.fr 34.120.106.136 80 3m50s
Another test was to pre-launch two services, one for each deployment, and only update the Ingress Helm deployment, this time changing "{{ $.Values.Service.Name }}". Same problem, and the site unavailability then lasts from 60s to 300s.
Here is the status of all my namespace objects (for this second test):
root#47475bc8c41f:/opt/bin# k get all,svc,ingress,managedcertificates
NAME READY STATUS RESTARTS AGE
pod/drupal-dummy-404-v1-pod-744454b7ff-8r5pm 1/1 Running 0 26m
pod/drupal-dummy-404-v1-pod-744454b7ff-9cplz 1/1 Running 0 26m
pod/drupal-dummy-v1-pod-77f5bf55c6-56dnr 1/1 Running 0 30m
pod/drupal-dummy-v1-pod-77f5bf55c6-mg95j 1/1 Running 0 30m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/drupal-dummy-404-v1-pod-https NodePort 172.16.106.121 <none> 80:31030/TCP 26m
service/drupal-dummy-v1-pod-https NodePort 172.16.245.251 <none> 80:31759/TCP 27m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/drupal-dummy-404-v1-pod 2/2 2 2 26m
deployment.apps/drupal-dummy-v1-pod 2/2 2 2 30m
NAME DESIRED CURRENT READY AGE
replicaset.apps/bastion-66bb77bfd5 1 1 1 148m
replicaset.apps/drupal-dummy-404-v1-pod-744454b7ff 2 2 2 26m
replicaset.apps/drupal-dummy-v1-pod-77f5bf55c6 2 2 2 30m
NAME HOSTS ADDRESS PORTS AGE
ingress.extensions/drupal-dummy-v1-service-https d8gfi.syspod.fr 34.120.106.136 80 14m
Does anybody have an explanation (and a solution)?
Added the deployment dump (surely something is missing, but I don't see it):
root#c55834fbdf1a:/# k get deployment.apps/drupal-dummy-v1-pod -o json
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {
"deployment.kubernetes.io/revision": "2",
"meta.helm.sh/release-name": "drupal-dummy-v1-pod",
"meta.helm.sh/release-namespace": "e1"
},
"creationTimestamp": "2020-06-23T18:49:59Z",
"generation": 2,
"labels": {
"app.kubernetes.io/managed-by": "Helm",
"environment": "e1",
"name": "drupal-dummy-v1-pod",
"type": "dep"
},
"name": "drupal-dummy-v1-pod",
"namespace": "e1",
"resourceVersion": "3977170",
"selfLink": "/apis/apps/v1/namespaces/e1/deployments/drupal-dummy-v1-pod",
"uid": "56f74fb9-b582-11ea-9df2-42010a000006"
},
"spec": {
"progressDeadlineSeconds": 600,
"replicas": 2,
"revisionHistoryLimit": 10,
"selector": {
"matchLabels": {
"environment": "e1",
"name": "drupal-dummy-v1-pod",
"type": "dep"
}
},
"strategy": {
"rollingUpdate": {
"maxSurge": "25%",
"maxUnavailable": "25%"
},
"type": "RollingUpdate"
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"environment": "e1",
"name": "drupal-dummy-v1-pod",
"type": "dep"
}
},
"spec": {
"containers": [
{
"env": [
{
"name": "APPLICATION",
"value": "drupal-dummy-v1-pod"
},
{
"name": "DB_PASS",
"valueFrom": {
"secretKeyRef": {
"key": "password",
"name": "dbpassword"
}
}
},
{
"name": "DB_FQDN",
"valueFrom": {
"configMapKeyRef": {
"key": "dbip",
"name": "gcpenv"
}
}
},
{
"name": "DB_PORT",
"valueFrom": {
"configMapKeyRef": {
"key": "dbport",
"name": "gcpenv"
}
}
},
{
"name": "DB_NAME",
"valueFrom": {
"configMapKeyRef": {
"key": "dbdatabase",
"name": "gcpenv"
}
}
},
{
"name": "DB_USER",
"valueFrom": {
"configMapKeyRef": {
"key": "dbuser",
"name": "gcpenv"
}
}
}
],
"image": "eu.gcr.io/gke-drupal-276313/drupal-dummy:1.0.0",
"imagePullPolicy": "Always",
"livenessProbe": {
"failureThreshold": 3,
"httpGet": {
"path": "/",
"port": 80,
"scheme": "HTTP"
},
"initialDelaySeconds": 60,
"periodSeconds": 10,
"successThreshold": 1,
"timeoutSeconds": 5
},
"name": "drupal-dummy-v1-pod",
"ports": [
{
"containerPort": 80,
"protocol": "TCP"
}
],
"readinessProbe": {
"failureThreshold": 3,
"httpGet": {
"path": "/",
"port": 80,
"scheme": "HTTP"
},
"initialDelaySeconds": 60,
"periodSeconds": 10,
"successThreshold": 1,
"timeoutSeconds": 5
},
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/var/www/html/sites/default",
"name": "drupal-dummy-v1-pod"
}
]
}
],
"dnsPolicy": "ClusterFirst",
"restartPolicy": "Always",
"schedulerName": "default-scheduler",
"securityContext": {},
"terminationGracePeriodSeconds": 30,
"volumes": [
{
"name": "drupal-dummy-v1-pod",
"persistentVolumeClaim": {
"claimName": "drupal-dummy-v1-pod"
}
}
]
}
}
},
"status": {
"availableReplicas": 2,
"conditions": [
{
"lastTransitionTime": "2020-06-23T18:56:05Z",
"lastUpdateTime": "2020-06-23T18:56:05Z",
"message": "Deployment has minimum availability.",
"reason": "MinimumReplicasAvailable",
"status": "True",
"type": "Available"
},
{
"lastTransitionTime": "2020-06-23T18:49:59Z",
"lastUpdateTime": "2020-06-23T18:56:05Z",
"message": "ReplicaSet \"drupal-dummy-v1-pod-6865d969cd\" has successfully progressed.",
"reason": "NewReplicaSetAvailable",
"status": "True",
"type": "Progressing"
}
],
"observedGeneration": 2,
"readyReplicas": 2,
"replicas": 2,
"updatedReplicas": 2
}
}
Here is the service dump too:
root#c55834fbdf1a:/# k get service/drupal-dummy-v1-service-https -o json
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"annotations": {
"beta.cloud.google.com/backend-config": "{\"ports\": {\"80\":\"drupal-dummy-v1-service-https\"}}",
"cloud.google.com/neg": "{\"ingress\": true}",
"cloud.google.com/neg-status": "{\"network_endpoint_groups\":{\"80\":\"k8s1-4846660e-e1-drupal-dummy-v1-service-https-80-36c11551\"},\"zones\":[\"europe-west3-a\",\"europe-west3-b\"]}",
"meta.helm.sh/release-name": "drupal-dummy-v1-service",
"meta.helm.sh/release-namespace": "e1"
},
"creationTimestamp": "2020-06-23T18:50:45Z",
"labels": {
"app": "drupal-dummy-v1-service",
"app.kubernetes.io/managed-by": "Helm",
"environment": "e1",
"name": "drupal-dummy-v1-service",
"type": "svc"
},
"name": "drupal-dummy-v1-service-https",
"namespace": "e1",
"resourceVersion": "3982781",
"selfLink": "/api/v1/namespaces/e1/services/drupal-dummy-v1-service-https",
"uid": "722d3a99-b582-11ea-9df2-42010a000006"
},
"spec": {
"clusterIP": "172.16.103.181",
"externalTrafficPolicy": "Cluster",
"ports": [
{
"nodePort": 32396,
"port": 80,
"protocol": "TCP",
"targetPort": 80
}
],
"selector": {
"environment": "e1",
"name": "drupal-dummy-v1-pod"
},
"sessionAffinity": "None",
"type": "NodePort"
},
"status": {
"loadBalancer": {}
}
}
And the Ingress one:
root#c55834fbdf1a:/# k get ingress.extensions/drupal-dummy-v1-service-https -o json
{
"apiVersion": "extensions/v1beta1",
"kind": "Ingress",
"metadata": {
"annotations": {
"ingress.gcp.kubernetes.io/pre-shared-cert": "mcrt-a15e339b-6c3f-4f23-8f6b-688dc98b33a6,mcrt-f3a385de-0541-4b9c-8047-6dcfcbd4d74f",
"ingress.kubernetes.io/backends": "{\"k8s1-4846660e-e1-drupal-dummy-v1-service-https-80-36c11551\":\"HEALTHY\"}",
"ingress.kubernetes.io/forwarding-rule": "k8s-fw-e1-drupal-dummy-v1-service-https--4846660e8b9bd880",
"ingress.kubernetes.io/https-forwarding-rule": "k8s-fws-e1-drupal-dummy-v1-service-https--4846660e8b9bd880",
"ingress.kubernetes.io/https-target-proxy": "k8s-tps-e1-drupal-dummy-v1-service-https--4846660e8b9bd880",
"ingress.kubernetes.io/ssl-cert": "mcrt-a15e339b-6c3f-4f23-8f6b-688dc98b33a6,mcrt-f3a385de-0541-4b9c-8047-6dcfcbd4d74f",
"ingress.kubernetes.io/target-proxy": "k8s-tp-e1-drupal-dummy-v1-service-https--4846660e8b9bd880",
"ingress.kubernetes.io/url-map": "k8s-um-e1-drupal-dummy-v1-service-https--4846660e8b9bd880",
"kubernetes.io/ingress.global-static-ip-name": "gkxe-k1312-e1-drupal-dummy-v1",
"meta.helm.sh/release-name": "drupal-dummy-v1-service",
"meta.helm.sh/release-namespace": "e1",
"networking.gke.io/managed-certificates": "dummydrupald8.cnes.fr,d8.syspod.fr",
"nginx.ingress.kubernetes.io/rewrite-target": "/"
},
"creationTimestamp": "2020-06-23T18:50:45Z",
"generation": 1,
"labels": {
"app": "drupal-dummy-v1-service",
"app.kubernetes.io/managed-by": "Helm",
"environment": "e1",
"name": "drupal-dummy-v1-service",
"type": "ingress"
},
"name": "drupal-dummy-v1-service-https",
"namespace": "e1",
"resourceVersion": "3978178",
"selfLink": "/apis/extensions/v1beta1/namespaces/e1/ingresses/drupal-dummy-v1-service-https",
"uid": "7237fc51-b582-11ea-9df2-42010a000006"
},
"spec": {
"backend": {
"serviceName": "drupal-dummy-v1-service-https",
"servicePort": 80
},
"rules": [
{
"host": "dummydrupald8.cnes.fr",
"http": {
"paths": [
{
"backend": {
"serviceName": "drupal-dummy-v1-service-https",
"servicePort": 80
}
}
]
}
},
{
"host": "d8.syspod.fr",
"http": {
"paths": [
{
"backend": {
"serviceName": "drupal-dummy-v1-service-https",
"servicePort": 80
}
}
]
}
}
]
},
"status": {
"loadBalancer": {
"ingress": [
{
"ip": "34.98.97.102"
}
]
}
}
}
I have seen this in the Kubernetes events (only when I reconfigure my service selector to target the first or the second deployment).
Switch to the unavailability page (3 seconds of 502):
81s Normal Attach service/drupal-dummy-v1-service-https Attach 1 network endpoint(s) (NEG "k8s1-4846660e-e1-drupal-dummy-v1-service-https-80-36c11551" in zone "europe-west3-b")
78s Normal Attach service/drupal-dummy-v1-service-https Attach 1 network endpoint(s) (NEG "k8s1-4846660e-e1-drupal-dummy-v1-service-https-80-36c11551" in zone "europe-west3-a")
Switch back to the application (15 seconds of 502; never the same duration):
7s Normal Attach service/drupal-dummy-v1-service-https Attach 1 network endpoint(s) (NEG "k8s1-4846660e-e1-drupal-dummy-v1-service-https-80-36c11551" in zone "europe-west3-a")
7s Normal Attach service/drupal-dummy-v1-service-https Attach 1 network endpoint(s) (NEG "k8s1-4846660e-e1-drupal-dummy-v1-service-https-80-36c11551" in zone "europe-west3-b")
I could check that the NEG events appear just before the 502 errors end. I suspect that when we change the service definition a new NEG is created, but this is not immediate, and while we wait for it the old backend is already gone, so there is no service at all during this time :(
Is there no "rolling update" of a service definition?
No solution at all, even with two services and upgrading only the Ingress. Confirmed with GCP support: GKE will always destroy and then recreate the underlying resources, so it is not possible to do a rolling update of a Service or Ingress with no downtime. They suggest keeping two full silos and playing with DNS; we chose another solution: keep a single deployment and simply do a rolling update of that deployment, changing the referenced Docker image. Not really what we were aiming for, but it works...
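Concretely, the workaround we keep is a plain image-based rolling update of the single deployment, along these lines (the new tag is illustrative):
kubectl set image deployment/drupal-dummy-v1-pod drupal-dummy-v1-pod=eu.gcr.io/gke-drupal-276313/drupal-dummy:1.1.0
kubectl rollout status deployment/drupal-dummy-v1-pod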
I am trying to add resource requests and limits to my deployment on Kubernetes Engine, since one of the pods in my deployment keeps getting evicted with the error message The node was low on resource: memory. Container model-run was using 1904944Ki, which exceeds its request of 0. I assume the issue can be resolved by adding resource requests.
When I try to add resource requests and deploy, the deployment is successful, but when I go back and view detailed information about the pod with the command
kubectl get pod default-pod-name --output=yaml --namespace=default
It still says the pod has a request of cpu: 100m, with no mention of the memory that I have allotted. I am guessing the cpu request of 100m is a default. Please let me know how I can set the requests and limits; the code I am using to deploy is as follows:
kubectl run model-run --image-pull-policy=Always --overrides='
{
"apiVersion": "apps/v1beta1",
"kind": "Deployment",
"metadata": {
"name": "model-run",
"labels": {
"app": "model-run"
}
},
"spec": {
"selector": {
"matchLabels": {
"app": "model-run"
}
},
"template": {
"metadata": {
"labels": {
"app": "model-run"
}
},
"spec": {
"containers": [
{
"name": "model-run",
"image": "gcr.io/some-project/news/model-run:development",
"imagePullPolicy": "Always",
"resouces": {
"requests": [
{
"memory": "2048Mi",
"cpu": "500m"
}
],
"limits": [
{
"memory": "2500Mi",
"cpu": "750m"
}
]
},
"volumeMounts": [
{
"name": "credentials",
"readOnly": true,
"mountPath":"/path/collection/keys"
}
],
"env":[
{
"name":"GOOGLE_APPLICATION_CREDENTIALS",
"value":"/path/collection/keys/key.json"
}
]
}
],
"volumes": [
{
"name": "credentials",
"secret": {
"secretName": "credentials"
}
}
]
}
}
}
}
' --image=gcr.io/some-project/news/model-run:development
Any solution will be appreciated
The node was low on resource: memory. Container model-run was using 1904944Ki, which exceeds its request of 0.
At first the message looks like a lack of resources on the node itself, but the second part makes me believe you are correct in trying to raise the resource request for the container.
Just keep in mind that if you still face errors after this change, you might need to add more powerful node pools to your cluster.
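If you want to verify how much allocatable memory the current nodes have before resizing anything, something like this should be enough (kubectl top assumes a metrics server is available):
kubectl describe nodes | grep -A 7 Allocatable
kubectl top nodes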
I went through your command; there are a few issues I'd like to highlight:
kubectl run was deprecated in 1.12 for all resources except pods, and it was retired in version 1.18.
apiVersion: apps/v1beta1 is deprecated and, starting with v1.16, it is no longer served; I replaced it with apps/v1.
In spec.template.spec.containers, "resouces" is written instead of "resources".
After fixing that typo, the next issue is that requests and limits are written as arrays, but they need to be maps; otherwise you get this error:
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
error: v1beta1.Deployment.Spec: v1beta1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Resources: v1.ResourceRequirements.Limits: ReadMapCB: expect { or n, but found [, error found in #10 byte of ...|"limits":[{"cpu":"75|..., bigger context ...|Always","name":"model-run","resources":{"limits":[{"cpu":"750m","memory":"2500Mi"}],"requests":[{"cp|...
Here is the fixed format of your command:
kubectl run model-run --image-pull-policy=Always --overrides='{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"name": "model-run",
"labels": {
"app": "model-run"
}
},
"spec": {
"selector": {
"matchLabels": {
"app": "model-run"
}
},
"template": {
"metadata": {
"labels": {
"app": "model-run"
}
},
"spec": {
"containers": [
{
"name": "model-run",
"image": "nginx",
"imagePullPolicy": "Always",
"resources": {
"requests": {
"memory": "2048Mi",
"cpu": "500m"
},
"limits": {
"memory": "2500Mi",
"cpu": "750m"
}
},
"volumeMounts": [
{
"name": "credentials",
"readOnly": true,
"mountPath": "/path/collection/keys"
}
],
"env": [
{
"name": "GOOGLE_APPLICATION_CREDENTIALS",
"value": "/path/collection/keys/key.json"
}
]
}
],
"volumes": [
{
"name": "credentials",
"secret": {
"secretName": "credentials"
}
}
]
}
}
}
}' --image=gcr.io/some-project/news/model-run:development
Now, after applying it on my Kubernetes Engine cluster v1.15.11-gke.13, here is the output of kubectl get pod X -o yaml:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
model-run-7bd8d79c7d-brmrw 1/1 Running 0 17s
$ kubectl get pod model-run-7bd8d79c7d-brmrw -o yaml
apiVersion: v1
kind: Pod
metadata:
labels:
app: model-run
pod-template-hash: 7bd8d79c7d
run: model-run
name: model-run-7bd8d79c7d-brmrw
namespace: default
spec:
containers:
- env:
- name: GOOGLE_APPLICATION_CREDENTIALS
value: /path/collection/keys/key.json
image: nginx
imagePullPolicy: Always
name: model-run
resources:
limits:
cpu: 750m
memory: 2500Mi
requests:
cpu: 500m
memory: 2Gi
volumeMounts:
- mountPath: /path/collection/keys
name: credentials
readOnly: true
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-tjn5t
readOnly: true
nodeName: gke-cluster-115-default-pool-abca4833-4jtx
restartPolicy: Always
volumes:
- name: credentials
secret:
defaultMode: 420
secretName: credentials
You can see that the resource limits and requests were set.
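If you only want to inspect the resources block instead of the whole YAML, a jsonpath query like this works as well (the pod name will differ in your cluster):
kubectl get pod model-run-7bd8d79c7d-brmrw -o jsonpath='{.spec.containers[*].resources}'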
If you still have any questions, let me know in the comments!
It seems we cannot override limits through the --overrides flag.
What you can do is pass the requests and limits as flags on the kubectl run command itself:
kubectl run model-run --image-pull-policy=Always --requests='cpu=500m,memory=2048Mi' --limits='cpu=750m,memory=2500Mi' --overrides='
{
"apiVersion": "apps/v1beta1",
"kind": "Deployment",
"metadata": {
"name": "model-run",
"labels": {
"app": "model-run"
}
},
"spec": {
"selector": {
"matchLabels": {
"app": "model-run"
}
},
"template": {
"metadata": {
"labels": {
"app": "model-run"
}
},
"spec": {
"containers": [
{
"name": "model-run",
"image": "gcr.io/some-project/news/model-run:development",
"imagePullPolicy": "Always",
"resouces": {
"requests": [
{
"memory": "2048Mi",
"cpu": "500m"
}
],
"limits": [
{
"memory": "2500Mi",
"cpu": "750m"
}
]
},
"volumeMounts": [
{
"name": "credentials",
"readOnly": true,
"mountPath":"/path/collection/keys"
}
],
"env":[
{
"name":"GOOGLE_APPLICATION_CREDENTIALS",
"value":"/path/collection/keys/key.json"
}
]
}
],
"volumes": [
{
"name": "credentials",
"secret": {
"secretName": "credentials"
}
}
]
}
}
}
}
' --image=gcr.io/some-project/news/model-run:development
I'm trying to get my head around K8s coming from Docker Compose. I would like to set up my first pod with two containers, which I pushed to a registry. My question:
How do I get the IP via DNS into an environment variable, so that registrator can connect to consul? See the registrator container's args: consul://consul:8500. The consul host needs to be replaced with the env variable.
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "service-discovery",
"labels": {
"name": "service-discovery"
}
},
"spec": {
"containers": [
{
"name": "consul",
"image": "eu.gcr.io/{myproject}/consul",
"args": [
"-server",
"-bootstrap",
"-advertise=$(MY_POD_IP)"
],
"env": [{
"name": "MY_POD_IP",
"valueFrom": {
"fieldRef": {
"fieldPath": "status.podIP"
}
}
}],
"imagePullPolicy": "IfNotPresent",
"ports": [
{
"containerPort": 8300,
"name": "server"
},
{
"containerPort": 8400,
"name": "alt-port"
},
{
"containerPort": 8500,
"name": "ui-port"
},
{
"containerPort": 53,
"name": "udp-port"
},
{
"containerPort": 8443,
"name": "https-port"
}
]
},
{
"name": "registrator",
"image": "eu.gcr.io/{myproject}/registrator",
"args": [
"-internal",
"-ip=$(MY_POD_IP)",
"consul://consul:8500"
],
"env": [{
"name": "MY_POD_IP",
"valueFrom": {
"fieldRef": {
"fieldPath": "status.podIP"
}
}
}],
"imagePullPolicy": "Always"
}
]
}
}
Exposing pods to other applications is done with a Service in Kubernetes. Once you've defined a service, you can use environment variables related to that service within your pods. Exposing the pod directly is not a good idea, as pods might get rescheduled.
When e.g. using a service like this:
apiVersion: v1
kind: Service
metadata:
name: consul
namespace: kube-system
labels:
name: consul
spec:
ports:
- name: http
port: 8500
- name: rpc
port: 8400
- name: serflan
port: 8301
- name: serfwan
port: 8302
- name: server
port: 8300
- name: consuldns
port: 8600
selector:
app: consul
The related environment variable will be CONSUL_SERVICE_IP
Anyway, it seems others have actually stopped using those environment variables, for the reasons described here.
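If you prefer to avoid the service environment variables entirely, the registrator args can point at the Service through cluster DNS instead; a minimal sketch assuming the consul Service above (deployed in the kube-system namespace):
"args": [
  "-internal",
  "-ip=$(MY_POD_IP)",
  "consul://consul.kube-system.svc.cluster.local:8500"
]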