Kubernetes: Error kubectl edit deployment

I'm trying to edit a deployment in Kubernetes with:
kubectl -n <namespace> edit deployment <deployment_name>
After entering the command, a vi window for editing appears, and I make some changes, for example in the command section or in the volumeMounts section.
But I get the following error:
A copy of your changes has been stored to "/tmp/kubectl-edit-hv5dh.yaml"
error: map: map[] does not contain declared merge key: name
Can someone help with this?
Attached is the deployment file of the apiserver from kubectl edit:
kubectl -n federation-system edit deployment apiserver
(the lines between ** ** are the lines I added)
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    federation.alpha.kubernetes.io/federation-name: fed
  creationTimestamp: 2018-04-01T13:26:40Z
  generation: 1
  labels:
    app: federated-cluster
  name: apiserver
  namespace: federation-system
  resourceVersion: "393140"
  selfLink: /apis/extensions/v1beta1/namespaces/federation-system/deployments/apiserver
  uid: <uid>
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: federated-cluster
      module: federation-apiserver
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      annotations:
        federation.alpha.kubernetes.io/federation-name: fed
      creationTimestamp: null
      labels:
        app: federated-cluster
        module: federation-apiserver
      name: apiserver
    spec:
      containers:
      - command:
        - /fcp
        - federation-apiserver
        - --admission-control=NamespaceLifecycle
        - --advertise-address=<master-ip>
        - --bind-address=0.0.0.0
        - --client-ca-file=/etc/federation/apiserver/ca.crt
        - --etcd-servers=http://localhost:2379
        - --secure-port=8443
        - --tls-cert-file=/etc/federation/apiserver/server.crt
        - --tls-private-key-file=/etc/federation/apiserver/server.key
        **- --enable-admission-plugins=SchedulingPolicy
        - --admission-control-config-file=/etc/kubernetes/admission/config.yml**
        image: gcr.io/k8s-jkns-e2e-gce-federation/fcp-amd64:v1.9.0-alpha.3
        imagePullPolicy: IfNotPresent
        name: apiserver
        ports:
        - containerPort: 8443
          name: https
          protocol: TCP
        - containerPort: 8080
          name: local
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/federation/apiserver
          name: apiserver-credentials
          readOnly: true
        **volumeMounts:
        - mountPath: /etc/kubernetes/admission
          name: admission-config**
      - command:
        - /usr/local/bin/etcd
        - --data-dir
        - /var/etcd/data
        image: gcr.io/google_containers/etcd:3.1.10
        imagePullPolicy: IfNotPresent
        name: etcd
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - {}
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: apiserver-credentials
        secret:
          defaultMode: 420
          secretName: apiserver-credentials
      **- name: admission-config
        configMap:
          name: admission**
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2018-04-01T13:26:40Z
    lastUpdateTime: 2018-04-01T13:26:40Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2018-04-01T13:26:40Z
    lastUpdateTime: 2018-04-01T13:27:20Z
    message: ReplicaSet "apiserver-8484fd45f8" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
This happened after I created the ConfigMap:
kubectl create -f scheduling-policy-admission.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: admission
  namespace: federation-system
data:
  config.yml: |
    apiVersion: apiserver.k8s.io/v1alpha1
    kind: AdmissionConfiguration
    plugins:
    - name: SchedulingPolicy
      path: /etc/kubernetes/admission/scheduling-policy-config.yml
  scheduling-policy-config.yml: |
    kubeconfig: /etc/kubernetes/admission/opa-kubeconfig
  opa-kubeconfig: |
    clusters:
    - name: opa-api
      cluster:
        server: http://opa.federation-system.svc.cluster.local:8181/v0/data/kubernetes/placement
    users:
    - name: scheduling-policy
      user:
        token: deadbeefsecret
    contexts:
    - name: default
      context:
        cluster: opa-api
        user: scheduling-policy
    current-context: default
I'm trying to configure an admission controller in the Federation API.
Thanks,

dnsPolicy: ClusterFirst
# DELETE imagePullSecrets:
# DELETE - {}
restartPolicy: Always
I would strongly recommend removing that imagePullSecrets block. Those objects have a merge key of name, but that entry has no name, so it can very easily cause the error you are experiencing. If the YAML was given to your editor in that condition, then I am almost certain that is a Kubernetes bug: it should always(?) allow round-tripping YAML via kubectl edit, if for no other reason than this situation right here.
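If the Deployment genuinely needs a pull secret, the entry has to carry a name so the strategic merge patch has its merge key. A minimal sketch, assuming a placeholder secret name of registry-credentials:
imagePullSecrets:
- name: registry-credentials  # placeholder secret name; an entry of just {} has no "name" merge key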

Related

Communication between pods

I am currently in the process of setting up sentry.io, but I am having problems setting it up in OpenShift 3.11.
I have pods running for Sentry itself, PostgreSQL, Redis and Memcached, but according to the log messages they are not able to communicate with each other.
sentry.exceptions.InvalidConfiguration: Error 111 connecting to 127.0.0.1:6379. Connection refused.
Do I need to create a network like in Docker, or should the pods (all in the same namespace) be able to talk to each other by default? I have admin rights for the complete project, so I can also work with the console and not only the web interface.
Best wishes
EDIT: Adding the deployment config for Sentry and its service, and for the sake of simplicity the Postgres config and service. I also blanked out some unnecessary information with the keyword BLANK; if I went overboard please let me know and I'll look it up.
Deployment config for sentry:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: BLANK
generation: 20
labels:
app: sentry
name: sentry
namespace: test
resourceVersion: '506667843'
selfLink: BLANK
uid: BLANK
spec:
replicas: 1
selector:
app: sentry
deploymentconfig: sentry
strategy:
activeDeadlineSeconds: 21600
resources: {}
rollingParams:
intervalSeconds: 1
maxSurge: 25%
maxUnavailable: 25%
timeoutSeconds: 600
updatePeriodSeconds: 1
type: Rolling
template:
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: null
labels:
app: sentry
deploymentconfig: sentry
spec:
containers:
- env:
- name: SENTRY_SECRET_KEY
value: Iamsosecret
- name: C_FORCE_ROOT
value: '1'
- name: SENTRY_FILESTORE_DIR
value: /var/lib/sentry/files/data
image: BLANK
imagePullPolicy: Always
name: sentry
ports:
- containerPort: 9000
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/sentry/files
name: sentry-1
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- emptyDir: {}
name: sentry-1
test: false
triggers:
- type: ConfigChange
- imageChangeParams:
automatic: true
containerNames:
- sentry
from:
kind: ImageStreamTag
name: 'sentry:latest'
namespace: catcloud
lastTriggeredImage: BLANK
type: ImageChange
status:
availableReplicas: 1
conditions:
- lastTransitionTime: BLANK
lastUpdateTime: BLANK
message: Deployment config has minimum availability.
status: 'True'
type: Available
- lastTransitionTime: BLANK
lastUpdateTime: BLANK
message: replication controller "sentry-19" successfully rolled out
reason: NewReplicationControllerAvailable
status: 'True'
type: Progressing
details:
causes:
- type: ConfigChange
message: config change
latestVersion: 19
observedGeneration: 20
readyReplicas: 1
replicas: 1
unavailableReplicas: 0
updatedReplicas: 1
Service for sentry:
apiVersion: v1
kind: Service
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: BLANK
labels:
app: sentry
name: sentry
namespace: test
resourceVersion: '505555608'
selfLink: BLANK
uid: BLANK
spec:
clusterIP: BLANK
ports:
- name: 9000-tcp
port: 9000
protocol: TCP
targetPort: 9000
selector:
deploymentconfig: sentry
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
Deployment config for postgresql:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: BLANK
generation: 10
labels:
app: postgres
type: backend
name: postgres
namespace: test
resourceVersion: '506664185'
selfLink: BLANK
uid: BLANK
spec:
replicas: 1
selector:
app: postgres
deploymentconfig: postgres
type: backend
strategy:
activeDeadlineSeconds: 21600
resources: {}
rollingParams:
intervalSeconds: 1
maxSurge: 25%
maxUnavailable: 25%
timeoutSeconds: 600
updatePeriodSeconds: 1
type: Rolling
template:
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: null
labels:
app: postgres
deploymentconfig: postgres
type: backend
spec:
containers:
- env:
- name: PGDATA
value: /var/lib/postgresql/data/sql
- name: POSTGRES_HOST_AUTH_METHOD
value: trust
- name: POSTGRESQL_USER
value: sentry
- name: POSTGRESQL_PASSWORD
value: sentry
- name: POSTGRESQL_DATABASE
value: sentry
image: BLANK
imagePullPolicy: Always
name: postgres
ports:
- containerPort: 5432
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: volume-uirge
subPath: sql
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
runAsUser: 2000020900
terminationGracePeriodSeconds: 30
volumes:
- name: volume-uirge
persistentVolumeClaim:
claimName: postgressql
test: false
triggers:
- type: ConfigChange
- imageChangeParams:
automatic: true
containerNames:
- postgres
from:
kind: ImageStreamTag
name: 'postgres:latest'
namespace: catcloud
lastTriggeredImage: BLANK
type: ImageChange
status:
availableReplicas: 1
conditions:
- lastTransitionTime: BLANK
lastUpdateTime: BLANK
message: Deployment config has minimum availability.
status: 'True'
type: Available
- lastTransitionTime: BLANK
lastUpdateTime: BLANK
message: replication controller "postgres-9" successfully rolled out
reason: NewReplicationControllerAvailable
status: 'True'
type: Progressing
details:
causes:
- type: ConfigChange
message: config change
latestVersion: 9
observedGeneration: 10
readyReplicas: 1
replicas: 1
unavailableReplicas: 0
updatedReplicas: 1
Service config postgresql:
apiVersion: v1
kind: Service
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: BLANK
labels:
app: postgres
type: backend
name: postgres
namespace: catcloud
resourceVersion: '506548841'
selfLink: /api/v1/namespaces/catcloud/services/postgres
uid: BLANK
spec:
clusterIP: BLANK
ports:
- name: 5432-tcp
port: 5432
protocol: TCP
targetPort: 5432
selector:
deploymentconfig: postgres
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
Pods (even in the same namespace) are not able to talk directly to each other by default. You need to create a Service in order to allow a pod to receive connections from another pod. In general, one pod connects to another pod via the latter's Service, as I illustrated below.
The connection info would look something like <servicename>:<serviceport> (e.g. elasticsearch-master:9200) rather than localhost:port.
You can read https://kubernetes.io/docs/concepts/services-networking/service/ for further info on a service.
N.B: localhost:port will only work for containers running inside the same pod to connect to each other, just like how nginx connects to gravitee-mgmt-api and gravitee-mgmt-ui in my illustration above.
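As a quick check (a sketch: <sentry-pod> is a placeholder, and this assumes getent is available in the sentry image), you can confirm the Service name resolves from inside the sentry pod and then point sentry at it instead of 127.0.0.1:
# Resolve the postgres Service via cluster DNS from inside the sentry pod (same namespace)
kubectl exec -it <sentry-pod> -- getent hosts postgres
# Sentry would then connect to postgres:5432 (and to the redis Service by its name) rather than 127.0.0.1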
To me it looks like you didn't configure sentry correctly: you are not providing the sentry pod with the credentials it needs to connect to the PostgreSQL and Redis pods.
env:
- name: SENTRY_SECRET_KEY
  valueFrom:
    secretKeyRef:
      name: sentry-sentry
      key: sentry-secret
- name: SENTRY_DB_USER
  value: "sentry"
- name: SENTRY_DB_NAME
  value: "sentry"
- name: SENTRY_DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: sentry-postgresql
      key: postgres-password
- name: SENTRY_POSTGRES_HOST
  value: sentry-postgresql
- name: SENTRY_POSTGRES_PORT
  value: "5432"
- name: SENTRY_REDIS_PASSWORD
  valueFrom:
    secretKeyRef:
      name: sentry-redis
      key: redis-password
- name: SENTRY_REDIS_HOST
  value: sentry-redis
- name: SENTRY_REDIS_PORT
  value: "6379"
- name: SENTRY_EMAIL_HOST
  value: "smtp"
- name: SENTRY_EMAIL_PORT
  value: "25"
- name: SENTRY_EMAIL_USER
  value: ""
- name: SENTRY_EMAIL_PASSWORD
  valueFrom:
    secretKeyRef:
      name: sentry-sentry
      key: smtp-password
- name: SENTRY_EMAIL_USE_TLS
  value: "false"
- name: SENTRY_SERVER_EMAIL
  value: "sentry@sentry.local"
For more info you can refer to this example, where sentry is configured:
https://github.com/maty21/sentry-kubernetes/blob/master/sentry.yaml
For communication between pods, localhost or 127.0.0.1 does not work.
Get the IP of any pod using
kubectl describe pod <podname>
Use that IP in the other pod to communicate with the above pod.
Since pod IPs change when a pod is recreated, you should ideally use a Kubernetes Service, specifically the ClusterIP type, for communication between pods within the cluster.
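For example, to see a pod's current IP (the pod name is a placeholder):
# Print the pod's IP; note that it changes whenever the pod is recreated,
# which is why a ClusterIP Service name is the stable address to use instead
kubectl get pod <podname> -o jsonpath='{.status.podIP}'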

Azure AKS backup using Velero

I noticed that Velero can only back up AKS PVCs if those PVCs are disks and not Azure file shares. To handle this I tried to use restic to back up the file shares themselves, but it gives me a strange log:
This is what my actual pod looks like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    backup.velero.io/backup-volumes: grafana-data
    deployment.kubernetes.io/revision: "17"
And the log of my backup:
time="2020-05-26T13:51:54Z" level=info msg="Adding pvc grafana-data to additionalItems" backup=velero/grafana-test-volume cmd=/velero logSource="pkg/backup/pod_action.go:67" pluginName=velero
time="2020-05-26T13:51:54Z" level=info msg="Backing up item" backup=velero/grafana-test-volume group=v1 logSource="pkg/backup/item_backupper.go:169" name=grafana-data namespace=grafana resource=persistentvolumeclaims
time="2020-05-26T13:51:54Z" level=info msg="Executing custom action" backup=velero/grafana-test-volume group=v1 logSource="pkg/backup/item_backupper.go:330" name=grafana-data namespace=grafana resource=persistentvolumeclaims
time="2020-05-26T13:51:54Z" level=info msg="Skipping item because it's already been backed up." backup=velero/grafana-test-volume group=v1 logSource="pkg/backup/item_backupper.go:163" name=grafana-data namespace=grafana resource=persistentvolumeclaims
As you can see, somehow it did not back up the grafana-data volume, since it says it is already in the backup (which it actually is not).
My azurefile StorageClass holds these contents:
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1beta1","kind":"StorageClass","metadata":{"annotations":{},"labels":{"kubernetes.io/cluster-service":"true"},"name":"azurefile"},"parameters":{"skuName":"Standard_LRS"},"provisioner":"kubernetes.io/azure-file"}
  creationTimestamp: "2020-05-18T15:18:18Z"
  labels:
    kubernetes.io/cluster-service: "true"
  name: azurefile
  resourceVersion: "1421202"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/azurefile
  uid: e3cc4e52-c647-412a-bfad-81ab6eb222b1
mountOptions:
- nouser_xattr
parameters:
  skuName: Standard_LRS
provisioner: kubernetes.io/azure-file
reclaimPolicy: Delete
volumeBindingMode: Immediate
As you can see, I have already patched the storage class to include the nouser_xattr mount option, which was suggested earlier.
When I check the restic pod logs I see the following info:
E0524 10:22:08.908190 1 reflector.go:156] github.com/vmware-tanzu/velero/pkg/generated/informers/externalversions/factory.go:117: Failed to list *v1.PodVolumeBackup: Get https://10.0.0.1:443/apis/velero.io/v1/namespaces/velero/podvolumebackups?limit=500&resourceVersion=1212830: dial tcp 10.0.0.1:443: i/o timeout
I0524 10:22:08.909577 1 trace.go:116] Trace[1946538740]: "Reflector ListAndWatch" name:github.com/vmware-tanzu/velero/pkg/generated/informers/externalversions/factory.go:117 (started: 2020-05-24 10:21:38.908988405 +0000 UTC m=+487217.942875118) (total time: 30.000554209s):
Trace[1946538740]: [30.000554209s] [30.000554209s] END
When I check the PodVolumeBackup resources I see the contents below. I don't know what is expected here, though:
kubectl -n velero get podvolumebackups -o yaml
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
To summarize, I installed Velero like this:
velero install \
  --provider azure \
  --plugins velero/velero-plugin-for-microsoft-azure:v1.0.1 \
  --bucket $BLOB_CONTAINER \
  --secret-file ./credentials-velero \
  --backup-location-config resourceGroup=$AZURE_BACKUP_RESOURCE_GROUP,storageAccount=$AZURE_STORAGE_ACCOUNT_ID \
  --snapshot-location-config apiTimeout=5m,resourceGroup=$AZURE_BACKUP_RESOURCE_GROUP \
  --use-restic \
  --wait
The end result is the deployment described below
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
backup.velero.io/backup-volumes: app-upload
deployment.kubernetes.io/revision: "18"
creationTimestamp: "2020-05-18T16:55:38Z"
generation: 10
labels:
app: app
velero.io/backup-name: mekompas-tenant-production-20200518020012
velero.io/restore-name: mekompas-tenant-production-20200518020012-20200518185536
name: app
namespace: mekompas-tenant-production
resourceVersion: "427893"
selfLink: /apis/extensions/v1beta1/namespaces/mekompas-tenant-production/deployments/app
uid: c1961ec3-b7b1-4f81-9aae-b609fa3d31fc
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: app
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
annotations:
kubectl.kubernetes.io/restartedAt: "2020-05-18T20:24:19+02:00"
creationTimestamp: null
labels:
app: app
spec:
containers:
- image: nginx:1.17-alpine
imagePullPolicy: IfNotPresent
name: app-nginx
ports:
- containerPort: 80
name: http
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/www/html
name: app-files
- mountPath: /etc/nginx/conf.d
name: nginx-vhost
- env:
- name: CONF_DB_HOST
value: db.mekompas-tenant-production
- name: CONF_DB
value: mekompas
- name: CONF_DB_USER
value: mekompas
- name: CONF_DB_PASS
valueFrom:
secretKeyRef:
key: DATABASE_PASSWORD
name: secret
- name: CONF_EMAIL_FROM_ADDRESS
value: noreply@mekompas.nl
- name: CONF_EMAIL_FROM_NAME
value: mekompas
- name: CONF_EMAIL_REPLYTO_ADDRESS
value: slc@mekompas.nl
- name: CONF_UPLOAD_PATH
value: /uploads
- name: CONF_SMTP_HOST
value: smtp.sendgrid.net
- name: CONF_SMTP_PORT
value: "587"
- name: CONF_SMTP_USER
value: apikey
- name: CONF_SMTP_PASSWORD
valueFrom:
secretKeyRef:
key: MAIL_PASSWORD
name: secret
image: me.azurecr.io/mekompas/php-fpm-alpine:1.12.0
imagePullPolicy: Always
lifecycle:
postStart:
exec:
command:
- /bin/sh
- -c
- cp -r /app/. /var/www/html && chmod -R 777 /var/www/html/templates_c
&& chmod -R 777 /var/www/html/core/lib/htmlpurifier-4.9.3/library/HTMLPurifier/DefinitionCache
name: app-php
ports:
- containerPort: 9000
name: upstream-php
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/www/html
name: app-files
- mountPath: /uploads
name: app-upload
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: registrypullsecret
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: app-upload
persistentVolumeClaim:
claimName: upload
- emptyDir: {}
name: app-files
- configMap:
defaultMode: 420
name: nginx-vhost
name: nginx-vhost
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2020-05-18T18:12:20Z"
lastUpdateTime: "2020-05-18T18:12:20Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2020-05-18T16:55:38Z"
lastUpdateTime: "2020-05-20T16:03:48Z"
message: ReplicaSet "app-688699c5fb" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 10
readyReplicas: 1
replicas: 1
updatedReplicas: 1
Best,
Pim
Have you added nouser_xattr to your StorageClass mountOptions list?
This requirement is documented in GitHub issue 1800.
Also mentioned on the restic integration page (check under the Azure section), where they provide this snippet to patch your StorageClass resource:
kubectl patch storageclass/<YOUR_AZURE_FILE_STORAGE_CLASS_NAME> \
--type json \
--patch '[{"op":"add","path":"/mountOptions/-","value":"nouser_xattr"}]'
If you have no existing mountOptions list, you can try:
kubectl patch storageclass azurefile \
--type merge \
--patch '{"mountOptions": ["nouser_xattr"]}'
Ensure the pod template of the Deployment resource includes the annotation backup.velero.io/backup-volumes. Annotations on Deployment resources will propagate to ReplicaSet resources, but not to Pod resources.
Specifically, in your example the annotation backup.velero.io/backup-volumes: app-upload should be a child of spec.template.metadata.annotations, rather than a child of metadata.annotations.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    # *** move velero annotation from here ***
  labels:
    app: app
  name: app
  namespace: mekompas-tenant-production
spec:
  template:
    metadata:
      annotations:
        # *** velero annotation goes here in order to end up on the pod ***
        backup.velero.io/backup-volumes: app-upload
      labels:
        app: app
    spec:
      containers:
      - image: nginx:1.17-alpine
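After rolling out the change, it is worth confirming that the annotation really ended up on the running pod, since that is what restic inspects. A sketch, assuming the app=app label from the manifest above:
# The annotation should now be visible on the pod itself, not just on the Deployment
kubectl -n mekompas-tenant-production get pods -l app=app \
  -o jsonpath='{.items[0].metadata.annotations.backup\.velero\.io/backup-volumes}'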

Kubernetes JSON Deployment patch definition

I am trying to patch a deployment using the command below. I am trying to set an environment variable in the pod by passing an existing value from the metadata label definition of my YAML.
Patching command:
kubectl patch deployment deploytest3 --type=json -p='[{"op": "add", "path": "/spec/template/spec/containers/1/env/2","value": {"name": "CUSTOMERID","valueFrom": {"fieldRef": {"fieldPath": "metadata.labels['customerid']"}}}}]'
Below is my YAML for reference (this is for reference, not the original):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "10"
creationTimestamp: 2019-01-22T11:24:17Z
generation: 10
labels:
app: my-cloud
name: deploytest3
namespace: default
resourceVersion: "60076984"
selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/deploytest3
uid: 41950c24-1e38-11e9-ba0c-42010a8000d6
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: my-cloud
customerid: "gx0388d"
customername: deploytest3
environment: dev
tier: backend
version: 11.0-latest
strategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: my-cloud
customerid: "gx0388d"
customername: deploytest3
environment: dev
tier: backend
version: 11.0-latest
spec:
containers:
- args:
- -c
- /bin/aSoftMS --simple-media-server;/bin/core_find.sh
command:
- /bin/bash
image: us.gcr.io/data/myms:11.0-latest
imagePullPolicy: IfNotPresent
name: ms
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /opt/latest/system_rw/
name: deploytest3
- args:
- -c
command:
- /bin/bash
image: us.gcr.io/latest/core:11.0-latest
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /bin/CoreDumpGen
livenessProbe:
failureThreshold: 3
httpGet:
httpHeaders:
- name: User-Agent
value: DeploymentAdmin
path: /livenessstatus
port: 8443
scheme: HTTPS
initialDelaySeconds: 60
periodSeconds: 15
successThreshold: 1
timeoutSeconds: 15
env:
- name: latest_CUSTOMER
value: abc-test
- name: latest
value: abc
image: us.gcr.io/latest/latest/core2:11.0-latest
imagePullPolicy: IfNotPresent
name: core2
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
hostname: deploytest3
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 60
volumes:
- name: bucket-auth
secret:
defaultMode: 256
secretName: bucket-auth
status:
availableReplicas: 1
conditions:
- lastTransitionTime: 2019-01-22T11:24:17Z
lastUpdateTime: 2019-01-22T11:24:17Z
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: 2019-01-22T11:24:17Z
lastUpdateTime: 2019-05-31T13:49:03Z
message: ReplicaSet "deploytest3-7ddbc9b9c4" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 10
readyReplicas: 1
replicas: 1
updatedReplicas: 1
Below is the error I am getting while executing the above command:
The Deployment "deploytest3" is invalid: spec.template.spec.containers[2].env[2].valueFrom.fieldRef.fieldPath: Invalid value: "metadata.labels[customerid]": error converting fieldPath:
field label not supported: metadata.labels[customerid]
If I manually add the environment variable in the deployment, it works.
I think I am not setting the JSON format correctly. Can someone help me?
"fieldPath": "metadata.labels['customerid']" - is incorrect
Apparently, it should be "fieldPath": "spec.template.metadata.labels['customerid']"
This is not exactly kubectl patch, but this will also set the env variable from the deployment's label definition:
kubectl set env deployment/deploytest3 CUSTOMERID=$(kubectl get deployments deploytest3 -o jsonpath='{$.spec.template.metadata.labels.customerid}')
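Note that kubectl set env writes the value into the pod template as a plain literal, so the resulting entry in the deployment would look roughly like this (a sketch based on the label value above):
env:
- name: CUSTOMERID
  value: gx0388d  # literal copy of the label value, not a fieldRef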

Google Cloud Build can't find kubernetes deployment when updating image

I'm trying to set up a CI pipeline using Cloud Build. My build file builds and pushes the Docker images, and then uses the kubectl builder to update the images in a kubernetes deployment. However I'm getting the following error:
Error from server (NotFound): deployments.extensions "my-app" not found
Running: kubectl set image deployment my-app my-app-api=gcr.io/test-project-195004/my-app-api:ef53550e2ahy784a14iouyh79712c9f
I've verified via the UI that the deployment is active and has that name. I thought it could be a permissions issue, but as far as I know the Cloud Build service account has the Kubernetes Engine Admin role, and it successfully pulls the cluster auth data in the previous step.
EDIT: As requested, here is my build script:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app-api:$COMMIT_SHA', '-f', 'deploy/api/Dockerfile', '--no-cache', '.']
- name: 'gcr.io/cloud-builders/kubectl'
  args:
  - set
  - image
  - deployment
  - my-app
  - my-app=gcr.io/$PROJECT_ID/my-app-api:$COMMIT_SHA
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLUSTER}'
images: ['gcr.io/$PROJECT_ID/my-app-api:$COMMIT_SHA']
timeout: 5000s
And the deployment.yaml --
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-11-04T17:34:44Z
  generation: 1
  labels:
    app: my-app
  name: my-app
  namespace: my-app
  resourceVersion: "4370"
  selfLink: /apis/extensions/v1beta1/namespaces/my-app/deployments/my-app
  uid: 65kj54g3-e057-11e8-81bc-42010aa20094
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: my-app
    spec:
      containers:
      - image: gcr.io/test-project/my-app-api:f16erogierjf1abd436e733398a08e1b76ce6b712
        imagePullPolicy: IfNotPresent
        name: my-appi-api
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 3
  conditions:
  - lastTransitionTime: 2018-11-04T17:37:59Z
    lastUpdateTime: 2018-11-04T17:37:59Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 3
  replicas: 3
  updatedReplicas: 3
You need to define the namespace for your deployment when using kubectl if you're not using the default namespace.
Because your deployment uses namespace: my-app, you will need to add it to the command using the --namespace flag.
Here's how to do it inside cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app-api:$COMMIT_SHA', '-f', 'deploy/api/Dockerfile', '--no-cache', '.']
- name: 'gcr.io/cloud-builders/kubectl'
  args:
  - set
  - image
  - deployment
  - my-app
  - my-app=gcr.io/$PROJECT_ID/my-app-api:$COMMIT_SHA
  - --namespace
  - my-app
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLUSTER}'
images: ['gcr.io/$PROJECT_ID/my-app-api:$COMMIT_SHA']
timeout: 5000s
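For reference, the same fix applied to the plain kubectl command from the error message would look like this (a sketch of the same idea):
kubectl set image deployment my-app \
  my-app-api=gcr.io/test-project-195004/my-app-api:ef53550e2ahy784a14iouyh79712c9f \
  --namespace my-app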

Kubernetes isn't allowing me to create a pod with securityContext runAsUser

Summary:
I have pods with security context runAsUser: 1337 that fail to start because they are disallowed by policy. I have altered admission-control with no success (as suggested here and here).
What else do I need to force through this kind of security context?
Details
I'm working through the https://istio.io/docs/samples/bookinfo.html example to start porting over to istio.
I have a deployment named details-v1 (see below) from which a replica set and pod have been created. The pod is stuck in pending.
NAME READY STATUS RESTARTS AGE
details-v1-3207759430-nt9tt 0/2 Pending 0 34m
describe on the pod shows the cause of the error:
FailedValidation Error validating pod details-v1-3207759430-nt9tt.azs-master from api, ignoring: spec.initContainers[1].securityContext.privileged: Forbidden: disallowed by policy
In order to get this far, I have already made changes to the kube-apiserver:
/usr/local/bin/kube-apiserver \
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota \
--allow-privileged=true \
Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"creationTimestamp":null,"name":"details-v1","namespace":"azs-master"},"spec":{"replicas":1,"strategy":{},"template":{"metadata":{"annotations":{"alpha.istio.io/sidecar":"injected","alpha.istio.io/version":"jenkins#ubuntu-16-04-build-12ac793f80be71-0.1.6-dab2033","pod.beta.kubernetes.io/init-containers":"[{\"args\":[\"-p\",\"15001\",\"-u\",\"1337\"],\"image\":\"docker.io/istio/init:0.1\",\"imagePullPolicy\":\"Always\",\"name\":\"init\",\"securityContext\":{\"capabilities\":{\"add\":[\"NET_ADMIN\"]}}},{\"args\":[\"-c\",\"sysctl -w kernel.core_pattern=/tmp/core.%e.%p.%t \\u0026\\u0026 ulimit -c unlimited\"],\"command\":[\"/bin/sh\"],\"image\":\"alpine\",\"imagePullPolicy\":\"Always\",\"name\":\"enable-core-dump\",\"securityContext\":{\"privileged\":true}}]"},"creationTimestamp":null,"labels":{"app":"details","version":"v1"}},"spec":{"containers":[{"image":"istio/examples-bookinfo-details-v1","imagePullPolicy":"IfNotPresent","name":"details","ports":[{"containerPort":9080}],"resources":{}},{"args":["proxy","sidecar","-v","2"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"POD_IP","valueFrom":{"fieldRef":{"fieldPath":"status.podIP"}}}],"image":"docker.io/istio/proxy_debug:0.1","imagePullPolicy":"Always","name":"proxy","resources":{},"securityContext":{"runAsUser":1337},"volumeMounts":[{"mountPath":"/etc/certs","name":"istio-certs","readOnly":true}]}],"volumes":[{"name":"istio-certs","secret":{"secretName":"istio.default"}}]}}},"status":{}}
creationTimestamp: 2017-06-23T13:30:00Z
generation: 1
labels:
app: details
version: v1
name: details-v1
namespace: azs-master
resourceVersion: "29678612"
selfLink: /apis/extensions/v1beta1/namespaces/azs-master/deployments/details-v1
uid: 0eacea4a-5818-11e7-af0e-0a55ca98bb17
spec:
replicas: 1
selector:
matchLabels:
app: details
version: v1
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
annotations:
alpha.istio.io/sidecar: injected
alpha.istio.io/version: jenkins#ubuntu-16-04-build-12ac793f80be71-0.1.6-dab2033
pod.alpha.kubernetes.io/init-containers: '[{"name":"init","image":"docker.io/istio/init:0.1","args":["-p","15001","-u","1337"],"resources":{},"imagePullPolicy":"Always","securityContext":{"capabilities":{"add":["NET_ADMIN"]}}},{"name":"enable-core-dump","image":"alpine","command":["/bin/sh"],"args":["-c","sysctl
-w kernel.core_pattern=/tmp/core.%e.%p.%t \u0026\u0026 ulimit -c unlimited"],"resources":{},"imagePullPolicy":"Always","securityContext":{"privileged":true}}]'
pod.beta.kubernetes.io/init-containers: '[{"name":"init","image":"docker.io/istio/init:0.1","args":["-p","15001","-u","1337"],"resources":{},"imagePullPolicy":"Always","securityContext":{"capabilities":{"add":["NET_ADMIN"]}}},{"name":"enable-core-dump","image":"alpine","command":["/bin/sh"],"args":["-c","sysctl
-w kernel.core_pattern=/tmp/core.%e.%p.%t \u0026\u0026 ulimit -c unlimited"],"resources":{},"imagePullPolicy":"Always","securityContext":{"privileged":true}}]'
creationTimestamp: null
labels:
app: details
version: v1
spec:
containers:
- image: istio/examples-bookinfo-details-v1
imagePullPolicy: IfNotPresent
name: details
ports:
- containerPort: 9080
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
- args:
- proxy
- sidecar
- -v
- "2"
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
image: docker.io/istio/proxy_debug:0.1
imagePullPolicy: Always
name: proxy
resources: {}
securityContext:
runAsUser: 1337
terminationMessagePath: /dev/termination-log
volumeMounts:
- mountPath: /etc/certs
name: istio-certs
readOnly: true
dnsPolicy: ClusterFirst
restartPolicy: Always
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: istio-certs
secret:
defaultMode: 420
secretName: istio.default
status:
conditions:
- lastTransitionTime: 2017-06-23T13:30:00Z
lastUpdateTime: 2017-06-23T13:30:00Z
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 1
replicas: 1
unavailableReplicas: 1
updatedReplicas: 1
Kubernetes server version: 1.5.6
The Pending state indicates that this was blocked by the Kubelet, which also needs the --allow-privileged flag.
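In practice, on this version that means adding the flag to the kubelet invocation on each node and restarting it. Exact binary path, file location and unit name depend on how the nodes were provisioned, so treat this as a sketch mirroring the kube-apiserver change above:
# kubelet flags (e.g. in its systemd unit or config file on the node)
/usr/local/bin/kubelet \
  --allow-privileged=true \
# Then restart the kubelet, e.g.: systemctl daemon-reload && systemctl restart kubelet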