A pod cannot connect to a service by IP - kubernetes

My ingress pod is having trouble reaching two ClusterIP services by IP, while there are plenty of other ClusterIP services, including some in the same namespace, that it reaches without any problem. Another pod has no trouble reaching the affected service (I tried the default backend in the same namespace and it was fine).
Where should I look? Here are my actual services; the ingress pod cannot reach the first but can reach the second:
- apiVersion: v1
  kind: Service
  metadata:
    creationTimestamp: "2019-08-23T16:59:10Z"
    labels:
      app: pka-168-emtpy-id
      app.kubernetes.io/instance: palletman-pka-168-emtpy-id
      app.kubernetes.io/managed-by: Tiller
      helm.sh/chart: pal-0.0.1
      release: palletman-pka-168-emtpy-id
    name: pka-168-emtpy-id
    namespace: palletman
    resourceVersion: "108574168"
    selfLink: /api/v1/namespaces/palletman/services/pka-168-emtpy-id
    uid: 539364f9-c5c7-11e9-8699-0af40ce7ce3a
  spec:
    clusterIP: 100.65.111.47
    ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
    selector:
      app: pka-168-emtpy-id
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    creationTimestamp: "2019-03-05T19:57:26Z"
    labels:
      app: production
      app.kubernetes.io/instance: palletman
      app.kubernetes.io/managed-by: Tiller
      helm.sh/chart: pal-0.0.1
      release: palletman
    name: production
    namespace: palletman
    resourceVersion: "81337664"
    selfLink: /api/v1/namespaces/palletman/services/production
    uid: e671c5e0-3f80-11e9-a1fc-0af40ce7ce3a
  spec:
    clusterIP: 100.65.82.246
    ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
    selector:
      app: production
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
My ingress pod:
apiVersion: v1
kind: Pod
metadata:
annotations:
sumologic.com/format: text
sumologic.com/sourceCategory: 103308/CT/LI/kube_ingress
sumologic.com/sourceName: kube_ingress
creationTimestamp: "2019-08-21T19:34:48Z"
generateName: ingress-nginx-65877649c7-
labels:
app: ingress-nginx
k8s-addon: ingress-nginx.addons.k8s.io
pod-template-hash: "2143320573"
name: ingress-nginx-65877649c7-5npmp
namespace: kube-ingress
ownerReferences:
- apiVersion: extensions/v1beta1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: ingress-nginx-65877649c7
uid: 97db28a9-c43f-11e9-920a-0af40ce7ce3a
resourceVersion: "108278133"
selfLink: /api/v1/namespaces/kube-ingress/pods/ingress-nginx-65877649c7-5npmp
uid: bcd92d96-c44a-11e9-8699-0af40ce7ce3a
spec:
containers:
- args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
- --configmap=$(POD_NAMESPACE)/ingress-nginx
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.13
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: ingress-nginx
ports:
- containerPort: 80
name: http
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-dg5wn
readOnly: true
dnsPolicy: ClusterFirst
nodeName: ip-10-55-131-177.eu-west-1.compute.internal
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 60
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: default-token-dg5wn
secret:
defaultMode: 420
secretName: default-token-dg5wn
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2019-08-21T19:34:48Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2019-08-21T19:34:50Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2019-08-21T19:34:48Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://d597673f4f38392a52e9537e6dd2473438c62c2362a30e3d58bf8a98e177eb12
image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.13
imageID: docker-pullable://gcr.io/google_containers/nginx-ingress-controller#sha256:c9d2e67f8096d22564a6507794e1a591fbcb6461338fc655a015d76a06e8dbaa
lastState: {}
name: ingress-nginx
ready: true
restartCount: 0
state:
running:
startedAt: "2019-08-21T19:34:50Z"
hostIP: 10.55.131.177
phase: Running
podIP: 172.6.218.18
qosClass: BestEffort
startTime: "2019-08-21T19:34:48Z"

It could be connectivity to the node where your pod is running, or something related to the network overlay. You can check where that pod is running:
$ kubectl get pod -o=json | jq .items[0].spec.nodeName
Check whether the node is Ready:
$ kubectl get node <node-from-above>
If it is Ready, SSH into the node to troubleshoot further:
$ ssh <node-from-above>
Is your overlay pod (Calico, Weave, or whichever CNI you use) running on that node?
You can troubleshoot further by connecting to the pod/container:
# From <node-from-above>
$ docker exec -it <container-id-in-pod> bash
# Check connectivity (ping, dig, curl, etc.)
You can also do this from the kubectl command line (if you have network connectivity to the node):
$ kubectl exec -it <pod-id> -c <container-name> bash
# Troubleshoot...
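Since the pod can reach one ClusterIP service in the same namespace but not the other, it is also worth confirming that the unreachable Service actually has healthy endpoints, and then trying it directly from the ingress pod. A quick sketch using the service and pod names from the question (adjust as needed, and assuming curl is available in the controller image):
# Does the Service have endpoints backing it?
$ kubectl -n palletman get endpoints pka-168-emtpy-id
# Try the ClusterIP and the service DNS name from inside the ingress pod
$ kubectl -n kube-ingress exec -it ingress-nginx-65877649c7-5npmp -- curl -v http://100.65.111.47
$ kubectl -n kube-ingress exec -it ingress-nginx-65877649c7-5npmp -- curl -v http://pka-168-emtpy-id.palletman.svc.cluster.local
If there are no endpoints, the Service selector does not match any ready pods; if the DNS name works but the IP does not, look at kube-proxy/iptables on that node.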

Related

Communication between pods

I am currently setting up sentry.io, but I am having problems getting it to run on OpenShift 3.11.
I have pods running for Sentry itself, PostgreSQL, Redis and memcached, but according to the log messages they are not able to communicate with each other.
sentry.exceptions.InvalidConfiguration: Error 111 connecting to 127.0.0.1:6379. Connection refused.
Do I need to create a network like in Docker, or should the pods (all in the same namespace) be able to talk to each other by default? I have admin rights for the whole project, so I can also work with the console and not only the web interface.
Best wishes
EDIT: Adding the deployment config for Sentry and its service and, for the sake of simplicity, the Postgres config and service as well. I also blanked out some information with the keyword BLANK; if I went overboard, please let me know and I'll look it up.
Deployment config for sentry:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: BLANK
generation: 20
labels:
app: sentry
name: sentry
namespace: test
resourceVersion: '506667843'
selfLink: BLANK
uid: BLANK
spec:
replicas: 1
selector:
app: sentry
deploymentconfig: sentry
strategy:
activeDeadlineSeconds: 21600
resources: {}
rollingParams:
intervalSeconds: 1
maxSurge: 25%
maxUnavailable: 25%
timeoutSeconds: 600
updatePeriodSeconds: 1
type: Rolling
template:
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: null
labels:
app: sentry
deploymentconfig: sentry
spec:
containers:
- env:
- name: SENTRY_SECRET_KEY
value: Iamsosecret
- name: C_FORCE_ROOT
value: '1'
- name: SENTRY_FILESTORE_DIR
value: /var/lib/sentry/files/data
image: BLANK
imagePullPolicy: Always
name: sentry
ports:
- containerPort: 9000
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/sentry/files
name: sentry-1
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- emptyDir: {}
name: sentry-1
test: false
triggers:
- type: ConfigChange
- imageChangeParams:
automatic: true
containerNames:
- sentry
from:
kind: ImageStreamTag
name: 'sentry:latest'
namespace: catcloud
lastTriggeredImage: BLANK
type: ImageChange
status:
availableReplicas: 1
conditions:
- lastTransitionTime: BLANK
lastUpdateTime: BLANK
message: Deployment config has minimum availability.
status: 'True'
type: Available
- lastTransitionTime: BLANK
lastUpdateTime: BLANK
message: replication controller "sentry-19" successfully rolled out
reason: NewReplicationControllerAvailable
status: 'True'
type: Progressing
details:
causes:
- type: ConfigChange
message: config change
latestVersion: 19
observedGeneration: 20
readyReplicas: 1
replicas: 1
unavailableReplicas: 0
updatedReplicas: 1
Service for sentry:
apiVersion: v1
kind: Service
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: BLANK
labels:
app: sentry
name: sentry
namespace: test
resourceVersion: '505555608'
selfLink: BLANK
uid: BLANK
spec:
clusterIP: BLANK
ports:
- name: 9000-tcp
port: 9000
protocol: TCP
targetPort: 9000
selector:
deploymentconfig: sentry
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
Deployment config for postgresql:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: BLANK
generation: 10
labels:
app: postgres
type: backend
name: postgres
namespace: test
resourceVersion: '506664185'
selfLink: BLANK
uid: BLANK
spec:
replicas: 1
selector:
app: postgres
deploymentconfig: postgres
type: backend
strategy:
activeDeadlineSeconds: 21600
resources: {}
rollingParams:
intervalSeconds: 1
maxSurge: 25%
maxUnavailable: 25%
timeoutSeconds: 600
updatePeriodSeconds: 1
type: Rolling
template:
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: null
labels:
app: postgres
deploymentconfig: postgres
type: backend
spec:
containers:
- env:
- name: PGDATA
value: /var/lib/postgresql/data/sql
- name: POSTGRES_HOST_AUTH_METHOD
value: trust
- name: POSTGRESQL_USER
value: sentry
- name: POSTGRESQL_PASSWORD
value: sentry
- name: POSTGRESQL_DATABASE
value: sentry
image: BLANK
imagePullPolicy: Always
name: postgres
ports:
- containerPort: 5432
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: volume-uirge
subPath: sql
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
runAsUser: 2000020900
terminationGracePeriodSeconds: 30
volumes:
- name: volume-uirge
persistentVolumeClaim:
claimName: postgressql
test: false
triggers:
- type: ConfigChange
- imageChangeParams:
automatic: true
containerNames:
- postgres
from:
kind: ImageStreamTag
name: 'postgres:latest'
namespace: catcloud
lastTriggeredImage: BLANK
type: ImageChange
status:
availableReplicas: 1
conditions:
- lastTransitionTime: BLANK
lastUpdateTime: BLANK
message: Deployment config has minimum availability.
status: 'True'
type: Available
- lastTransitionTime: BLANK
lastUpdateTime: BLANK
message: replication controller "postgres-9" successfully rolled out
reason: NewReplicationControllerAvailable
status: 'True'
type: Progressing
details:
causes:
- type: ConfigChange
message: config change
latestVersion: 9
observedGeneration: 10
readyReplicas: 1
replicas: 1
unavailableReplicas: 0
updatedReplicas: 1
Service config postgresql:
apiVersion: v1
kind: Service
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: BLANK
labels:
app: postgres
type: backend
name: postgres
namespace: catcloud
resourceVersion: '506548841'
selfLink: /api/v1/namespaces/catcloud/services/postgres
uid: BLANK
spec:
clusterIP: BLANK
ports:
- name: 5432-tcp
port: 5432
protocol: TCP
targetPort: 5432
selector:
deploymentconfig: postgres
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
Pods (even in the same namespace) are not able to talk directly to each other by default. You need to create a Service in order to allow a pod to receive connections from another pod. In general, one pod connects to another pod via the latter's Service.
The connection info would look something like <servicename>:<serviceport> (e.g. elasticsearch-master:9200) rather than localhost:port.
You can read https://kubernetes.io/docs/concepts/services-networking/service/ for further information on Services.
N.B.: localhost:port only works for containers running inside the same pod to connect to each other, for example an nginx container talking to gravitee-mgmt-api and gravitee-mgmt-ui containers in the same pod.
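To verify this from inside the Sentry pod, you can try resolving and connecting to the postgres Service by name. A rough sketch, assuming the pod and Service names from the manifests above and that the image ships the usual tools (getent, bash):
# Resolve the Service name via cluster DNS
$ oc rsh <sentry-pod-name> getent hosts postgres
# Probe the port directly using bash's /dev/tcp
$ oc rsh <sentry-pod-name> bash -c '</dev/tcp/postgres/5432 && echo reachable'
If name resolution works, point Sentry at postgres:5432 (or the fully qualified postgres.<namespace>.svc.cluster.local:5432 if the Service lives in a different namespace) instead of 127.0.0.1.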
To me it looks like Sentry is not configured correctly: you are not providing the Sentry pod with the credentials and hostnames it needs to connect to the PostgreSQL and Redis pods.
env:
- name: SENTRY_SECRET_KEY
  valueFrom:
    secretKeyRef:
      name: sentry-sentry
      key: sentry-secret
- name: SENTRY_DB_USER
  value: "sentry"
- name: SENTRY_DB_NAME
  value: "sentry"
- name: SENTRY_DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: sentry-postgresql
      key: postgres-password
- name: SENTRY_POSTGRES_HOST
  value: sentry-postgresql
- name: SENTRY_POSTGRES_PORT
  value: "5432"
- name: SENTRY_REDIS_PASSWORD
  valueFrom:
    secretKeyRef:
      name: sentry-redis
      key: redis-password
- name: SENTRY_REDIS_HOST
  value: sentry-redis
- name: SENTRY_REDIS_PORT
  value: "6379"
- name: SENTRY_EMAIL_HOST
  value: "smtp"
- name: SENTRY_EMAIL_PORT
  value: "25"
- name: SENTRY_EMAIL_USER
  value: ""
- name: SENTRY_EMAIL_PASSWORD
  valueFrom:
    secretKeyRef:
      name: sentry-sentry
      key: smtp-password
- name: SENTRY_EMAIL_USE_TLS
  value: "false"
- name: SENTRY_SERVER_EMAIL
  value: "sentry@sentry.local"
For more info you can refer to this example, where Sentry is configured:
https://github.com/maty21/sentry-kubernetes/blob/master/sentry.yaml
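Note that the env block above references secrets named sentry-sentry, sentry-postgresql and sentry-redis (those names come from the linked manifest, not from your cluster). If you adapt this approach, you would create them yourself first, roughly like this (values are placeholders):
$ kubectl create secret generic sentry-postgresql --from-literal=postgres-password=<your-db-password>
$ kubectl create secret generic sentry-redis --from-literal=redis-password=<your-redis-password>
$ kubectl create secret generic sentry-sentry --from-literal=sentry-secret=<random-secret-key> --from-literal=smtp-password=<smtp-password>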
For communication between pods, localhost or 127.0.0.1 does not work.
Get the IP of any pod using
kubectl describe pod <podname>
and use that IP in the other pod to communicate with it.
However, since pod IPs change when a pod is recreated, you should ideally use a Kubernetes Service (specifically of type ClusterIP) for communication between pods within the cluster.
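For example, a quick way to see the current pod IPs and the Services in front of them (a sketch; substitute your namespace):
$ kubectl -n <namespace> get pods -o wide
$ kubectl -n <namespace> get svc
With a ClusterIP Service in place, the Sentry pod can reach PostgreSQL at postgres:5432 (or postgres.<namespace>.svc.cluster.local:5432 if the Service lives in a different namespace), no matter which pod IP currently backs it.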

HPA showing unknown in k8s

I configured HPA using a command as shown below
kubectl autoscale deployment isamruntime-v1 --cpu-percent=20 --min=1 --max=3 --namespace=default
horizontalpodautoscaler.autoscaling/isamruntime-v1 autoscaled
However, the HPA cannot identify the CPU load.
pranam@UNKNOWN kubernetes % kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
isamruntime-v1 Deployment/isamruntime-v1 <unknown>/20% 1 3 0 3s
I read a number of articles which suggested installing metrics server. So, I did that.
pranam@UNKNOWN kubernetes % kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator configured
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader configured
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io configured
serviceaccount/metrics-server configured
deployment.apps/metrics-server configured
service/metrics-server configured
clusterrole.rbac.authorization.k8s.io/system:metrics-server configured
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server configured
I can see the metrics server.
pranam@UNKNOWN kubernetes % kubectl get pods -o wide --namespace=kube-system
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-7d88b45844-lz8zw 1/1 Running 0 22d 10.164.27.28 10.164.27.28 <none> <none>
calico-node-bsx6p 1/1 Running 0 8d 10.164.27.39 10.164.27.39 <none> <none>
calico-node-g229m 1/1 Running 0 8d 10.164.27.46 10.164.27.46 <none> <none>
calico-node-slwrh 1/1 Running 0 22d 10.164.27.28 10.164.27.28 <none> <none>
calico-node-tztjg 1/1 Running 0 8d 10.164.27.44 10.164.27.44 <none> <none>
coredns-7d6bb98ccc-d8nrs 1/1 Running 0 25d 172.30.93.205 10.164.27.28 <none> <none>
coredns-7d6bb98ccc-n28dm 1/1 Running 0 25d 172.30.93.204 10.164.27.28 <none> <none>
coredns-7d6bb98ccc-zx5jx 1/1 Running 0 25d 172.30.93.197 10.164.27.28 <none> <none>
coredns-autoscaler-848db65fc6-lnfvf 1/1 Running 0 25d 172.30.93.201 10.164.27.28 <none> <none>
dashboard-metrics-scraper-576c46d9bd-k6z85 1/1 Running 0 25d 172.30.93.195 10.164.27.28 <none> <none>
ibm-file-plugin-7c57965855-494bz 1/1 Running 0 22d 172.30.93.216 10.164.27.28 <none> <none>
ibm-iks-cluster-autoscaler-7df84fb95c-fhtgv 1/1 Running 0 2d23h 172.30.137.98 10.164.27.46 <none> <none>
ibm-keepalived-watcher-9w4gb 1/1 Running 0 8d 10.164.27.39 10.164.27.39 <none> <none>
ibm-keepalived-watcher-ps5zm 1/1 Running 0 8d 10.164.27.46 10.164.27.46 <none> <none>
ibm-keepalived-watcher-rzxbs 1/1 Running 0 8d 10.164.27.44 10.164.27.44 <none> <none>
ibm-keepalived-watcher-w6mxb 1/1 Running 0 25d 10.164.27.28 10.164.27.28 <none> <none>
ibm-master-proxy-static-10.164.27.28 2/2 Running 0 25d 10.164.27.28 10.164.27.28 <none> <none>
ibm-master-proxy-static-10.164.27.39 2/2 Running 0 8d 10.164.27.39 10.164.27.39 <none> <none>
ibm-master-proxy-static-10.164.27.44 2/2 Running 0 8d 10.164.27.44 10.164.27.44 <none> <none>
ibm-master-proxy-static-10.164.27.46 2/2 Running 0 8d 10.164.27.46 10.164.27.46 <none> <none>
ibm-storage-watcher-67466b969f-ps55m 1/1 Running 0 22d 172.30.93.217 10.164.27.28 <none> <none>
kubernetes-dashboard-c6b4b9d77-27zwb 1/1 Running 2 22d 172.30.93.218 10.164.27.28 <none> <none>
metrics-server-79d847cf58-6frsf 2/2 Running 0 3m23s 172.30.93.226 10.164.27.28 <none> <none>
public-crbro6um6l04jalpqrsl5g-alb1-8465f75bb4-88vl5 4/4 Running 0 11h 172.30.93.225 10.164.27.28 <none> <none>
public-crbro6um6l04jalpqrsl5g-alb1-8465f75bb4-vx68d 4/4 Running 0 11h 172.30.137.104 10.164.27.46 <none> <none>
vpn-58b48cdc7c-4lp9c 1/1 Running 0 25d 172.30.93.193 10.164.27.28 <none> <none>
I am using Istio and sysdig. Not sure if that breaks anything. My k8s versions are shown below.
pranam@UNKNOWN kubernetes % kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.7", GitCommit:"be3d344ed06bff7a4fc60656200a93c74f31f9a4", GitTreeState:"clean", BuildDate:"2020-02-11T19:34:02Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.7+IKS", GitCommit:"3305158dfe9ee1f89f596ef260135dcba881848c", GitTreeState:"clean", BuildDate:"2020-06-17T18:32:22Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
My YAML file is
#Assumes create-docker-store-secret.sh used to create dockerlogin secret
#Assumes create-secrets.sh used to create key file, sam admin, and cfgsvc secrets
apiVersion: storage.k8s.io/v1beta1
# Create StorageClass with gidallocate=true to allow non-root user access to mount
# This is used by PostgreSQL container
kind: StorageClass
metadata:
name: ibmc-file-bronze-gid
labels:
kubernetes.io/cluster-service: "true"
provisioner: ibm.io/ibmc-file
parameters:
type: "Endurance"
iopsPerGB: "2"
sizeRange: "[1-12000]Gi"
mountOptions: nfsvers=4.1,hard
billingType: "hourly"
reclaimPolicy: "Delete"
classVersion: "2"
gidAllocate: "true"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ldaplib
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50M
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ldapslapd
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50M
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ldapsecauthority
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50M
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgresqldata
spec:
storageClassName: ibmc-file-bronze-gid
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50M
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: isamconfig
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50M
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: openldap
labels:
app: openldap
spec:
selector:
matchLabels:
app: openldap
replicas: 1
template:
metadata:
labels:
app: openldap
spec:
volumes:
- name: ldaplib
persistentVolumeClaim:
claimName: ldaplib
- name: ldapslapd
persistentVolumeClaim:
claimName: ldapslapd
- name: ldapsecauthority
persistentVolumeClaim:
claimName: ldapsecauthority
- name: openldap-keys
secret:
secretName: openldap-keys
containers:
- name: openldap
image: ibmcom/isam-openldap:9.0.7.0
ports:
- containerPort: 636
env:
- name: LDAP_DOMAIN
value: ibm.com
- name: LDAP_ADMIN_PASSWORD
value: Passw0rd
- name: LDAP_CONFIG_PASSWORD
value: Passw0rd
volumeMounts:
- mountPath: /var/lib/ldap
name: ldaplib
- mountPath: /etc/ldap/slapd.d
name: ldapslapd
- mountPath: /var/lib/ldap.secAuthority
name: ldapsecauthority
- mountPath: /container/service/slapd/assets/certs
name: openldap-keys
# This line is needed when running on Kubernetes 1.9.4 or above
args: [ "--copy-service"]
# useful for debugging startup issues - can run bash, then exec to the container and poke around
# command: [ "/bin/bash"]
# args: [ "-c", "while /bin/true ; do sleep 5; done" ]
# Just this line to get debug output from openldap startup
# args: [ "--loglevel" , "trace","--copy-service"]
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
name: openldap
labels:
app: openldap
spec:
ports:
- port: 636
name: ldaps
protocol: TCP
selector:
app: openldap
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgresql
labels:
app: postgresql
spec:
selector:
matchLabels:
app: postgresql
replicas: 1
template:
metadata:
labels:
app: postgresql
spec:
securityContext:
runAsNonRoot: true
runAsUser: 70
fsGroup: 0
volumes:
- name: postgresqldata
persistentVolumeClaim:
claimName: postgresqldata
- name: postgresql-keys
secret:
secretName: postgresql-keys
containers:
- name: postgresql
image: ibmcom/isam-postgresql:9.0.7.0
ports:
- containerPort: 5432
env:
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
value: Passw0rd
- name: POSTGRES_DB
value: isam
- name: POSTGRES_SSL_KEYDB
value: /var/local/server.pem
- name: PGDATA
value: /var/lib/postgresql/data/db-files/
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgresqldata
- mountPath: /var/local
name: postgresql-keys
# useful for debugging startup issues - can run bash, then exec to the container and poke around
# command: [ "/bin/bash"]
# args: [ "-c", "while /bin/true ; do sleep 5; done" ]
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
name: postgresql
spec:
ports:
- port: 5432
name: postgresql
protocol: TCP
selector:
app: postgresql
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamconfig
labels:
app: isamconfig
spec:
selector:
matchLabels:
app: isamconfig
replicas: 1
template:
metadata:
labels:
app: isamconfig
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
persistentVolumeClaim:
claimName: isamconfig
- name: isamconfig-logs
emptyDir: {}
containers:
- name: isamconfig
image: ibmcom/isam:9.0.7.1_IF4
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamconfig-logs
env:
- name: SERVICE
value: config
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: ADMIN_PWD
valueFrom:
secretKeyRef:
name: samadmin
key: adminpw
readinessProbe:
tcpSocket:
port: 9443
initialDelaySeconds: 5
periodSeconds: 10
livenessProbe:
tcpSocket:
port: 9443
initialDelaySeconds: 120
periodSeconds: 20
# command: [ "/sbin/bootstrap.sh" ]
imagePullSecrets:
- name: dockerlogin
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
name: isamconfig
spec:
# To make the LMI internet facing, make it a NodePort
type: NodePort
ports:
- port: 9443
name: isamconfig
protocol: TCP
# make this one statically allocated
nodePort: 30442
selector:
app: isamconfig
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamwrprp1-v1
labels:
app: isamwrprp1
spec:
selector:
matchLabels:
app: isamwrprp1
version: v1
replicas: 1
template:
metadata:
labels:
app: isamwrprp1
version: v1
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
emptyDir: {}
- name: isamwrprp1-logs
emptyDir: {}
containers:
- name: isamwrprp1
image: ibmcom/isam:9.0.7.1_IF4
ports:
- containerPort: 443
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamwrprp1-logs
env:
- name: SERVICE
value: webseal
- name: INSTANCE
value: rp1
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: AUTO_RELOAD_FREQUENCY
value: "5"
- name: CONFIG_SERVICE_URL
value: https://isamconfig:9443/shared_volume
- name: CONFIG_SERVICE_USER_NAME
value: cfgsvc
- name: CONFIG_SERVICE_USER_PWD
valueFrom:
secretKeyRef:
name: configreader
key: cfgsvcpw
livenessProbe:
exec:
command:
- /sbin/health_check.sh
- livenessProbe
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
readinessProbe:
exec:
command:
- /sbin/health_check.sh
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
imagePullSecrets:
- name: dockerlogin
---
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamwrprp1-v2
labels:
app: isamwrprp1
spec:
selector:
matchLabels:
app: isamwrprp1
version: v2
replicas: 1
template:
metadata:
labels:
app: isamwrprp1
version: v2
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
emptyDir: {}
- name: isamwrprp1-logs
emptyDir: {}
containers:
- name: isamwrprp1
image: ibmcom/isam:9.0.7.1_IF4
ports:
- containerPort: 443
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamwrprp1-logs
env:
- name: SERVICE
value: webseal
- name: INSTANCE
value: rp1
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: AUTO_RELOAD_FREQUENCY
value: "5"
- name: CONFIG_SERVICE_URL
value: https://isamconfig:9443/shared_volume
- name: CONFIG_SERVICE_USER_NAME
value: cfgsvc
- name: CONFIG_SERVICE_USER_PWD
valueFrom:
secretKeyRef:
name: configreader
key: cfgsvcpw
livenessProbe:
exec:
command:
- /sbin/health_check.sh
- livenessProbe
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
readinessProbe:
exec:
command:
- /sbin/health_check.sh
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
imagePullSecrets:
- name: dockerlogin
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
name: isamwrprp1
spec:
type: NodePort
sessionAffinity: ClientIP
ports:
- port: 443
name: isamwrprp1
protocol: TCP
nodePort: 30443
selector:
app: isamwrprp1
---
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamwrpmobile
labels:
app: isamwrpmobile
spec:
selector:
matchLabels:
app: isamwrpmobile
replicas: 1
template:
metadata:
labels:
app: isamwrpmobile
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
emptyDir: {}
- name: isamwrpmobile-logs
emptyDir: {}
containers:
- name: isamwrpmobile
image: ibmcom/isam:9.0.7.1_IF4
ports:
- containerPort: 443
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamwrpmobile-logs
env:
- name: SERVICE
value: webseal
- name: INSTANCE
value: mobile
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: AUTO_RELOAD_FREQUENCY
value: "5"
- name: CONFIG_SERVICE_URL
value: https://isamconfig:9443/shared_volume
- name: CONFIG_SERVICE_USER_NAME
value: cfgsvc
- name: CONFIG_SERVICE_USER_PWD
valueFrom:
secretKeyRef:
name: configreader
key: cfgsvcpw
livenessProbe:
exec:
command:
- /sbin/health_check.sh
- livenessProbe
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
readinessProbe:
exec:
command:
- /sbin/health_check.sh
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
imagePullSecrets:
- name: dockerlogin
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
name: isamwrpmobile
spec:
type: NodePort
sessionAffinity: ClientIP
ports:
- port: 443
name: isamwrpmobile
protocol: TCP
nodePort: 30444
selector:
app: isamwrpmobile
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamruntime-v1
labels:
app: isamruntime
spec:
selector:
matchLabels:
app: isamruntime
version: v1
replicas: 1
template:
metadata:
labels:
app: isamruntime
version: v1
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
emptyDir: {}
- name: isamruntime-logs
emptyDir: {}
containers:
- name: isamruntime
image: ibmcom/isam:9.0.7.1_IF4
ports:
- containerPort: 443
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamruntime-logs
env:
- name: SERVICE
value: runtime
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: AUTO_RELOAD_FREQUENCY
value: "5"
- name: CONFIG_SERVICE_URL
value: https://isamconfig:9443/shared_volume
- name: CONFIG_SERVICE_USER_NAME
value: cfgsvc
- name: CONFIG_SERVICE_USER_PWD
valueFrom:
secretKeyRef:
name: configreader
key: cfgsvcpw
livenessProbe:
exec:
command:
- /sbin/health_check.sh
- livenessProbe
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
readinessProbe:
exec:
command:
- /sbin/health_check.sh
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
imagePullSecrets:
- name: dockerlogin
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamruntime-v2
labels:
app: isamruntime
spec:
selector:
matchLabels:
app: isamruntime
version: v2
replicas: 1
template:
metadata:
labels:
app: isamruntime
version: v2
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
emptyDir: {}
- name: isamruntime-logs
emptyDir: {}
containers:
- name: isamruntime
image: ibmcom/isam:9.0.7.1_IF4
ports:
- containerPort: 443
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamruntime-logs
env:
- name: SERVICE
value: runtime
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: AUTO_RELOAD_FREQUENCY
value: "5"
- name: CONFIG_SERVICE_URL
value: https://isamconfig:9443/shared_volume
- name: CONFIG_SERVICE_USER_NAME
value: cfgsvc
- name: CONFIG_SERVICE_USER_PWD
valueFrom:
secretKeyRef:
name: configreader
key: cfgsvcpw
livenessProbe:
exec:
command:
- /sbin/health_check.sh
- livenessProbe
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
readinessProbe:
exec:
command:
- /sbin/health_check.sh
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
imagePullSecrets:
- name: dockerlogin
---
apiVersion: v1
kind: Service
metadata:
name: isamruntime
spec:
ports:
- port: 443
name: isamruntime
protocol: TCP
selector:
app: isamruntime
---
I am not sure why the CPU load is shown as unknown. Have I missed a step or made a mistake? Can someone help?
Regards
Pranam
Based on the issue shown, it appears that you have not set resource requests/limits in your deployment.yaml file.
If you run kubectl explain deployment, you will see the following fields in the container spec:
resources:
  limits:
    cpu:
    memory:
  requests:
    cpu:
    memory:
If you add values for these keys (the HPA needs at least a CPU request to compute a utilization percentage), the HPA issue should be resolved.
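For example, a minimal sketch of what that could look like on the isamruntime container from your manifest (the request/limit values here are purely illustrative, not a recommendation):
spec:
  template:
    spec:
      containers:
      - name: isamruntime
        image: ibmcom/isam:9.0.7.1_IF4
        resources:
          requests:
            cpu: 100m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi
With a CPU request in place, the HPA can report utilization as a percentage of the requested CPU instead of <unknown>.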
Did you specify the resources block when you defined your app's deployment?
I don't remember where it was stated, but I encountered this case once when I forgot that.
More information about managing resources for containers: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

unmarshalerDecoder: quantities must match the regular expression

When I install CoreDNS using the command below (by the way, the OS is CentOS 7.6 and the Kubernetes version is v1.15.2):
kubectl create -f coredns.yaml
The output is:
[root@ops001 coredns]# kubectl create -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
service/kube-dns created
Error from server (BadRequest): error when creating "coredns.yaml": Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Resources: v1.ResourceRequirements.Requests: Limits: unmarshalerDecoder: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$', error found in #10 byte of ...|__LIMIT__"},"request|..., bigger context ...|limits":{"memory":"__PILLAR__DNS__MEMORY__LIMIT__"},"requests":{"cpu":"100m","memory":"70Mi"}},"secu|...
This is my coredns.yaml:
# __MACHINE_GENERATED_WARNING__
apiVersion: v1
kind: ServiceAccount
metadata:
name: coredns
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: Reconcile
name: system:coredns
rules:
- apiGroups:
- ""
resources:
- endpoints
- services
- pods
- namespaces
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: EnsureExists
name: system:coredns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:coredns
subjects:
- kind: ServiceAccount
name: coredns
namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
labels:
addonmanager.kubernetes.io/mode: EnsureExists
data:
Corefile: |
.:53 {
errors
health
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
upstream
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "CoreDNS"
spec:
# replicas: not specified here:
# 1. In order to make Addon Manager do not reconcile this replicas parameter.
# 2. Default is 1.
# 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
priorityClassName: system-cluster-critical
serviceAccountName: coredns
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
nodeSelector:
beta.kubernetes.io/os: linux
containers:
- name: coredns
image: gcr.azk8s.cn/google-containers/coredns:1.3.1
imagePullPolicy: IfNotPresent
resources:
limits:
memory: __PILLAR__DNS__MEMORY__LIMIT__
requests:
cpu: 100m
memory: 70Mi
args: [ "-conf", "/etc/coredns/Corefile" ]
volumeMounts:
- name: config-volume
mountPath: /etc/coredns
readOnly: true
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- all
readOnlyRootFilesystem: true
dnsPolicy: Default
volumes:
- name: config-volume
configMap:
name: coredns
items:
- key: Corefile
path: Corefile
---
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
annotations:
prometheus.io/port: "9153"
prometheus.io/scrape: "true"
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "CoreDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: 10.254.0.2
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
- name: metrics
port: 9153
protocol: TCP
Am I missing something?
From this error message:
Error from server (BadRequest):
error when creating "coredns.yaml":
Deployment in version "v1" cannot be handled as a Deployment:
v1.Deployment.Spec:
v1.DeploymentSpec.Template: v
1.PodTemplateSpec.Spec:
v1.PodSpec.Containers: []v1.Container:
v1.Container.Resources:
v1.ResourceRequirements.Requests: Limits: unmarshalerDecoder: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$', error found in #10 byte of ...|__LIMIT__"},"request|..., bigger context ...|limits":{"memory":"__PILLAR__DNS__MEMORY__LIMIT__"},"requests":{"cpu":"100m","memory":"70Mi"}},"secu|...
this part is the root cause:
unmarshalerDecoder:
quantities must match the regular expression
'^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'
Which quantities are these? It is
v1.ResourceRequirements.Requests: Limits:
So please change the memory limit from the placeholder __PILLAR__DNS__MEMORY__LIMIT__ to an actual value.
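For instance, a minimal sketch of the corrected resources block in the CoreDNS container (170Mi is just an illustrative value; pick whatever memory limit suits your cluster):
resources:
  limits:
    memory: 170Mi
  requests:
    cpu: 100m
    memory: 70Mi
Any value accepted by that regular expression (e.g. 128Mi, 1Gi, 200M) will work.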
Please refer to coredns/deployment: your deployment contains placeholder fields like limits: {"memory":"__PILLAR__DNS__MEMORY__LIMIT__"}.
As described in the docs, that repository provides a deploy script you can use to substitute these parameters when switching from kube-dns to CoreDNS.
Installing CoreDNS
In Kubernetes 1.13 and later the CoreDNS feature gate is removed and CoreDNS is used by default, so you can also look at an existing installation to see the default values in the ConfigMap and Deployment:
kubectl get configmap coredns -n kube-system -o yaml
Hope this helps.
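If you want to go the deploy-script route, a rough sketch of how it is typically used (check the coredns/deployment README for the exact flags, as they may differ between versions):
$ git clone https://github.com/coredns/deployment.git
$ cd deployment/kubernetes
# Renders the manifest with real values in place of the __PILLAR__...__ placeholders
$ ./deploy.sh -i 10.254.0.2 -d cluster.local | kubectl apply -f -
The 10.254.0.2 cluster IP and cluster.local domain here are taken from the manifest in the question; substitute your own values.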

Kubernetes: Error kubectl edit deployment

I'm trying to edit a deployment in Kubernetes with:
kubectl -n <namespace> edit deployment <deployment_name>.
After entering the command, a vi window for editing appears; I then make some changes, for example in the command section or in the volumeMounts section.
But I get the following error:
A copy of your changes has been stored to "/tmp/kubectl-edit-hv5dh.yaml"
error: map: map[] does not contain declared merge key: name
Can someone help with this?
Attached is the deployment file of the apiserver, from:
kubectl -n federation-system edit deployment apiserver
(the lines between ** ** are the lines I added)
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
federation.alpha.kubernetes.io/federation-name: fed
creationTimestamp: 2018-04-01T13:26:40Z
generation: 1
labels:
app: federated-cluster
name: apiserver
namespace: federation-system
resourceVersion: "393140"
selfLink: /apis/extensions/v1beta1/namespaces/federation-system/deployments/apiserver
uid: <uid>
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: federated-cluster
module: federation-apiserver
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
annotations:
federation.alpha.kubernetes.io/federation-name: fed
creationTimestamp: null
labels:
app: federated-cluster
module: federation-apiserver
name: apiserver
spec:
containers:
- command:
- /fcp
- federation-apiserver
- --admission-control=NamespaceLifecycle
- --advertise-address=<master-ip>
- --bind-address=0.0.0.0
- --client-ca-file=/etc/federation/apiserver/ca.crt
- --etcd-servers=http://localhost:2379
- --secure-port=8443
- --tls-cert-file=/etc/federation/apiserver/server.crt
- --tls-private-key-file=/etc/federation/apiserver/server.key
**- --enable-admission-plugins=SchedulingPolicy
- --admission-control-config-file=/etc/kubernetes/admission/config.yml**
image: gcr.io/k8s-jkns-e2e-gce-federation/fcp-amd64:v1.9.0-alpha.3
imagePullPolicy: IfNotPresent
name: apiserver
ports:
- containerPort: 8443
name: https
protocol: TCP
- containerPort: 8080
name: local
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/federation/apiserver
name: apiserver-credentials
readOnly: true
**volumeMounts:
- mountPath: /etc/kubernetes/admission
name: admission-config**
- command:
- /usr/local/bin/etcd
- --data-dir
- /var/etcd/data
image: gcr.io/google_containers/etcd:3.1.10
imagePullPolicy: IfNotPresent
name: etcd
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
imagePullSecrets:
- {}
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: apiserver-credentials
secret:
defaultMode: 420
secretName: apiserver-credentials
**- name: admission-config
configMap:
name: admission**
status:
availableReplicas: 1
conditions:
- lastTransitionTime: 2018-04-01T13:26:40Z
lastUpdateTime: 2018-04-01T13:26:40Z
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: 2018-04-01T13:26:40Z
lastUpdateTime: 2018-04-01T13:27:20Z
message: ReplicaSet "apiserver-8484fd45f8" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 1
readyReplicas: 1
replicas: 1
updatedReplicas: 1
This happened after I created the ConfigMap:
kubectl create -f scheduling-policy-admission.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: admission
namespace: federation-system
data:
config.yml: |
apiVersion: apiserver.k8s.io/v1alpha1
kind: AdmissionConfiguration
plugins:
- name: SchedulingPolicy
path: /etc/kubernetes/admission/scheduling-policy-config.yml
scheduling-policy-config.yml: |
kubeconfig: /etc/kubernetes/admission/opa-kubeconfig
opa-kubeconfig: |
clusters:
- name: opa-api
cluster:
server: http://opa.federation-system.svc.cluster.local:8181/v0/data/kubernetes/placement
users:
- name: scheduling-policy
user:
token: deadbeefsecret
contexts:
- name: default
context:
cluster: opa-api
user: scheduling-policy
current-context: default
I'm trying to configure Admission Controller in the Federation API.
Thanks,
dnsPolicy: ClusterFirst
# DELETE imagePullSecrets:
# DELETE - {}
restartPolicy: Always
I would strongly recommend removing that imagePullSecrets block. Since those objects have a merge key of name, but that entry has no name, it can very easily cause the error you are experiencing. If the YAML was given to your editor in that condition, then I am almost certain that is a Kubernetes bug: it should always(?) allow round-tripping YAML via kubectl edit, if for no other reason than this situation right here.
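If the pod actually needs pull credentials, a minimal sketch of a well-formed block (the secret name here is a placeholder, which you would create beforehand with kubectl create secret docker-registry):
imagePullSecrets:
- name: my-registry-secret
Each entry must have a name, because imagePullSecrets is a list merged on the name key; an empty {} entry is exactly what trips the "does not contain declared merge key: name" error.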

Kubernetes isn't allowing me to create a pod with securityContext runAsUser

Summary:
I have pods with a security context of runAsUser: 1337 that fail to start because they are disallowed by policy. I have altered admission-control without success (as suggested here and here).
What else do I need to force through this kind of security context?
Details
I'm working through the https://istio.io/docs/samples/bookinfo.html example to start porting over to istio.
I have a deployment named details-v1 (see below) from which a replica set and pod have been created. The pod is stuck in pending.
NAME READY STATUS RESTARTS AGE
details-v1-3207759430-nt9tt 0/2 Pending 0 34m
Running describe on the pod shows the cause of the error:
FailedValidation Error validating pod details-v1-3207759430-nt9tt.azs-master from api, ignoring: spec.initContainers[1].securityContext.privileged: Forbidden: disallowed by policy
In order to get this far, I have already made changes to the kube-apiserver:
/usr/local/bin/kube-apiserver \
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota \
--allow-privileged=true \
Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"creationTimestamp":null,"name":"details-v1","namespace":"azs-master"},"spec":{"replicas":1,"strategy":{},"template":{"metadata":{"annotations":{"alpha.istio.io/sidecar":"injected","alpha.istio.io/version":"jenkins#ubuntu-16-04-build-12ac793f80be71-0.1.6-dab2033","pod.beta.kubernetes.io/init-containers":"[{\"args\":[\"-p\",\"15001\",\"-u\",\"1337\"],\"image\":\"docker.io/istio/init:0.1\",\"imagePullPolicy\":\"Always\",\"name\":\"init\",\"securityContext\":{\"capabilities\":{\"add\":[\"NET_ADMIN\"]}}},{\"args\":[\"-c\",\"sysctl -w kernel.core_pattern=/tmp/core.%e.%p.%t \\u0026\\u0026 ulimit -c unlimited\"],\"command\":[\"/bin/sh\"],\"image\":\"alpine\",\"imagePullPolicy\":\"Always\",\"name\":\"enable-core-dump\",\"securityContext\":{\"privileged\":true}}]"},"creationTimestamp":null,"labels":{"app":"details","version":"v1"}},"spec":{"containers":[{"image":"istio/examples-bookinfo-details-v1","imagePullPolicy":"IfNotPresent","name":"details","ports":[{"containerPort":9080}],"resources":{}},{"args":["proxy","sidecar","-v","2"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"POD_IP","valueFrom":{"fieldRef":{"fieldPath":"status.podIP"}}}],"image":"docker.io/istio/proxy_debug:0.1","imagePullPolicy":"Always","name":"proxy","resources":{},"securityContext":{"runAsUser":1337},"volumeMounts":[{"mountPath":"/etc/certs","name":"istio-certs","readOnly":true}]}],"volumes":[{"name":"istio-certs","secret":{"secretName":"istio.default"}}]}}},"status":{}}
creationTimestamp: 2017-06-23T13:30:00Z
generation: 1
labels:
app: details
version: v1
name: details-v1
namespace: azs-master
resourceVersion: "29678612"
selfLink: /apis/extensions/v1beta1/namespaces/azs-master/deployments/details-v1
uid: 0eacea4a-5818-11e7-af0e-0a55ca98bb17
spec:
replicas: 1
selector:
matchLabels:
app: details
version: v1
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
annotations:
alpha.istio.io/sidecar: injected
alpha.istio.io/version: jenkins#ubuntu-16-04-build-12ac793f80be71-0.1.6-dab2033
pod.alpha.kubernetes.io/init-containers: '[{"name":"init","image":"docker.io/istio/init:0.1","args":["-p","15001","-u","1337"],"resources":{},"imagePullPolicy":"Always","securityContext":{"capabilities":{"add":["NET_ADMIN"]}}},{"name":"enable-core-dump","image":"alpine","command":["/bin/sh"],"args":["-c","sysctl
-w kernel.core_pattern=/tmp/core.%e.%p.%t \u0026\u0026 ulimit -c unlimited"],"resources":{},"imagePullPolicy":"Always","securityContext":{"privileged":true}}]'
pod.beta.kubernetes.io/init-containers: '[{"name":"init","image":"docker.io/istio/init:0.1","args":["-p","15001","-u","1337"],"resources":{},"imagePullPolicy":"Always","securityContext":{"capabilities":{"add":["NET_ADMIN"]}}},{"name":"enable-core-dump","image":"alpine","command":["/bin/sh"],"args":["-c","sysctl
-w kernel.core_pattern=/tmp/core.%e.%p.%t \u0026\u0026 ulimit -c unlimited"],"resources":{},"imagePullPolicy":"Always","securityContext":{"privileged":true}}]'
creationTimestamp: null
labels:
app: details
version: v1
spec:
containers:
- image: istio/examples-bookinfo-details-v1
imagePullPolicy: IfNotPresent
name: details
ports:
- containerPort: 9080
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
- args:
- proxy
- sidecar
- -v
- "2"
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
image: docker.io/istio/proxy_debug:0.1
imagePullPolicy: Always
name: proxy
resources: {}
securityContext:
runAsUser: 1337
terminationMessagePath: /dev/termination-log
volumeMounts:
- mountPath: /etc/certs
name: istio-certs
readOnly: true
dnsPolicy: ClusterFirst
restartPolicy: Always
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: istio-certs
secret:
defaultMode: 420
secretName: istio.default
status:
conditions:
- lastTransitionTime: 2017-06-23T13:30:00Z
lastUpdateTime: 2017-06-23T13:30:00Z
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 1
replicas: 1
unavailableReplicas: 1
updatedReplicas: 1
Kubernetes server version: 1.5.6
The Pending state indicates that this was blocked by the kubelet, which also needs the --allow-privileged flag.
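A rough sketch of what that looks like on each node, assuming the kubelet is launched in a similar way to the kube-apiserver shown above (the path and the rest of the flags depend on your installation):
/usr/local/bin/kubelet \
  --allow-privileged=true \
  ...your other existing kubelet flags...
Restart the kubelet after the change. On Kubernetes 1.5.x the flag must be set to true on both the kube-apiserver and every kubelet for privileged containers (such as the Istio init and enable-core-dump containers) to be admitted.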