Missing required field in DaemonSet

I'm trying to run cAdvisor on a Kubernetes cluster, following this doc: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
Contents of the YAML file are below:
apiVersion: v1
kind: Namespace
metadata:
  name: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cadvisor
  namespace: kube-system
  labels:
    name: cadvisor
spec:
  selector:
    matchLabels:
      name: cadvisor
  template:
    metadata:
      labels:
        name: cadvisor
    spec:
      containers:
      - image: google/cadvisor:latest
        name: cadvisor
        ports:
        - containerPort: 8080
      restartPolicy: Always
status: {}
But when I try to deploy it:
kubectl apply -f cadvisor.daemonset.yaml
I get the output + error:
error: error validating "cadvisor.daemonset.yaml": error validating data: [ValidationError(DaemonSet.status): missing required field "currentNumberScheduled" in io.k8s.api.apps.v1.DaemonSetStatus, ValidationError(DaemonSet.status): missing required field "numberMisscheduled" in io.k8s.api.apps.v1.DaemonSetStatus, ValidationError(DaemonSet.status): missing required field "desiredNumberScheduled" in io.k8s.api.apps.v1.DaemonSetStatus, ValidationError(DaemonSet.status): missing required field "numberReady" in io.k8s.api.apps.v1.DaemonSetStatus]; if you choose to ignore these errors, turn validation off with --validate=false
But there is no information about these required fields in the documentation or anywhere on Google :(

Do not pass status: {} in the YAML when creating resources. That field is only for status information returned by the API server.
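With that line deleted, the manifest from the question simply ends at the pod template; the tail of the DaemonSet would read:
      containers:
      - image: google/cadvisor:latest
        name: cadvisor
        ports:
        - containerPort: 8080
      restartPolicy: Always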

Related

kubernetes pod deployment not updating

I have a pod egress-operator-controller-manager created from a Makefile with the command make deploy IMG=my_azure_repo/egress-operator:v0.1.
This pod was showing an unexpected status: 401 Unauthorized error in its description, so I created an imagePullSecret and am trying to update the pod with the secret by applying the deployment's YAML file [egress-operator-manager.yaml]. But when I apply this YAML file it gives the error below:
root@Ubuntu18-VM:~/egress-operator# kubectl apply -f /home/user/egress-operator-manager.yaml
The Deployment "egress-operator-controller-manager" is invalid: spec.selector: Invalid value:
v1.LabelSelector{MatchLabels:map[string]string{"moduleId":"egress-operator"},
MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
egress-operator-manager.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: egress-operator-controller-manager
  namespace: egress-operator-system
  labels:
    moduleId: egress-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      moduleId: egress-operator
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        moduleId: egress-operator
    spec:
      containers:
      - image: my_azure_repo/egress-operator:v0.1
        name: egress-operator
      imagePullSecrets:
      - name: mysecret
Can someone let me know how I can update this pod's deployment.yaml?
Delete the deployment once and try applying the YAML again.
This happens because Kubernetes won't allow a rolling update of a Deployment's label selectors: once deployed, the selectors cannot be updated until you delete the existing Deployment.
Changing selectors leads to undefined behaviors - users are not expected to change the selectors
https://github.com/kubernetes/kubernetes/issues/50808
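A sketch of that sequence, using the names from the question:
# delete the existing Deployment so the selector can change
kubectl delete deployment egress-operator-controller-manager -n egress-operator-system
# re-create it from the updated manifest
kubectl apply -f /home/user/egress-operator-manager.yaml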

imagePullSecrets on default service account don't seem to work

I am basically trying to pull GCR images from an Azure Kubernetes cluster.
I have the following for my default service account:
kubectl get serviceaccounts default -o yaml
apiVersion: v1
imagePullSecrets:
- name: gcr-json-key-stg
kind: ServiceAccount
metadata:
  creationTimestamp: "2019-12-24T03:42:15Z"
  name: default
  namespace: default
  resourceVersion: "151571"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: 7f88785d-05de-4568-b050-f3a5dddd8ad1
secrets:
- name: default-token-gn9vb
If I add the same imagePullSecret to individual deployments, it works, so the secret is correct. However, when I use it for the default service account, I get an ImagePullBackOff error which, on describing the pod, confirms that it's a permission issue.
Am I missing something?
I have made sure that my deployment is not configured with any other specific serviceaccount and should be using the default serviceaccount.
OK, the problem was that the default service account to which I added the imagePullSecret wasn't in the same namespace as the deployment.
Once I patched the default service account in that namespace, it worked perfectly well.
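For reference, a minimal sketch of such a patch (the secret name is taken from the question; replace the namespace with the one your deployment runs in):
kubectl patch serviceaccount default -n <your-namespace> \
  -p '{"imagePullSecrets": [{"name": "gcr-json-key-stg"}]}'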
After you add the secret for pulling the image to the service account, you need to add the service account to your pod or deployment. For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-deployment
spec:
  selector:
    matchLabels:
      app: helloworld
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: yourPrivateRegistry/image:tag
        ports:
        - containerPort: 80
      serviceAccountName: pull-image # your service account
And the service account pull-image would look something like this (a minimal sketch; the secret name regcred is an assumption):
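apiVersion: v1
kind: ServiceAccount
metadata:
  name: pull-image
imagePullSecrets:
- name: regcred  # assumed name; use the docker-registry secret you created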

Can not apply Service as "type: ClusterIP" in my GKE cluster

I want to deploy my service as a ClusterIP but am not able to apply it; I get the following error message:
[xetra11#x11-work coopr-infrastructure]$ kubectl apply -f teamcity-deployment.yaml
deployment.apps/teamcity unchanged
ingress.extensions/teamcity unchanged
The Service "teamcity" is invalid: spec.ports[0].nodePort: Forbidden: may not be used when `type` is 'ClusterIP'
Here is my .yaml file:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teamcity
  labels:
    app: teamcity
spec:
  replicas: 1
  selector:
    matchLabels:
      app: teamcity
  template:
    metadata:
      labels:
        app: teamcity
    spec:
      containers:
      - name: teamcity-server
        image: jetbrains/teamcity-server:latest
        ports:
        - containerPort: 8111
---
apiVersion: v1
kind: Service
metadata:
  name: teamcity
  labels:
    app: teamcity
spec:
  type: ClusterIP
  ports:
  - port: 8111
    targetPort: 8111
    protocol: TCP
  selector:
    app: teamcity
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: teamcity
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  backend:
    serviceName: teamcity
    servicePort: 8111
1) Apply a configuration to the resource by filename:
kubectl apply -f [.yaml file] --force
The resource will be created if it doesn't exist yet. To use 'apply', always create the resource initially with either 'apply' or 'create --save-config'.
2) If the first one fails, you can force replace, delete and then re-create the resource:
kubectl replace --force -f grav-deployment.yml
The --force flag only takes effect with --grace-period=0: resources are immediately removed from the API and graceful deletion is bypassed. Note that immediate deletion of some resources may result in inconsistency or data loss and requires confirmation.
On GKE an Ingress can only point to a Service of type LoadBalancer or NodePort. You can see the error output of the Ingress by running:
kubectl describe ingress teamcity
As per your YAML: if you are using an nginx controller, you have to use a Service of type NodePort.
Some documentation:
https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md#gce-gke
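For instance, the Service from the question switched to type NodePort (a sketch; everything else unchanged) would look like:
apiVersion: v1
kind: Service
metadata:
  name: teamcity
  labels:
    app: teamcity
spec:
  type: NodePort
  ports:
  - port: 8111
    targetPort: 8111
    protocol: TCP
  selector:
    app: teamcity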
Did you just recently change the service description from NodePort to ClusterIP?
Then it might be this issue github.com/kubernetes/kubectl/issues/221.
You need to use kubectl replace or kubectl apply --force.

Received error about getting "array" and expecting "map" while my YAML seems right

I'm using k8s 1.11.2 to build my service, the YAML file looks like this:
Deployment
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-test
  namespace: default
  labels:
  - type: test
spec:
  replicas: 1
  selector:
    matchLabels:
    - type: test
  template:
    metadata:
      labels:
      - type: test
    spec:
      containers:
      - image: nginx:1.14
        name: filebeat
        ports:
        - containerPort: 80
Service
apiVersion: v1
kind: Service
metadata:
  labels:
  - type:test
spec:
  type: ExternalName
  externalName: my.nginx.com
  externalIPs:
  - 192.168.125.123
  clusterIP: 10.240.20.1
  ports:
  - port: 80
    name: tcp
  selector:
  - type: test
and I get this error:
error validating data: [ValidationError(Service.metadata.labels): invalid type for io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta.labels: got "array", expected "map", ValidationError(Service.spec.selector): invalid type for io.k8s.api.core.v1.ServiceSpec.selector: got "array", expected "map"];
I am sure the format of my YAML file is right, because I used the website http://www.yamllint.com/ to validate it.
Why am I getting this error?
yamllint.com is a dubious service because it does not tell us which YAML version it is checking against and which implementation it is using. Avoid it.
More importantly, while your input may be valid YAML, this does not mean that it is a valid input for kubernetes. YAML allows you to create any kind of structure, while kubernetes expects a certain structure from you. This is what the error is telling you:
got "array", expected "map"
This means that at a place where kubernetes expects a mapping you provided an array (sequence in proper YAML terms). The error message also gives you the path where this problem occurs:
ValidationError(Service.metadata.labels):
A quick check on metadata labels in kubernetes reveals this documentation, which states that labels need to be mappings, not arrays.
So in your input, the last line here is the culprit:
metadata:
  name: nginx-test
  namespace: default
  labels:
  - type: test
- is the YAML indicator for a sequence item, creating a sequence as the value for the key labels:. Dropping it (and indenting type: test under labels:) makes it a mapping instead:
metadata:
  name: nginx-test
  namespace: default
  labels:
    type: test
In YAML formatting, the character "-" indicates the start of an array item.
You have:
apiVersion: v1
kind: Service
metadata:
  labels:
  - type:test
You want:
apiVersion: v1
kind: Service
metadata:
  labels:
    type: test
The problem is in your second file:
apiVersion: v1
kind: Service
metadata:
  labels:
  - type:test
  #      ^
At the caret (^) a space is missing, which makes type:test a single scalar (a string) instead of the mapping you get with
apiVersion: v1
kind: Service
metadata:
  labels:
  - type: test
which is what your program expects.
Both are valid YAML so primitive syntax checking doesn't help you.
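Putting the answers together, both paths flagged by the error need plain mappings; a sketch of just those fragments of the Service:
apiVersion: v1
kind: Service
metadata:
  labels:
    type: test  # mapping, not a sequence item
spec:
  selector:
    type: test  # the same fix applies to spec.selector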
Rendering of values from values.yaml to config.yaml:
values.yaml:
sites:
- dataprovider: abcd
- dataprovider: xyzx
config.yaml:
sites:
{{ toYaml .Values.sites | indent 10 }}
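Assuming this is a Helm template, toYaml serializes the list and indent 10 prefixes each rendered line with ten spaces, so config.yaml would render roughly as:
sites:
          - dataprovider: abcd
          - dataprovider: xyzx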

imagePullSecrets not working with Kind deployment

I'm trying to create a deployment with 3 replicas, which will pull an image from a private registry. I have stored the credentials in a secret and am using imagePullSecrets in the deployment file. I'm getting the error below when I deploy it.
error: error validating "private-reg-pod.yaml": error validating data: [ValidationError(Deployment.spec): unknown field "containers" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): unknown field "imagePullSecrets" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): missing required field "template" in io.k8s.api.apps.v1.DeploymentSpec]; if you choose to ignore these errors, turn validation off with --validate=false
Any help on this?
Below is my deployment file :
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-pod-deployment
  labels:
    app: test-pod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-pod
  template:
    metadata:
      labels:
        app: test-pod
    spec:
      containers:
      - name: test-pod
        image: <private-registry>
      imagePullSecrets:
      - name: regcred
Thanks,
Sundar
The image section should be placed in the container specification, and imagePullSecrets should be placed in the pod spec section, so the proper YAML file looks like this (please note the indentation):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-pod-deployment
  labels:
    app: test-pod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-pod
  template:
    metadata:
      labels:
        app: test-pod
    spec:
      containers:
      - name: test-pod
        image: <private-registry>
      imagePullSecrets:
      - name: regcred
This is a very common issue with Kubernetes Deployments.
The valid format for pulling an image from a private repository in your Kubernetes Deployment file is:
spec:
  imagePullSecrets:
  - name: <your secret name>
  containers:
Please make sure you have created the secret, then try to make it like the example below.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-pod-deployment
  labels:
    app: test-pod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-pod
  template:
    metadata:
      labels:
        app: test-pod
    spec:
      containers:
      - name: test-pod
        image: nginx
      imagePullSecrets:
      - name: regcred
Both @Jakub-Bujny and @itmaven are correct. Indentation is really important when creating and using a .yaml (or .yml) file, because the file is parsed based on it. So, both of these are correct:
1)
spec:
  imagePullSecrets:
  - name: regcred
  containers:
  - name: test-pod
    image:
2)
spec:
  containers:
  - name: test-pod
    image: <private-registry>
  imagePullSecrets:
  - name: regcred
Note: before you use imagePullSecrets you have to create the secret, using a command like the following:
kubectl create secret docker-registry <private-registry> \
  --docker-server=<cluster_CA_domain>:[some port] \
  --docker-username=<user_name> \
  --docker-password=<user_password> \
  --docker-email=<user_email>
Also check that the secret was created successfully with:
kubectl get secret
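If the secret was created (assuming you named it regcred), the output should include a line like this (name and age will differ):
NAME      TYPE                             DATA   AGE
regcred   kubernetes.io/dockerconfigjson   1      5m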