kubernetes pod deployment not updating

I have a pod egress-operator-controller-manager created from a Makefile by the command make deploy IMG=my_azure_repo/egress-operator:v0.1.
The pod's description showed an unexpected status: 401 Unauthorized error, so I created an imagePullSecret and am trying to update the pod with that secret by applying the deployment's YAML file [egress-operator-manager.yaml]. But when I apply this YAML file it gives the error below:
root@Ubuntu18-VM:~/egress-operator# kubectl apply -f /home/user/egress-operator-manager.yaml
The Deployment "egress-operator-controller-manager" is invalid: spec.selector: Invalid value:
v1.LabelSelector{MatchLabels:map[string]string{"moduleId":"egress-operator"},
MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
egress-operator-manager.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: egress-operator-controller-manager
  namespace: egress-operator-system
  labels:
    moduleId: egress-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      moduleId: egress-operator
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        moduleId: egress-operator
    spec:
      containers:
      - image: my_azure_repo/egress-operator:v0.1
        name: egress-operator
      imagePullSecrets:
      - name: mysecret
Can someone let me know how I can update this pod's deployment.yaml?

Delete the deployment once and try applying the YAML again.
It could be because Kubernetes won't allow a rolling update of a Deployment's label selector: once deployed, the selector cannot be updated unless you delete the existing Deployment.
Changing selectors leads to undefined behavior - users are not expected to change them.
https://github.com/kubernetes/kubernetes/issues/50808
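In practice that means something like the following; this is only a sketch and assumes it is acceptable to take the existing pods down, since the selector can only be set when the Deployment is created:
# Remove the existing Deployment (its selector is immutable after creation)
kubectl delete deployment egress-operator-controller-manager -n egress-operator-system

# Re-create it from the updated manifest that includes imagePullSecrets
kubectl apply -f /home/user/egress-operator-manager.yaml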

Related

Hashicorp Vault for Kubernetes - Deployment can't get secret injected

I am currently doing a PoC on Vault for K8s, but I am having some issues injecting a secret into an example application. I have created a Service Account which is associated with a role, which is then associated with a policy that allows the service account to read secrets.
I have created a secret basic-secret, which I am trying to inject into my example application. The application is associated with a Service Account. Below is the code for deploying the example application (Hello World) and the service account:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: basic-secret
  labels:
    app: basic-secret
spec:
  selector:
    matchLabels:
      app: basic-secret
  replicas: 1
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/tls-skip-verify: "true"
        vault.hashicorp.com/agent-inject-secret-helloworld: "secret/basic-secret/helloworld"
        vault.hashicorp.com/agent-inject-template-helloworld: |
          {{- with secret "secret/basic-secret/helloworld" -}}
          {
            "username" : "{{ .Data.username }}",
            "password" : "{{ .Data.password }}"
          }
          {{- end }}
        vault.hashicorp.com/role: "basic-secret-role"
      labels:
        app: basic-secret
    spec:
      serviceAccountName: basic-secret
      containers:
      - name: app
        image: jweissig/app:0.0.1
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: basic-secret
  labels:
    app: basic-secret
When I describe the pod (kubectl describe pod basic-secret-7d6777cdb8-tlfsw -n vault) of the deployment I get:
Furthermore, for logs (kubectl logs pods/basic-secret-7d6777cdb8-tlfsw vault-agent -n vault) I get:
Error from server (BadRequest): container "vault-agent" in pod "basic-secret-7d6777cdb8-tlfsw" is waiting to start: PodInitializing
I am not sure why the Vault agent is not initializing. If someone has any idea what might be the issue, I would appreciate it a lot!
Best, William.
Grab your container logs with kubectl logs pods/basic-secret-7d6777cdb8-tlfsw vault-agent-init -n vault, because the vault-agent-init container has to finish first.
Does your policy allow access to that secret (secret/basic-secret/helloworld)?
Did you create your role (basic-secret-role) in the Kubernetes auth method? During role creation you authorize certain namespaces, so that might be the problem.
But let's see those agent-init logs first.
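For reference, a quick sketch of how to check those pieces from the CLI; the policy name basic-secret-policy and the auth mount path kubernetes are assumptions, adjust them to your setup:
# The init-container logs usually show why the agent cannot authenticate or read the secret
kubectl logs pods/basic-secret-7d6777cdb8-tlfsw -c vault-agent-init -n vault

# Inspect the policy attached to the role (policy name is an assumption)
vault policy read basic-secret-policy

# Inspect the Kubernetes auth role; check bound_service_account_names and
# bound_service_account_namespaces (auth mount path "kubernetes" is an assumption)
vault read auth/kubernetes/role/basic-secret-role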

imagePullSecrets on default service account don't seem to work

I am basically trying to pull GCR images from an Azure Kubernetes cluster.
I have the following for my default service account:
kubectl get serviceaccounts default -o yaml
apiVersion: v1
imagePullSecrets:
- name: gcr-json-key-stg
kind: ServiceAccount
metadata:
  creationTimestamp: "2019-12-24T03:42:15Z"
  name: default
  namespace: default
  resourceVersion: "151571"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: 7f88785d-05de-4568-b050-f3a5dddd8ad1
secrets:
- name: default-token-gn9vb
If I add the same imagePullSecret to individual deployments it works, so the secret itself is correct. However, when I rely on the default service account I get an ImagePullBackOff error, and describing the pod confirms that it is a permission issue.
Am I missing something?
I have made sure that my deployment is not configured with any other specific serviceAccount, so it should be using the default one.
OK, the problem was that the default service account to which I added the imagePullSecret wasn't in the same namespace as the deployment.
Once I patched the default service account in that namespace, it works perfectly well.
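For anyone hitting the same thing, the patch looks roughly like this; my-namespace is only a placeholder for whatever namespace the deployment actually runs in:
# Attach the pull secret to the default service account of the deployment's namespace
kubectl patch serviceaccount default -n my-namespace \
  -p '{"imagePullSecrets": [{"name": "gcr-json-key-stg"}]}'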
After you add the secret for pulling the image to the service account, you need to reference that service account in your pod or deployment. For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-deployment
spec:
  selector:
    matchLabels:
      app: helloworld
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: yourPrivateRegistry/image:tag
        ports:
        - containerPort: 80
      serviceAccountName: pull-image # your service account
And the service account pull-image looks like this:
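Something along these lines, where the secret name my-registry-secret is only a placeholder for the secret holding your registry credentials:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pull-image
imagePullSecrets:
- name: my-registry-secret  # placeholder: the docker-registry secret you created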

Missing required field in DaemonSet

I'm trying to run Cadvisor on a Kubernetes cluster following this doc https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
Contents of the yaml file below:
apiVersion: v1
kind: Namespace
metadata:
  name: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cadvisor
  namespace: kube-system
  labels:
    name: cadvisor
spec:
  selector:
    matchLabels:
      name: cadvisor
  template:
    metadata:
      labels:
        name: cadvisor
    spec:
      containers:
      - image: google/cadvisor:latest
        name: cadvisor
        ports:
        - containerPort: 8080
      restartPolicy: Always
status: {}
But when I try to deploy it:
kubectl apply -f cadvisor.daemonset.yaml
I get the output + error:
error: error validating "cadvisor.daemonset.yaml": error validating data: [ValidationError(DaemonSet.status): missing required field "currentNumberScheduled" in io.k8s.api.apps.v1.DaemonSetStatus, ValidationError(DaemonSet.status): missing required field "numberMisscheduled" in io.k8s.api.apps.v1.DaemonSetStatus, ValidationError(DaemonSet.status): missing required field "desiredNumberScheduled" in io.k8s.api.apps.v1.DaemonSetStatus, ValidationError(DaemonSet.status): missing required field "numberReady" in io.k8s.api.apps.v1.DaemonSetStatus]; if you choose to ignore these errors, turn validation off with --validate=false
But there is no info about these required fields in the documentation or anywhere on Google :(
Do not pass status: {} in the yaml when creating resources. That field is only for status information returned from the API server.
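After deleting the status: {} line, a client-side dry run is a quick way to re-validate the manifest; --dry-run=client assumes kubectl 1.18 or newer, older releases use plain --dry-run:
# Validate the DaemonSet manifest without creating anything on the cluster
kubectl apply -f cadvisor.daemonset.yaml --dry-run=client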

Can not apply Service as "type: ClusterIP" in my GKE cluster

I want to deploy my service as a ClusterIP but am not able to apply it; I get the following error message:
[xetra11#x11-work coopr-infrastructure]$ kubectl apply -f teamcity-deployment.yaml
deployment.apps/teamcity unchanged
ingress.extensions/teamcity unchanged
The Service "teamcity" is invalid: spec.ports[0].nodePort: Forbidden: may not be used when `type` is 'ClusterIP'
This here is my .yaml file:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teamcity
  labels:
    app: teamcity
spec:
  replicas: 1
  selector:
    matchLabels:
      app: teamcity
  template:
    metadata:
      labels:
        app: teamcity
    spec:
      containers:
      - name: teamcity-server
        image: jetbrains/teamcity-server:latest
        ports:
        - containerPort: 8111
---
apiVersion: v1
kind: Service
metadata:
  name: teamcity
  labels:
    app: teamcity
spec:
  type: ClusterIP
  ports:
  - port: 8111
    targetPort: 8111
    protocol: TCP
  selector:
    app: teamcity
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: teamcity
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  backend:
    serviceName: teamcity
    servicePort: 8111
1) Apply the configuration to the resource by filename, forcing the update:
kubectl apply -f teamcity-deployment.yaml --force
The resource will be created if it doesn't exist yet. To use apply, always create the resource initially with either apply or create --save-config.
2) If the first one fails, you can force-replace, i.e. delete and then re-create the resource:
kubectl replace --force -f teamcity-deployment.yaml
With --force the resource is removed from the API immediately, bypassing graceful deletion; note that immediate deletion of some resources may result in inconsistency or data loss.
On GKE the ingress can only point to a Service of type LoadBalancer or NodePort. You can see the error output of the ingress by running:
kubectl describe ingress teamcity
You will see an error there: as per your YAML, if you are using an nginx controller you have to use a Service of type NodePort.
Some documentation:
https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md#gce-gke
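If you go the NodePort route, only the type line of the Service needs to change; a sketch based on the manifest above:
apiVersion: v1
kind: Service
metadata:
  name: teamcity
  labels:
    app: teamcity
spec:
  type: NodePort  # NodePort (or LoadBalancer) so the ingress controller can reach the backend
  ports:
  - port: 8111
    targetPort: 8111
    protocol: TCP
  selector:
    app: teamcity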
Did you just recently change the Service definition from NodePort to ClusterIP?
Then it might be this issue: github.com/kubernetes/kubectl/issues/221.
You need to use kubectl replace or kubectl apply --force.

Kube / create deployment with config map

I'm new to Kubernetes, and I'm trying to create a deployment with a ConfigMap file. I have the following:
app-mydeploy.yaml
--------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-mydeploy
  labels:
    app: app-mydeploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mydeploy
  template:
    metadata:
      labels:
        app: mydeploy
    spec:
      containers:
      - name: mydeploy-1
        image: mydeploy:tag-latest
        envFrom:
        - configMapRef:
            name: map-mydeploy
map-mydeploy
-----
apiVersion: v1
kind: ConfigMap
metadata:
  name: map-mydeploy
  namespace: default
data:
  my_var: 10.240.12.1
I created the Deployment and the ConfigMap with the following commands:
kubectl create -f app-mydeploy.yaml
kubectl create configmap map-mydeploy --from-file=map-mydeploy
When I run kubectl describe deployments, I get (among other things):
Environment Variables from:
map-mydeploy ConfigMap Optional: false
kubectl describe configmaps map-mydeploy also gives me the right results.
The issue is that my container is in CrashLoopBackOff; when I look at the logs, it says: time="2019-02-05T14:47:53Z" level=fatal msg="Required environment variable my_var is not set.
So the container reports that my_var is not defined in its environment variables.
What am I doing wrong?
I think you are missing your key in the command
kubectl create configmap map-mydeploy --from-file=map-mydeploy
Try this instead: kubectl create configmap map-mydeploy --from-file=my_var=map-mydeploy
Also, since you are only using one value, I highly recommend creating your ConfigMap from a literal, kubectl create configmap my-config --from-literal=my_var=10.240.12.1, and then referencing the ConfigMap in your deployment as you are currently doing.
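A quick sketch of the literal route end to end, using the name the deployment actually references (map-mydeploy); kubectl rollout restart assumes kubectl 1.15 or newer, otherwise delete the pods so they are recreated with the new environment:
# Recreate the ConfigMap with an explicit key/value pair
kubectl delete configmap map-mydeploy
kubectl create configmap map-mydeploy --from-literal=my_var=10.240.12.1

# Confirm the data section now contains the my_var key
kubectl get configmap map-mydeploy -o yaml

# Restart the deployment so new pods pick up the environment variable
kubectl rollout restart deployment app-mydeploy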