Google Cloud Kubernetes deployment error: Field is immutable - kubernetes

After fixing the problem from this topic, Can't use Google Cloud Kubernetes substitutions (the YAML files are all there, so I won't copy-paste them again), I ran into a new problem. I'm making a new topic because the previous one already has a correct answer.
Step #2: Running: kubectl apply -f deployment.yaml
Step #2: Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
Step #2: The Deployment "myproject" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"myproject", "run":"myproject"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
I've checked similar issues but haven't been able to find anything related.
Also, is it possible that this error is related to upgrading App Engine -> Docker -> Kubernetes? I created a valid configuration at each step. Maybe some things were created along the way that are now immutable? What should I do in this case?
One more note that may matter: it says "kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply" (you can see above), but executing
kubectl create deployment myproject --image=gcr.io/myproject/myproject
gives me this
Error from server (AlreadyExists): deployments.apps "myproject" already exists
which is actually expected but, at the same time, contradicts the warning above (at least from my perspective).
Any idea?
Output of kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.7", GitCommit:"8fca2ec50a6133511b771a11559e24191b1aa2b4", GitTreeState:"clean", BuildDate:"2019-09-18T14:47:22Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.11-gke.14", GitCommit:"56d89863d1033f9668ddd6e1c1aea81cd846ef88", GitTreeState:"clean", BuildDate:"2019-11-07T19:12:22Z", GoVersion:"go1.12.11b4", Compiler:"gc", Platform:"linux/amd64"}
Current YAML file:
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: [
    '-c',
    'docker pull gcr.io/$PROJECT_ID/myproject:latest || exit 0'
  ]
- name: 'gcr.io/cloud-builders/docker'
  args: [
    'build',
    '-t',
    'gcr.io/$PROJECT_ID/myproject:$BRANCH_NAME-$COMMIT_SHA',
    '-t',
    'gcr.io/$PROJECT_ID/myproject:latest',
    '.'
  ]
- name: 'gcr.io/cloud-builders/kubectl'
  args: [ 'apply', '-f', 'deployment.yaml' ]
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=<region>'
  - 'CLOUDSDK_CONTAINER_CLUSTER=myproject'
- name: 'gcr.io/cloud-builders/kubectl'
  args: [
    'set',
    'image',
    'deployment',
    'myproject',
    'myproject=gcr.io/$PROJECT_ID/myproject:$BRANCH_NAME-$COMMIT_SHA'
  ]
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=<region>'
  - 'CLOUDSDK_CONTAINER_CLUSTER=myproject'
  - 'DB_PORT=5432'
  - 'DB_SCHEMA=public'
  - 'TYPEORM_CONNECTION=postgres'
  - 'FE=myproject'
  - 'V=1'
  - 'CLEAR_DB=true'
  - 'BUCKET_NAME=myproject'
  - 'BUCKET_TYPE=google'
  - 'KMS_KEY_NAME=storagekey'
timeout: 1600s
images:
- 'gcr.io/$PROJECT_ID/myproject:$BRANCH_NAME-$COMMIT_SHA'
- 'gcr.io/$PROJECT_ID/myproject:latest'
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myproject
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myproject
  template:
    metadata:
      labels:
        app: myproject
    spec:
      containers:
      - name: myproject
        image: gcr.io/myproject/github.com/weekendman/{{repo name here}}:latest
        ports:
        - containerPort: 80

From apps/v1 on, a Deployment’s label selector is immutable after it gets created.
Excerpt from the Kubernetes documentation:
Note: In API version apps/v1, a Deployment’s label selector is
immutable after it gets created.
So, you can delete this deployment first, then apply it.
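For example (assuming the deployment from the question, named myproject, and that a brief outage is acceptable):
kubectl delete deployment myproject   # remove the deployment whose selector no longer matches
kubectl apply -f deployment.yaml      # re-create it with the new selector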

The selector is rejected with field is immutable because it differs from the selector of your previous deployment.
Try looking at the existing deployment with kubectl get deployment -o yaml. I suspect the existing YAML has a different matchLabels stanza.
Specifically your file has:
matchLabels:
  app: myproject
My guess is the output of kubectl get deployment -o yaml will have something different, like:
matchLabels:
  app: old-project-name
or
matchLabels:
  app: myproject
  version: alpha
The new deployment cannot change the matchLabels stanza because, well, it is immutable. That stanza in the new deployment must match the old one. If you want to change it, you need to delete the old deployment with kubectl delete deployment myproject.
Note: if you do that in production your app will be unavailable for a while. (A much longer discussion about how to do this in production is not useful here.)
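To confirm what the live selector actually is before deleting anything, something like this (again assuming the deployment is named myproject) will print it:
kubectl get deployment myproject -o jsonpath='{.spec.selector}'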

Related

Kubernetes: Cannot deploy a simple "Couchbase" service

I am new to Kubernetes. I am trying to mimic behavior a bit like what I do with docker-compose when I serve a Couchbase database in a Docker container.
couchbase:
  image: couchbase
  volumes:
    - ./couchbase:/opt/couchbase/var
  ports:
    - 8091-8096:8091-8096
    - 11210-11211:11210-11211
I managed to create a cluster on my localhost using a tool called "kind":
kind create cluster --name my-cluster
kubectl config use-context my-cluster
Then I am trying to use that cluster to deploy a Couchbase service
I created a file named couchbase.yaml with the following content (I am trying to mimic what I do with my docker-compose file).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: couchbase
  namespace: my-project
  labels:
    platform: couchbase
spec:
  replicas: 1
  selector:
    matchLabels:
      platform: couchbase
  template:
    metadata:
      labels:
        platform: couchbase
    spec:
      volumes:
      - name: couchbase-data
        hostPath:
          # directory location on host
          path: /home/me/my-project/couchbase
          # this field is optional
          type: Directory
      containers:
      - name: couchbase
        image: couchbase
        volumeMounts:
        - mountPath: /opt/couchbase/var
          name: couchbase-data
Then I start the deployment like this:
kubectl create namespace my-project
kubectl apply -f couchbase.yaml
kubectl expose deployment -n my-project couchbase --type=LoadBalancer --port=8091
However, my deployment never actually starts:
kubectl get deployments -n my-project couchbase
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
couchbase   0/1     1            0           6m14s
And when I look for the logs I see this:
kubectl logs -n my-project -lplatform=couchbase --all-containers=true
Error from server (BadRequest): container "couchbase" in pod "couchbase-589f7fc4c7-th2r2" is waiting to start: ContainerCreating
As OP mentioned in a comment, the issue was solved using an extra mount, as explained in the documentation: https://kind.sigs.k8s.io/docs/user/configuration/#extra-mounts
Here is OP's comment, formatted so it's more readable:
the error shows up when I run this command:
kubectl describe pods -n my-project couchbase
I could fix it by creating a new kind cluster:
kind create cluster --config cluster.yaml
Passing this content in cluster.yaml:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: inf
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /home/me/my-project/couchbase
    containerPath: /couchbase
In couchbase.yaml the path becomes path: /couchbase of course.
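For clarity, the volume definition in couchbase.yaml would then look roughly like this (a sketch based on the OP's manifest above):
volumes:
- name: couchbase-data
  hostPath:
    # this path now lives inside the kind node, provided by the extraMounts entry above
    path: /couchbase
    type: Directory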

Unknown field "setHostnameAsFQDN" despite using latest kubectl client

I have a deployment yaml file that looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      setHostnameAsFQDN: true
      hostname: hello
      subdomain: world
      containers:
      - name: hello-kubernetes
        image: redis
However, I am getting this error:
$ kubectl apply -f dep.yaml
error: error validating "dep.yaml": error validating data: ValidationError(Deployment.spec.template.spec): unknown field "setHostnameAsFQDN" in io.k8s.api.core.v1.PodSpec; if you choose to ignore these errors, turn validation off with --validate=false
My kubectl version:
$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
After specifying --validate=false, hostname and hostname -f still return different values.
I believe I misunderstood something. The docs say that setHostnameAsFQDN will be available from Kubernetes v1.20.
You showed your kubectl version. Your Kubernetes server version also needs to be v1.20, so make sure the cluster is running v1.20.
Use kubectl version to see both the client and the server version, where the client version refers to kubectl and the server version refers to the Kubernetes cluster.
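For example:
kubectl version   # prints both Client Version (kubectl) and Server Version (the cluster); the server must also be v1.20+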
As per the k8s v1.20 release notes: previously introduced in 1.19 behind a feature gate, SetHostnameAsFQDN is now enabled by default. More details on this behavior are available in the documentation for DNS for Services and Pods.

kubectl apply -f works on PC but not in Gitlab Runner

I am trying to deploy to Kubernetes using GitLab CI/CD. No matter what I do, kubectl apply -f helloworld-deployment.yml --record in my .gitlab-ci.yml always returns that the deployment is unchanged:
$ kubectl apply -f helloworld-deployment.yml --record
deployment.apps/helloworld-deployment unchanged
Even if I change the tag on the image, or if the deployment doesn't exist at all. However, if I run kubectl apply -f helloworld-deployment.yml --record from my own computer, it works fine and updates when a tag changes and creates the deployment when no deployment exist. Below is my .gitlab-ci.yml that I'm testing with:
image: docker:dind
services:
- docker:dind
stages:
- deploy
deploy-prod:
  stage: deploy
  image: google/cloud-sdk
  environment: production
  script:
  - kubectl apply -f helloworld-deployment.yml --record
Below is helloworld-deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: registry.gitlab.com/repo/helloworld:test
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
      imagePullSecrets:
      - name: regcred
Update:
This is what I see if I run kubectl rollout history deployments/helloworld-deployment and there is no existing deployment:
Error from server (NotFound): deployments.apps "helloworld-deployment" not found
If the deployment already exists, I see this:
REVISION CHANGE-CAUSE
1 kubectl apply --filename=helloworld-deployment.yml --record=true
With only one revision.
I did notice this time that when I changed the tag, the output from my Gitlab Runner was:
deployment.apps/helloworld-deployment configured
However, there were no new pods. When I ran it from my PC, then I did see new pods created.
Update:
Running kubectl get pods shows two different pods in Gitlab runner than I see on my PC.
I definitely only have one kubernetes cluster, but kubectl config view shows some differences (the server url is the same). The output for contexts shows different namespaces. Does this mean I need to set a namespace either in my yml file or pass it in the command? Here is the output from the Gitlab runner:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: URL
  name: gitlab-deploy
contexts:
- context:
    cluster: gitlab-deploy
    namespace: helloworld-16393682-production
    user: gitlab-deploy
  name: gitlab-deploy
current-context: gitlab-deploy
kind: Config
preferences: {}
users:
- name: gitlab-deploy
  user:
    token: [MASKED]
And here is the output from my PC:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: URL
contexts:
- context:
    cluster: do-nyc3-helloworld
    user: do-nyc3-helloworld-admin
  name: do-nyc3-helloworld
current-context: do-nyc3-helloworld
kind: Config
preferences: {}
users:
- name: do-nyc3-helloworld-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - kubernetes
      - cluster
      - kubeconfig
      - exec-credential
      - --version=v1beta1
      - --context=default
      - VALUE
      command: doctl
      env: null
It looks like GitLab adds its own default namespace:
<project_name>-<project_id>-<environment>
Because of this, I put this in the metadata section of helloworld-deployment.yml:
namespace: helloworld-16393682-production
And then it worked as expected. It was deploying before, but kubectl get pods didn't show it since that command was using the default namespace.
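In other words, the metadata block of helloworld-deployment.yml ends up looking like this:
metadata:
  name: helloworld-deployment
  namespace: helloworld-16393682-production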
Since GitLab uses a custom namespace, you need to add a namespace flag to your command to display your pods:
kubectl get pods -n helloworld-16393682-production
You can set the default namespace for kubectl commands. See here.
You can permanently save the namespace for all subsequent kubectl commands in that context.
In your case it could be:
kubectl config set-context --current --namespace=helloworld-16393682-production
Or, if you are using multiple clusters, you can switch between contexts (each with its own default namespace) using:
kubectl config use-context helloworld-16393682-production
In this link you can see a lot of useful commands and configurations.
I hope it helps! =)

Kubectl apply command for updating existing service resource

Currently I'm using Kubernetes version 1.11.+. Previously I always used the following command in my cloud build scripts:
- name: 'gcr.io/cloud-builders/kubectl'
  id: 'deploy'
  args:
  - 'apply'
  - '-f'
  - 'k8s'
  - '--recursive'
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_REGION}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLUSTER_NAME}'
The commands worked just as expected; at that time I was using k8s version 1.10.+. However, recently I got the following error:
spec.clusterIP: Invalid value: "": field is immutable
metadata.resourceVersion: Invalid value: "": must be specified for an update
So I'm wondering if this is an expected behavior for Service resources?
Here's my YAML config for my service:
apiVersion: v1
kind: Service
metadata:
  name: {name}
  namespace: {namespace}
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "{backend-config-name}"}'
spec:
  ports:
  - port: {port-num}
    targetPort: {port-num}
  selector:
    app: {label}
    environment: {env}
  type: NodePort
This is due to https://github.com/kubernetes/kubernetes/issues/71042.
https://github.com/kubernetes/kubernetes/pull/66602 should be cherry-picked to 1.11.
I sometimes meet this error when manually running kubectl apply -f somefile.yaml.
I think it happens when someone has changed the specification through the Kubernetes Dashboard instead of applying new changes through kubectl apply.
To fix it, I run kubectl edit services/servicename, which opens the YAML specification in my default editor. I then remove the fields metadata.resourceVersion and spec.clusterIP, hit save, and run kubectl apply -f somefile.yaml again.
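As a rough sketch of that workaround (servicename stands in for the actual Service name):
kubectl edit services/servicename   # in the editor, delete metadata.resourceVersion and spec.clusterIP, then save
kubectl apply -f somefile.yaml      # re-apply the original manifest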
You need to set spec.clusterIP in your Service YAML file, with the value replaced by the clusterIP address already assigned to the Service, as shown below:
spec:
  clusterIP:
Your issue is discussed in the following GitHub issue, where there is also a workaround to help you bypass it.
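If you need to look up the clusterIP that the API server already assigned, something like this should print it (servicename is a placeholder):
kubectl get service servicename -o jsonpath='{.spec.clusterIP}'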

Kubectl apply does not update pods or deployments

I'm using a CI to update my Kubernetes cluster whenever there's an update to an image. Whenever the image is pushed with the latest tag, it runs kubectl apply on the existing deployment, but nothing gets updated.
This is what runs:
$ kubectl apply --record --filename /tmp/deployment.yaml
My goal is when the apply is ran that a rolling deployment gets executed.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: us.gcr.io/joule-eed41/api:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 1337
        args:
        - /bin/sh
        - -c
        - echo running api;npm start
        env:
        - name: NAMESPACE
          valueFrom:
            configMapKeyRef:
              name: config
              key: NAMESPACE
As others suggested, use a specific tag.
Set the new image using the following command:
kubectl set image deployment/deployment_name deployment_name=image_name:image_tag
In your case it would be
kubectl set image deployment/api api=us.gcr.io/joule-eed41/api:0.1
As @ksholla20 mentioned, using kubectl set image is a good option for many (most?) cases.
But if you can't change the image tag consider using:
1 ) kubectl rollout restart deployment/<name>
(reference).
2 ) kubectl patch deployment <name> -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"version\":\"$CURRENT_BUILD_HASH_OR_DATE\"}}}}}" (reference)
(*) Notice that the patch command allows you to change specific properties in the deployment (or any other chosen object), like the pod template labels or, as in your example, the value of the NAMESPACE environment variable.
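Applied to the question's manifest (the deployment is named api), option 1 above would simply be:
kubectl rollout restart deployment/api   # triggers a fresh rollout without changing the image tag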
I've run into the same problem and none of the solutions posted so far helped. The solution is easy, but not easy to see or predict. The applied YAML will generate both a deployment and a replicaset the first time it's run. Unfortunately, applying changes to the manifest likely only replaces the replicaset, while the deployment remains unchanged. This is a problem because some changes need to happen at the deployment level, but the old deployment hangs around. For best results, delete the deployment and ensure all previous deployments and replicasets are deleted. Then apply the updated manifest.
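A minimal sketch of that clean-slate approach, assuming the deployment from the question (named api):
kubectl delete deployment api                     # cascading delete also removes its ReplicaSets and Pods
kubectl get deployments,replicasets               # verify nothing from the old rollout is left behind
kubectl apply --record --filename /tmp/deployment.yaml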