Unknown field "setHostnameAsFQDN" despite using latest kubectl client - kubernetes

I have a deployment yaml file that looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      setHostnameAsFQDN: true
      hostname: hello
      subdomain: world
      containers:
      - name: hello-kubernetes
        image: redis
However, I am getting this error:
$ kubectl apply -f dep.yaml
error: error validating "dep.yaml": error validating data: ValidationError(Deployment.spec.template.spec): unknown field "setHostnameAsFQDN" in io.k8s.api.core.v1.PodSpec; if you choose to ignore these errors, turn validation off with --validate=false
My kubectl version:
$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
After specifying --validate=false, hostname and hostname -f still return different values inside the pod.
I believe I misunderstood something. The docs say that setHostnameAsFQDN is available from Kubernetes v1.20.

You showed your kubectl version. Your Kubernetes (server) version also needs to be v1.20, so make sure the cluster itself is running v1.20.
Use kubectl version (without --client) to see both the client and server versions: the client version refers to kubectl and the server version refers to the cluster's Kubernetes version.
From the k8s v1.20 release notes: Previously introduced in 1.19 behind a feature gate, SetHostnameAsFQDN is now enabled by default. More details on this behavior are available in the documentation for DNS for Services and Pods.
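For reference, once the server is on v1.20, a pod spec like the one in the question should make both commands agree; a minimal sketch (the pod name is hypothetical, and the world subdomain assumes a matching headless Service exists):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-fqdn          # hypothetical name
spec:
  setHostnameAsFQDN: true   # requires a v1.20+ API server
  hostname: hello
  subdomain: world          # DNS needs a headless Service named "world"
  containers:
  - name: main
    image: redis
```

With a v1.20+ server, both hostname and hostname -f inside the container should return hello.world.&lt;namespace&gt;.svc.cluster.local.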

Related

Create deployments with kubectl version 1.18 +

At page 67 of Kubernetes: Up and Running, 2nd Edition, the author uses the command below in order to create a Deployment:
kubectl run alpaca-prod \
--image=gcr.io/kuar-demo/kuard-amd64:blue \
--replicas=2 \
--labels="ver=1,app=alpaca,env=prod"
However, this command is deprecated in kubectl 1.19+; it now creates a Pod instead:
$ kubectl run alpaca-prod \
--image=gcr.io/kuar-demo/kuard-amd64:blue \
--replicas=2 \
--labels="ver=1,app=alpaca,env=prod"
Flag --replicas has been deprecated, has no effect and will be removed in the future.
pod/alpaca-prod created
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-21T01:11:42Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Is there a way to use kubectl run to create a deployment with replicas and custom label with kubectl 1.19+?
It is now preferred to use kubectl create, instead of kubectl run, to create a new Deployment.
This is the corresponding command to your kubectl run:
kubectl create deployment alpaca-prod --image=gcr.io/kuar-demo/kuard-amd64:blue --replicas=2
Labels
By default, kubectl create deployment alpaca-prod gives you the label app=alpaca-prod.
To get the other labels, you need to add them afterwards. Use kubectl label to add labels to the Deployment, e.g.
kubectl label deployment alpaca-prod ver=1
Note: this only adds the label to the Deployment and not to the Pod template, i.e. the Pods will not get the label. To also add the label to the Pods, you need to edit the template: part of the Deployment yaml.
Note: with kubectl 1.18 things have changed. It is no longer possible to use kubectl run to create Jobs, CronJobs or Deployments; only Pods still work.
So no, you cannot create a Deployment with kubectl run from 1.18 onward.
Step 1: Create the deployment with kubectl create
kubectl create deploy alpaca-prod --image=gcr.io/kuar-demo/kuard-amd64:blue --replicas=2
Step 2: Update the labels with kubectl label
kubectl label deploy -l app=alpaca-prod ver=1
kubectl label deploy -l app=alpaca-prod env=prod
kubectl label deploy -l app=alpaca-prod app=alpaca --overwrite
(Changing an existing label requires --overwrite, and the app label is overwritten last so that the -l app=alpaca-prod selector still matches for the first two commands.)
Here is the yaml file which produces the expected result for p67 of 'Kubernetes: Up and Running, 2nd Edition':
apiVersion: apps/v1
kind: Deployment
metadata:
  name: alpaca-prod
spec:
  selector:
    matchLabels:
      ver: "1"
      app: "alpaca"
      env: "prod"
  replicas: 2
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        ver: "1"
        app: "alpaca"
        env: "prod"
    spec:
      containers:
      - name: kuard
        image: gcr.io/kuar-demo/kuard-amd64:blue

How does priorityClass Works

I try to use priorityClass.
I create two pods: the first has system-node-critical priority and the second cluster-node-critical priority.
Both pods need to run on a node labeled with nodeName: k8s-minion1, but that node has only 2 CPUs while each pod requests 1.5 CPU.
I then expect the second pod to run and the first to be in Pending status. Instead, the first pod always runs, no matter which priorityClass I assign to the second pod.
I even tried to label the node after applying my manifest, but that does not change anything.
Here is my manifest :
apiVersion: v1
kind: Pod
metadata:
  name: firstpod
spec:
  containers:
  - name: container
    image: nginx
    resources:
      requests:
        cpu: 1.5
  nodeSelector:
    nodeName: k8s-minion1
  priorityClassName: cluster-node-critical
---
apiVersion: v1
kind: Pod
metadata:
  name: secondpod
spec:
  containers:
  - name: container
    image: nginx
    resources:
      requests:
        cpu: 1.5
  priorityClassName: system-node-critical
  nodeSelector:
    nodeName: k8s-minion1
It is worth noting that I get the error "unknown object: priorityclass" when I run kubectl get priorityclass, and when I export my running pod to yaml with kubectl get pod secondpod -o yaml, I can't find any priorityClassName: field.
Here Are my version infos:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0+coreos.0", GitCommit:"6bb2e725fc2876cd94b3900fc57a1c98ca87a08b", GitTreeState:"clean", BuildDate:"2018-04-02T16:49:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Any ideas why this is not working?
Thanks in advance,
Abdelghani
PriorityClasses first appeared in k8s 1.8 as an alpha feature.
The feature graduated to beta in 1.11.
You are using 1.10, which means the feature is still alpha there.
Alpha features are not enabled by default, so you would need to enable it explicitly.
Unfortunately, k8s 1.10 is no longer supported, so I'd suggest upgrading at least to 1.14, where the priorityClass feature became stable.
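For completeness, on a cluster where the feature is available, a custom PriorityClass is a cluster-scoped object; a minimal sketch (the name and value here are made up):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority   # hypothetical name
value: 1000000          # larger value = higher scheduling priority
globalDefault: false
description: "Example class for important workloads."
```

Pods then reference it via spec.priorityClassName: high-priority. Note that system-node-critical and system-cluster-critical are built-in classes intended for critical system pods, not regular workloads.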

why was k8s service deleted but the cluster IP works still

I can easily reproduce this and could not find an answer for this issue either in the k8s docs or the community.
Simple reproduce steps:
create service and endpoint with below config
---
kind: Service
apiVersion: v1
metadata:
  name: hostname
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 9376
---
kind: Endpoints
apiVersion: v1
metadata:
  name: hostname
subsets:
- addresses:
  - ip: 10.244.44.250
  - ip: 10.244.154.235
  ports:
  - port: 9376
kubectl apply -f <filename> to apply the config
test the service and it works perfectly. Assume the cluster IP is A
kubectl delete -f <filename> to delete the service and endpoint and kubectl apply -f <filename> again
we got another cluster IP B, which also works perfectly
however, cluster IP A was not removed as expected. I can still use A to access the service.
Update the endpoint definition (add a new endpoint IP or remove one) and apply; B sees the change while A still uses the old config.
Is there someone can explain what happens there?
My k8s version is:
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:09:21Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Google cloud Kubernetes deployment error: Field is immutable

After fixing the problem from this topic, Can't use Google Cloud Kubernetes substitutions (the yaml files are all there, so I won't copy-paste them again), I ran into a new problem. I'm making a new topic because the previous one already has its correct answer.
Step #2: Running: kubectl apply -f deployment.yaml
Step #2: Warning:
kubectl apply should be used on resource created by either kubectl
create --save-config or kubectl apply
Step #2: The Deployment
"myproject" is invalid: spec.selector: Invalid value:
v1.LabelSelector{MatchLabels:map[string]string{"app":"myproject",
"run":"myproject"},
MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is
immutable
I've checked similar issues but haven't been able to find anything related.
Also, is that possible that this error related to upgrading App Engine -> Docker -> Kubernetes? I created valid configuration on each step. Maybe there are some things that were created and immutable now? What should I do in this case?
One more note that may matter: it says "kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply" (see above), but executing
kubectl create deployment myproject --image=gcr.io/myproject/myproject
gives me this:
Error from server (AlreadyExists): deployments.apps "myproject" already exists
which is expected but, at the same time, contradicts the warning above (at least from my perspective)
Any idea?
Output of kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.7", GitCommit:"8fca2ec50a6133511b771a11559e24191b1aa2b4", GitTreeState:"clean", BuildDate:"2019-09-18T14:47:22Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.11-gke.14", GitCommit:"56d89863d1033f9668ddd6e1c1aea81cd846ef88", GitTreeState:"clean", BuildDate:"2019-11-07T19:12:22Z", GoVersion:"go1.12.11b4", Compiler:"gc", Platform:"linux/amd64"}
Current YAML file:
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: [
    '-c',
    'docker pull gcr.io/$PROJECT_ID/myproject:latest || exit 0'
  ]
- name: 'gcr.io/cloud-builders/docker'
  args: [
    'build',
    '-t',
    'gcr.io/$PROJECT_ID/myproject:$BRANCH_NAME-$COMMIT_SHA',
    '-t',
    'gcr.io/$PROJECT_ID/myproject:latest',
    '.'
  ]
- name: 'gcr.io/cloud-builders/kubectl'
  args: [ 'apply', '-f', 'deployment.yaml' ]
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=<region>'
  - 'CLOUDSDK_CONTAINER_CLUSTER=myproject'
- name: 'gcr.io/cloud-builders/kubectl'
  args: [
    'set',
    'image',
    'deployment',
    'myproject',
    'myproject=gcr.io/$PROJECT_ID/myproject:$BRANCH_NAME-$COMMIT_SHA'
  ]
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=<region>'
  - 'CLOUDSDK_CONTAINER_CLUSTER=myproject'
  - 'DB_PORT=5432'
  - 'DB_SCHEMA=public'
  - 'TYPEORM_CONNECTION=postgres'
  - 'FE=myproject'
  - 'V=1'
  - 'CLEAR_DB=true'
  - 'BUCKET_NAME=myproject'
  - 'BUCKET_TYPE=google'
  - 'KMS_KEY_NAME=storagekey'
timeout: 1600s
images:
- 'gcr.io/$PROJECT_ID/myproject:$BRANCH_NAME-$COMMIT_SHA'
- 'gcr.io/$PROJECT_ID/myproject:latest'
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myproject
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myproject
  template:
    metadata:
      labels:
        app: myproject
    spec:
      containers:
      - name: myproject
        image: gcr.io/myproject/github.com/weekendman/{{repo name here}}:latest
        ports:
        - containerPort: 80
From apps/v1 on, a Deployment's label selector is immutable after it gets created.
Excerpt from the Kubernetes documentation:
Note: In API version apps/v1, a Deployment's label selector is
immutable after it gets created.
So you can delete this deployment first, then apply it again.
The "field is immutable" error appears because the selector differs from the one on your existing deployment.
Try looking at the existing deployment with kubectl get deployment myproject -o yaml. I suspect the existing yaml has a different matchLabels stanza.
Specifically, your file has:
matchLabels:
  app: myproject
My guess is the output of kubectl get deployment -o yaml will have something different, like:
matchLabels:
  app: old-project-name
or
matchLabels:
  app: myproject
  version: alpha
The new deployment cannot change the matchLabels stanza because it is immutable. That stanza in the new deployment must match the old one. If you want to change it, you need to delete the old deployment with kubectl delete deployment myproject.
Note: if you do that in production your app will be unavailable for a while. (A much longer discussion about how to do this in production is not useful here.)
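For illustration only (guessing from the error message, which shows both app and run keys in a selector), the conflict would look something like this:

```yaml
# Selector on one version of the Deployment (e.g. created earlier,
# possibly by an older kubectl run/create):
selector:
  matchLabels:
    app: myproject
    run: myproject
---
# Selector in the current deployment.yaml -- the mismatch is what
# triggers the "field is immutable" error:
selector:
  matchLabels:
    app: myproject
```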

How to enable extensions API in Kubernetes?

I'd like to try out the new Ingress resource available in Kubernetes 1.1 in Google Container Engine (GKE). But when I try to create for example the following resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: testsvc
    servicePort: 80
using:
$ kubectl create -f test-ingress.yaml
I end up with the following error message:
error: could not read an encoded object from test-ingress.yaml: API version "extensions/v1beta1" in "test-ingress.yaml" isn't supported, only supports API versions ["v1"]
error: no objects passed to create
When I run kubectl version it shows:
Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.7", GitCommit:"6234d6a0abd3323cd08c52602e4a91e47fc9491c", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.1", GitCommit:"92635e23dfafb2ddc828c8ac6c03c7a7205a84d8", GitTreeState:"clean"}
But I seem to have the latest kubectl component installed since running gcloud components update kubectl just gives me:
All components are up to date.
So how do I enable the extensions/v1beta1 in Kubernetes/GKE?
The issue is that your client (kubectl) doesn't support the new ingress resource because it hasn't been updated to 1.1 yet. This is mentioned in the Google Container Engine release notes:
The packaged kubectl is version 1.0.7, consequently new Kubernetes 1.1
APIs like autoscaling will not be available via kubectl until next
week's push of the kubectl binary.
along with the solution (download the newer binary manually).