New 1.8.10 cluster spun up with kops.
K8S 1.8 introduces a new feature, Pod Priority and Preemption.
More information: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#how-to-use-priority-and-preemption
kube-apiserver is logging errors:
I0321 16:27:50.922589 7 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations: (140.067µs) 404 [[kube-apiserver/v1.8.10 (linux/amd64) kubernetes/044cd26] 127.0.0.1:47500]
I0321 16:27:51.257756 7 wrap.go:42] GET /apis/scheduling.k8s.io/v1alpha1/priorityclasses?resourceVersion=0: (168.391µs) 404 [[kube-apiserver/v1.8.10 (linux/amd64) kubernetes/044cd26] 127.0.0.1:47500]
E0321 16:27:51.258176 7 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:73: Failed to list *scheduling.PriorityClass: the server could not find the requested resource (get priorityclasses.scheduling.k8s.io)
I don't quite understand why. Nothing should be accessing it, since the feature isn't even enabled yet (it's alpha).
No pod is using priorityClassName.
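For reference, opting in would require a pod spec like the sketch below (the pod name and PriorityClass name are hypothetical); nothing in the cluster has anything like this:
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                 # hypothetical
spec:
  priorityClassName: high-priority  # hypothetical PriorityClass, not deployed here
  containers:
  - name: app
    image: nginx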
Running explain:
kubectl explain priorityclass
error: API version: scheduling.k8s.io/v1alpha1 is not supported by the server. Use one of: [apiregistration.k8s.io/v1beta1 extensions/v1beta1 apps/v1beta1 apps/v1beta2 authentication.k8s.io/v1 authentication.k8s.io/v1beta1 authorization.k8s.io/v1 authorization.k8s.io/v1beta1 autoscaling/v1 autoscaling/v2beta1 batch/v1 batch/v1beta1 certificates.k8s.io/v1beta1 networking.k8s.io/v1 policy/v1beta1 rbac.authorization.k8s.io/v1 rbac.authorization.k8s.io/v1beta1 storage.k8s.io/v1 storage.k8s.io/v1beta1 apiextensions.k8s.io/v1beta1 v1]
Is this normal, or is it kops-specific?
I think it is related to this kops option in its config (kops get --name $NAME -oyaml):
kubeAPIServer:
  runtimeConfig:
    admissionregistration.k8s.io/v1alpha1: "true"
Anyway, all components work through the API server, so it is not surprising that, depending on configuration, some of them try to call disabled features. At the very least they have to check which APIs are supported, which explains these requests.
So I don't think you need to worry about it; it is a configuration-related message. Or just enable the feature, which will make the warning messages go away.
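For example, a minimal sketch of the kops cluster spec change (apply with kops edit cluster $NAME, then kops update cluster $NAME --yes and a rolling update; the extra runtimeConfig entry is an assumption based on the block above):
kubeAPIServer:
  runtimeConfig:
    admissionregistration.k8s.io/v1alpha1: "true"
    scheduling.k8s.io/v1alpha1: "true"   # assumed entry to expose the alpha PriorityClass API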
Related
I'm adding an Ingress as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cheddar
spec:
  rules:
  - host: cheddar.213.215.191.78.nip.io
    http:
      paths:
      - backend:
          service:
            name: cheddar
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
but the logs complain:
W0205 15:14:07.482439 1 warnings.go:67] extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
time="2021-02-05T15:14:07Z" level=info msg="Updated ingress status" namespace=default ingress=cheddar
W0205 15:18:19.104225 1 warnings.go:67] networking.k8s.io/v1beta1 IngressClass is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 IngressClassList
Why? What's the correct yaml to use?
I'm currently on microk8s 1.20
I have analyzed your issue and come to the following conclusions:
The Ingress will work; the warnings you see just inform you about the available API versioning. You don't have to worry about them. I've seen the same warnings:
#microk8s:~$ kubectl describe ing
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
As for the "why" this is happening even when you use apiVersion: networking.k8s.io/v1, I have found the following explanation:
This is working as expected. When you create an ingress object, it can be read via any version (the server handles converting into the requested version). kubectl get ingress is an ambiguous request, since it does not indicate what version is desired to be read.
When an ambiguous request is made, kubectl searches the discovery docs returned by the server to find the first group/version that contains the specified resource.
For compatibility reasons, extensions/v1beta1 has historically been preferred over all other api versions. Now that ingress is the only resource remaining in that group, and is deprecated and has a GA replacement, 1.20 will drop it in priority so that kubectl get ingress would read from networking.k8s.io/v1, but a 1.19 server will still follow the historical priority.
If you want to read a specific version, you can qualify the get request (like kubectl get ingresses.v1.networking.k8s.io ...) or can pass in a manifest file to request the same version specified in the file (kubectl get -f ing.yaml -o yaml)
Long story short: despite using the proper apiVersion, the deprecated one is still treated as the default, which generates the warning you experience.
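For example, to make the read unambiguous on your microk8s 1.20 node (using the Ingress name from your manifest), you can qualify the request:
kubectl get ingresses.v1.networking.k8s.io cheddar -o yaml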
I also see that changes have been made recently, so I assume this is still being worked on.
I had the same issue and was unable to update the k8s cluster, which was subscribed to a release channel.
One of the reasons these log warnings are generated is the ClusterRole definition of external-dns. external-dns keeps querying the ingresses in the k8s cluster as per the rules defined in the ClusterRole:
- apiGroups: ["extensions", "networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "watch", "list"]
This rule can be found in the Helm chart here. It queries the old extensions group for ingresses as well, which keeps generating those logs. Updating external-dns (the chart this ClusterRole belongs to) should resolve it.
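Alternatively, if you maintain the ClusterRole yourself, a sketch of the change that stops the deprecated-group queries (only do this once your controller version no longer reads the extensions group):
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "watch", "list"]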
Small Kubernetes API question please.
(This is not helm related btw)
I am just running a basic kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1"
However, I got the following result: Error from server (NotFound): the server could not find the requested resource
I am a bit confused here; I hope this technical question is not too much trouble.
What does it even mean?
Is it because I failed to create something? (I never created this "custom.metrics.k8s.io" myself.)
Maybe some kind of credentials issue?
How can I root-cause, troubleshoot, and fix this, please?
Thank you!
You need to register the API with an APIService object, for example:
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
spec:
  service:
    name: prometheus-adapter
    namespace: monitoring
  group: custom.metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
This works with kube-prometheus. You may have to change spec.service.name and spec.service.namespace to match whatever you use as a monitoring service.
I found this here, but had to change the version of apiregistration.k8s.io from v1beta1 to v1.
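To verify the registration took effect (a quick check, assuming the manifest above has been applied), the AVAILABLE column should turn True once the backing service is up, and the raw call should stop returning NotFound:
kubectl get apiservice v1beta1.custom.metrics.k8s.io
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1"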
I have a question about the usage of apiVersion in Kubernetes.
For example, I am trying to deploy Traefik 2.2.1 into my Kubernetes cluster. I have a Traefik middleware definition like this:
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: https-redirect
spec:
  redirectScheme:
    scheme: https
    permanent: true
    port: 443
When I try to deploy my objects with
$ kubectl apply -f middleware.yaml
I get the following error message:
unable to recognize "middleware.yaml": no matches for kind "Middleware" in version "traefik.containo.us/v1alpha1"
The same object works fine with Traefik version 2.2.0 but not with version 2.2.1.
In the Traefik documentation there are no examples other than the ones using the version "traefik.containo.us/v1alpha1".
I don't think my deployment issue is specific to Traefik; it is a general problem with conflicting versions. Is there any way to figure out which apiVersions are supported in my cluster environment?
There are so many outdated examples around using deprecated apiVersions that I wonder if there is some kind of official apiVersion directory for Kubernetes. Or maybe there is some kubectl command I can use to ask for apiVersions?
Most probably the CRDs for Traefik v2 are not installed. You can use the command below, which lists the API versions available on the Kubernetes cluster:
kubectl api-versions | grep traefik
traefik.containo.us/v1alpha1
Use the command below to check the CRDs installed on the Kubernetes cluster:
kubectl get crds
NAME CREATED AT
ingressroutes.traefik.containo.us 2020-05-09T13:58:09Z
ingressroutetcps.traefik.containo.us 2020-05-09T13:58:09Z
ingressrouteudps.traefik.containo.us 2020-05-09T13:58:09Z
middlewares.traefik.containo.us 2020-05-09T13:58:09Z
tlsoptions.traefik.containo.us 2020-05-09T13:58:09Z
tlsstores.traefik.containo.us 2020-05-09T13:58:09Z
traefikservices.traefik.containo.us 2020-05-09T13:58:09Z
Check Traefik v1 vs v2 here.
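You can also list just the kinds served by that API group with kubectl api-resources (available in kubectl 1.11+):
kubectl api-resources --api-group=traefik.containo.us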
I found that if I just run the kubectl apply again after a few moments, it then works.
I have the following config:
cat kubeadm-conf.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
apiServerExtraArgs:
  enable-admission-plugins: NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook
networking:
  podSubnet: 192.168.0.0/16
but, when I do
ps -aux | grep admission
root 20697 7.4 2.8 446916 336660 ? Ssl 03:49 0:21 kube-apiserver --authorization-mode=Node,RBAC --advertise-address=10.0.2.15 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction
I only see NodeRestriction.
Please, let me know if anyone can help me make sure that the admission-webhook is indeed running on my cluster.
Judging by the output you provided, I assume that MutatingAdmissionWebhook and ValidatingAdmissionWebhook have not been properly propagated to the API server.
I suggest proceeding with the following steps to achieve your goal:
Check and edit the /etc/kubernetes/manifests/kube-apiserver.yaml manifest file, adding the required admission control plugins to the enable-admission-plugins Kubernetes API server flag:
--enable-admission-plugins=NodeRestriction,DefaultStorageClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook
Delete the current kube-apiserver Pod and wait until Kubernetes respawns a new one with the changes applied:
kubectl delete pod <kube-apiserver-Pod> -n kube-system
Hope this helps; I've successfully verified these steps in my environment.
More information about Kubernetes Admission Controllers you can find in the official documentation.
Thanks for the reply. That works too; I'm posting the kubeadm answer just in case anyone needs it. The following is the right kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
networking:
  podSubnet: 192.168.0.0/16
apiServer:
  extraArgs:
    enable-admission-plugins: "NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook"
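After running kubeadm init with this config, you can confirm the flag landed (a quick check; component=kube-apiserver is the label kubeadm puts on its static pod):
kubectl -n kube-system get pods -l component=kube-apiserver -o yaml | grep enable-admission-plugins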
I'm facing a problem where I cannot send emails from a K8s pod using smtp.gmail.com and port 587. I tried using dnsPolicy: ClusterFirstWithHostNet, but nothing changed. With dnsPolicy: Default everything seems OK, but I can't use this approach since pods should be able to resolve other pods in the cluster. Btw, a ConfigMap with Google's DNS didn't help either:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    [“8.8.8.8”, “8.8.4.4”]
Are there any ideas?
Thanks in advance.
PS, my Kubernetes version is v1.7.2
Maybe it is just a syntax error in your ConfigMap with the quotes (" vs “).
If you run:
kubectl -n kube-system logs kube-dns-xxxx -c dnsmasq
you will see a syntax error instead of the expected:
upstreamNameservers to [8.8.8.8, 8.8.4.4]
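In other words, the fix is to replace the typographic quotes with plain ASCII quotes in the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]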
There is another approach to solving this problem: you can write Google's DNS (8.8.8.8) into the container's /etc/resolv.conf during its startup.
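A minimal sketch of that approach (the container name, image, and binary path are hypothetical; it assumes /etc/resolv.conf is writable in the container):
containers:
- name: app                  # hypothetical container
  image: my-app:latest       # hypothetical image
  command: ["/bin/sh", "-c"]
  args:
  # appends Google DNS as a fallback resolver; the bind-mounted resolv.conf
  # cannot be replaced in place, but it can be appended to
  - echo "nameserver 8.8.8.8" >> /etc/resolv.conf && exec /usr/local/bin/my-app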