How to find the correct api version in Kubernetes?

I have a question about the usage of apiVersion in Kubernetes.
For example, I am trying to deploy Traefik 2.2.1 into my Kubernetes cluster. I have a Traefik Middleware definition like this:
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: https-redirect
spec:
  redirectScheme:
    scheme: https
    permanent: true
    port: 443
When I try to deploy my objects with
$ kubectl apply -f middleware.yaml
I get the following error message:
unable to recognize "middleware.yaml": no matches for kind "Middleware" in version "traefik.containo.us/v1alpha1"
The same object works fine with Traefik version 2.2.0 but not with version 2.2.1.
In the Traefik documentation there are no examples other than the ones using the version "traefik.containo.us/v1alpha1".
I don't think that my deployment issue is specific to Traefik; it is a general problem with conflicting versions. Is there any way to figure out which apiVersions are supported in my cluster environment?
There are so many outdated examples posted around using deprecated apiVersions that I wonder whether there is some kind of official apiVersion directory for Kubernetes. Or maybe there is a kubectl command I can use to list the supported apiVersions?

Most probably the CRDs for Traefik v2 are not installed. You can use the command below, which lists the API versions that are available on the Kubernetes cluster:
kubectl api-versions | grep traefik
traefik.containo.us/v1alpha1
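If nothing comes back, kubectl api-resources and kubectl explain are two more discovery commands that show which kinds a cluster actually serves; a small sketch, assuming the Traefik v2 group name traefik.containo.us:
# list every resource kind served under the traefik.containo.us group
$ kubectl api-resources --api-group=traefik.containo.us
# show the schema and group/version the cluster serves for the Middleware kind
$ kubectl explain middleware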
Use the command below to check the CRDs installed on the Kubernetes cluster:
kubectl get crds
NAME                                    CREATED AT
ingressroutes.traefik.containo.us       2020-05-09T13:58:09Z
ingressroutetcps.traefik.containo.us    2020-05-09T13:58:09Z
ingressrouteudps.traefik.containo.us    2020-05-09T13:58:09Z
middlewares.traefik.containo.us         2020-05-09T13:58:09Z
tlsoptions.traefik.containo.us          2020-05-09T13:58:09Z
tlsstores.traefik.containo.us           2020-05-09T13:58:09Z
traefikservices.traefik.containo.us     2020-05-09T13:58:09Z
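If middlewares.traefik.containo.us is missing from that list, applying the CRD manifests that ship with your Traefik release should resolve the "no matches for kind" error. A hedged sketch; the URL below is an assumption and the exact path can differ between Traefik versions, so verify it against the Kubernetes CRD section of the Traefik docs:
# apply the Traefik v2 CRDs (path is illustrative; check your Traefik version's docs)
$ kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.2/docs/content/reference/dynamic-configuration/kubernetes-crd-definition.yaml
# confirm the Middleware CRD is now registered
$ kubectl get crd middlewares.traefik.containo.us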
Check traefik v1 vs v2 here

I found that if I just run kubectl apply again after a few moments, it then works.
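One likely explanation is that a freshly applied CRD needs a moment to reach the Established condition before the API server starts serving the new kind. A minimal sketch, assuming the Traefik CRDs were just applied, that waits instead of retrying blindly:
# block until the Middleware CRD is served by the API server
$ kubectl wait --for=condition=Established --timeout=60s crd/middlewares.traefik.containo.us
# the custom resource should now apply cleanly
$ kubectl apply -f middleware.yaml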

Related

istio v1.11.4 - install via helm chart; how to enable envoy proxy logging?

This is probably a very basic question. I am looking at Install Istio with Helm and Enable Envoy’s access logging.
How do I enable envoy access logging if I install istio via its helm charts?
The easiest, and probably only, way to do this is to install Istio with an IstioOperator resource using Helm.
The steps are almost the same, but instead of the base chart you need to use the istio-operator chart.
First, create the istio-operator namespace:
kubectl create namespace istio-operator
then deploy the Istio operator using Helm (assuming you have downloaded Istio and changed your current working directory to the Istio root):
helm install istio-operator manifests/charts/istio-operator -n istio-operator
Having installed the operator, you can now install Istio. This is the step where you can enable Envoy’s access logging:
kubectl apply -f - <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istiocontrolplane
spec:
  profile: default
  meshConfig:
    accessLogFile: /dev/stdout
EOF
I tried enabling Envoy’s access logging with the base chart, but could not get it to work, no matter what I did.
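As a side note, if all you need is the access log setting, the same meshConfig field can also be set at install time with istioctl (this is the approach shown in Istio's Envoy access logging guide); a minimal sketch:
# enable Envoy access logging while installing the default profile
$ istioctl install --set meshConfig.accessLogFile=/dev/stdout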

Cannot deploy virtual-server on Minikube

I am just exploring and want to deploy my k8dash with Helm, but I got a weird error, even though I have been able to deploy it on AWS EKS.
I am running them on my Minikube V1.23.2
My helm version is v3.6.2
Kubernetes kubectl version is v1.22.3
Basically, if I run helm template, the VirtualServer comes out like this:
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: k8dash
  namespace: k8dash
spec:
  host: namahost.com
  routes:
    - action:
        pass: RELEASE-NAME
      path: /
  upstreams:
    - name: RELEASE-NAME
      port: 80
      service: RELEASE-NAME
and I got this error:
Error: unable to build Kubernetes objects from release manifest: unable to recognize "": no matches for kind "VirtualServer" in version "k8s.nginx.org/v1"
It's weird: deploying this one on AWS EKS works just fine, but locally I get this error and I could not find any clue while Googling. Does it have something to do with my tool versions?
You have to install additional CRDs, as both VirtualServer and VirtualServerRoute are not available out of the box; they are NGINX custom resources.
CustomResourceDefinitions:
The CustomResourceDefinition API resource allows you to define custom
resources. Defining a CRD object creates a new custom resource with a
name and schema that you specify. The Kubernetes API serves and
handles the storage of your custom resource. The name of a CRD object
must be a valid DNS subdomain name.
This frees you from writing your own API server to handle the custom
resource, but the generic nature of the implementation means you have
less flexibility than with API server aggregation.
Nginx Create Custom Resources
Note: By default, it is required to create custom resource definitions
for VirtualServer, VirtualServerRoute, TransportServer and Policy.
Otherwise, the Ingress Controller pods will not become Ready. If you’d
like to disable that requirement, configure -enable-custom-resources
command-line argument to false and skip this section.
Create custom resource definitions for VirtualServer and VirtualServerRoute, TransportServer and Policy resources.
You can find the CRDs under https://github.com/nginxinc/kubernetes-ingress/tree/master/deployments/common/crds:
$ git clone https://github.com/nginxinc/kubernetes-ingress/
$ cd kubernetes-ingress/deployments
$ git checkout v2.0.3 (or latest, as you wish)
$ kubectl apply -f common/crds/k8s.nginx.org_virtualservers.yaml
$ kubectl apply -f common/crds/k8s.nginx.org_virtualserverroutes.yaml
$ kubectl apply -f common/crds/k8s.nginx.org_transportservers.yaml
$ kubectl apply -f common/crds/k8s.nginx.org_policies.yaml
After applying them successfully, you will be able to create both VirtualServer and VirtualServerRoute resources.
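To double-check that the CRDs registered correctly before re-running Helm, a quick sketch:
# the k8s.nginx.org group should now appear among the served API versions
$ kubectl api-versions | grep k8s.nginx.org
# and VirtualServer / VirtualServerRoute should be listed as served kinds
$ kubectl api-resources --api-group=k8s.nginx.org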

Kubernetes - NginxIngressController resources not creating

We are using NGINX Ingress Operator version 0.2.0 and controller version 1.11.1. The following steps were completed to deploy the CRDs and the operator:
https://github.com/nginxinc/nginx-ingress-operator/blob/release-0.2.0/docs/manual-installation.md
After that, we deploy the controller using the following YAML:
apiVersion: k8s.nginx.org/v1alpha1
kind: NginxIngressController
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  type: deployment
  image:
    repository: nginx/nginx-ingress
    tag: 1.11.1
    pullPolicy: Always
  serviceType: NodePort
  nginxPlus: False
The manifest applies successfully, but none of the required resources (deployment and service) get created. Hence, the ingress does not get an address.
kubectl get all -n ingress-nginx
No resources found in ingress-nginx namespace.
kubectl get ing
NAME         CLASS    HOSTS   ADDRESS   PORTS   AGE
my-ingress   <none>   *                 80      6h23m
kubeadm, kubelet & kubectl version 1.21.2.
Earlier we had deployed it on minikube and it was working fine.
I have reproduced the use case with NGINX Ingress Operator version 0.4.0 and controller version 2.0.x by following the documentation, and successfully created the NGINX Ingress Operator and the NginxIngressController. At first I didn't create the ingress-nginx namespace, and when running the command kubectl get all -n ingress-nginx I was getting the error No resources found in ingress-nginx namespace.
After creating the required namespace with kubectl create namespace ingress-nginx, I was able to get the resources (pod, service, deployment, replica set) successfully.
Can you try again with the latest controller and operator versions, and also double-check your configuration?
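For reference, a minimal sketch of the order that worked in the reproduction above (nginx-ingress-controller.yaml is just a placeholder name for the NginxIngressController manifest shown earlier):
# create the namespace that the controller resource targets
$ kubectl create namespace ingress-nginx
# apply the NginxIngressController manifest
$ kubectl apply -f nginx-ingress-controller.yaml
# the operator should now create the deployment, service and pods here
$ kubectl get all -n ingress-nginx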

Logs complaining "extensions/v1beta1 Ingress is deprecated"

I'm adding an Ingress as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cheddar
spec:
  rules:
  - host: cheddar.213.215.191.78.nip.io
    http:
      paths:
      - backend:
          service:
            name: cheddar
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
but the logs complain:
W0205 15:14:07.482439 1 warnings.go:67] extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
time="2021-02-05T15:14:07Z" level=info msg="Updated ingress status" namespace=default ingress=cheddar
W0205 15:18:19.104225 1 warnings.go:67] networking.k8s.io/v1beta1 IngressClass is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 IngressClassList
Why? What's the correct yaml to use?
I'm currently on microk8s 1.20
I have analyzed your issue and came to the following conclusions:
The Ingress will work, and the warnings you see are just there to inform you about the available API versioning. You don't have to worry about this. I've seen the same warnings:
#microk8s:~$ kubectl describe ing
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
As for the "why" this is happening even when you use apiVersion: networking.k8s.io/v1, I have found the following explanation:
This is working as expected. When you create an ingress object, it can
be read via any version (the server handles converting into the
requested version). kubectl get ingress is an ambiguous request,
since it does not indicate what version is desired to be read.
When an ambiguous request is made, kubectl searches the discovery docs
returned by the server to find the first group/version that contains
the specified resource.
For compatibility reasons, extensions/v1beta1 has historically been
preferred over all other api versions. Now that ingress is the only
resource remaining in that group, and is deprecated and has a GA
replacement, 1.20 will drop it in priority so that kubectl get ingress would read from networking.k8s.io/v1, but a 1.19 server
will still follow the historical priority.
If you want to read a specific version, you can qualify the get
request (like kubectl get ingresses.v1.networking.k8s.io ...) or can
pass in a manifest file to request the same version specified in the
file (kubectl get -f ing.yaml -o yaml)
Long story short: despite using the proper apiVersion, the deprecated one is still treated as the default one and thus generates the warning you experience.
I also see that changes have been made recently, so I assume this is still being worked on.
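If you want to confirm which groups and versions actually serve Ingress on your cluster, the discovery commands mentioned in the quote are enough; a short sketch:
# list the served versions of the networking API group
$ kubectl api-versions | grep networking.k8s.io
# read Ingress objects explicitly from the v1 API, bypassing the ambiguous default
$ kubectl get ingresses.v1.networking.k8s.io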
I had the same issue and was unable to update the k8s cluster, which was subscribed to a release channel.
One of the reasons this warning gets logged is the ClusterRole definition of external-dns. external-dns keeps querying the ingresses in the k8s cluster as per the rules defined in the ClusterRole:
- apiGroups: ["extensions", "networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "watch", "list"]
Found in the Helm chart here.
It queries the old extensions API group for ingresses as well, which keeps generating those logs. Please update cert-manager.

How to enable extensions API in Kubernetes?

I'd like to try out the new Ingress resource available in Kubernetes 1.1 on Google Container Engine (GKE). But when I try to create, for example, the following resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: testsvc
    servicePort: 80
using:
$ kubectl create -f test-ingress.yaml
I end up with the following error message:
error: could not read an encoded object from test-ingress.yaml: API version "extensions/v1beta1" in "test-ingress.yaml" isn't supported, only supports API versions ["v1"]
error: no objects passed to create
When I run kubectl version it shows:
Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.7", GitCommit:"6234d6a0abd3323cd08c52602e4a91e47fc9491c", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.1", GitCommit:"92635e23dfafb2ddc828c8ac6c03c7a7205a84d8", GitTreeState:"clean"}
But I seem to have the latest kubectl component installed since running gcloud components update kubectl just gives me:
All components are up to date.
So how do I enable the extensions/v1beta1 API in Kubernetes/GKE?
The issue is that your client (kubectl) doesn't support the new ingress resource because it hasn't been updated to 1.1 yet. This is mentioned in the Google Container Engine release notes:
The packaged kubectl is version 1.0.7, consequently new Kubernetes 1.1
APIs like autoscaling will not be available via kubectl until next
week's push of the kubectl binary.
along with the solution (download the newer binary manually).
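For completeness, a hedged sketch of that manual download (the URL uses the historical kubernetes-release bucket pattern; adjust the version and platform to match your cluster):
# fetch a 1.1-series kubectl binary matching the server version
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.1.1/bin/linux/amd64/kubectl
$ chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl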