How to enable extensions API in Kubernetes?

I'd like to try out the new Ingress resource available in Kubernetes 1.1 in Google Container Engine (GKE). But when I try to create for example the following resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: testsvc
    servicePort: 80
using:
$ kubectl create -f test-ingress.yaml
I end up with the following error message:
error: could not read an encoded object from test-ingress.yaml: API version "extensions/v1beta1" in "test-ingress.yaml" isn't supported, only supports API versions ["v1"]
error: no objects passed to create
When I run kubectl version it shows:
Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.7", GitCommit:"6234d6a0abd3323cd08c52602e4a91e47fc9491c", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.1", GitCommit:"92635e23dfafb2ddc828c8ac6c03c7a7205a84d8", GitTreeState:"clean"}
But I seem to have the latest kubectl component installed since running gcloud components update kubectl just gives me:
All components are up to date.
So how do I enable the extensions/v1beta1 in Kubernetes/GKE?

The issue is that your client (kubectl) doesn't support the new ingress resource because it hasn't been updated to 1.1 yet. This is mentioned in the Google Container Engine release notes:
The packaged kubectl is version 1.0.7, consequently new Kubernetes 1.1
APIs like autoscaling will not be available via kubectl until next
week's push of the kubectl binary.
along with the solution (download the newer binary manually).
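For reference, a minimal sketch of that manual download (assuming a Linux amd64 workstation and the standard Kubernetes release bucket layout for v1.1.1; adjust version, OS and architecture to your setup):
# Fetch the newer client binary (path assumed from the standard release layout)
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.1.1/bin/linux/amd64/kubectl
$ chmod +x kubectl
# Confirm the client now reports v1.1.1, then create the Ingress again
$ ./kubectl version
$ ./kubectl create -f test-ingress.yaml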

Related

Cannot deploy virtual-server on Minikube

I am just exploring and want to deploy my k8dash chart with Helm, but I get a weird error, even though I have been able to deploy it on AWS EKS.
I am running them on my Minikube V1.23.2
My helm version is v3.6.2
Kubernetes kubectl version is v1.22.3
Basically, if I run helm template, the rendered VirtualServer looks like this:
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: k8dash
  namespace: k8dash
spec:
  host: namahost.com
  routes:
    - action:
        pass: RELEASE-NAME
      path: /
  upstreams:
    - name: RELEASE-NAME
      port: 80
      service: RELEASE-NAME
and I got this error:
Error: unable to build Kubernetes objects from release manifest: unable to recognize "": no matches for kind "VirtualServer" in version "k8s.nginx.org/v1"
It's weird: deploying this on AWS EKS works fine, but locally I get this error, and I could not find any clue while Googling. Does it have something to do with my tool versions?
You have to install additional CRDs, as VirtualServer and VirtualServerRoute are not available out of the box; they are NGINX custom resources.
CustomResourceDefinitions:
The CustomResourceDefinition API resource allows you to define custom
resources. Defining a CRD object creates a new custom resource with a
name and schema that you specify. The Kubernetes API serves and
handles the storage of your custom resource. The name of a CRD object
must be a valid DNS subdomain name.
This frees you from writing your own API server to handle the custom
resource, but the generic nature of the implementation means you have
less flexibility than with API server aggregation.
Nginx Create Custom Resources
Note: By default, it is required to create custom resource definitions
for VirtualServer, VirtualServerRoute, TransportServer and Policy.
Otherwise, the Ingress Controller pods will not become Ready. If you’d
like to disable that requirement, configure -enable-custom-resources
command-line argument to false and skip this section.
Create custom resource definitions for VirtualServer and VirtualServerRoute, TransportServer and Policy resources.
You can find the CRDs under https://github.com/nginxinc/kubernetes-ingress/tree/master/deployments/common/crds:
$ git clone https://github.com/nginxinc/kubernetes-ingress/
$ cd kubernetes-ingress/deployments
$ git checkout v2.0.3 (or latest, as you wish)
$ kubectl apply -f common/crds/k8s.nginx.org_virtualservers.yaml
$ kubectl apply -f common/crds/k8s.nginx.org_virtualserverroutes.yaml
$ kubectl apply -f common/crds/k8s.nginx.org_transportservers.yaml
$ kubectl apply -f common/crds/k8s.nginx.org_policies.yaml
After applying them successfully, you will be able to create both VirtualServer and VirtualServerRoute resources.
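To double-check that the CRDs were actually registered before re-running Helm, something like this should list the new kinds (the group name k8s.nginx.org comes from the manifests above):
# List the API resources served by the NGINX group; VirtualServer and VirtualServerRoute should appear
$ kubectl api-resources --api-group=k8s.nginx.org
# Or inspect the CRDs directly
$ kubectl get crds | grep k8s.nginx.org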

Kubernetes - NginxIngressController resources not creating

We are using the NGINX Ingress Operator version 0.2.0 and the controller version 1.11.1. The following steps were completed to deploy the CRDs and the operator:
https://github.com/nginxinc/nginx-ingress-operator/blob/release-0.2.0/docs/manual-installation.md
After that, we deploy the controller using the following YAML:
apiVersion: k8s.nginx.org/v1alpha1
kind: NginxIngressController
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  type: deployment
  image:
    repository: nginx/nginx-ingress
    tag: 1.11.1
    pullPolicy: Always
  serviceType: NodePort
  nginxPlus: False
The manifest applies successfully, but none of the required resources (deployment and service) are created. As a result, the Ingress never gets an address.
kubectl get all -n ingress-nginx
No resources found in ingress-nginx namespace.
kubectl get ing
NAME         CLASS    HOSTS   ADDRESS   PORTS   AGE
my-ingress   <none>   *                 80      6h23m
kubeadm, kubelet & kubectl version 1.21.2.
Earlier we had deployed it on minikube and it was working fine.
I have reproduced the use case using NGINX Ingress Operator version 0.4.0 and controller version 2.0.x by following the documentation, and successfully created the NGINX Ingress Operator and the NginxIngressController. At first I did not create the ingress-nginx namespace, so running kubectl get all -n ingress-nginx returned the error No resources found in ingress-nginx namespace.
After creating the required namespace with kubectl create namespace ingress-nginx, I am able to see the resources (pod, service, deployment, replica set) successfully.
Can you try again with the latest controller and operator versions, and also double-check your configuration? A rough sketch of the sequence is shown below.
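The sketch below summarizes the checks described above (the manifest filename is assumed; the namespace and resource names come from the question):
# Create the namespace the controller is declared in
$ kubectl create namespace ingress-nginx
# Re-apply the NginxIngressController manifest shown above (filename assumed)
$ kubectl apply -f nginx-ingress-controller.yaml
# The deployment, service, pod and replica set should now show up
$ kubectl get all -n ingress-nginx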

Unknown field "setHostnameAsFQDN" despite using latest kubectl client

I have a deployment yaml file that looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      setHostnameAsFQDN: true
      hostname: hello
      subdomain: world
      containers:
      - name: hello-kubernetes
        image: redis
However, I am getting this error:
$ kubectl apply -f dep.yaml
error: error validating "dep.yaml": error validating data: ValidationError(Deployment.spec.template.spec): unknown field "setHostnameAsFQDN" in io.k8s.api.core.v1.PodSpec; if you choose to ignore these errors, turn validation off with --validate=false
My kubectl version:
$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
After specifying --validate=false, hostname and hostname -f still return different values.
I believe I misunderstood something. The docs say that setHostnameAsFQDN is available from Kubernetes v1.20.
You showed the kubectl (client) version; your Kubernetes server version also needs to be v1.20. Make sure the cluster itself is running v1.20.
Use kubectl version (without --client) to see both the client and the server version, where the client version refers to kubectl and the server version refers to the Kubernetes cluster.
As the k8s v1.20 release notes say: Previously introduced in 1.19 behind a feature gate, SetHostnameAsFQDN is now enabled by default. More details on this behavior are available in the documentation for DNS for Services and Pods.
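As a quick sanity check (illustrative output only), compare the two version lines; the server line is the one that must be at least v1.20 for setHostnameAsFQDN to be accepted:
$ kubectl version --short
Client Version: v1.20.0
Server Version: v1.19.7   # example of a cluster that is too old for this field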

How to find the correct api version in Kubernetes?

I have a question about the usage of apiVersion in Kubernetes.
For example I am trying to deploy traefik 2.2.1 into my kubernetes cluster. I have a traefik middleware deployment definition like this:
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: https-redirect
spec:
  redirectScheme:
    scheme: https
    permanent: true
    port: 443
When I try to deploy my objects with
$ kubectl apply -f middleware.yaml
I got the following error message:
unable to recognize "middleware.yaml": no matches for kind "Middleware" in version "traefik.containo.us/v1alpha1"
The same object works fine with Traefik version 2.2.0 but not with version 2.2.1.
In the Traefik documentation there is no example other than the ones using the version "traefik.containo.us/v1alpha1".
I don't think that my deployment issue is specific to Traefik; it is a general problem with conflicting versions. Is there any way to figure out which apiVersions are supported in my cluster environment?
There are so many outdated examples posted around using deprecated apiVersions that I wonder if there is some kind of official apiVersion directory for Kubernetes. Or maybe there is some kubectl command with which I can query the available apiVersions?
Most probably the CRDs for Traefik v2 are not installed. You can use the command below, which lists the API versions that are available on the Kubernetes cluster:
kubectl api-versions | grep traefik
traefik.containo.us/v1alpha1
Use the command below to check the CRDs installed on the Kubernetes cluster:
kubectl get crds
NAME                                   CREATED AT
ingressroutes.traefik.containo.us      2020-05-09T13:58:09Z
ingressroutetcps.traefik.containo.us   2020-05-09T13:58:09Z
ingressrouteudps.traefik.containo.us   2020-05-09T13:58:09Z
middlewares.traefik.containo.us        2020-05-09T13:58:09Z
tlsoptions.traefik.containo.us         2020-05-09T13:58:09Z
tlsstores.traefik.containo.us          2020-05-09T13:58:09Z
traefikservices.traefik.containo.us    2020-05-09T13:58:09Z
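Another way to check, once the CRDs are in place, is to list only the kinds served by the Traefik API group (the --api-group flag is standard kubectl); Middleware should appear in the output:
$ kubectl api-resources --api-group=traefik.containo.us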
Check traefik v1 vs v2 here
I found that if I just run kubectl apply again after a few moments, it will then work.

Kubernetes ud615 newbie secure-monolith.yaml `error validating data`?

I'm a Kubernetes newbie trying to follow along with the Udacity tutorial class linked on the Kubernetes website.
I execute
kubectl create -f pods/secure-monolith.yaml
That is referencing this official yaml file: https://github.com/udacity/ud615/blob/master/kubernetes/pods/secure-monolith.yaml
I get this error:
error: error validating "pods/secure-monolith.yaml": error validating data: unknown object type schema.GroupVersionKind{Group:"", Version:"v1", Kind:"Pod"}; if you choose to ignore these errors, turn validation off with --validate=false
FYI, the official lesson link is here: https://classroom.udacity.com/courses/ud615/lessons/7824962412/concepts/81991020770923
My first guess is that the provided yaml is out of date and incompatible with the current Kubernetes. Is this right? How can I fix/update?
I ran into the exact same problem but with a much simpler example.
Here's my yaml:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    ports:
    - containerPort: 80
The command kubectl create -f pod-nginx.yaml returns:
error: error validating "pod-nginx.yaml": error validating data: unknown object type schema.GroupVersionKind{Group:"", Version:"v1", Kind:"Pod"}; if you choose to ignore these errors, turn validation off with --validate=false
As the error says, I am able to override it but I am still at a loss as to the cause of the original issue.
Local versions:
Ubuntu 16.04
minikube version: v0.22.2
kubectl version: 1.8
Thanks in advance!
After correcting the kubectl version (making it the same as the server version), the issue is fixed; see:
$ kubectl create -f config.yml
configmap "test-cfg" created
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", ...
Server Version: version.Info{Major:"1", Minor:"7", ...
This is the case before modification:
$ kubectl create -f config.yml
error: error validating "config.yml": error validating data: unknown object type schema.GroupVersionKind{Group:"", Version:"v1", Kind:"ConfigMap"}; if you choose to ignore these errors, turn validation off with --validate=false
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8",...
Server Version: version.Info{Major:"1", Minor:"7",...
In general, we should use the same version for kubectl and Kubernetes.
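A minimal sketch of aligning the client with the server (the dl.k8s.io release path is the standard one; adjust version, OS and architecture to match your cluster and workstation):
# Check the server (cluster) version first
$ kubectl version
# Download a matching client binary, here assuming a v1.7.0 server on Linux amd64
$ curl -LO https://dl.k8s.io/release/v1.7.0/bin/linux/amd64/kubectl
$ chmod +x kubectl
$ sudo mv kubectl /usr/local/bin/kubectl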