Due to Kubernetes removing deprecated APIs, I am trying to figure out the apiVersion of resources in a cluster and came across the following behavior.
When requesting the definition for a specific deployment, it reports an old API version:
> kubectl get deployment/some-deployment -o yaml
apiVersion: extensions/v1beta1
kind: Deployment
...
But when requesting all resources, the API version for the same resource is reported as the new one:
> kubectl get all -o yaml
...
- apiVersion: apps/v1
  kind: Deployment
  ...
The same goes for checking selfLink in metadata:
kubectl get all -o=custom-columns=NAME:.metadata.name,API-version:.metadata.selfLink
and
kubectl get deployment/some-deployment -o=custom-columns=NAME:.metadata.name,API-version:.metadata.selfLink
How should this be interpreted?
Kubernetes version is 1.15.7.
Quoted from here
kubectl get deployment foo is ambiguous, since the server has deployments in multiple api groups. When a resource exists in multiple api groups, kubectl uses the first group listed in discovery docs published by the server which contains the resource. For backwards compatibility, that is the extensions api group.
If you want to ensure you get a deployment in the apps api group, fully qualify the resource you are requesting by running kubectl get deployments.apps test-nginx
If you want a specific version, like v1, in the apps api group, include that as well: kubectl get deployments.v1.apps test-nginx
Also from here
What stored version is used for?
It is the serialized version in etcd. Whenever you fetch an object, the api server reads it from etcd, converts it into an internal version, then converts it to the version you requested.
So, if you run:
$ kubectl get deployments nginx -o name
deployment.extensions/nginx
As you can see, the requested version for the above command is deployment.extensions.
And if you run:
$ kubectl get all -o name
...
deployment.apps/nginx
The requested version is deployment.apps, so it returns the resource as apps/v1.
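Applying the same idea to the deployment from the question, you can force the apps group (or even pin the exact version) by fully qualifying the resource. A minimal sketch, reusing the some-deployment name from the question:
$ kubectl get deployments.apps some-deployment -o yaml | head -n 2
$ kubectl get deployments.v1.apps some-deployment -o yaml | head -n 2
Both should report apiVersion: apps/v1 on the first line.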
Ref: https://github.com/kubernetes/kubernetes/issues/58131
Related
I am just exploring and want to install k8dash with Helm, but I got a weird error, even though I have been able to deploy it on AWS EKS.
I am running this on Minikube v1.23.2.
My Helm version is v3.6.2.
My kubectl version is v1.22.3.
Basically, if I run helm template, the VirtualServer looks like this:
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: k8dash
  namespace: k8dash
spec:
  host: namahost.com
  routes:
  - action:
      pass: RELEASE-NAME
    path: /
  upstreams:
  - name: RELEASE-NAME
    port: 80
    service: RELEASE-NAME
and I got this error:
Error: unable to build Kubernetes objects from release manifest: unable to recognize "": no matches for kind "VirtualServer" in version "k8s.nginx.org/v1"
It's weird: deploying this on AWS EKS works just fine, but locally I get this error, and I could not find any clue while Googling. Does it have something to do with my tool versions?
You have to install additional CRDs, as VirtualServer and VirtualServerRoute are not available out of the box; they are NGINX custom resources.
CustomResourceDefinitions:
The CustomResourceDefinition API resource allows you to define custom
resources. Defining a CRD object creates a new custom resource with a
name and schema that you specify. The Kubernetes API serves and
handles the storage of your custom resource. The name of a CRD object
must be a valid DNS subdomain name.
This frees you from writing your own API server to handle the custom
resource, but the generic nature of the implementation means you have
less flexibility than with API server aggregation.
Nginx Create Custom Resources
Note: By default, it is required to create custom resource definitions
for VirtualServer, VirtualServerRoute, TransportServer and Policy.
Otherwise, the Ingress Controller pods will not become Ready. If you’d
like to disable that requirement, configure -enable-custom-resources
command-line argument to false and skip this section.
Create custom resource definitions for VirtualServer and VirtualServerRoute, TransportServer and Policy resources.
You can find the CRDs under https://github.com/nginxinc/kubernetes-ingress/tree/master/deployments/common/crds:
$ git clone https://github.com/nginxinc/kubernetes-ingress/
$ cd kubernetes-ingress/deployments
$ git checkout v2.0.3   # or the latest tag, as you wish
$ kubectl apply -f common/crds/k8s.nginx.org_virtualservers.yaml
$ kubectl apply -f common/crds/k8s.nginx.org_virtualserverroutes.yaml
$ kubectl apply -f common/crds/k8s.nginx.org_transportservers.yaml
$ kubectl apply -f common/crds/k8s.nginx.org_policies.yaml
After applying them successfully, you will be able to create both VirtualServer and VirtualServerRoute resources.
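As a quick sanity check before re-running Helm (nothing assumed here beyond kubectl access to the same cluster), confirm that the new API group is now served:
$ kubectl api-resources --api-group=k8s.nginx.org
$ kubectl get crd | grep k8s.nginx.org
If VirtualServer and VirtualServerRoute show up, the original helm install should no longer fail with "no matches for kind".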
While deploying mojaloop, Kubernetes responds with the following errors:
Error: validation failed: [unable to recognize "": no matches for kind
"Deployment" in version "apps/v1beta2", unable to recognize "": no
matches for kind "Deployment" in version "extensions/v1beta1", unable
to recognize "": no matches for kind "StatefulSet" in version
"apps/v1beta2", unable to recognize "": no matches for kind
"StatefulSet" in version "apps/v1beta1"]
My Kubernetes version is 1.16.
How can I fix the problem with the API version?
From investigating, I have found that Kubernetes 1.16 no longer supports apps/v1beta2 or apps/v1beta1.
How can I make the deployment use a non-deprecated, supported API version?
I am new to Kubernetes, so any support is appreciated.
In Kubernetes 1.16 some APIs have been removed.
You can check which API groups serve a given Kubernetes object using:
$ kubectl api-resources | grep deployment
deployments deploy apps true Deployment
This means that only the apps API group is valid for Deployments (extensions no longer serves Deployment). The same applies to StatefulSet.
You need to change the Deployment and StatefulSet apiVersion to apiVersion: apps/v1.
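For reference, here is a minimal sketch of a Deployment on apps/v1 (the name, labels and image are placeholders, not taken from your chart); note that spec.selector with matchLabels is mandatory in apps/v1:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example            # placeholder name
spec:
  replicas: 1
  selector:                # required in apps/v1
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example
        image: nginx:1.16  # placeholder image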
If this does not help, please add your YAML to the question.
EDIT
As the issue is caused by Helm templates that use old apiVersions for Deployments, which are not supported in version 1.16, there are 2 possible solutions:
1. git clone the whole repo and replace the apiVersion with apps/v1 in every templates/deployment.yaml using a script.
2. Use an older version of Kubernetes (1.15), where the validator still accepts extensions as the apiVersion for Deployment and StatefulSet.
To convert an older Deployment to apps/v1, you can run:
kubectl convert -f ./my-deployment.yaml --output-version apps/v1
Alternatively, you can make the change manually. Fetch the Helm chart:
helm fetch --untar stable/metabase
Access the chart folder:
cd ./metabase
Change API version:
sed -i 's|extensions/v1beta1|apps/v1|g' ./templates/deployment.yaml
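You can then confirm that no template still references the removed groups (a quick check, assuming you are still inside the chart folder):
grep -rnE "extensions/v1beta1|apps/v1beta" ./templates/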
Add spec.selector.matchLabels:
spec:
  [...]
  selector:
    matchLabels:
      app: {{ template "metabase.name" . }}
  [...]
Finally install your altered chart:
helm install ./ \
-n metabase \
--namespace metabase \
--set ingress.enabled=true \
--set ingress.hosts={metabase.$(minikube ip).nip.io}
Enjoy!
I prefer kubectl explain.
# kubectl explain deploy
KIND: Deployment
VERSION: apps/v1
DESCRIPTION:
Deployment enables declarative updates for Pods and ReplicaSets.
FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind         <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata     <Object>
     Standard object metadata.

   spec         <Object>
     Specification of the desired behavior of the Deployment.

   status       <Object>
     Most recently observed status of the Deployment.
With kubectl explain you can also see specific parameters of an object:
# kubectl explain Service.spec.externalTrafficPolicy
KIND: Service
VERSION: v1
FIELD: externalTrafficPolicy <string>
DESCRIPTION:
externalTrafficPolicy denotes if this Service desires to route external
traffic to node-local or cluster-wide endpoints. "Local" preserves the
client source IP and avoids a second hop for LoadBalancer and Nodeport type
services, but risks potentially imbalanced traffic spreading. "Cluster"
obscures the client source IP and may cause a second hop to another node,
but should have good overall load-spreading.
To put it simply, you don't force the current installation to use an outdated version of the API; you fix the version in your config files.
If you want to check which version your current kube supports, run :
root@ubn64:~# kubectl api-versions | grep -i apps
apps/v1
I was getting the error below:
error: unable to recognize "deployment.yaml": no matches for kind "Deployment" in version "extensions/v1beta1"
The solution that worked for me:
I modified the line apiVersion: extensions/v1beta1 to apiVersion: apps/v1 in deployment.yaml.
Reason:
We had upgraded the Kubernetes cluster, hence this error occurred.
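To find any other manifests that still reference the removed group after such an upgrade, a simple search helps (a sketch, assuming your manifests live in the current directory):
grep -rn "extensions/v1beta1" .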
This was annoying me because I am testing lots of Helm packages, so I wrote a quick script, which could perhaps be modified to suit your workflow; see below.
New workflow
First, fetch the chart as a .tgz into your working directory:
helm fetch repo/chart
Then, in your working directory, run the bash script below, which I named helmk:
helmk myreleasename mynamespace chart.tgz [any parameters for kubectl create]
Contents of helmk - need to edit your kubeconfig clustername to work
#!/bin/bash
echo "usage: $0 releasename namespace chart.tgz [createparameter1] [createparameter2] ... [createparameter n]"
echo "This will use your namespace then shift back to default so be careful!!"
kubectl create namespace "$2" # this prints a harmless error if the namespace already exists; ignore it
kubectl config set-context MYCLUSTERNAME --namespace "$2"
helm template -n "$1" --namespace "$2" "$3" | kubectl convert -f /dev/stdin | kubectl create --save-config=true "${@:4}" -f /dev/stdin
# note: the --namespace parameter in helm template above seems to be ignored, so we have to switch context manually
kubectl config set-context MYCLUSTERNAME --namespace default
It's a slightly dangerous hack, since it manually switches to your new desired namespace context and then back again, so it should really only be used by single-user devs, or comment that part out.
You will get a warning about using the kubectl convert facility.
If you need to edit the YAML to customise, just replace one of the /dev/stdin references with an intermediate file. But it's probably better to get it up using "create" with --save-config as I have, and then simply "apply" your changes, which means that they will be recorded in Kubernetes too.
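For example, that variation could look like this (a sketch; rendered.yaml is an arbitrary file name I chose for the intermediate step):
helm template -n "$1" --namespace "$2" "$3" | kubectl convert -f /dev/stdin > rendered.yaml
# hand-edit rendered.yaml as needed, then create it with saved config:
kubectl create --save-config=true "${@:4}" -f rendered.yaml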
Good luck
I was facing the same issue on a cluster that had been upgraded to a version (v1.17) that no longer serves certain API versions such as apps/v1beta2.
$ helm get manifest some-deployment
...
# Source: some-deployment/templates/deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: some-deployment
  labels:
    ...
Looking at the helm docs, it seems that the manifest is stored in the cluster for helm to reference, and it may include invalid api versions, leading to errors.
The 2 proposed methods are to either manually edit the manifest (a rather tedious multi-stage process), or use a helm plugin called mapkubeapis that does it automatically.
$ helm plugin install https://github.com/helm/helm-mapkubeapis
It can be run with the --dry-run flag to simulate the effects:
$ helm mapkubeapis --dry-run some-deployment
2021/02/15 09:33:29 NOTE: This is in dry-run mode, the following actions will not be executed.
2021/02/15 09:33:29 Run without --dry-run to take the actions described below:
2021/02/15 09:33:29
2021/02/15 09:33:29 Release 'some-deployment' will be checked for deprecated or removed Kubernetes APIs and will be updated if necessary to supported API versions.
2021/02/15 09:33:29 Get release 'some-deployment' latest version.
2021/02/15 09:33:30 Check release 'some-deployment' for deprecated or removed APIs...
2021/02/15 09:33:30 Found deprecated or removed Kubernetes API:
"apiVersion: apps/v1beta2
kind: Deployment"
Supported API equivalent:
"apiVersion: apps/v1
kind: Deployment"
2021/02/15 09:33:30 Finished checking release 'some-deployment' for deprecated or removed APIs.
2021/02/15 09:33:30 Deprecated or removed APIs exist, updating release: some-deployment.
2021/02/15 09:33:30 Map of release 'some-deployment' deprecated or removed APIs to supported versions, completed successfully.
and then run without the flag to apply the changes.
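With the example release above, that is simply the same command without the flag:
$ helm mapkubeapis some-deployment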
I'm trying to get the metadata (schema) for a given Kubernetes resource, similar to a describe for a REST endpoint.
Is there a kubectl command to get all the possible fields that I could provide for any k8s resource?
For example, for the Deployment resource, it could be something like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <type:String>
    <desc: name for the deployment>
  namespace: <type:String>
    <desc: Valid namespace>
  annotations:
    ...
Thanks in advance !
You can use the kubectl explain CLI command:
This command describes the fields associated with each supported API
resource. Fields are identified via a simple JSONPath identifier:
<type>.<fieldName>[.<fieldName>]
Add the --recursive flag to display all of the fields at once without descriptions. Information about each field is retrieved from the server in OpenAPI format.
Example to view all Deployment associated fields:
kubectl explain deployment --recursive
You can dig into specific fields:
kubectl explain deployment.spec.template
You can also rely on Kubernetes API Reference Docs.
Are you familiar with OpenAPI/Swagger? Try opening the following file in swagger-ui: https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/openapi-spec/swagger.json
If you have a live Kubernetes API available, the same spec is served under /openapi/v2, as described here: https://kubernetes.io/docs/concepts/overview/kubernetes-api/#openapi-and-swagger-definitions
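For example, you can dump the spec of a live cluster straight from the API server (a sketch, assuming kubectl is already pointed at that cluster):
kubectl get --raw /openapi/v2 > k8s-openapi.json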
I am creating a namespace using kubectl with YAML. The following is my YAML configuration:
apiVersion: v1
kind: Namespace
metadata:
  name: "slackishapp"
  labels:
    name: "slackishapp"
But when I run kubectl create -f ./slackish-namespace-manifest.yaml, I get the following error:
error: SchemaError(io.k8s.api.autoscaling.v2beta2.PodsMetricStatus): invalid object doesn't have additional properties.
What is wrong with my YAML? I have also been reading the documentation, and I don't see any difference from my configuration.
There is nothing wrong with your YAML, but I suspect you have the wrong version of kubectl.
kubectl needs to be within one minor version of the cluster you are using, as described here.
You can check your versions with:
kubectl version
Download kubectl.exe from https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin and then replace the one in Program Files\Docker\Docker\resources\bin if you have Docker Desktop.
Like Andreas Wederbrand said, it is a version problem.
As a workaround, try creating your namespace in imperative mode:
kubectl create ns slackishapp && kubectl label ns slackishapp name=slackishapp
Then compare the existing YAML with the one you have written to check what is missing:
kubectl get ns slackishapp -o yaml --export
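Note that --export has been deprecated and was removed in later kubectl releases, so on newer clients just drop the flag:
kubectl get ns slackishapp -o yaml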
I have deployed a Kubernetes service, and when I query for the Deployment with $ kubectl get deployments, I can see the Deployment.
The manifest of the Deployment looks like the one below:
apiVersion: v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
        release: testRelease
        customProp: xyz
My question is: how can I frame a query to get the Deployment by specifying the 'customProp' value? Does kubectl support passing a JSONPath as part of the query, so that I can pass a JSONPath like jsonpath='{$.spec.template.metadata.labels.customProp}' and the value 'xyz' for it?
This is what I am thinking to execute:
$ kubectl get deployments -n <namespace> <json path query>
However, I am not sure how to frame the JSONPath query and pass it along with kubectl get deployments.
kubectl does support such queries; you can use the label selector below:
kubectl get pods --selector=customProp=xyz
kubectl also supports JSONPath expressions; to get more details, follow the link and write your query following the syntax shown there.
Yes, you can query the kube-apiserver for a resource using JSONPath. Run the following command to get what you want:
$ kubectl get deploy test -o=jsonpath='{.spec.template.metadata.labels.customProp}'
For more usage, see https://kubernetes.io/docs/reference/kubectl/jsonpath.
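If you want the reverse, i.e. to list the Deployments whose pod template carries customProp=xyz, kubectl's JSONPath also supports filter expressions (a sketch, using the label value from the question):
$ kubectl get deployments -o jsonpath='{.items[?(@.spec.template.metadata.labels.customProp=="xyz")].metadata.name}'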
Add a label to your Deployment object. Then use the command below to query for a specific Deployment:
kubectl get deploy -l labelname=labelvalue
To get the deployed image name as a string, you can pick any attribute from the Deployment YAML:
kubectl get deploy/<deployment-name> -o jsonpath="{..image}"