kubectl version
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-18T14:56:51Z", GoVersion:"go1.12.13", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
When I run kubectl run, an error occurs.
$ kubectl run nginx --image=nginx
WARNING: New generator "deployment/apps.v1" specified, but it isn't available. Falling back to "run/v1".
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
error: no matches for kind "Deployment" in version "apps/v1"
It seems like this is caused by the new version (1.16.x), doesn't it?
As far as I can tell from searching, even the official documentation doesn't explicitly mention this situation. How can I use kubectl run?
Try
kubectl create deployment --image nginx my-nginx
As kubectl Usage Conventions suggests,
Specify the --generator flag to pin to a specific behavior when you
use generator-based commands such as kubectl run or kubectl expose
Use kubectl run --generator=run-pod/v1 nginx --image nginx instead.
Also, @soltysh describes well enough why it's better to use kubectl create instead of kubectl run.
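For reference, a minimal sketch of the declarative route with a 1.16 client (the name my-nginx and the file name are just placeholders):
kubectl create deployment my-nginx --image=nginx --dry-run -o yaml > my-nginx.yaml
kubectl apply -f my-nginx.yaml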
Related
While creating a deployment using the command
kubectl create deploy nginx --image=nginx:1.7.8 --replicas=2 --port=80
I am getting the error Error: unknown flag: --replicas
controlplane $ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
controlplane $ kubectl create deploy nginx --image=nginx:1.7.8 --replicas=2 --port=80
Error: unknown flag: --replicas
See 'kubectl create deployment --help' for usage.
Could anyone please explain the reason for this, since the same command works on other Kubernetes clusters?
You may try to put a blank character between -- and the flag name.
For example:
kubectl create deploy nginx --image=nginx:1.7.8 -- replicas=2
It works for me.
It looks like the --replicas and --port flags were added in version 1.19, based on the v1.19 release notes, and that's why you are seeing the error.
So you need at least version 1.19 to be able to use the replicas and port flags as part of the kubectl create deployment command.
You can, however, use the kubectl scale and kubectl expose commands after creating the deployment, as shown in the sketch below.
Relevant PR links for replicas and port.
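As a sketch of that workaround on a pre-1.19 client (same image and values as in the question):
kubectl create deployment nginx --image=nginx:1.7.8
kubectl scale deployment nginx --replicas=2
kubectl expose deployment nginx --port=80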
If you are trying to update the replicas parameter in an Azure release pipeline inside the helm upgrade command, then refer to the following link
Overriding Helm chart values
Here it explains that you can override the parameters inside the values.yaml file with the --set flag, like this:
helm upgrade $(RELEASE_ENV) --install \
infravc/manifests/helm/web \
--set namespace=$(NAMESPACE) \
--set replicas=$(replicas) \
--set replicasMax=$(replicasMax) \
--set ingress.envSuffix=$(envSuffix) \
--set ENV.SECRET=$(appSecretNonprod) \
--set ENV.CLIENT_ID=$(clientIdNonprod)
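For illustration only, the corresponding values.yaml keys that those --set flags override might look like this (the key names are inferred from the flags above and are assumptions about this particular chart):
namespace: default
replicas: 2
replicasMax: 4
ingress:
  envSuffix: dev
ENV:
  SECRET: <app secret>
  CLIENT_ID: <client id>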
Is there a way to inspect a container running in a pod directly from the Kubernetes command line (using kubectl) to see some details, such as whether it is running in privileged mode, for instance?
something like:
kubectl inspect -c <containerName>
The only way I found is to SSH to the node hosting the pod and run docker inspect <containerID>, but this is rather tedious.
My Kubernetes version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0+coreos.0", GitCommit:"6bb2e725fc2876cd94b3900fc57a1c98ca87a08b", GitTreeState:"clean", BuildDate:"2018-04-02T16:49:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Check kubectl describe pods/<pod_name>
If that is not enough for you, you can go for JSON output and filter it with jq:
kubectl get pod <pod_name> -o json | jq '.spec.containers[] | .securityContext'
Also, check the kubectl Cheat Sheet.
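If you only care about the privileged flag, a jsonpath variant (pod name is a placeholder) avoids the jq dependency:
kubectl get pod <pod_name> -o jsonpath='{range .spec.containers[*]}{.name}{"\t"}{.securityContext.privileged}{"\n"}{end}'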
You can use the kubectl commands below to see the details of a pod:
kubectl describe pod <pod_name> -n <namespacename>
kubectl get pod <pod_name> -n <namespacename> -o yaml # output in YAML format
kubectl get pod <pod_name> -n <namespacename> -o json # output in JSON format
If you want to know which containers are running in privileged mode from an audit point of view, then I suggest looking at the Falco project, which has a mechanism for writing policies and triggering alerts when a container violates a policy. The policy could be that no container may run in privileged mode.
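As a lighter-weight complement to Falco, a sketch of a kubectl one-liner that lists the privileged flag of every container in every namespace (an empty value means the flag is unset):
kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{": "}{range .spec.containers[*]}{.name}{"="}{.securityContext.privileged}{" "}{end}{"\n"}{end}'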
I had a similar problem: a pod with status Evicted that I needed to inspect (in kubectl that is describe). So I used:
kubectl describe pod <pod-name>
So I could see what I was looking for:
...
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [DiskPressure].
...
While searching, I found a very nice article talking about 12 Critical Kubernetes Health Conditions You Need to Monitor and Why.
I'm still working on a fix, but this log may help others.
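If it helps anyone in the same situation, failed/evicted pods can be listed cluster-wide with a field selector before describing them:
kubectl get pods --all-namespaces --field-selector=status.phase=Failed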
Folks, when applying the following manifest with kubectl:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: openvpn-data-claim
  namespace: openvpn
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
error: SchemaError(io.k8s.api.autoscaling.v1.Scale): invalid object doesn't have additional properties
kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.9-gke.24", GitCommit:"39e41a8d6b7221b901a95d3af358dea6994b4a40", GitTreeState:"clean", BuildDate:"2020-02-29T01:24:35Z", GoVersion:"go1.12.12b4", Compiler:"gc", Platform:"linux/amd64"}
This answer is an addition to @Cmag's answer, and my intention is to provide more insight into this issue to help the community.
According to Kubernetes Version Skew Policy:
kubectl is supported within one minor version (older or newer) of kube-apiserver.
If kube-apiserver is at 1.15: kubectl is supported at 1.16, 1.15, and 1.14.
Note: If version skew exists between kube-apiserver instances in an HA cluster, for example kube-apiserver instances are at 1.15 and 1.14, kubectl will support only 1.15 and 1.14 since any other versions would be more than one minor version skewed.
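For the versions shown in the question (client v1.10.11 against a v1.15.9 server), the skew is five minor versions, far outside that window. A quick way to see it (output abbreviated from the version output above):
kubectl version --short
# Client Version: v1.10.11
# Server Version: v1.15.9-gke.24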
Each Kubernetes release adds, changes, moves, deprecates, or removes many components. Here is the Kubernetes changelog for version 1.15.
Even running a much newer client version may give you some issues.
In K8s 1.10, kubectl run created deployments by default:
❯ ./kubectl-110 run ubuntu --image=ubuntu
deployment.apps "ubuntu" created
Starting with 1.12, kubectl run was deprecated for all generators except pods; here is an example with kubectl 1.16:
❯ ./kubectl-116 run ubuntu --image=ubuntu --dry-run
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/ubuntu created (dry run)
Despite the warning, it still works as intended, but this changed with the K8s 1.18 client:
❯ ./kubectl-118 version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.9-gke.24", GitCommit:"39e41a8d6b7221b901a95d3af358dea6994b4a40", GitTreeState:"clean", BuildDate:"2020-02-29T01:24:35Z", GoVersion:"go1.12.12b4", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl run --generator=deployment/apps.v1 ubuntu --image=ubuntu --dry-run=client
Flag --generator has been deprecated, has no effect and will be removed in the future.
pod/ubuntu created (dry run)
It ignored the flag and created only a pod. That flag is supported by Kubernetes 1.15, as we saw in the test, but kubectl 1.18 had significant changes that no longer allow using it.
This is a simple example that illustrates the importance of following the Kubernetes version skew policy; it can save a lot of troubleshooting time in the future!
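For completeness, the non-deprecated way to get the old "run creates a Deployment" behavior with a 1.18 client is kubectl create deployment; a minimal sketch with the same image as above, expected to print something like "deployment.apps/ubuntu created (dry run)":
./kubectl-118 create deployment ubuntu --image=ubuntu --dry-run=client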
This is easily fixed by upgrading the local kubectl with asdf:
asdf install kubectl 1.15.9
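A slightly fuller asdf sequence, as a sketch (assumes the asdf kubectl plugin is available; the version number is just the one matching the server in this thread):
asdf plugin add kubectl      # once, if the plugin is not installed yet
asdf install kubectl 1.15.9
asdf global kubectl 1.15.9   # make it the default shim
kubectl version --client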
When I try to describe an HPA, the following error is thrown:
kubectl describe hpa go-auth
Error from server (NotFound): the server could not find the requested resource
My kubectl version is:
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.7-gke.7", GitCommit:"b80664a77d3bce5b4701bc881d972b1a702290bf", GitTreeState:"clean", BuildDate:"2019-04-04T03:12:09Z", GoVersion:"go1.10.8b4", Compiler:"gc", Platform:"linux/amd64"}
Beware of kubectl version skew. Running kubectl v1.14 with kube-apiserver v1.12 is not supported.
As per kubectl docs:
You must use a kubectl version that is within one minor version
difference of your cluster. For example, a v1.2 client should work
with v1.1, v1.2, and v1.3 master. Using the latest version of kubectl
helps avoid unforeseen issues.
Give it another try using kubectl v1.12.x and you will probably get rid of this problem. Also, take a look at issue #568 (especially this comment), which addresses the same problem you have.
If you are wondering on how to manage multiple kubectl versions, I recommend this read: Using different kubectl versions with multiple Kubernetes clusters.
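If you prefer not to use a version manager, a minimal sketch of fetching a matching client binary side by side with the current one (the URL follows the official release-bucket pattern; adjust version, OS, and architecture as needed):
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.12.7/bin/linux/amd64/kubectl
chmod +x kubectl
mv kubectl /usr/local/bin/kubectl-1.12
kubectl-1.12 describe hpa go-auth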
I need to deploy some Docker images and manage them with Kubernetes.
I followed the tutorial "Interactive Tutorial - Deploying an App" (https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive/).
But after I type the command kubectl get deployments, the AVAILABLE column in the result table shows 0 instead of 1, which is confusing me.
Could anyone kindly guide me on what's going wrong and what I should do?
The OS is Ubuntu 16.04.
The kubectl version command shows the server and client version information correctly.
The Docker image is already tagged (a mysql:5.7 image).
devserver:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:04:45Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
devserver:~$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
ap-mysql     1         1         1            0           1
hello-node   1         1         1            0           1
I expect an explanation of this phenomenon and how to resolve it. And I need to deploy my image on minikube.
Katacoda uses hosted VMs, so sometimes it may be slow to respond to terminal input.
To verify whether any deployment is present, you may run kubectl get deployments --all-namespaces. To see what's going on with your deployment, you can run kubectl describe deployment DEPLOYMENT_NAME -n NAMESPACE. To inspect a pod you can do the same with kubectl describe pod POD_NAME -n NAMESPACE.
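Applied to the output in the question (namespace default is assumed here, since none was specified), the check might look like this; the events at the end usually reveal why AVAILABLE stays at 0:
kubectl get deployments --all-namespaces
kubectl describe deployment ap-mysql -n default
kubectl describe pod POD_NAME -n default
kubectl get events -n default --sort-by=.metadata.creationTimestamp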