Kubernetes deployment Error: unknown flag: --replicas issue

While creating a deployment using the command
kubectl create deploy nginx --image=nginx:1.7.8 --replicas=2 --port=80
I am getting the error Error: unknown flag: --replicas
controlplane $ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
controlplane $ kubectl create deploy nginx --image=nginx:1.7.8 --replicas=2 --port=80
Error: unknown flag: --replicas
See 'kubectl create deployment --help' for usage.
Could anyone please explain the reason for this, as the same command works on other Kubernetes clusters?

You may try to put a blank character between -- and the flag.
For example
kubectl create deploy nginx --image=nginx:1.7.8 -- replicas=2
It works for me.

It looks like the --replicas and --port flags were added in version 1.19, based on the v1.19 release notes, and that's why you are seeing the error.
So you need at least version 1.19 to be able to use the replicas and port flags as part of the kubectl create deployment command.
You can, however, use the kubectl scale/expose commands after creating the deployment (see the example below).
Relevant PR links for replicas and port.
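On kubectl 1.18, a rough equivalent of the original one-liner is to create the deployment first and then scale and expose it separately (a sketch using the names and ports from the question):
# create the deployment, then set the replica count and expose port 80 in separate steps
kubectl create deployment nginx --image=nginx:1.7.8
kubectl scale deployment nginx --replicas=2
kubectl expose deployment nginx --port=80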

If you are trying to update the replica parameter in an Azure release pipeline inside the helm upgrade command, then refer to the following link:
Overriding Helm chart values
It explains that you can override the parameters inside the values.yaml file with the --set flag, like this:
helm upgrade $(RELEASE_ENV) --install \
infravc/manifests/helm/web \
--set namespace=$(NAMESPACE) \
--set replicas=$(replicas) \
--set replicasMax=$(replicasMax) \
--set ingress.envSuffix=$(envSuffix) \
--set ENV.SECRET=$(appSecretNonprod) \
--set ENV.CLIENT_ID=$(clientIdNonprod)


you must specify an existing container or a new image when specifying args

According to the Kubernetes docs, you can start a debug version of a container and run a command on it like this:
$ kubectl debug (POD | TYPE[[.VERSION].GROUP]/NAME) [ -- COMMAND [args...] ]
But when I try and do this in real life I get the following:
$ kubectl debug mypod \
--copy-to=mypod-dev \
--env='PYTHONPATH="/my_app"' \
--set-image=mycontainer=myimage:dev -- python do_the_debugging.py
error: you must specify an existing container or a new image when specifying args.
If I don't specify -- python do_the_debugging.py I can create the debug container, but then I need a separate command to actually do the debugging:
kubectl exec -it mypod-dev -- python do_the_debugging.py
Why can't I do this all in one line as the docs seem to specify?
Some kubernetes details:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.1", GitCommit:"c4d752765b3bbac2237bf87cf0b1c2e307844666", GitTreeState:"clean", BuildDate:"2020-12-23T02:22:53Z", GoVersion:"go1.15.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.15-eks-ad4801", GitCommit:"ad4801fd44fe0f125c8d13f1b1d4827e8884476d", GitTreeState:"clean", BuildDate:"2020-10-20T23:27:12Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Try to add -it and --container flags to your command. In your specific case, it might look like this:
$ kubectl debug mypod \
--copy-to=mypod-dev \
--env='PYTHONPATH="/my_app"' \
--set-image=mycontainer=myimage:dev \
--container=mycontainer -it -- python do_the_debugging.py
I am not able to reproduce your exact issue because I don't have the do_the_debugging.py script, but I've created a simple example.
First, I created a Pod named web using the nginx image:
root@kmaster:~# kubectl run web --image=nginx
pod/web created
Then I ran the kubectl debug command to create a copy of web named web-test-1, but with the httpd image:
root@kmaster:~# kubectl debug web --copy-to=web-test-1 --set-image=web=httpd --container=web -it -- bash
If you don't see a command prompt, try pressing enter.
root@web-test-1:/usr/local/apache2#
Furthermore, I recommend upgrading your cluster to a newer version because your client and server versions are very different.
Your kubectl version is 1.20, so you should have kube-apiserver at version 1.19 or 1.20.
Generally speaking, if kube-apiserver is at version X, kubectl should be at version X-1, X, or X+1.

Helm 3: x509 error when connecting to local Kubernetes

I'm a complete noob with K8s. I installed microk8s and Helm using snap to experiment locally. I wonder whether my current issue comes from the use of snap (whose purpose is encapsulation, from what I understand).
Environment
Ubuntu 20.04LTS
helm version
version.BuildInfo{Version:"v3.2.4", GitCommit:"0ad800ef43d3b826f31a5ad8dfbb4fe05d143688", GitTreeState:"clean", GoVersion:"go1.13.12"}
kubectl version
Client Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.4-1+6f17be3f1fd54a", GitCommit:"6f17be3f1fd54a88681869d1cf8bedd5a2174504", GitTreeState:"clean", BuildDate:"2020-06-23T21:16:24Z", GoVersion:"go1.14.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.4-1+6f17be3f1fd54a", GitCommit:"6f17be3f1fd54a88681869d1cf8bedd5a2174504", GitTreeState:"clean", BuildDate:"2020-06-23T21:17:52Z", GoVersion:"go1.14.4", Compiler:"gc", Platform:"linux/amd64"}
kubectl config get-contexts
CURRENT   NAME       CLUSTER            AUTHINFO   NAMESPACE
*         microk8s   microk8s-cluster   admin
Post-install setup
microk8s enable helm3
Kubernetes is up and running
kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:16443
CoreDNS is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Problem while connecting helm to microk8s
helm ls --kube-token ~/token --kube-apiserver https://127.0.0.1:16443
Error: Kubernetes cluster unreachable: Get https://127.0.0.1:16443/version?timeout=32s: x509: certificate signed by unknown authority
How can I tell helm
to trust microk8s certs or
to ignore this verification step
From what I read, I may overcome this issue by pointing to kube's config using --kubeconfig.
helm ls --kube-token ~/token --kube-apiserver https://127.0.0.1:16443 --kubeconfig /path/to/kubernetes/config
In the context of microk8s installed with snap, I am not quite sure what this conf file is nor where to find it.
/snap/microk8s/1503 ?
/var/snap/microk8s/1503 ?
Helm looks for the kubeconfig at this path: $HOME/.kube/config.
Please run this command:
microk8s.kubectl config view --raw > $HOME/.kube/config
This will save the config at the required path and should work.
Reference Link here
Please try exporting the kubeconfig file using the following command:
export KUBECONFIG=/var/snap/microk8s/current/credentials/client.config
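With either approach, once Helm can see the kubeconfig, the original command should reach the cluster without the --kube-apiserver and --kube-token flags; a quick check (assuming the config above is now in place):
helm ls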
If you happen to be using WSL with Docker Desktop, with Kubernetes running in Docker Desktop but Helm running in WSL, a very similar command to the one provided by Tarun will also work.
Assuming you are running the Windows version of kubectl:
➜ which kubectl.exe
➜ /mnt/c/Program Files/Docker/Docker/resources/bin/kubectl.exe
➜ which kubectl
➜ kubectl: aliased to /mnt/c/Program\ Files/Docker/Docker/resources/bin/kubectl.exe
➜ kubectl config view --raw > $HOME/.kube/config

Why does simple kubectl(1.16) run show an error?

kubectl version
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-18T14:56:51Z", GoVersion:"go1.12.13", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
error
When I run kubectl run, an error occurs.
$ kubectl run nginx --image=nginx
WARNING: New generator "deployment/apps.v1" specified, but it isn't available. Falling back to "run/v1".
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
error: no matches for kind "Deployment" in version "apps/v1"
It seems like this is caused by the new version (1.16.x), doesn't it?
As far as I have searched, even the official documentation doesn't explicitly mention this situation. How can I use kubectl run?
Try
kubectl create deployment --image nginx my-nginx
As the kubectl Usage Conventions suggest,
Specify the --generator flag to pin to a specific behavior when you use generator-based commands such as kubectl run or kubectl expose
Use kubectl run --generator=run-pod/v1 nginnnnnnx --image nginx instead.
Also, @soltysh describes well enough why it's better to use kubectl create instead of kubectl run.

Authentication problem when installing something

The helm install command fails due to a credentials problem, but when you test with kubectl get nodes everything looks fine.
Output of helm install:
⋊> ~/t/mtltech on master ⨯ helm install --name nginx-ingress stable/nginx-ingress --set rbac.create=true 00:31:41
Error: the server has asked for the client to provide credentials
Output of kubectl get nodes:
⋊> ~/t/mtltech on master ⨯ kubectl get nodes 00:37:41
NAME                                     STATUS   ROLES    AGE   VERSION
gke-mtltech-default-pool-977ee0b2-5lmi   Ready    <none>   7h    v1.11.7-gke.4
gke-mtltech-default-pool-977ee0b2-hi4v   Ready    <none>   7h    v1.11.7-gke.4
gke-mtltech-default-pool-977ee0b2-mjiv   Ready    <none>   7h    v1.11.7-gke.4
Output of helm version:
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.7-gke.4", GitCommit:"618716cbb236fb7ca9cabd822b5947e298ad09f7", GitTreeState:"clean", BuildDate:"2019-02-05T19:22:29Z", GoVersion:"go1.10.7b4", Compiler:"gc", Platform:"linux/amd64"}
Cloud Provider: Google Cloud
I've tried to reset it several times with rm -rf ~/.helm && helm init --service-account tiller but it doesn't change anything.
Any idea?
Thanks.
The problem here is Tiller. I do not know how you deployed Helm and Tiller, but the mistake was there.
I used this chart and everything worked correctly; then I deleted my service account and cluster role binding and got the same error. Deleting only the cluster role binding gives this error:
Error: release nginx-ingress failed: namespaces "default" is forbidden: User "system:serviceaccount:kube-system:tiller" cannot get namespaces in the namespace "default"
So the error is due to a missing ServiceAccount, a missing ClusterRoleBinding, or both.
Solution for this:
rm -rf ~/.helm
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
helm init --service-account=tiller
kubectl get pods -n kube-system
Check the full name of the tiller pod and delete it:
kubectl delete pod -n kube-system tiller-deploy-xxx
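One way to watch for the replacement pod coming up is the rollout status of the tiller deployment (a sketch; tiller-deploy is the deployment name created by helm init):
kubectl rollout status deployment/tiller-deploy -n kube-system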
Wait until the tiller pod is redeployed, then install your Helm chart:
helm install --name nginx-ingress stable/nginx-ingress --set rbac.create=true

How to install latest Kong chart with ingress controller?

I have a problem installing the Helm chart for Kong and I am not sure if I'm doing something wrong or it's a chart issue.
If I run:
helm upgrade kong stable/kong \
--install \
--set ingressController.enabled=true
Everything is installed, but on the second run (and I would like to always execute the same command in CI/CD) it fails because the migration tasks have the wrong password.
If I try to hardcode the password:
helm upgrade kong stable/kong \
--install \
--set ingressController.enabled=true,env.pg_password=<hardcoded-string>
It fails even on the first try with a password failure.
What is the expected way to deploy Kong as an ingress controller in a continuous manner?
Edit 1
Versions:
rafal.wrzeszcz@devel0:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.6", GitCommit:"b1d75deca493a24a2f87eb1efde1a569e52fc8d9", GitTreeState:"clean", BuildDate:"2018-12-16T04:30:10Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
rafal.wrzeszcz@devel0:~$ helm version
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Edit 2
kong-postgresql secret, proof of existence:
rafal.wrzeszcz#devel0:~$ kubectl get secret kong-postgresql -o yaml --namespace <my-namespace>
apiVersion: v1
data:
  postgresql-password: <base64-encoded-string>
…
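For reference, the password stored in that secret can be decoded like this (a sketch; adjust the namespace to your setup):
kubectl get secret kong-postgresql --namespace <my-namespace> -o jsonpath='{.data.postgresql-password}' | base64 --decode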