Minikube crashes exec'ing into Pod using Alpine Linux - kubernetes

Every time I try to exec into a pod running Alpine Linux through the minikube dashboard, it crashes and closes the connection with the following error:
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:262: starting container process caused "exec: \"bash\": executable file not found in $PATH"
CONNECTION CLOSED
Output from the command "kubectl version" reads as follows:
Client Version: version.Info{Major:"1", Minor:"8",
GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8",
GitVersion:"v1.8.0",
GitCommit:"0b9efaeb34a2fc51ff8e4d34ad9bc6375459c4a4",
GitTreeState:"clean", BuildDate:"2017-11-29T22:43:34Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}
Can anybody please advise? I can run other containers perfectly OK as long as they have bash, not ash.
Many thanks

Alpine Linux normally doesn't include bash.
Have you tried exec'ing into the container with any of the following?
/bin/ash
/bin/sh
ash
sh
So, for example, kubectl exec -it my-alpine-shell-293fj2fk-fifni2 -- sh should do the job.

Every time I try to exec into a pod
You didn't specify the command you provided to kubectl exec, but based on your question I'm going to assume it is kubectl exec -it $pod -- bash.
The problem, as the error message states, is that the container image you are using does not provide bash. Many, many "slim" images don't ship with bash because of the dependencies it would pull in.
If you want a command that works across almost all images, use sh: it is nearly always present, and 90% of the time when bash is present, /bin/sh is just a link to it. In the remaining cases (as you mentioned, ash or dash or whatever), sh will still work and lets you determine whether you need to request a different shell explicitly.
Thus kubectl exec -it $pod -- sh is the command I would expect to work.
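If you want to check which shells an image actually ships before settling on one, a quick probe looks like this (a sketch; the pod name is a placeholder):
# Hypothetical pod name; substitute your own
kubectl exec -it my-alpine-pod -- sh -c 'for s in bash ash sh; do command -v "$s" || echo "$s: not found"; done'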

Related

Running kubectl patch --local fails due to missing config

I have a GitHub Actions workflow that substitutes a value in a deployment manifest. I use kubectl patch --local=true to update the image. This worked flawlessly until today, when the workflow started to fail with a Missing or incomplete configuration info error.
I am running kubectl with the --local flag, so the config should not be needed. Does anyone know why kubectl suddenly started requiring a config? I can't find anything useful in the Kubernetes GitHub issues, and hours of googling didn't help.
Output of the failed step in GitHub Actions workflow:
Run: kubectl patch --local=true -f authserver-deployment.yaml -p '{"spec":{"template":{"spec":{"containers":[{"name":"authserver","image":"test.azurecr.io/authserver:20201230-1712-d3a2ae4"}]}}}}' -o yaml > temp.yaml && mv temp.yaml authserver-deployment.yaml
error: Missing or incomplete configuration info. Please point to an existing, complete config file:
1. Via the command-line flag --kubeconfig
2. Via the KUBECONFIG environment variable
3. In your home directory as ~/.kube/config
To view or setup config directly use the 'config' command.
Error: Process completed with exit code 1.
Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0",
GitCommit:"ffd68360997854d442e2ad2f40b099f5198b6471", GitTreeState:"clean",
BuildDate:"2020-11-18T13:35:49Z", GoVersion:"go1.15.0", Compiler:"gc",
Platform:"linux/amd64"}
As a workaround I installed kind (it does take longer for the job to finish, but at least it's working and it can be used for e2e tests later).
Added this step:
- name: Setup kind
  uses: engineerd/setup-kind@v0.5.0
- run: kubectl version
Also use --dry-run=client as an option for your kubectl command.
I do realize this is not the proper solution.
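For reference, a sketch of the suggested invocation, combining --local with the --dry-run=client option on the same manifest as in the question:
kubectl patch --local=true --dry-run=client -f authserver-deployment.yaml -p '{"spec":{"template":{"spec":{"containers":[{"name":"authserver","image":"test.azurecr.io/authserver:20201230-1712-d3a2ae4"}]}}}}' -o yaml > temp.yaml && mv temp.yaml authserver-deployment.yaml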
You still need to set the config to access the Kubernetes cluster. Even though you are modifying the file locally, you are still executing a kubectl command that has to be run against the cluster. By default, kubectl looks for a file named config in the $HOME/.kube directory.
error: current-context is not set indicates that there is no current context set for the cluster, so kubectl cannot be executed against it. You can create a context for a Service Account using this tutorial.
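A minimal sketch of wiring up such a context by hand (every name and the token below are placeholders):
# Placeholders throughout; take the token from the service account's secret
kubectl config set-cluster my-cluster --server=https://<api-server-host>:6443 --certificate-authority=ca.crt
kubectl config set-credentials deploy-bot --token=<service-account-token>
kubectl config set-context deploy-ctx --cluster=my-cluster --user=deploy-bot
kubectl config use-context deploy-ctx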
Exporting KUBERNETES_MASTER environment variable should do the trick:
$ export KUBERNETES_MASTER=localhost:8081 # 8081 port, just to ensure it works
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate
:"2020-08-13T16:12:48Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8081 was refused - did you specify the right host or port?
# Notice the port 8081 in the error message ^^^^^^
Now patch should also work as always:
$ kubectl patch --local=true -f testnode.yaml -p '{"metadata":{"managedFields":[]}}' # to get the file content use -o yaml
node/k8s-w1 patched
Alternatively, you can update kubectl to a later version. (v1.18.8 works fine even without the trick)
Explanation:
The change was likely introduced by PR #86173 ("stop defaulting kubeconfig to http://localhost:8080").
It was reverted for later 1.18.x versions in PR #90243 (Revert "stop defaulting kubeconfig to http://localhost:8080"); see issue #90074 (kubectl --local requires a valid kubeconfig file) for the details.
I ended up using sed to replace the image string instead:
- name: Update manifests with new images
  working-directory: test/cloud
  run: |
    sed -i "s~image:.*$~image: ${{ steps.image_tags.outputs.your_new_tag }}~g" your-deployment.yaml
Works like a charm now.
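If you want to sanity-check the substitution first, running the same sed without -i prints the result instead of rewriting the file (the tag below is made up):
sed "s~image:.*$~image: test.azurecr.io/authserver:20210101-0000-abcdef0~g" your-deployment.yaml | grep "image:"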

Kubernetes create deployment unexpected SchemaError

I'm following this tutorial (https://www.baeldung.com/spring-boot-minikube).
I want to create a Kubernetes deployment from a yaml file (simple-crud-dpl.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-crud
spec:
  selector:
    matchLabels:
      app: simple-crud
  replicas: 3
  template:
    metadata:
      labels:
        app: simple-crud
    spec:
      containers:
      - name: simple-crud
        image: simple-crud:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
but when I run kubectl create -f simple-crud-dpl.yaml I get:
error: SchemaError(io.k8s.api.autoscaling.v2beta2.MetricTarget): invalid object doesn't have additional properties
I'm using the newest version of kubectl:
kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
I'm also using minikube locally, as described in the tutorial. Everything works up to the deployment and service; those I'm not able to create.
After installing kubectl with brew you should run:
rm /usr/local/bin/kubectl
brew link --overwrite kubernetes-cli
And optionally, to preview what would be linked first:
brew link --overwrite --dry-run kubernetes-cli
I second @rennekon's answer. I found that I had Docker running on my machine, which also installs kubectl. That installation of kubectl causes this issue to show.
I took the following steps:
uninstalled it using brew uninstall kubectl
reinstalled it using brew install kubectl
(due to symlink creation failure) I forced brew to create symlinks using brew link --overwrite kubernetes-cli
I was then able to run my kubectl apply commands successfully.
I too had the same problem. On my Mac, kubectl was running from Docker; it comes preinstalled when you install Docker. You can check this using the command below:
ls -l $(which kubectl)
which returns
/usr/local/bin/kubectl -> /Applications/Docker.app/Contents/Resources/bin/kubectl
Now we have to overwrite the symlink with the kubectl installed via brew:
rm /usr/local/bin/kubectl
brew link --overwrite kubernetes-cli
(optional)
brew unlink kubernetes-cli && brew link kubernetes-cli
To verify:
ls -l $(which kubectl)
I encountered the same issue with minikube on Windows 10 after installing Docker.
It was caused by the kubectl version mismatch already mentioned a couple of times in this thread: Docker installs version 1.10 of kubectl.
You have a couple of options:
1) Make sure the path to your k8s bin comes before Docker's in PATH
2) Replace the kubectl in 'C:\Program Files\Docker\Docker\resources\bin' with the correct one
Your client version is too old. In my environment this version came with Docker. I had to download a new client from https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/windows/amd64/kubectl.exe and now it works fine:
kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
You can use "--validate=false" in your command. For example:
kubectl create -f simple-crud-dpl.yaml --validate=false
You are using the wrong kubectl version.
kubectl is compatible within one minor version up or down, as described in the official docs.
The error is confusing, but it simply means that your 1.10 client isn't sending all the parameters the 1.14 API requires. See the sketch below for a quick way to confirm the skew.
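A sketch for confirming the skew and finding out which binary you are actually running (--short is available on clients of this era):
kubectl version --short      # compare the Client Version and Server Version lines
ls -l $(which kubectl)       # a symlink into Docker's bundle explains the old client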
I am on Windows 10 with Docker Client and Minikube both installed. I was getting the error below;
error: SchemaError(io.k8s.api.core.v1.Node): invalid object doesn't have additional properties
I resolved it by updating the version of kubectl.exe to that being used by minikube. Here are the steps:
Note: Minikube tends to use the latest version of Kubernetes, so it is advisable to grab the latest kubectl.
Download the matching version of kubectl.exe.
Navigate to your Docker path where your kubectl is located e.g.
C:\Program Files\Docker\Docker\resources\bin
Place your downloaded kubectl.exe there. If it asks you replace it, please do.
Now type refreshenv in PowerShell.
Check that the version now reported is the one you placed there: kubectl version.
Now you are good; retry whatever task you were doing.
I was getting the error below while running kubectl explain pod on Windows 10:
error: SchemaError(io.k8s.api.core.v1.NodeCondition): invalid object doesn't have additional properties
I had both Minikube and Docker Desktop installed. The reason for this error, as mentioned in earlier answers as well, was a mismatch between the server version (major 1, minor 15) and the client version (major 1, minor 10). The client version was coming from Docker Desktop.
To fix it, I upgraded the kubectl client to v1.15.1 as described here:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.1/bin/windows/amd64/kubectl.exe
Mac users!!! This is for those who installed Docker Desktop first. The error will show up when you use the apply command. As others have said here, it comes from a version mismatch. I did not install kubectl using Homebrew; rather, kubectl gets installed automatically when you install Docker Desktop for Mac.
To fix this, here is what I did:
Remove the kubectl executable file
rm /usr/local/bin/kubectl
Download kubectl:
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
Change the permission:
chmod +x ./kubectl
Move the executable file:
sudo mv ./kubectl /usr/local/bin/kubectl
That is it folks!
Just to show it worked here is the output:
kubectl apply -f ./deployment.yaml
deployment.apps/tomcat-deployment created
Make sure the yml file is correct. I downloaded a valid file to test from here:
https://github.com/LevelUpEducation/kubernetes-demo/tree/master/Introduction%20to%20Kubernetes/Your%20First%20k8s%20App
Was running into the same issue after installing kubectl on my Mac today. Uninstalling kubectl [via brew uninstall kubectl] and reinstalling [brew install kubectl] resolved the issue for me.
According to the kubectl docs,
You must use a kubectl version that is within one minor version difference of your cluster.
The kubectl v1.10 client apparently makes requests to the v1.14 server without some parameters that became required in the intervening four minor versions.
For brew users, reinstall kubernetes-cli. It's also worth checking what installed the incompatible version in the first place: inspect the symlink with ls -l $(which kubectl).
For me the Docker installation was the problem. As Docker now comes with Kubernetes support, it installs kubectl along with its own installation. I had downloaded kubectl and minikube without knowing that, so my minikube commands were going through Docker's kubectl installation.
Make sure the same is not happening to you.
A second cause would be a deprecated apiVersion in your .yaml files.
I had a similar problem with the error
error: SchemaError(io.k8s.api.storage.v1beta1.CSIDriverList): invalid object doesn't have additional properties
My issue was that my Mac was using Google's kubectl, installed with the GCP tools. My PATH looked there first before going to /usr/local/bin.
Once I ran the kubectl from /usr/local/bin, my problem went away.
In my case, kubectl was always resolving to Google's kubectl from the gcloud tool; there was most probably a conflict between the Homebrew-installed and gcloud-installed kubectl. I uninstalled the Homebrew kubectl and upgraded the gcloud tool to the latest version, which upgrades kubectl in the process. That resolved my issue.
I don't think the problem is with imagePullPolicy, unless you don't have the image locally. The error is about autoscaling, which means it's not able to create replicas of the container.
Can you set replicas: 1 and give it a try?
On Windows 10, uninstalling Docker helped me get past this problem. I was working with kubectl and minikube.
I know this has already been answered, but I thought I should post my response since the responses above were helpful, yet it took me a while to relate them to Azure DevOps.
I was getting this error when trying to deploy an app to an AKS cluster from Azure DevOps. As mentioned above, one cause of this error is a version mismatch, which was the cause in my case. I fixed it by selecting my AKS version in the kubectl task's advanced configuration section.

minikube start failure workaround clears any parameters passed to kubeadm init

Here are the versions I'm using:
Docker-ce
Client:
Version: 17.06.1-ce
Server:
Engine:
Version: 17.06.1-ce
minikube:
kubectl:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Kubeadm:
kubeadm version: &version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
VirtualBox:
Version 5.2.18 r124319 (Qt5.6.2)
I happen to need to specify the following:
kubeadm reset
kubeadm init --pod-network-cidr=192.168.0.0/16
However, when I then start minikube, it always fails with the following:
kubeconfig file "/etc/kubernetes/admin.conf" exists already but has got the wrong CA cert
The workaround I've been able to find is to delete all .conf files in /etc/kubernetes
cd /etc/kubernetes/
sudo rm *.conf
cd
sudo minikube delete # may also need rm -rf ~/.minikube
sudo minikube start --vm-driver=none
However, new config files are generated, as are the .yaml files under /etc/kubernetes/manifests, thus erasing all additional attributes of the configuration.
Up to that point, kubeadm config view would show the kubeadm init pod-network-cidr parameter, but not after deleting the .conf files and starting minikube again.
First:
Is this ...wrong CA cert error a bug with minikube?
Is there an alternate workaround that would maintain the extra parameters passed during kubeadm init?
I've also tried passing the 3 attributes that get cleared from the kube-controller-manager.yaml file as extra-config parameters on the minikube start command.
The three missing attributes associated with --pod-network-cidr=192.168.0.0/16 that I've been able to ascertain are:
--allocate-node-cidrs=true
--cluster-cidr=192.168.0.0/16
--node-cidr-mask-size=24
My minikube start command looks like this:
sudo minikube start --vm-driver=none --extra-config=controller-manager.allocate-node-cidrs=true, controller-manager.cluster-cidr=192.168.0.0/16, controller-manager.node-cidr-mask-size=24
But I get a further error when trying this.
Any suggestions?
You generally use minikube to set up a mini Kubernetes cluster of its own, usually on your local machine.
You generally use kubeadm to set up a full-blown cluster of its own.
You don't generally use both of them together.
Hope it helps!
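That said, if you do want to keep the controller-manager settings when starting through minikube: --extra-config is a repeatable flag that takes one key=value per occurrence, so the comma-separated list in your command won't parse. A sketch of the repeated-flag form:
sudo minikube start --vm-driver=none \
  --extra-config=controller-manager.allocate-node-cidrs=true \
  --extra-config=controller-manager.cluster-cidr=192.168.0.0/16 \
  --extra-config=controller-manager.node-cidr-mask-size=24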

kubernetes kubectl stalls for 2 Minutes

I am running kubectl on:
Microsoft Windows [Version 10.0.14393]
Pointing to a Kubernetes cluster deployed in Azure.
Running kubectl version with verbose logging, preceded by a time echo, shows a delay of ~2 min before any activity appears on the API calls.
Note the first log line, which appears 2 min after the command was invoked.
C:\tmp>echo 19:12:50.23
19:12:50.23
C:\tmp>kubectl version --kubeconfig=C:/Users/jbafd/.kube/config-hgfds-acs-containerservice-1 -v=20
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2
017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"windows/amd64"}
I0610 19:14:58.311364 9488 loader.go:354] Config loaded from file C:/Users/jbafd/.kube/config-hgfds-acs-containerservice-1
I0610 19:14:58.313864 9488 round_trippers.go:398] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl.exe/v1.6.4 (windows/amd64) kubernetes/d6f4332" https://xxjjmaster.australiasoutheast.cloudapp.azure.com/version
I0610 19:14:58.519869 9488 round_trippers.go:417] GET https://xxjjmaster.australiasoutheast.cloudapp.azure.com/version in 206 milliseconds
Other kubectl commands (get nodes etc.) exhibit the same delay.
Flushing the DNS cache didn't resolve the issue, yet the API requests themselves look responsive. Running the command as admin didn't help either.
What other operation is kubectl attempting before loading the config?
There could be two reasons for the latency:
kubectl is on a network drive (mostly the H: drive), so kubectl is first copied to your system and then run
the .kube/config file is on a network drive
So, to summarise: if either of these is on a network drive, you will face the delay.
One more thing you can try if that doesn't work out: run the kubectl command with -v=20, which will show the time taken by each step; see the sketch below.
reference
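To narrow it down, a sketch for a POSIX-ish shell (on plain cmd.exe use where kubectl instead of which):
time kubectl version -v=20        # the -v=20 trace timestamps each phase
ls -l "$(which kubectl)"          # is the binary on a network share?
ls -l "$HOME/.kube/config"        # is the kubeconfig on a network share?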

Kubernetes kubectl set image: unknown command

I want to use the command kubectl set image to upgrade my deployment, as detailed in the Kubernetes hello world walkthrough, but whenever I do I get the following error:
Error: unknown command "set" for "kubectl"
My Kubernetes version (obtained by running kubectl version) is the following:
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.5", GitCommit:"25eb53b54e08877d3789455964b3e97bdd3f3bce", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.5", GitCommit:"b0deb2eb8f4037421077f77cb163dbb4c0a2a9f5", GitTreeState:"clean"}
Any help would be really appreciated.
kubectl set was added in 1.3, and your client is 1.2.5.
Furthermore, you can use this answer to familiarize yourself with the syntax of the set image command:
https://stackoverflow.com/a/71849740/14486701
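Once the client is upgraded to 1.3 or newer, the walkthrough's command should work. A sketch with illustrative hello-world names (your deployment, container, and image will differ):
kubectl set image deployment/hello-node hello-node=gcr.io/PROJECT_ID/hello-node:v2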