Kubernetes times out after re-installation on macOS - kubernetes

I followed the instructions on https://kubernetes.io/docs/tasks/tools/install-kubectl/ on my Mac and installed Kubernetes CLI using brew.
brew install kubernetes-cli
kubectl and Minikube were already installed some time ago, so I was expecting an update. Now kubectl version and kubectl cluster-info time out.
pa-demo jps$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-08T16:31:10Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: dial tcp 192.168.99.100:8443: i/o timeout
When I try to install kubernetes-cli again, I get:
Updating Homebrew...
==> Auto-updated Homebrew!
Updated 2 taps (homebrew/core, homebrew/cask).
==> New Formulae
topgrade
==> Updated Formulae
bison ✔ azure-cli bwfmetaedit erlang@20 ghostscript jfrog-cli-go ldc p11-kit smlnj youtube-dl
sphinx-doc ✔ babel crystal fauna-shell helmfile juju mkvtoolnix pyside tarsnap-gui
alexjs bat doctl fortio influxdb kore nginx re2c thors-serializer
Warning: kubernetes-cli 1.11.2 is already installed and up-to-date
To reinstall 1.11.2, run `brew reinstall kubernetes-cli`

You may have installed Minikube, but that doesn't necessarily mean it's actively running. You'd need to run minikube start to actually start the cluster on your machine. This also configures your kubeconfig file to point at the cluster it built.
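For example, a minimal check (a sketch, assuming the local Minikube cluster is the one your kubeconfig should point at):

minikube status        # shows whether the Minikube VM and cluster are actually running
minikube start         # starts (or recreates) the cluster and updates ~/.kube/config
kubectl cluster-info   # should now reach the API server instead of timing out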

Related

minikube detecting old version when kubectl is up to date

I am installing minikube again on my Windows machine (I installed it a couple of years ago but hadn't used it in over a year), and the installation of the most recent kubectl and minikube went well. That is, up until I tried to start minikube with:
minikube start --vm-driver=virtualbox
Which gives the error:
C:\>minikube start --vm-driver=virtualbox
* minikube v1.6.2 on Microsoft Windows 10 Pro 10.0.18362 Build 18362
* Selecting 'virtualbox' driver from user configuration (alternates: [])
! Specified Kubernetes version 1.10.0 is less than the oldest supported version: v1.11.10
X Sorry, Kubernetes 1.10.0 is not supported by this release of minikube
This doesn't make sense, since kubectl version --client reports v1.17.0:
C:\>kubectl version --client
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"windows/amd64"}
I did find that, for some reason, when the downloaded kubectl.exe sits in the kubectl folder under Program Files (x86) (the folder my environment variable already points to), it reports version v1.14.3. But when I copy that same file and paste it at the root of the C: drive, it reports v1.17.0.
I am assuming that is just because being at the root works the same as adding it to the environment variables, but that would mean something still has an old v1.14.3 kubectl file, and there aren't any other kubectl files in that folder.
So basically, I am not sure whether something needs to be set in minikube (I haven't seen a reference to it in the documentation), but somehow minikube is detecting an old kubectl that I need to get rid of.
Since you already had minikube installed before and have now updated the installation, the best thing to do is execute minikube delete to clean up all of the previous configuration.
The minikube delete command deletes your cluster: it shuts down and deletes the Minikube virtual machine. No data or state is preserved.
After that, execute minikube start --vm-driver=virtualbox and wait for the cluster to come up.
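Put together, the reset looks like this (a sketch, reusing the virtualbox driver from your original command):

minikube delete                          # remove the old VM and its cached configuration
minikube start --vm-driver=virtualbox    # recreate the cluster with the VirtualBox driver
kubectl get nodes                        # verify the new cluster is reachable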
References:
https://kubernetes.io/docs/setup/learning-environment/minikube/#deleting-a-cluster

Kubernetes create deployment unexpected SchemaError

I'm following this tutorial (https://www.baeldung.com/spring-boot-minikube).
I want to create Kubernetes deployment in yaml file (simple-crud-dpl.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-crud
spec:
  selector:
    matchLabels:
      app: simple-crud
  replicas: 3
  template:
    metadata:
      labels:
        app: simple-crud
    spec:
      containers:
      - name: simple-crud
        image: simple-crud:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
But when I run kubectl create -f simple-crud-dpl.yaml I get:
error: SchemaError(io.k8s.api.autoscaling.v2beta2.MetricTarget): invalid object doesn't have additional properties
I'm using the newest version of kubectl:
kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
I'm also using minikube locally, as described in the tutorial. Everything works up to the deployment and service; I'm not able to create those.
After installing kubectl with brew you should run:
rm /usr/local/bin/kubectl
brew link --overwrite kubernetes-cli
And also optionally:
brew link --overwrite --dry-run kubernetes-cli.
I second @rennekon's answer. I found that I had Docker running on my machine, which also installs kubectl, and that copy of kubectl causes this issue to show.
I took the following steps:
uninstalled it using brew uninstall kubectl
reinstalled it using brew install kubectl
(due to symlink creation failure) I forced brew to create symlinks using brew link --overwrite kubernetes-cli
I was then able to run my kubectl apply commands successfully.
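Condensed into commands, those steps look roughly like this (a sketch; kubectl here is Homebrew's alias for the kubernetes-cli formula):

brew uninstall kubectl                  # remove the Homebrew-installed copy
brew install kubectl                    # reinstall it
brew link --overwrite kubernetes-cli    # force the /usr/local/bin/kubectl symlink if linking failed
kubectl version --client                # confirm the expected client version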
I too had the same problem. On my Mac, kubectl was being run from the copy that is preinstalled when you install Docker. You can check this with the command below:
ls -l $(which kubectl)
which returns:
/usr/local/bin/kubectl -> /Applications/Docker.app/Contents/Resources/bin/kubectl
Now we have to overwrite the symlink with the kubectl installed via brew:
rm /usr/local/bin/kubectl
brew link --overwrite kubernetes-cli
(optional)
brew unlink kubernetes-cli && brew link kubernetes-cli
To Verify
ls -l $(which kubectl)
I encountered the same issue with minikube on Windows 10 after installing Docker.
It was caused by the version mismatch of kubectl that was mentioned a couple of times already in this thread. Docker installs version 1.10 of kubectl.
You have a couple of options:
1) Make sure the path to your Kubernetes bin directory comes before Docker's in your PATH (see the check after this list)
2) Replace the kubectl in 'c:\Program Files\Docker\Docker\resources\bin' with the correct one
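A quick way to see which copy wins (a sketch; where.exe lists matches in PATH search order, and the first result is the one the shell runs):

where kubectl              # lists every kubectl.exe on the PATH, in search order
kubectl version --client   # reports the version of whichever copy comes first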
Your client version is too old. In my environment this version came with Docker. I had to download a new client from https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/windows/amd64/kubectl.exe and now it works fine:
kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
You can use "--validate=false" in your command. For example:
kubectl create -f simple-crud-dpl.yaml --validate=false
You are using the wrong kubectl version.
kubectl is compatible within one minor version up or down, as described in the official docs.
The error is confusing, but it simply means that your v1.10 client isn't sending all the parameters the v1.14 API expects.
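A quick way to compare the two versions (a sketch; --short is supported by kubectl releases of this era, and the versions shown are the ones from the question):

kubectl version --short
# Client Version: v1.10.11
# Server Version: v1.14.0   <- more than one minor version apart, which is unsupported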
I am on Windows 10 with the Docker client and Minikube both installed. I was getting the error below:
error: SchemaError(io.k8s.api.core.v1.Node): invalid object doesn't have additional properties
I resolved it by updating the version of kubectl.exe to that being used by minikube. Here are the steps:
Note: Minikube tends to use the latest version of Kubernetes, so it is advisable to grab the latest kubectl.
Download the matching version of kubectl.exe.
Navigate to the Docker path where your kubectl is located, e.g.:
C:\Program Files\Docker\Docker\resources\bin
Place your downloaded kubectl.exe there. If it asks you to replace the existing one, please do.
Now type refreshenv in Powershell.
Check that the version is now the one you placed there: kubectl version.
Now you are good; retry whatever task you were doing.
I was getting the error below while running kubectl explain pod on Windows 10:
error: SchemaError(io.k8s.api.core.v1.NodeCondition): invalid object doesn't have additional properties
I had both Minikube and Docker Desktop installed. The reason for this error, as mentioned in earlier answers, was a mismatch between the server version (major 1, minor 15) and the client version (major 1, minor 10). The client version was coming from Docker Desktop.
To fix it, I upgraded the kubectl client to v1.15.1 as described here:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.1/bin/windows/amd64/kubectl.exe
Mac users!!! This is for those who installed Docker Desktop first. The error shows up when you use the apply command. The error comes from a version mismatch, as others have said here. I did not install kubectl using Homebrew; rather, kubectl gets installed automatically when you install Docker Desktop for Mac.
To fix this, here is what I did:
Remove the kubectl executable file
rm /usr/local/bin/kubectl
Download kubectl:
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
Change the permission:
chmod +x ./kubectl
Move the executable file :
sudo mv ./kubectl /usr/local/bin/kubectl
That is it, folks!
Just to show it worked, here is the output:
kubectl apply -f ./deployment.yaml
deployment.apps/tomcat-deployment created
Make sure the YAML file is correct. I downloaded a valid file to test from here:
https://github.com/LevelUpEducation/kubernetes-demo/tree/master/Introduction%20to%20Kubernetes/Your%20First%20k8s%20App
Was running into the same issue after installing kubectl on my Mac today. Uninstalling kubectl [via brew uninstall kubectl] and reinstalling [brew install kubectl] resolved the issue for me.
According to the kubectl docs,
You must use a kubectl version that is within one minor version difference of your cluster.
The kubectl v1.10 client apparently makes requests to the v1.14 API server without some parameters that became required over those four minor versions.
For brew users, reinstall kubernetes-cli. It's also worth checking what installed the incompatible version: look at the command symlink with ls -l $(which kubectl).
For me, the Docker installation was the problem. Since Docker now comes with Kubernetes support, it installs kubectl along with its own installation. I had downloaded kubectl and minikube without realizing that, so my minikube ended up being driven by Docker's kubectl.
Make sure the same thing is not happening to you.
A second cause would be a deprecated apiVersion in your .yaml files.
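For example, Deployments written against the old extensions/v1beta1 API (removed in Kubernetes 1.16) need to be moved to apps/v1; a minimal before/after sketch:

# deprecated form
apiVersion: extensions/v1beta1
kind: Deployment

# current form
apiVersion: apps/v1
kind: Deployment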
I had a similar problem with error
error: SchemaError(io.k8s.api.storage.v1beta1.CSIDriverList): invalid object doesn't have additional properties
My issue was that my Mac was using Google's kubectl, which was installed with the GCP tools; my PATH looks there before /usr/local/bin/.
Once I ran the kubectl from /usr/local/bin, my problem went away.
In my case, kubectl was always using Google's kubectl from the gcloud tool; most probably there was a conflict between the Homebrew-installed and the gcloud-installed kubectl. I uninstalled the Homebrew kubectl and upgraded the gcloud tool to the latest version, which upgrades its kubectl in the process. That resolved my issue.
I don't think the problem is with imagePullPolicy, unless you don't have the image locally. The error is about autoscaling, which means it's not able to create replicas of the container.
Can you set replicas: 1 and give it a try?
On Windows 10, uninstalling Docker helped me get around this problem. I'm working with kubectl and minikube.
I know this has already been answered, but I thought I should post my response, since the answers above were helpful yet it took me a while to relate them to Azure DevOps.
I was getting this error when trying to deploy an app to an AKS cluster from Azure DevOps. As mentioned above, one cause of this error is a version mismatch, which was the case for me. I fixed it by updating the AKS version in the kubectl advanced configuration section.

Minikube Not Starting on Ubuntu 18.04

Minikube is not starting, with several error messages.
kubectl version gives the following port-related message:
iqbal#ThinkPad:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
You didn't give many details, but I resolved some similar concerns a few days ago involving minikube issues with Kubernetes 1.12.
Indeed, the compatibility matrix between Kubernetes and Docker recommends running:
Docker 18.06 + Kubernetes 1.12 (Docker 18.09 is not supported yet).
Thus, make sure your Docker version is NOT above 18.06. Then run the following:
# clean up
minikube delete
minikube start --vm-driver="none"
kubectl get nodes
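To check the Docker version mentioned above (a quick sketch):

docker version --format '{{.Server.Version}}'   # should report 18.06.x or lower for Kubernetes 1.12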
If you are still encountering issues, please give more details, namely minikube logs.
If you want to change the VM driver, add the appropriate --vm-driver=xxx flag to minikube start. Minikube supports
the following drivers:
virtualbox
vmwarefusion
KVM2
KVM (deprecated in favor of KVM2)
hyperkit
xhyve
hyperv
none (Linux-only) - this driver can be used to run the Kubernetes cluster components on the host instead of in a VM. This can be useful for CI workloads which do not support nested virtualization.
For example, if your VM driver is virtualbox, then use:
$ minikube delete
$ minikube start --vm-driver=virtualbox

kubectl : connection refused

I am in the middle of installing minikube 0.19.1 on Ubuntu 16.04, following the Kubernetes documentation. As prerequisites I have installed kubectl and Oracle VirtualBox.
When I check kubectl with kubectl version, it gives the following:
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18:34:20Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
But when I check the port with netstat, there is no process listening on it.
I have set up the Google Cloud SDK as well.
I have searched and tried many solutions, including this one, but was not able to resolve my issue.
Here are my gcloud config and info results.
$gcloud config list
[compute]
zone = asia-southeast1-a
[core]
account = userName@mail.com
disable_usage_reporting = False
project = sampleproject1990
$gcloud info
Google Cloud SDK [159.0.0]
Platform: [Linux, x86_64] ('Linux', 'userName', '4.8.0-54-generic', '#57~16.04.1-Ubuntu SMP Wed May 24 16:22:28 UTC 2017', 'x86_64', 'x86_64')
Python Version: [2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609]]
Python Location: [/usr/bin/python2]
Site Packages: [Disabled]
Installation Root: [/home/userName/products/google-cloud-sdk]
Installed Components:
kubectl: []
core: [2017.06.09]
gcloud: []
gsutil: [4.26]
bq: [2.0.24]
alpha: [2017.03.24]
System PATH: [PATH=/usr/lib/jvm/java-8-oracle/bin:/home/userName/bin:/home/userName/.local/bin:/usr/local/maven/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/lib/jvm/java-8-oracle/bin:/usr/lib/jvm/java-8-oracle/db/bin:/usr/lib/jvm/java-8-oracle/jre/bin:/usr/local/apache-maven-3.3.9/bin]
Python PATH: [/home/userName/products/./google-cloud-sdk/lib/third_party:/home/userName/products/google-cloud-sdk/lib:/usr/lib/python2.7/:/usr/lib/python2.7/plat-x86_64-linux-gnu:/usr/lib/python2.7/lib-tk:/usr/lib/python2.7/lib-old:/usr/lib/python2.7/lib-dynload]
Cloud SDK on PATH: [False]
Kubectl on PATH: [/usr/local/bin/kubectl]
WARNING: There are old versions of the Google Cloud Platform tools on your system PATH.
/usr/local/bin/kubectl
Installation Properties: [/home/userName/products/google-cloud-sdk/properties]
User Config Directory: [/home/userName/.config/gcloud]
Active Configuration Name: [my-configuration]
Active Configuration Path: [/home/userName/.config/gcloud/configurations/config_my-configuration]
Account: [userName@mail.com]
Project: [sampleproject1990]
Current Properties:
[core]
project: [sampleproject1990]
account: [userName@mail.com]
disable_usage_reporting: [False]
[compute]
zone: [asia-southeast1-a]
Logs Directory: [/home/userName/.config/gcloud/logs]
Last Log File: [/home/userName/.config/gcloud/logs/2017.06.21/12.39.23.391849.log]
git: [git version 2.7.4]
ssh: [OpenSSH_7.2p2 Ubuntu-4ubuntu2.2, OpenSSL 1.0.2g 1 Mar 2016]
Can anyone tell me how I can fix this issue?
I had similar issues with Minikube and the VirtualBox driver. Please ensure that the host-only interface VirtualBox is configured to use is up.
I did sudo ifconfig vboxnet0 up and my issue got resolved.
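To find and verify the interface (a sketch, assuming VirtualBox's default vboxnet0 naming):

ip link show | grep vboxnet   # list VirtualBox host-only interfaces
sudo ifconfig vboxnet0 up     # bring the interface up
ip addr show vboxnet0         # confirm it is flagged UP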
I faced the same issue. It turns out that I was running the command without being the root user. So, if you log in as the super user (sudo -i), it might work.
This issue occurs because the kubelet is not running or is not healthy.
One way to resolve this issue:
$ sudo swapoff -a
$ sudo systemctl enable kubelet
$ sudo systemctl start kubelet
After this, deploy Kubernetes with kubeadm as given below:
$ sudo kubeadm init --ignore-preflight-errors=all
After loading the kubeadm credentials, untaint the master node and join worker nodes if you are working on a cluster.
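Those two steps typically look like this (a sketch; kubeadm init prints the exact credential-copy commands for your setup, and the taint removal assumes a single-node cluster):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# allow pods to schedule on the master when running a single-node cluster
kubectl taint nodes --all node-role.kubernetes.io/master-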
And now give the command:
$ sudo kubectl cluster-info
The server and the client should be running with the same Kubernetes version.
If this solution doesn't work, scrap Kubernetes, kubectl, kubeadm, and kubelet, and follow only the Kubernetes installation steps from this guide.

Kubernetes configuration step 2 CentOS 7

From http://kubernetes.io/docs/getting-started-guides/kubeadm/
CentOS Linux release 7.2.1511 (Core)
(1/4) Installing kubelet and kubeadm on your hosts
.....
it's ok
$sudo docker -v
Docker version 1.10.3, build cb079f6-unsupported
$sudo kubeadm version
$kubeadm version: version.Info{Major:"1", Minor:"5+", GitVersion:"v1.5.0-alpha.0.1534+cf7301f16c0363-dirty", GitCommit:"cf7301f16c036363c4fdcb5d4d0c867720214598", GitTreeState:"dirty", BuildDate:"2016-09-27T18:10:39Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
$sudo systemctl enable docker && systemctl start docker
$sudo systemctl enable kubelet && systemctl start kubelet
it's ok again
$ sudo kubeadm init
<master/tokens> generated token: "15a340.9910f948879b5d99"
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready
And at that point the process stopped.
Probably I don't understand something, but Red Hat OpenShift version 3 uses Kubernetes + Docker. I tried the OpenShift v3 Docker version download - it was OK.
I fixed that issue on a similar setup by mapping the private IP address to localhost in the /etc/hosts file.
Example: /etc/hosts
10.0.0.2 localhost
Then I ran into a problem where kubectl get nodes threw:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I fixed this by copying the generated config to my local kubeconfig:
cp /etc/kubernetes/kubelet.conf ~/.kube/config
There are a couple of possibilities here:
1) In older kubeadm versions, SELinux blocks access at this point.
2) If you are behind a proxy, you will need to add the usual proxy variables to the kubeadm environment:
HTTP_PROXY
HTTPS_PROXY
NO_PROXY
Plus these, which I have not seen documented anywhere:
KUBERNETES_HTTP_PROXY
KUBERNETES_HTTPS_PROXY
KUBERNETES_NO_PROXY
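A sketch of setting those variables before running kubeadm (the proxy address and NO_PROXY ranges are placeholders; -E preserves the environment under sudo):

export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://proxy.example.com:3128
export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12
export KUBERNETES_HTTP_PROXY=$HTTP_PROXY
export KUBERNETES_HTTPS_PROXY=$HTTPS_PROXY
export KUBERNETES_NO_PROXY=$NO_PROXY
sudo -E kubeadm init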
.....
<master/apiclient> all control plane components are healthy after 20.585964 seconds
<master/apiclient> waiting for at least one node to register and become ready
<master/apiclient> first node is ready after 8.259447 seconds
<master/apiclient> attempting a test deployment
<master/apiclient> test deployment succeeded
<master/discovery> created essential addon: kube-discovery, waiting for it to become ready
<master/discovery> kube-discovery is ready after 66.415198 seconds
kubeadm: I am an alpha version, my authors welcome your feedback and bug reports
kubeadm: please create an issue using https://github.com/kubernetes/kubernetes/issues/new
kubeadm: and make sure to mention @kubernetes/sig-cluster-lifecycle. Thank you!
failed creating essential kube-proxy addon [Timeout: request did not complete within allowed duration]
Fixed. But I had installed and configured version 1.2.0 successfully... Oh