When I run the kubectl version command, I get the following error message.
kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: dial tcp 192.168.99.100:8443: i/o timeout
How do I resolve this?
You can get relevant information about the client-server status by using the following command.
kubectl config view
Now you can update or set k8s context accordingly with the following command.
kubectl config use-context CONTEXT-CHOSEN-FROM-PREVIOUS-COMMAND-OUTPUT
You can take further action on the kubeconfig file; the following command will provide you with all the necessary information.
kubectl config --help
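For example, a typical sequence might look like this (the context name minikube is only an illustration; use whatever name the previous commands show):
kubectl config get-contexts          # list all contexts in your kubeconfig
kubectl config current-context       # show the context kubectl is currently using
kubectl config use-context minikube  # switch to the chosen context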
You first have to run
minikube start
in your terminal. This will do the following for you:
Restarting existing virtualbox VM for "minikube" ...
Waiting for SSH access ...
"minikube" IP address is 192.168.99.100
Configuring Docker as the container runtime ...
Version of container runtime is 18.06.3-ce
Waiting for image downloads to complete ...
Preparing Kubernetes environment ...
Pulling images required by Kubernetes v1.14.1 ...
Relaunching Kubernetes v1.14.1 using kubeadm ...
Waiting for pods: apiserver proxy etcd scheduler controller dns
Updating kube-proxy configuration ...
Verifying component health ......
kubectl is now configured to use "minikube"
Done! Thank you for using minikube!
If you use minikube, then you should run kubectl config use-context minikube.
If you use the latest Docker Desktop that comes with Kubernetes, then you should run kubectl config use-context docker-for-desktop.
I was facing the same issue on Ubuntu 18.04.1 LTS.
The solution provided here worked for me.
Just putting the same data here:
Get current cluster name and Zone:
gcloud container clusters list
Configure Kubernetes to use your current cluster:
gcloud container clusters get-credentials [cluster name] --zone [zone]
Hope it helps.
I had the same issue when I tried to use the Kubernetes that ships with Docker. It turned out that it was not enabled by default.
First I enabled Kubernetes in the Docker options, and then I changed the context to docker-for-desktop:
kubectl config get-contexts
kubectl config use-context docker-for-desktop
It solved the issue.
This problem occurs because of minikube. Restarting minikube will solve this problem. Run the commands below and it will work:
minikube stop
minikube delete
minikube start
I was facing the same problem with accessing the GKE master from Google Cloud Shell.
Then I followed this GCloud doc to solve it.
Open GCloud Shell
Get External IP of the current GCloud Shell with:
dig +short myip.opendns.com @resolver1.opendns.com
Add this External IP into the "Master authorized networks" section of the GKE cluster - with a CIDR suffix of /32
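If you prefer the CLI over the console, the same change can be made with gcloud; a minimal sketch, where the cluster name, zone, and IP are placeholders you would replace with your own:
# Allow the Cloud Shell's external IP to reach the GKE control plane
gcloud container clusters update my-cluster \
  --zone us-central1-a \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.10/32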
After that, running kubectl get nodes from the GCloud Shell worked right away.
I got a similar problem when I ran
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: dial tcp 192.168.99.100:8443: i/o timeout
And here's what I tried and what finally worked.
I first installed Docker Desktop on Mac (Version 2.0.0.3).
Then I installed kubectl with the command
$ brew install kubectl
.....
==> Pouring kubernetes-cli-1.16.0.high_sierra.bottle.tar.gz
Error: The `brew link` step did not complete successfully
The formula built, but is not symlinked into /usr/local
Could not symlink bin/kubectl
Target /usr/local/bin/kubectl
already exists. You may want to remove it:
rm '/usr/local/bin/kubectl'
To force the link and overwrite all conflicting files:
brew link --overwrite kubernetes-cli
To list all files that would be deleted:
brew link --overwrite --dry-run kubernetes-cli
Possible conflicting files are:
/usr/local/bin/kubectl -> /Applications/Docker.app/Contents/Resources/bin/kubectl
.....
That doesn't matter; we have already got kubectl.
Then I installed minikube with the command
$ brew cask install minikube
...
==> Linking Binary 'minikube-darwin-amd64' to '/usr/local/bin/minikube'.
minikube was successfully installed!
Start minikube the first time (VirtualBox not installed):
$ minikube start
minikube v1.4.0 on Darwin 10.13.6
Downloading VM boot image ...
> minikube-v1.4.0.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s
> minikube-v1.4.0.iso: 135.73 MiB / 135.73 MiB [-] 100.00% 7.75 MiB p/s 18s
Creating virtualbox VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
Retriable failure: create: precreate: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path
...
Unable to start VM
Error: [VBOX_NOT_FOUND] create: precreate: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path
Suggestion: Install VirtualBox, or select an alternative value for --vm-driver
Documentation: https://minikube.sigs.k8s.io/docs/start/
Related issues:
- https://github.com/kubernetes/minikube/issues/3784
Install VirtualBox, then start minikube a second time (VirtualBox installed):
$ minikube start
13:37:01.006849 35511 cache_images.go:79] CacheImage kubernetesui/dashboard:v2.0.0-beta4 -> /Users/kaka.go/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 failed: read tcp 10.49.52.206:50350->104.18.125.25:443: read: operation timed out
Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
E1002 13:37:33.632298 35511 start.go:706] Error caching images: Caching images for kubeadm: caching images: caching image /Users/kaka.go/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4: read tcp 10.49.52.206:50350->104.18.125.25:443: read: operation timed out
Unable to load cached images: loading cached images: loading image /Users/kaka.go/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4: stat /Users/kaka.go/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4: no such file or directory
minikube v1.4.0 on Darwin 10.13.6
Creating virtualbox VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
E1002
Downloading kubeadm v1.16.0
Downloading kubelet v1.16.0
Pulling images ...
Launching Kubernetes ...
Error starting cluster: timed out waiting to elevate kube-system RBAC privileges: Temporary Error: creating clusterrolebinding: Post https://192.168.99.100:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings: dial tcp 192.168.99.100:8443: i/o timeout
Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
https://github.com/kubernetes/minikube/issues/new/choose
Problems detected in kube-addon-manager [b17d460ddbab]:
error: no objects passed to apply
error: no objects passed to apply
Start minikube a third time:
$ minikube start
minikube v1.4.0 on Darwin 10.13.6
Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
Using the running virtualbox "minikube" VM ...
Waiting for the host to be provisioned ...
Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
Relaunching Kubernetes using kubeadm ...
It still got stuck on Relaunching.
I enabled Kubernetes in the Docker Preferences settings, restarted my Mac, and switched the Kubernetes context to docker-for-desktop.
Oh, kubectl version works this time, but with the docker-for-desktop context:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:25:46Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Start minikube a fourth time (maybe after restarting the system):
$ minikube start
minikube v1.4.0 on Darwin 10.13.6
Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
Starting existing virtualbox VM for "minikube" ...
Waiting for the host to be provisioned ...
Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
Relaunching Kubernetes using kubeadm ...
Waiting for: apiserver proxy etcd scheduler controller dns
Done! kubectl is now configured to use "minikube"
Finally, it works with the minikube context...
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
I checked the firewall port and it was closed; I opened it and it started working.
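For reference, a minimal sketch of opening the API server port on a host that uses firewalld, assuming the default minikube port 8443 (a stock kubeadm cluster usually listens on 6443 instead):
sudo firewall-cmd --permanent --add-port=8443/tcp   # open the API server port
sudo firewall-cmd --reload                          # apply the change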
If you are using azure and have recently changed your password try this:
az account clear
az login
After logging in successfully:
az aks get-credentials --name project_name --resource-group resource_group_name
Now when you run
kubectl get nodes
you should see something. Also, make sure you are using the correct kubectl context.
My problem was that I use two virtual networks on my VMs. The network that Kubernetes uses is always the one of the default gateway; however, the communication network between my VMs was the other one.
You can force Kubernetes to use a different network by using the following flags:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-cert-extra-sans=xxx.xxx.xxx.xxx --apiserver-advertise-address=xxx.xxx.xxx.xxx
Replace xxx.xxx.xxx.xxx with the communication IP address of your K8s master.
I have two contexts, and I got this error when I was in the incorrect one of the two. I switched the context and the error was resolved.
To see your current context: kubectl config current-context
To see the contexts you have: kubectl config view
To switch context: kubectl config use-context context-cluster-name
Adding this here so it can help someone with a similar problem.
In our case, we had to configure our VPC network to export its custom routes for the VPC peering "gke-jn7hiuenrg787hudf-77h7-peer" in project "" to the control plane's VPC network.
The control plane's VPC network is already configured to import custom routes. This provides a path for the control plane to send packets back to on-premise resources.
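For reference, exporting custom routes on the peering can also be done from the CLI; a minimal sketch, where the network name is a placeholder for your own VPC:
# Export custom routes from your VPC to the GKE control plane's peered VPC
gcloud compute networks peerings update gke-jn7hiuenrg787hudf-77h7-peer \
  --network my-vpc-network \
  --export-custom-routes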
Step 1: Run this command to see the list of contexts:
kubectl config view
Step 2: Now switch to the context in which you want to work:
kubectl config use-context [context-name]
For example:
kubectl config use-context docker-desktop
I faced the same issue; it might be that your IP was not added to the authorized networks list of the Kubernetes cluster. Simply navigate to:
GCP console -> Kubernetes Engine -> Click into the Clusters you wish to interact with
In the target Cluster page look for:
Control plane authorized networks -> click pencil icon -> Add Authorized Network
Add your external IP with a CIDR suffix of /32 (xxx.xxx.xxx.xxx/32).
One way to get your external IP from the terminal / CMD:
curl -4 ifconfig.co
I'm using minikube to test kompose.
I installed k8s using the following minikube command:
# minikube start --driver=none --kubernetes-version v1.16.0
minikube v1.9.2 on Ubuntu 18.04
Using the none driver based on user configuration
Starting control plane node in cluster minikube
Running on localhost (CPUs=XX, Memory=XXXXXMB, Disk=XXXXXMB) ...
OS release is Ubuntu 18.04.3 LTS
Preparing Kubernetes v1.16.0 on Docker 18.09.7 ...
- kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
This bare metal machine is having trouble accessing https://k8s.gcr.io
To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
Enabling addons: default-storageclass, storage-provisioner
Configuring local host environment ...
The 'none' driver is designed for experts who need to integrate with an existing VM
Most users should use the newer 'docker' driver instead, which does not require root!
For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
kubectl and minikube configuration will be stored in /root
To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
- sudo mv /root/.kube /root/.minikube $HOME
- sudo chown -R $USER $HOME/.kube $HOME/.minikube
This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
Done! kubectl is now configured to use "minikube"
For best results, install kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/
and then installed kubectl with curl, chmod, and mv.
# kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
But when I use the kompose up command, it shows a connection refused error:
kompose -f docker-compose.yaml up
INFO We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead.
FATA Error while deploying application: Get https://127.0.0.1:6443/api: dial tcp 127.0.0.1:6443: connect: connection refused
Querying the kubectl configuration shows that its port is 8443, which is different from the 6443 that kompose up connects to:
# kubectl cluster-info
Kubernetes master is running at https://172.26.90.122:8443
KubeDNS is running at https://172.26.90.122:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
I think that's the problem, but I don't know how to fix it so that the ports match.
I would appreciate it if you could tell me how to solve it.
Add this flag to the minikube start command:
--apiserver-port=6443
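Put together with the start command from the question, that would look something like this (a sketch; the driver and Kubernetes version are taken from the question):
minikube start --driver=none --kubernetes-version v1.16.0 --apiserver-port=6443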
I'm installing Kubernetes (kubeadm) on a CentOS VM running inside VirtualBox, so with yum I installed kubeadm, kubelet, and docker.
Now, while trying to set up the cluster with kubeadm init --pod-network-cidr=192.168.56.0/24 --apiserver-advertise-address=192.168.56.33/32, I run into the following error:
Unable to update cni config: No networks found in /etc/cni/net.d
Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
So I checked: there is no cni folder in /etc, even though kubernetes-cni-0.6.0-0.x86_64 is installed. I tried commenting out KUBELET_NETWORK_ARGS in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, but it didn't work.
PS:
I'm installing behind proxy.
I have multiple network adapters:
NAT : 10.0.2.15/24 for Internet
Host Only : 192.168.56.33/32
And docker interface : 172.17.0.1/16
Docker version: 17.12.1-ce
kubectl version: Major:"1", Minor:"9", GitVersion:"v1.9.3"
Centos 7
Add a pod network add-on:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
If flannel doesn't work, then try Calico:
kubectl apply -f https://docs.projectcalico.org/manifests/calico-typha.yaml
There are several points to remember when setting up the cluster with "kubeadm init", and they are clearly documented on the Kubernetes site (kubeadm cluster create):
"kubeadm reset" if you have already created a previous cluster
Remove the ".kube" folder from the home or root directory
(Also stopping the kubelet with systemctl will allow for a smooth setup)
Disable swap permanently on the machine, especially if you are rebooting your Linux system (see the sketch after this list)
And not to forget, install a pod network add-on according to the instructions provided on the add-on's site (not the Kubernetes site)
Follow the post-initialization steps printed in the command window by kubeadm.
If all these steps are followed correctly then your cluster will run properly.
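Regarding the swap step above, here is a minimal sketch for a typical Linux host (the sed pattern assumes the common /etc/fstab layout, so check the file before editing):
sudo swapoff -a                            # turn swap off immediately
sudo sed -i '/ swap / s/^/#/' /etc/fstab   # comment out swap entries so it stays off after reboot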
And don't forget to run the following command to enable scheduling of pods on the created cluster:
kubectl taint nodes --all node-role.kubernetes.io/master-
Regarding how to install from behind a proxy, you may find this useful:
install using proxy
Check this answer.
Use this PR (until it is merged):
kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
It's a known issue: coreos/flannel#1044
Stopping and disabling AppArmor and restarting the containerd service on that node will solve your issue:
root@node:~# systemctl stop apparmor
root@node:~# systemctl disable apparmor
root@node:~# systemctl restart containerd.service
I could not see the helm server version:
$ helm version --tiller-namespace digital-ocean-namespace
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Error: could not find a ready tiller pod
The kubectl describe node kubernetes-master --namespace digital-ocean-namespace command was showing the message:
NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
The nodes were not ready:
$ kubectl get node --namespace digital-ocean-namespace
NAME STATUS ROLES AGE VERSION
kubernetes-master NotReady master 82m v1.14.1
kubernetes-worker-1 NotReady <none> 81m v1.14.1
I had a version compatibility issue between Kubernetes and the flannel network.
My k8s version was 1.14 as seen in the command:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:02:58Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
After re-installing the flannel network with the command:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
I could then see the helm server version:
$ helm version --tiller-namespace digital-ocean-namespace
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
I resolved this issue by installing the Calico CNI plugin with the following commands:
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
It was a proxy error, as mentioned on GitHub: https://github.com/kubernetes/kubernetes/issues/34695
They suggested using kubeadm init --use-kubernetes-version v1.4.1, but I changed my network entirely (no proxy) and managed to set up my cluster.
After that we can set up the pod network with kubectl apply -f ...; see https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network
I solved this by installing a pod network add-on.
I used the Flannel pod network, which is a very simple overlay network that satisfies the Kubernetes requirements.
You can do it with this command:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
You can read more about this in the Kubernetes documentation:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
I faced the same errors, and it seems that systemd was the problem. I don't remember my previous systemd version, but updating it solved the problem for me.
In my case, I restarted Docker and the status changed to Ready:
sudo systemctl stop docker
sudo systemctl start docker
I faced the same errors; I was seeing the issue after the slave node joined the cluster. The slave node was showing the status 'Not ready' after joining.
I checked kubectl describe node ksalve and observed the mentioned issue.
After digging deeper I found that the cgroup driver was different on the master and the slave node.
On the master I had configured the systemd cgroup driver, whereas the slave had only the default cgroup driver.
Once I removed the systemd configuration from the master node, the slave's status immediately changed to Ready.
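If you want to check for this mismatch yourself, a minimal sketch (the kubelet config paths vary by Kubernetes version, so treat them as assumptions):
# Which cgroup driver Docker reports on this node
docker info | grep -i "cgroup driver"

# Which cgroup driver the kubelet is configured with
grep -ri cgroup /var/lib/kubelet/config.yaml /var/lib/kubelet/kubeadm-flags.env 2>/dev/null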
My problem was that I was updating the hostname after the cluster was created. By doing that, it's like the master didn't know it was the master.
I am still running:
sudo hostname $(curl 169.254.169.254/latest/meta-data/hostname) [1][2]
but now I run it before the cluster initialization
The error that led me to this, from running sudo journalctl -u kubelet:
Unable to register node "ip-10-126-121-125.ec2.internal" with API server: nodes "ip-10-126-121-125.ec2.internal" is forbidden: node "ip-10-126-121-125" cannot modify node "ip-10-126-121-125.ec2.internal"
This is for AWS VPC CNI
Step 1: kubectl get mutatingwebhookconfigurations -oyaml > mutating.txt
Step 2: kubectl delete -f mutating.txt
Step 3: Restart the node
Step 4: You should be able to see that the node is ready
Step 5: Install the mutatingwebhookconfiguration back
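Step 5 can be done by re-applying the file saved in Step 1, for example:
kubectl apply -f mutating.txt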
In my case, it was because I forgot to open port 8285. Port 8285 is used by flannel; you need to open it in the firewall.
For example:
If you use the flannel add-on and your OS is CentOS:
firewall-cmd --permanent --add-port=8285/tcp
firewall-cmd --reload
When I try any kubectl command, it always returns:
Unable to connect to the server: EOF
I followed these tutorials:
https://kubernetes.io/docs/tasks/tools/install-kubectl/
https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
But they have not helped me. According to the first link, by default, kubectl configuration is located at
~/.kube/config
But there is nothing at that path. I don't know if this is causing the issue.
Another thing: when I try to check the kubectl configuration:
M:.kube candres$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Unable to connect to the server: EOF
M:.kube candres$ kubectl cluster-info dump
Unable to connect to the server: EOF
The versions I have installed are:
Kubernetes - kubectl
M:.kube candres$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"X", GitTreeState:"clean", BuildDate:"2018-02-09T21:51:06Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: EOF
Minikube
M:.kube candres$ minikube version
minikube version: v0.25.0
Docker:
M:.kube candres$ docker version
Client:
Version: 17.12.0-ce
API version: 1.35
Go version: go1.9.2
Git commit: X
Built: Wed Dec 27 20:03:51 2017
OS/Arch: darwin/amd64
Server:
Engine:
Version: 17.12.0-ce
API version: 1.35 (minimum version 1.12)
Go version: go1.9.2
Git commit: X
Built: Wed Dec 27 20:12:29 2017
OS/Arch: linux/amd64
Experimental: true
Does anyone know how to resolve this?
After Minikube is started, kubectl is configured automatically.
minikube start
Starting local Kubernetes cluster...
Kubernetes is available at https://192.168.99.100:8443.
Kubectl is now configured to use the cluster.
You can verify and validate the cluster and context with following commands.
kubectl config view
I also had this issue. Be sure to check the config file that is generated by minikube. This file can most likely be found at ~/.kube/config. Make sure that you are referencing the right cluster name in the current context you are using. You can see which context you are currently using with: kubectl config current-context. The important thing is that you understand why you are getting this error: as @Suresh Vishnoi stated, kubectl doesn't know about the k8s api-server.
Here are the steps of my solution:
Install minikube:
brew install minikube
Start minikube:
minikube start
Check again:
kubectl version --short
Client Version: v1.16.6-beta.0
Server Version: v1.22.2
Here is all I had to run:
minikube delete
minikube start
Just updating kubectl to the latest version resolved my problem.
If you get a message like this:
You appear to be using a proxy, but your NO_PROXY environment does not
include the minikube IP (www.xxx.yyy.zzz).
Then set your NO_PROXY environment variable to include the address given, before running kubectl. This is probably configurable somewhere, but that's a quick short-term solution.
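A minimal sketch, assuming the common default minikube IP 192.168.99.100 (use whatever IP the warning reports):
export NO_PROXY=$NO_PROXY,192.168.99.100   # add the minikube IP to the proxy exclusions
kubectl get pods                           # kubectl now reaches the VM directly, bypassing the proxy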
Reset kubeadm forcefully:
kubeadm reset -f
and then
copy the config file again:
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
and at last:
kubeadm init
I'm using WSL, so it helped me to synchronize the time between the Windows and Linux consoles:
sudo hwclock --hctosys
Check your VPN security as well as your antivirus / internet security; in case it is on, turn it off. It worked for me after that.
Try this out as well.
A VPN should solve it; if not, then also try unsetting the local env proxy settings.
I installed minikube for local Kubernetes development according to the article DevOps-Kubernetes-1-Running-Kubernetes-Locally-via-Minikube.
Ubuntu 16.04 LTS
minikube 0.20.0
The default Kubernetes version for minikube 0.20.0 is v1.6.4, and I used the following command to use the new release v1.7.0:
minikube start --kubernetes-version v1.7.0
How can I set this as the default in the minikube configuration?
So far, if I run minikube start, it always starts the default v1.6.4, even though the server VM has been upgraded to v1.7.0:
$ minikube start
Starting local Kubernetes v1.6.4 cluster...
Starting VM...
...
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0",
GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean",
BuildDate:"2017-06-29T23:15:59Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server 192.168.42.96:8443 was refused - did you specify the right host or port?
You can set its default value with:
minikube config set kubernetes-version v1.7.0
It edits ~/.minikube/config/config.json and adds:
{
"kubernetes-version": "v1.7.0"
}
Check out Selecting a Kubernetes version in the documentation. See the source code in config.go for reference.
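You can verify the setting afterwards with minikube config view; the output below is only illustrative:
$ minikube config view
- kubernetes-version: v1.7.0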
I need some help in creating my first Kubernetes cluster with Vagrant. I have installed Kubernetes, Vagrant and libvirt. My kubectl status displays:
[root#localhost ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.6", GitCommit:"388061f00f0d9e4d641f9ed4971c775e1654579d", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.6", GitCommit:"388061f00f0d9e4d641f9ed4971c775e1654579d", GitTreeState:"clean"}
From the documentation I can read that the cluster can be started with:
export KUBERNETES_PROVIDER=vagrant ./cluster/kube-up.sh
However no "cluster" folder has been created by installing kubernetes, nor the "kube-up.sh" command is available. I can see only the following ones:
kube-apiserver kube-controller-manager kubectl kubelet kube-proxy kube-scheduler kube-version-change
Am I missing some steps? Thanks.
From the documentation I can read that the cluster can be started with:
export KUBERNETES_PROVIDER=vagrant ./cluster/kube-up.sh
This documentation assumes that you have downloaded a Kubernetes release. You can grab the latest release (1.1.7) from the GitHub releases page. Once you unpack kubernetes.tar.gz, you should see a cluster directory with kube-up.sh inside of it, which you can execute as suggested by the documentation.
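A minimal sketch of those steps, assuming the kubernetes.tar.gz asset from the GitHub releases page mentioned above (adjust the URL to the release you actually pick):
# Download and unpack the release
wget https://github.com/kubernetes/kubernetes/releases/download/v1.1.7/kubernetes.tar.gz
tar -xzf kubernetes.tar.gz
cd kubernetes

# Bring the cluster up with the Vagrant provider
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh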