I created the dashboard after installing Kubernetes with kubeadm:
kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
After waiting a while, the pod crashed:
kubectl get pods --all-namespaces
kubernetes-dashboard-3203831700-wq0v4 0/1 CrashLoopBackOff 3 3m
And I checked the pod log:
kubectl logs -f kubernetes-dashboard-3203831700-wq0v4 -n kube-system
Using HTTP port: 9090
Creating API server client for https://10.96.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md
But when I tried it manually, the URL was reachable (the TLS error below means curl could not negotiate the handshake, not that the connection timed out):
# curl https://10.96.0.1:443/version
curl: (35) Peer reports incompatible or unsupported protocol version.
Has anybody encountered this issue before, or can anyone help me?
I executed the following command:
rm -rf ~/.kube
Now it works. Still a bit strange :-(
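For anyone else who hits this: if removing ~/.kube leaves you without a working kubeconfig, the usual kubeadm step to recreate it is roughly the following (a sketch, assuming you are on a kubeadm-provisioned control-plane node):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
After that, kubectl picks up a fresh config instead of the stale one.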
I am new to Kubernetes and am trying to create a TLS secret using kubectl. My ultimate goal is to deploy a Keycloak cluster in Kubernetes.
So I followed this YouTube tutorial, but it doesn't mention how to generate my own TLS key and TLS cert. To do that I used this documentation (https://www.linode.com/docs/guides/create-a-self-signed-tls-certificate/).
That let me generate MyCertTLS.crt and MyKeyTLS.key:
gayan@Gayan:/srv$ cd certs
gayan@Gayan:/srv/certs$ ls
MyCertTLS.crt MyKeyTLS.key
To create the TLS secret in Kubernetes, I ran this command:
sudo kubectl create secret tls my-tls --key="MyKeyTLS.key" --cert="MyCertTLS.crt" -n keycloak-test
But it's not working; I got this error:
gayan@Gayan:/srv/certs$ sudo kubectl create secret tls my-tls --key="MyKeyTLS.key" --cert="MyCertTLS.crt" -n keycloak-test
[sudo] password for gayan:
error: failed to create secret Post "http://localhost:8080/api/v1/namespaces/keycloak-test/secrets?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 127.0.0.1:8080: connect: connection refused
Note:
Minikube is running...
The Ingress addon is also enabled...
I have created a namespace called keycloak-test.
gayan@Gayan:/srv/keycloak$ kubectl get namespaces
NAME STATUS AGE
default Active 3d19h
ingress-nginx Active 119m
keycloak-test Active 4m12s
kube-node-lease Active 3d19h
kube-public Active 3d19h
kube-system Active 3d19h
kubernetes-dashboard Active 3d19h
I am trying to fix this error, but I have no idea why I am getting it. I'm looking for a solution from this genius community.
I figured this out! I'm posting this because it may be helpful for someone.
I was getting that error,
error: failed to create secret Post "http://localhost:8080/api/v1/namespaces/keycloak-test/secrets?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 127.0.0.1:8080: connect: connection refused
because my Kubernetes API server is running on a different port.
You can see which port your Kubernetes API server is running on with this command:
kubectl config view
For example, if you see server: localhost:40475, it means your API server is running on port 40475.
The Kubernetes default port, by contrast, is 8443.
So you need to specify the correct server and port in the kubectl command that creates the secret.
I added --server=https://localhost:40475 to my command:
kubectl create secret tls my-tls --key="tls.key" --cert="tls.crt" -n keycloak-test --server=https://localhost:40475
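If you don't want to copy the port by hand, a rough alternative is to read the server URL straight out of your kubeconfig and reuse it (a sketch; it assumes your current context points at the Minikube cluster):
# print the API server URL of the current context
APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
kubectl create secret tls my-tls --key="tls.key" --cert="tls.crt" -n keycloak-test --server="$APISERVER"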
Another thing: if you get an error like permission denied,
you have to change the permissions on your tls.key and tls.crt files.
I did this by running these commands:
sudo chmod 666 tls.crt
sudo chmod 666 tls.key
Then run the above kubectl command without sudo. It works!
If you run it with sudo, it will ask for usernames and passwords, which confused me, and it did not work.
By doing it this way, I solved the issue.
Hope this helps someone! Thanks!
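As a side note, chmod 666 makes the private key world-readable. A slightly tighter alternative (a sketch, assuming the files should simply belong to your own user) is to take ownership and keep the key private:
sudo chown $(id -u):$(id -g) tls.key tls.crt
chmod 600 tls.key
chmod 644 tls.crt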
In your examples, kubectl get namespaces works, but sudo kubectl create secret doesn't.
You don't need sudo to work with Kubernetes. In particular, the connection information is stored in a $HOME/.kube/config file by default, but when you run sudo kubectl ..., the home directory changes and kubectl can't find the connection information.
The standard Kubernetes assumption is that the cluster is remote, and so your local user ID doesn't really matter to it. All that does matter is the Kubernetes-specific permissions assigned to the user that's accessing the cluster.
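If for some reason you really must run kubectl under sudo, you can point it at your own kubeconfig explicitly; otherwise just drop sudo (a sketch, assuming the default config location):
# sudo changes $HOME, so kubectl would look in /root/.kube/config; pass your config explicitly
sudo kubectl --kubeconfig="$HOME/.kube/config" get namespaces
# or simply, without sudo
kubectl get namespaces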
I failed to deploy Istio and ran into this problem. When I tried to deploy Istio using istioctl install --set profile=default -y, the output was:
➜ istio-1.11.4 istioctl install --set profile=default -y
✔ Istio core installed
✔ Istiod installed
✘ Ingress gateways encountered an error: failed to wait for resource: resources not ready after 5m0s: timed out waiting for the condition
Deployment/istio-system/istio-ingressgateway (containers with unready status: [istio-proxy])
- Pruning removed resources Error: failed to install manifests: errors occurred during operation
After running kubectl get pods -n=istio-system, I found that the istio-ingressgateway pod had been created. The result of describe is:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m36s default-scheduler Successfully assigned istio-system/istio-ingressgateway-8dbb57f65-vc85p to k8s-slave
Normal Pulled 4m35s kubelet Container image "docker.io/istio/proxyv2:1.11.4" already present on machine
Normal Created 4m35s kubelet Created container istio-proxy
Normal Started 4m35s kubelet Started container istio-proxy
Warning Unhealthy 3m56s (x22 over 4m34s) kubelet Readiness probe failed: Get "http://10.244.1.4:15021/healthz/ready": dial tcp 10.244.1.4:15021: connect: connection refused
And I can't get the logs of this pod:
➜ ~ kubectl logs pods/istio-ingressgateway-8dbb57f65-vc85p -n=istio-system
Error from server: Get "https://192.168.0.154:10250/containerLogs/istio-system/istio-ingressgateway-8dbb57f65-vc85p/istio-proxy": dial tcp 192.168.0.154:10250: i/o timeout
I ran all these commands on two VMs in Huawei Cloud, a 2C8G master and a 2C4G slave, both running Ubuntu 18.04. I have reinstalled the environment and the Kubernetes cluster, but that didn't help.
Without ingressgateway
I also tried istioctl install --set profile=minimal -y, which only runs istiod. But when I try to run httpbin (kubectl apply -f samples/httpbin/httpbin.yaml) with auto-injection on, the deployment can't create a pod.
➜ istio-1.11.4 kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
httpbin 0/1 0 0 5m24s
➜ istio-1.11.4 kubectl describe deployment/httpbin
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 6m6s deployment-controller Scaled up replica set httpbin-74fb669cc6 to 1
When I unlabel the default namespace (kubectl label namespace default istio-injection-), everything works fine.
I want to deploy the Istio ingress gateway and run the demos through it, but I have no idea how to solve this situation. Thanks for any help.
I made a silly mistake, orz.
After communicating with my cloud provider, I was informed that there was a network security policy on my cloud servers. Strangely, one server had full access while the other had only partial access (only ports like 80, 443 and so on were allowed). After I changed the policy, everything works fine.
For anyone who runs into a similar question: after hours of searching on Google, I found that issues like this almost always come down to network problems such as DNS configuration, k8s configuration, or server networking. As howardjohn said in this issue, this is not an Istio problem.
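If you suspect the same kind of problem, a quick way to rule it out is to test whether the relevant ports are reachable between the machines (a sketch using the addresses from my logs; substitute your own node and pod IPs):
# from the master: is the kubelet port on the worker reachable?
nc -vz 192.168.0.154 10250
# from the master: does the ingress gateway readiness endpoint answer?
curl -sv http://10.244.1.4:15021/healthz/ready
If either of these hangs or is refused while the pods look healthy, look at firewalls/security groups before debugging Istio itself.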
Previously I could run kubectl logs <pod> without issue for many days/versions. However, after I recently pushed another image and deployed it, I got the error below:
Error from server: Get https://aks-agentpool-xxx-0:10250/containerLogs/default/<-pod->/<-service->: dial tcp 10.240.0.4:10250: i/o timeout
I tried to rebuild and redeploy, but it still failed.
Below is the node info for reference:
I'm not sure if your issue is caused by the problem described in this troubleshooting guide, but maybe you can give it a try. It says:
Make sure that the default network security group isn't modified and that both port 22 and 9000 are open for connection to the API server.
Check whether the tunnelfront pod is running in the kube-system namespace using the kubectl get pods --namespace kube-system command. If it isn't, force deletion of the pod and it will restart.
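In practice that check and forced restart look roughly like this (a sketch; the pod name below is a placeholder for whatever your cluster shows):
kubectl get pods --namespace kube-system | grep tunnelfront
# force-delete it so it gets recreated; substitute the real pod name
kubectl delete pod <tunnelfront-pod-name> --namespace kube-system --grace-period=0 --force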
I got into a vicious circle. I was trying to deploy a few services on an AWS Ubuntu machine with 1 GB of RAM. By the end of the deployment all the RAM was used. I decided to delete some of the deployments, but I was unable even to check the status of pods and deployments:
$ kubectl delete -f test.yaml
unable to recognize "test.yaml": Get https://172.31.38.138:6443/api?timeout=32s: dial tcp 172.31.38.138:6443: connect: connection refused
$ kubectl get deployments
Unable to connect to the server: dial tcp 172.31.38.138:6443: i/o timeout
Unable to connect to the server: dial tcp 172.31.38.138:6443: i/o timeout
I do understand that the issue is lack of memory, and hence kube-dns, kube-proxy, etc. cannot work correctly. The question is:
How can I delete my test deployments without kubectl delete...?
Thanks
Stop the kubelet service, then run docker system prune to delete all the pod containers, and finally restart the kubelet.
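Concretely, on a host where the kubelet runs under systemd and Docker is the container runtime, that would be something like (a sketch; adjust to your setup):
sudo systemctl stop kubelet
# removes stopped containers, unused networks and dangling images
sudo docker system prune --force
sudo systemctl start kubelet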
I have installed and configured Kubernetes on my Ubuntu machine following this document.
After deploying the Kubernetes dashboard, the container keeps crashing:
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
I started the proxy using:
kubectl proxy --address='0.0.0.0' --accept-hosts='.*' --port=8001
Pod status:
kubectl get pods -o wide --all-namespaces
....
....
kube-system kubernetes-dashboard-64576d84bd-z6pff 0/1 CrashLoopBackOff 26 2h 192.168.162.87 kb-node <none>
Kubernetes system log:
root@KB-master:~# kubectl -n kube-system logs kubernetes-dashboard-64576d84bd-z6pff --follow
2018/09/11 09:27:03 Starting overwatch
2018/09/11 09:27:03 Using apiserver-host location: http://192.168.33.30:8001
2018/09/11 09:27:03 Skipping in-cluster config
2018/09/11 09:27:03 Using random key for csrf signing
2018/09/11 09:27:03 No request provided. Skipping authorization
2018/09/11 09:27:33 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service account's configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get http://192.168.33.30:8001/version: dial tcp 192.168.33.30:8001: i/o timeout
Refer to our FAQ and wiki pages for more information: https://github.com/kubernetes/dashboard/wiki/FAQ
I get this message when I try to hit the link below in the browser:
URL:http://192.168.33.30:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
Error: 'dial tcp 192.168.162.87:8443: connect: connection refused'
Trying to reach: 'https://192.168.162.87:8443/'
Can anyone help me with this?
http://192.168.33.30:8001 is not a legitimate API server URL. All communication with the API server uses TLS internally (the https:// URL scheme). This communication is verified using the API server CA certificate and is authenticated by means of tokens signed by the same CA.
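As a quick sanity check (a sketch), kubectl cluster-info prints the real HTTPS endpoint clients are supposed to use, which you can compare against the --apiserver-host value the dashboard was started with:
kubectl cluster-info
# expect a line like: Kubernetes master is running at https://<master-ip>:6443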
What you see is the result of a misconfiguration. At first sight it seems like you mixed pod, service and host networks.
Make sure you understand the difference between the host network, the pod network and the service network. These three networks cannot overlap. For example, --pod-network-cidr=192.168.0.0/16 must not include the IP address of your host; change it to 10.0.0.0/16 or something smaller if necessary.
After you have a clear overview of the network topology, run the setup again and everything will be configured correctly, including the Kubernetes CA.
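With kubeadm, that re-run might look like the following (a sketch; pick CIDRs that do not overlap your host network):
sudo kubeadm reset -f
sudo kubeadm init --pod-network-cidr=10.0.0.0/16 --service-cidr=10.96.0.0/12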