kops k8s cluster Kubectl commands Timeout issue - kubernetes

I was trying to run the
"kubectl get nodes" command for a k8s cluster. It gives "Unable to connect to the server: dial tcp..."
This is a k8s cluster created by a different user in the company AWS account.
These are the steps I have followed:
export AWS_PROFILE=RR
export KOPS_STATE_STORE=s3://s3bucketname
kops export kubecfg dev.k8s.local
kubectl config get-contexts
kubectl get nodes
Unable to connect to the server: dial tcp 3.136.226.173:443: i/o timeout
I need to view the running nodes and services in this k8s cluster. How can I do this?

Possible cause one: routing/firewall issues
This happens when you create or use a private cluster.
To solve it, add your external IP address to the cluster's authorized networks (see the sketch after the list of resources below).
To get your external IP address, you can use one of these commands:
curl ifconfig.co
dig +short myip.opendns.com @resolver1.opendns.com
curl ifconfig.me
curl smart-ip.net/myip
wget -O - -q icanhazip.com
wget -O - -q ifconfig.me/ip
Other such resources:
http://ip.tyk.nu/
http://whatismyip.akamai.com/
http://tnx.nl/ip
http://ifcfg.me/
http://l2.io/ip
http://ip.appspot.com/
http://ident.me/
http://ipof.in/txt
http://icanhazip.com/
http://curlmyip.com/
http://wgetip.com/
http://bot.whatismyipaddress.com/
http://eth0.me/
http://ifconfig.me/
http://corz.org/ip
http://ipecho.net/plain
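For a kops cluster on AWS like the one above, one way to whitelist your IP is to add it to the cluster spec's kubernetesApiAccess list and push the change. A rough sketch, reusing the cluster name and state store from the question (the CIDR shown is a placeholder):
# Find your public IP with any of the commands above
MY_IP=$(curl -s ifconfig.me)
# Edit the cluster spec and add ${MY_IP}/32 under spec.kubernetesApiAccess, e.g.:
#   kubernetesApiAccess:
#   - 203.0.113.10/32    # placeholder - use your own IP
kops edit cluster dev.k8s.local --state s3://s3bucketname
# Apply the change (this updates the API load balancer's security group)
kops update cluster dev.k8s.local --state s3://s3bucketname --yes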
Possible cause two: lost/stale kubectl context
To view your contexts, use:
kubectl config view
To switch to the right context, use:
kubectl config use-context <your_context>
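Since the cluster in the question was created with kops by another user, the kubeconfig entry can usually be regenerated from the shared state store and then selected. A minimal sketch using the names from the question (newer kops releases may also need --admin to export client credentials):
export AWS_PROFILE=RR
export KOPS_STATE_STORE=s3://s3bucketname
# Rewrite the kubeconfig entry for the cluster
kops export kubecfg dev.k8s.local
# Switch kubectl to the freshly written context and retry
kubectl config use-context dev.k8s.local
kubectl get nodes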
Possible cause three: outdated CloudFormation template
As per this answer, you should check the AMI/template that was used when the cluster was created.
The cluster may have been set up using an older version of the CloudFormation template.

Related

k8s ClusterIP:Port accessible only within the node running the pod

I created 3 Ubuntu VMs in AWS and used kubeadm to set up the cluster on the master node, with port 6443 open, and applied the flannel network via the command below:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Then I joined the other two nodes to the cluster via the join command:
kubeadm join 172.31.5.223:6443
Then I applied the two YAML files below to deploy my deployment and svc.
Here comes the issue. I list all the resources on the k8s master:
I can only access clusterip:port from inside node/ip-172-31-36-90, as it is the node running the pod.
Which results in:
I can only access <node IP>:NodePort using the IP of node/ip-172-31-36-90, as it has the pod running.
I can use curl <external/internal IP of node/ip-172-31-36-90>:nodeport from the other nodes, but the IP can only be that of ip-172-31-36-90.
If I try the above two using the IP of the master node or node/ip-172-31-41-66, it times out. Notice: NodePort 30000 is open on all nodes via the AWS security group.
Can anyone help me with this network issue? I am really bad at debugging network stuff.
2. Second question: if I try curl <external/internal IP of node/ip-172-31-36-90>:nodeport from my local machine, it gives this error:
curl: (56) Recv failure: Connection reset by peer
It really bothers me. k8s experts, please save me!!!
----------------Update---------------------------
After days of debugging, I noticed it is related to the IPs of docker0 and flannel.1; they are not in the same subnet:
But I still don't know where I went wrong or how to sync them. Any expert here, please!
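A few commands that help confirm whether docker0 and flannel.1 really disagree (only a diagnostic sketch; it assumes the default flannel manifest above, which writes its lease to /run/flannel/subnet.env):
# Compare the subnets of the two interfaces on each node
ip -4 addr show docker0
ip -4 addr show flannel.1
# The subnet flannel leased for this node; docker0 should sit inside it
cat /run/flannel/subnet.env
# Flannel pod logs often show why cross-node traffic is dropped
kubectl -n kube-system logs -l app=flannel --tail=50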

error: failed to discover supported resources kubernetes google cloud platform

I was doing a practical where I was deploying a containerised sample application using Kubernetes.
I was trying to run a container on Google Cloud Platform using Kubernetes Engine, but while deploying the container using the "kubectl run" command in Google Cloud Shell,
it shows the error "error: failed to discover supported resources: Get https://35.240.145.231/apis/extensions/v1beta1: x509: certificate signed by unknown authority".
From the error, I gather that it's because the SSL certificate is not authorised.
I even exported the config file that resides at "$HOME/.kube/config", but I still get the same error.
Please, can anyone help me understand the real issue behind this?
Best,
Swapnil Pawar
You may try the following steps.
List all the available clusters:
$ gcloud container clusters list
Depending upon how you have configured the cluster, if the cluster location is configured for a specific zone then,
$ gcloud container clusters get-credentials <cluster_name> --zone <location>
or if the location is configured for a region then,
$ gcloud container clusters get-credentials <cluster_name> --region <location>
The above command will update your kubectl config file $HOME/.kube/config
Now, the tricky part.
If you have configured more than one cluster, your $HOME/.kube/config will have two or more entries. You can verify this by running cat on the config file.
To select a particular context/cluster, run the following commands:
$ kubectl config get-contexts -o=name // will give you a list of available contexts
$ kubectl config use-context <CONTEXT_NAME> // switch to the chosen context
Now you may run kubectl run again.
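Putting it together, the flow looks roughly like this (cluster name, zone, and image below are placeholders, not taken from the question):
# Refresh credentials for the cluster (zonal cluster shown; use --region for regional)
gcloud container clusters get-credentials sample-cluster --zone europe-west1-b
# Point kubectl at the matching context
kubectl config use-context $(kubectl config get-contexts -o=name | grep sample-cluster)
# Deploy the sample application
kubectl run hello-app --image=gcr.io/google-samples/hello-app:1.0 --port=8080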

How can I access an internal HTTP port of a Kubernetes node in Google Cloud Platform

I have a load-balanced service running in a Kubernetes cluster on the Google Cloud Platform. The individual servers expose some debugging information via a particular URL path. I would like to be able to access those individual server URLs, otherwise I just get whichever server the load balancer sends the request to.
What is the easiest way to get access to those internal nodes? Ideally, I'd like to be able to access them via a browser, but if I can only access via a command line (e.g. via ssh or Google Cloud Shell) I'm willing to run curl to get the debugging info.
I think the simplest tool for you would be kubectl proxy, or maybe the even simpler kubectl port-forward. With the first, you can use a single endpoint and the apiserver's ability to proxy to a particular pod by providing the appropriate URL.
kubectl proxy
After running kubectl proxy you should be able to open http://127.0.0.1:8001/ in your local browser and see a bunch of paths available on the API server. From there you can proceed with a URL like http://127.0.0.1:8001/api/v1/namespaces/default/pods/my-pod-name:80/proxy/, which will proxy to port 80 of your particular pod.
kubectl port-forward
This does something similar, but forwards directly to a port on your pod: kubectl port-forward my-pod-name 8081:80. At that point any request to 127.0.0.1:8081 will be forwarded to your pod's port 80.
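For the debugging use case in the question, hitting one specific backend could look like this (pod name and /debug path are placeholders):
# Forward local port 8081 to port 80 of one particular pod behind the load balancer
kubectl port-forward my-pod-name 8081:80 &
# Query that pod's debug endpoint directly, bypassing the load balancer
curl http://127.0.0.1:8081/debug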
Port forwarding can be used as described in the answer from Radek, and it has many advantages. The disadvantage is that it is quite slow, and if you have a script doing many calls, there is another option for you.
kubectl run curl-mark-friedman --image=radial/busyboxplus:curl -i --tty --rm
This will create a new POD on your network with a busybox image that includes the curl command. You can now use interactive mode in that POD to execute curl commands against other PODs from within the network.
You can find many images on Docker Hub with the tools you like included. If you, for example, need jq, there is an image for that:
kubectl run curl-jq-mark-friedman --image=devorbitus/ubuntu-bash-jq-curl -i --tty --rm
The --rm option is used to remove the POD when you are done with it. If you want the POD to stay alive, just remove that option. You may then attach to that POD again using:
kubectl get pods | grep curl-mark-friedman <- get your <POD ID> from here.
kubectl attach <POD ID> -c curl-mark-friedman -i -t
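Once attached, you can curl your services or individual pod IPs from inside the cluster network (the service name, path, and pod IP below are placeholders):
# Hit the service (load balanced) or one backend pod directly
curl http://my-service.default.svc.cluster.local/debug
# A specific pod IP can be found beforehand with: kubectl get endpoints my-service
curl http://10.52.1.7/debug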

K8S dashboard not accessible after first cluster in GKE - GCP using console

Newbie setup :
Created First project in GCP
Created a cluster with the defaults, 3 nodes. Node version 1.7.6, cluster master version 1.7.6-gke.1.
Deployed an application in a pod, per the example.
Able to access "hello world" and the hostname, using the external IP and the port.
In the GCP / GKE page of my cloud console, I clicked "Discovery and load balancing" and could see the "kubernetes-dashboard" process with a green tick, but I cannot access it through the IP listed. Tried 8001, 9090, /ui and nothing worked.
I am not using any cloud shell or gcloud commands on my local laptop. Everything is done in the console.
Questions :
How can anyone access the kubernetes-dashboard of a cluster created in the console?
The docs are unclear: are the dashboard components incorporated in the console itself? Are the docs out of sync with the GCP/GKE screens?
The tutorial says to run "kubectl proxy" and then open
"http://localhost:8001/ui", but it doesn't work. Why?
If you create a cluster with version 1.9.x or greater, then you can access it using tokens.
Get the secret:
kubectl -n kube-system describe secrets `kubectl -n kube-system get secrets | awk '/clusterrole-aggregation-controller/ {print $1}'` | awk '/token:/ {print $2}'
Copy the secret.
Run kubectl proxy.
Open the UI using 127.0.0.1:8001/ui. This will redirect to the login page.
There will be two options to log in, kubeconfig and token.
Select token and paste the secret copied earlier.
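If that token lacks permissions, an alternative (a sketch that assumes RBAC is enabled and the cluster still auto-creates service account token secrets, as clusters of this era do) is a dedicated admin service account:
# Create a service account and bind it to cluster-admin
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# Print its token and paste it into the dashboard login page
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/dashboard-admin/ {print $1}') | awk '/token:/ {print $2}'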
hope this helps
It seems to be an issue with the internal Kubernetes DNS service starting at version 1.7.6 on Google Cloud.
The solution is to access the dashboard at this endpoint instead:
http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Github Issue links:
https://github.com/kubernetes/dashboard/issues/2368
https://github.com/kubernetes/kubernetes/issues/52729
The address of the dashboard service is only accessible from inside of the cluster. If you ssh into a node in your cluster, you should be able to connect to the dashboard. You can verify this by noticing that the address is within the services CIDR range for your cluster.
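To check that range, you could describe the cluster (cluster name and zone below are placeholders):
# The services CIDR the cluster was created with
gcloud container clusters describe my-cluster --zone us-central1-a | grep -i servicesIpv4Cidr
# The dashboard service address should fall inside that range
kubectl get svc kubernetes-dashboard -n kube-system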
The dashboard is running as a pod inside of your cluster with an associated service. If you open the Workloads view you will see the kubernetes-dashboard deployment and can see the pod that was created by the deployment. I'm not sure which docs you are referring to, since you didn't provide a link.
When you run kubectl proxy it creates a secure connection from your local machine into your cluster. It works by connecting to your master and then running through a proxy on the master to the pod/service/host that you are connecting to via an ssh tunnel. It's possible that it isn't working because the ssh tunnels are not running; you should verify that your project has newly created ssh rules allowing access from the cluster endpoint IP address. Otherwise, if you could explain more about how it fails, that would be useful for debugging.
First :
gcloud container clusters get-credentials cluster-1 --zone my-zone --project my-project
Then find your kubernetes dashboard endpoint doing :
kubectl cluster-info
It will be something like https://42.42.42.42/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
Install kube-dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
Run:
$ kubectl proxy
Access:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login

Kubernetes ssh into pods fails

I'm trying to ssh into my pod with this command
kubectl --namespace=default exec -ti pod-name /bin/bash
I get this error:
Content-Type specified (plain/text) must be 'application/json'
The process gets stuck and I have to close the terminal.
I was able to ssh into my pods before I reinstalled Kubernetes on my machine. Is this an issue with the latest Kubernetes releases?
You're not trying to "ssh"; you're forwarding your standard input and receiving standard output over HTTP through the Kubernetes API.
That said, you're using Docker 1.10 whereas Kubernetes doesn't support it yet. Check this out https://github.com/kubernetes/kubernetes/issues/19720
edit:
Kubernetes has supported Docker 1.10+ since the 1.3.0 release.
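To check which side of that boundary you are on, something like this works (the --short flag exists on kubectl versions of this era):
# Client and server Kubernetes versions
kubectl version --short
# Docker engine version on the node
docker version --format '{{.Server.Version}}'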