Kubernetes: Unable to connect to the server: EOF

Environment of kubectl: Windows 10.
Kubectl version: https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/windows/amd64/kubectl.exe
Hello. I've just installed a Kubernetes cluster on Google Cloud Platform, then ran the following command:
gcloud container clusters get-credentials my-cluster --zone europe-west1-b --project my-project
It successfully added the credentials at %UserProfile%\.kube\config
But when I run kubectl get pods, it returns Unable to connect to the server: EOF. My computer accesses the internet through a corporate proxy. How and where can I provide a certificate file to kubectl so that it uses the cert with all requests? Thanks.

You would get EOF if there is no response to kubectl's API calls within a certain time (the idle timeout is set to 300 seconds by default).
Try increasing the cluster idle timeout, or you may need a VPN (or something similar) to reach those pods.
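If the EOF turns out to be the corporate proxy's doing, note that kubectl honors the standard proxy environment variables, and a proxy's CA certificate can be attached to the cluster entry in the kubeconfig. A minimal sketch for Windows cmd, where the proxy address, CA path, and cluster name are all hypothetical:
set HTTPS_PROXY=http://proxy.corp.example:3128
set NO_PROXY=localhost,127.0.0.1
kubectl config set-cluster gke_my-project_europe-west1-b_my-cluster --certificate-authority=C:\certs\corp-proxy-ca.pem --embed-certs=true
kubectl get pods
The set-cluster call embeds the CA into %UserProfile%\.kube\config, so kubectl will trust certificates re-signed by a TLS-intercepting proxy.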

Related

GMP managed Prometheus example not working on a brand-new vanilla stable GKE Autopilot cluster

Google Managed Prometheus seems like a great service; however, at the moment it does not work even with the example: https://cloud.google.com/stackdriver/docs/managed-prometheus/setup-managed
Setup:
1. Create a new Autopilot cluster (1.21.12-gke.2200).
2. Enable managed Prometheus via the gcloud CLI:
gcloud beta container clusters update <mycluster> --enable-managed-prometheus --region us-central1
3. Add a firewall rule for port 8443 for the webhook.
4. Install ingress-nginx.
5. Try to use a PodMonitoring manifest (see the sketch below) to get metrics from ingress-nginx.
Error from server (InternalError): error when creating "ingress-nginx/metrics.yaml": Internal error occurred: failed calling webhook "default.podmonitorings.gmp-operator.gke-gmp-system.monitoring.googleapis.com": Post "https://gmp-operator.gke-gmp-system.svc:443/default/monitoring.googleapis.com/v1/podmonitorings?timeout=10s": x509: certificate is valid for gmp-operator, gmp-operator.gmp-system, gmp-operator.gmp-system.svc, not gmp-operator.gke-gmp-system.svc
There is a thread suggesting this will all be fixed this week (8/11/2022), https://github.com/GoogleCloudPlatform/prometheus-engine/issues/300, but it seems like this should work regardless.
If I try to port-forward ...
kubectl -n gke-gmp-system port-forward svc/gmp-operator 8443
error: Pod 'gmp-operator-67d5fff8b9-p4n7t' does not have a named port 'webhook'
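For reference, the PodMonitoring manifest from step 5 looks roughly like this; the namespace, label selector, and port name for ingress-nginx are assumptions that may need adjusting for your install:
kubectl apply -n ingress-nginx -f - <<EOF
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  endpoints:
  - port: metrics
    interval: 30s
EOF
Whatever the manifest contains, the apply fails with the x509 error above, because the webhook certificate was issued for gmp-operator.gmp-system.svc while the operator's service actually lives in the gke-gmp-system namespace.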

Access Kubernetes Dashboard using kubectl proxy remotely for multiple users

I have set up a Kubernetes cluster in EKS. API server access is in private mode. I have a bastion host from which I can run kubectl commands. I want to access the Kubernetes dashboard remotely.
One thing I can do is ssh -L localhost:8001:127.0.0.1:8001 # kubectl proxy. This gives me remote access.
If someone else then executes ssh -L localhost:8001:127.0.0.1:8001 # kubectl proxy, they get an error: "error: listen tcp 127.0.0.1:8001: bind: address already in use", because somebody else is already using kubectl proxy on that port.
How can I solve this issue? I want to access the Kubernetes dashboard from multiple machines at the same time.
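One workaround, not from the original thread: since kubectl proxy accepts a --port flag, give each user their own local port (8002 and the <bastion> placeholder here are illustrative):
# second user, on the bastion:
kubectl proxy --port=8002
# second user, on their desktop:
ssh -L localhost:8002:127.0.0.1:8002 <bastion>
Each user then opens the dashboard through their own http://localhost:<port>/ tunnel, so the bind conflict disappears.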

SSH to Kubernetes pod using Bastion

I have deployed a Google Cloud Kubernetes cluster. The cluster has an internal IP only.
In order to access it, I created a virtual machine, bastion-1, which has an external IP.
The structure:
My Machine -> bastion-1 -> Kubernetes cluster
The connection to the proxy host:
$ ssh bastion -D 1080
Now using kubectl through the proxy:
$ HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl get pods
No resources found.
The Kubernetes master server is responding, which is a good sign.
Now, trying to "ssh" into a pod:
$ HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl exec -it "my-pod" -- /bin/bash
error: error sending request: Post https://xxx.xxx.xxx.xxx/api/v1/namespaces/xxx/pods/pod-xxx/exec?command=%2Fbin%2Fbash&container=xxx&container=xxx&stdin=true&stdout=true&tty=true: EOF
Question:
How can I allow an SSH-like connection to a pod via the bastion? What am I doing wrong?
You can't do this right now.
The reason is that the connections used for commands like exec and proxy use SPDY2.
There's a bug report here with more information.
You'll have to switch to using an HTTP proxy.
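A rough sketch of that switch, assuming an HTTP CONNECT proxy such as tinyproxy is running on the bastion at 127.0.0.1:8888 (the proxy choice and port are illustrative):
$ ssh bastion -L 8888:127.0.0.1:8888
$ HTTPS_PROXY=http://127.0.0.1:8888 kubectl exec -it "my-pod" -- /bin/bash
An HTTP CONNECT proxy passes the upgraded connection through as an opaque byte stream, which is what exec needs.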

I have deployed a Kubernetes cluster. The issue I have is that the dashboard is not accessible from an external desktop system

Following is my setup:
Two VMs with the cluster deployed, one master and one node.
The dashboard is running without any issue, and kube-dns is also working as expected.
The Kubernetes version is 1.7.
Issue: when trying to access the dashboard externally through kubectl proxy, I get an Unauthorized response.
This is with RBAC roles and role bindings enabled.
How do I configure the cluster for HTTP browser access to the dashboard from an external system?
Any hints/suggestions are most welcome.
kubectl proxy is not working on versions > 1.7.
Try this:
Copy the ~/.kube/config file to your desktop,
then run kubectl like this:
export POD_NAME=$(kubectl --kubeconfig=config get pods -n kube-system -l "app=kubernetes-dashboard,release=kubernetes-dashboard" -o jsonpath="{.items[0].metadata.name}")
echo http://127.0.0.1:9090/
kubectl --kubeconfig=config -n kube-system port-forward $POD_NAME 9090:9090
Then access the UI at http://127.0.0.1:9090.
Hope this helps.
If kubectl proxy gives the Unauthorized error, there can be two reasons:
1. Your user cert doesn't have the appropriate permissions. This is unlikely, since you successfully deployed kube-dns and the dashboard.
2. kubelet authn/authz is enabled and it's not set up correctly. See the answer to my question.

kubectl can't connect to Google Container Engine

I have followed the installation steps:
https://cloud.google.com/container-engine/docs/tutorials/guestbook#install_gcloud_and_kubectl
A Google Container Engine cluster is up and running, and the gcloud CLI is authenticated and works.
But kubectl says:
"couldn't read version from server: Get http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused"
I think I need to use kubectl config set-cluster to set up the connection to my cluster on GCE.
Where do I find the address of the Kubernetes master of my GCE cluster?
With gcloud beta container clusters list I seemingly get the master IP of my cluster.
I used that with kubectl config set-cluster.
Now it says:
"error: couldn't read version from server: Get http:// 104.197.49.119/api: dial tcp 104.197.49.119:80: i/o timeout"
Am I on the right track with this?
Additional strangeness:
Neither gcloud container nor gcloud preview container works for me; only gcloud beta container does.
The MASTER_VERSION of my cluster is 0.21.4, while my kubectl client is GitVersion:"v0.20.2", even though it was freshly installed with gcloud.
Run
gcloud container clusters get-credentials my-cluster-name
to update the kubeconfig file and point kubectl at a cluster on Google Container Engine.
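To confirm that the kubeconfig now points at the right cluster, two standard checks:
kubectl config current-context
kubectl cluster-info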
As @ScyDev stated, run:
gcloud container clusters get-credentials <cluster_name>
But you may have to set your compute zone first, in case you initialized a new Cloud Shell terminal. That was my case.
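For example (the zone is illustrative):
gcloud config set compute/zone europe-west1-b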
If you're working in Windows (e.g. PowerShell), you need to check this out:
https://github.com/kubernetes/kubernetes/issues/34395