Gcloud Kubernetes and Redis memory store, intermittent issues, host not found - kubernetes

From time to time, once a week or so, our Kubernetes cluster gets into a weird state where it is not able to connect to the Memorystore Redis service.
K8s master version: 1.10.7
gcloud beta redis instances list --region europe-west1
INSTANCE_NAME REGION TIER SIZE_GB HOST PORT NETWORK RESERVED_IP STATUS CREATE_TIME
chefclub-redis europe-west1 STANDARD_HA 1 10.0.10.4 6379 default 10.0.10.0/29 READY 2018-05-29T14:12:46
Getting a "No route to host" error:
kubectl run -i --tty busybox --image=busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # telnet 10.0.10.4 6379
telnet: can't connect to remote host (10.0.10.4): No route to host
It has happened a few times in the past. I just did an upgrade of my node to 1.10.7 and everything went back in place; I could connect again.
I wonder what other steps I could take next time it happens?

Make sure you have followed the instructions on how to connect to a Redis instance from a cluster, as well as the troubleshooting doc. Note that if your cluster configuration has IP aliases enabled, the steps for connecting to the Redis server may vary.
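For example, one quick (hedged) sanity check is to confirm that the instance and the cluster sit on the same VPC network; chefclub-redis comes from the question, while CLUSTER_NAME and ZONE are placeholders:
# network the Redis instance is authorized on
gcloud redis instances describe chefclub-redis --region europe-west1 --format='value(authorizedNetwork)'
# network the GKE cluster uses (CLUSTER_NAME and ZONE are placeholders)
gcloud container clusters describe CLUSTER_NAME --zone ZONE --format='value(network)'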
You can search through Stackdriver Logging for your Kubernetes pods and check the complete error message during the affected timeframe. This will help you check for known issues on GitHub or in other Stack Overflow threads. An advanced Stackdriver logging filter to view pod logs:
resource.type="container" resource.labels.cluster_name="cluster_name"
resource.labels.namespace_id="k8s_namespace"
labels."container.googleapis.com/k8s_pod_name"="k8s_pod_name"
If you do not find any known issues and suspect that the issue could be on Google's end, you can create an issue using the Public Issue Tracker.

Related

EKS: kubectl exec does not respect streamingConnectionIdleTimeout

Using EKS with Kubernetes 1.21, managed nodegroups in a private subnet. I'm trying to set the cluster up so that kubectl exec times out after inactivity regardless of the workload being execed into, and without any client configuration.
I'm aware of https://github.com/containerd/containerd/issues/5563, except we're on 1.21 with Docker runtime, not containerd yet.
I set streamingConnectionIdleTimeout: 3600s on the kubelet in the launch template:
# write to a temp file first; redirecting straight onto the file being read would truncate it
cat /etc/kubernetes/kubelet/kubelet-config.json | jq '.streamingConnectionIdleTimeout = "3600s"' > /tmp/kubelet-config.json
mv /tmp/kubelet-config.json /etc/kubernetes/kubelet/kubelet-config.json
/etc/eks/bootstrap.sh {{CLUSTER_NAME}}
And confirmed with curl -sSL "http://localhost:8001/api/v1/nodes/(node name)/proxy/configz".
However, kubectl exec still does not time out.
I confirmed /proc/sys/net/ipv4/tcp_keepalive_time = 7200 on both the client and the node, so we should be hitting the streaming connection idle timeout before Linux starts sending keepalive probes.
Reading through How kubectl exec Works, it seems possible that the EKS managed control plane is keeping the connection alive. There are people online who have the opposite problem - their connection times out regardless of streamingConnectionIdleTimeout - and they solve it by adjusting the timeout on the load balancer in front of their k8s API server. However, there are no knobs (that I know of) to tweak in that regard on the EKS managed control plane.
I would appreciate any input on this topic.

k8s ClusterIP:Port accessible only within the node running the pod

I created 3 Ubuntu VMs in AWS, used kubeadm to set up the cluster on the master node, and opened port 6443. Then I applied the flannel network via the command below:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
And then I joined the other two nodes to the cluster via the join command:
kubeadm join 172.31.5.223:6443
Then I applied the two YAMLs below to create my deployment and svc.
Here comes the issue. I listed all resources on the k8s master:
I can only access clusterip:port from inside node/ip-172-31-36-90, as it is the node running the pod.
Which results in:
I can only access <node IP>:NodePort using the IP of node/ip-172-31-36-90, as it is the node running the pod.
I can use curl <external/internal IP of node/ip-172-31-36-90>:nodeport from the other nodes, but this IP can only be that of ip-172-31-36-90.
If I try the above two using the IP of the master node or node/ip-172-31-41-66, I get a timeout. Note: NodePort 30000 is open on all nodes via the AWS security group.
Can anyone help me with this network issue? I am really bad at debugging network stuff.
2. Second question: if I try curl <external/internal IP of node/ip-172-31-36-90>:nodeport from my local machine, it gives the error:
curl: (56) Recv failure: Connection reset by peer
It really bothers me. k8s expert please save me!!!
----------------Update---------------------------
After days of debugging, I noticed it is related to the IPs of docker0 and flannel.1; they are not in the same subnet:
But I still don't know where I went wrong or how to sync them. Any experts here, please!
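As a hedged sketch of checks that often pin this down (the CIDR and ConfigMap names below are the flannel manifest defaults, not values taken from the question):
# pod CIDR kubeadm allocated to each node; it must fall inside flannel's Network
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
# network flannel was configured with (ConfigMap name per the default kube-flannel.yml)
kubectl -n kube-system get configmap kube-flannel-cfg -o jsonpath='{.data.net-conf\.json}'
# if they don't match, re-initializing with a matching pod network is the usual fix, e.g.
# kubeadm init --pod-network-cidr=10.244.0.0/16
Also note that with a CNI plugin such as flannel, pod traffic goes over cni0/flannel.1 rather than docker0, so docker0 being in a different subnet is not necessarily a problem by itself.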

Unable to connect to the server: net/http: TLS handshake timeout

On Minikube for Windows I created a deployment on the Kubernetes cluster, then I tried to scale it by changing replicas from 1 to 2, and after that kubectl hangs and my disk usage is at 100%.
I only have one container in my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: first-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: app
  template:
    metadata:
      labels:
        run: app
    spec:
      containers:
      - name: demo
        image: ner_app
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5000
All I did was run this after the pods were successfully deployed and running:
kubectl scale --replicas=2 deployment first-deployment
In another terminal I was watching the pods using
kubectl get pods --watch
But everything is unresponsive and I'm not sure how to recover from this.
When I run kubectl get pods again it gives the following message
PS D:\docker\ner> kubectl get pods
Unable to connect to the server: net/http: TLS handshake timeout
Is there a way to recover, or cancel whatever process is running?
Also my VM's are on Hyper-V for Windows 10 Pro (minikube and Docker Desktop) both have the default RAM allocated - 2048MB
The container in my pod is a machine learning process and the model it loads could be large, in the order of 200MB to 300MB
You may have some proxy problems. Try the following commands:
$ unset http_proxy
$ unset https_proxy
and repeat your kubectl call.
For me, the problem is that Docker ran out of memory. (EDIT: Possibly anyway; I wrote this post a while ago, and am now not so sure that is the root cause, but did not write down my rationale, so idk.)
Anyway, to fix:
Fully close your k8s emulator. (docker desktop, minikube, etc.)
Shutdown WSL2. (wsl --shutdown) [EDIT: This step is apparently not necessary -- at least not always, since this time I skipped it, and the problem still resolved.]
Restart your k8s emulator.
Rerun the commands you wanted.
Sometimes it also works to simply:
Right click the Docker Desktop tray-icon, press "Restart Docker", and wait a few minutes for things to restart. (sometimes this fails, with Docker Desktop saying "Docker failed to start", so I'd generally recommend the more thorough process above)
Just happened to me on a new Windows 10 install with Ubuntu distro in WSL2. I solved the problem by running:
$ sudo ifconfig eth0 mtu 1350
(BTW, I was on a VPN connection when trying the 'kubectl get pods' command)
You can set resource limits on your deployments so that pods do not consume all of the available resources on the node.
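A hedged sketch of one way to do that (the CPU/memory values are illustrative; first-deployment and demo are the deployment and container names from the question's manifest):
# values are illustrative; size them to what the model actually needs
kubectl set resources deployment first-deployment -c demo --requests=cpu=250m,memory=256Mi --limits=cpu=500m,memory=1Gi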
In my case I have a private EKS cluster and port 443 (HTTPS) was not enabled in the security groups.
My issue was solved after enabling port 443 (HTTPS) in the security groups.
Kindly refer to the AWS documentation for more details: "You must ensure that your Amazon EKS control plane security group contains rules to allow ingress traffic on port 443 from your connected network"
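A hedged example of what that looks like with the AWS CLI (the security group ID and CIDR are placeholders for your control-plane security group and your connected network):
# sg-0123456789abcdef0 and 10.0.0.0/16 are placeholders
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 10.0.0.0/16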
I solved this problem by executing the following command:
minikube delete
and then starting it again:
minikube start --vm-driver="virtualbox"
Note that if you use this, your pods will be deleted, and when you run kubectl get pods you will see this result:
No resources found in default namespace.
You could try $ unset all_proxy to reset the socket proxy.
Also, if you're connected to a VPN, try disconnecting - it seems that can interfere with connecting to a cluster.
I think the other answers don't really mention or refer to the vpn and proxy documentation for minikube: https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/
The NO_PROXY variable here is important: without setting it, minikube may not be able to access resources within the VM. minikube uses several IP ranges, which should not go through the proxy:
192.168.99.0/24: Used by the minikube VM. Configurable for some hypervisors via --host-only-cidr
192.168.39.0/24: Used by the minikube kvm2 driver.
192.168.49.0/24: Used by the minikube docker driver’s first cluster.
10.96.0.0/12: Used by service cluster IPs. Configurable via --service-cluster-ip-range
So adding those IP ranges to your NO_PROXY environment variable should fix the issue.
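For example (hedged; keep whatever is already in the variable and drop the ranges your driver does not use):
export NO_PROXY=$NO_PROXY,192.168.99.0/24,192.168.39.0/24,192.168.49.0/24,10.96.0.0/12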
Simply closing cmd, opening again, then
minikube start
And then executing the commands again solved this issue for me.
P.S: minikube start took less than a minute
Adding the IP address to the no_proxy list worked for me.
Obtain the IP address from ip addr output.
export no_proxy=localhost,127.0.0.1,<IP_ADDRESS>
Then restart minikube and it will work.
But if you don't want to delete it, you can just switch to another cluster and then switch back.
I just click another kubernetes cluster (e.g. docker-desktop) and then click back to the cluster I want to run (e.g. minikube).
If you're on Linux or Mac, go to your VirtualBox, and then on the toolbar choose 'Global Tools'; if you see two machines using the same IP address, you should remove one of them. (Image: the VirtualBox GUI.)
As this answer comes up first when searching for the net/http: TLS handshake timeout error:
For those having the issue with AWS EKS (and likely any k8s), NO_PROXY solves the problem by adding the related IP/host to the environment variable, as suggested in the comments on the first answer.
For AWS EKS (when seeing this intermittently after a vpc-cni addon upgrade), replace with the specific region or a single URL for your use case:
NO_PROXY=$NO_PROXY;eks.amazonaws.com
At least for Windows 10 and 11
PS C:\> oc rollback dc/my-app
Unable to connect to the server: net/http: TLS handshake timeout
For OpenShift 4.x the problem is that for some reason you are logged-out:
PS C:\> oc status
error: You must be logged in to the server (Unauthorized)
Logging in again, e.g.
$ oc login -u developer
resolves the problem
Open PowerShell as an administrator and run the command "wsl --shutdown". You will see the same notification in your open Ubuntu terminal.
Open Docker Desktop.
Open a new terminal.
Run the command "minikube status" in the Ubuntu terminal.
Run the Minikube container. You can do this in Docker Desktop.
Run the command "minikube start".
That's it! You don't need to restart your computer after this, and Minikube should work fine.

k8s, RabbitMQ, and Peer Discovery

We are trying to run an instance of the RabbitMQ chart with Helm from the helm/charts/stable/rabbitmq project. I had it running perfectly, but then I had to restart k8s for some maintenance. Now we are completely unable to launch the RabbitMQ chart in any way, shape or form. I am not even trying to run the chart with any variables, i.e. just the default values.
Here is all I am doing:
helm install stable/rabbitmq
I have confirmed I can simply run the default right on my local k8s, which I'm running with Docker for Desktop. When we run the rabbit chart on our shared k8s the exact same way as on desktop, and the same way we did before the restart, the following error is thrown:
Failed to get nodes from k8s - 503
I have also posted an issue on the Helm charts repo as well (GitHub issue #10811).
We suspect the DNS but are unable to confirm anything yet. What is very frustrating is that after the restart, every single other chart we installed restarted perfectly, except Rabbit, which now will not start at all.
Does anyone know what I could do to get Rabbit's peer discovery to work? Has anyone seen an issue like this after restarting k8s?
So I actually got rabbit to run. It turns out my issue was that the k8s peer discovery could not connect over the default port 443, and I had to use the external port 6443, because kubernetes.default.svc.cluster.local resolved to the public port and could not find the internal one; so yeah, our config is messed up too.
It took me a while to realize the variable below was not being overridden when I overrode it with helm install . -f server-values.yaml.
rabbitmq:
  configuration: |-
    ## Clustering
    cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
    cluster_formation.k8s.host = kubernetes.default.svc.cluster.local
    cluster_formation.k8s.port = 6443
    cluster_formation.node_cleanup.interval = 10
    cluster_formation.node_cleanup.only_log_warning = true
    cluster_partition_handling = autoheal
    # queue master locator
    queue_master_locator=min-masters
    # enable guest user
    loopback_users.guest = false
I had to add cluster_formation.k8s.port = 6443 to the main values.yaml file instead of my own. Once the port was changed specifically in the values.yaml, rabbit started right up.
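As a hedged aside, one quick way to see which address and port the in-cluster kubernetes service actually maps to (and hence what peer discovery should be using) is:
kubectl get endpoints kubernetes --namespace default
# the ENDPOINTS column shows the real API server address:port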
I'm wondering what the reason is for using the rabbit_peer_discovery_k8s plugin, if values.yaml defaults to 1 replica (your manifest file does not override this setting)?
I was trying to reproduce your issue with the override values you provided (dev-server.yaml), as per the details in your GitHub issue #10811, but I somewhat failed. Here are my observations:
If I install the RabbitMQ chart with your custom values, my rabbitmq-dev-default-0 pod gets stuck in the CrashLoopBackOff state.
It's quite hard for me to troubleshoot it further, as Bitnami's rabbitmq image containers, used by this rabbitmq Helm chart, are shipped with a non-root account.
On the other hand, if the rabbitmq chart is installed on my Kubernetes cluster (v1.13.2) in its simplest form:
helm install stable/rabbitmq
I observe a similar issue then. I mean the rabbitmq server survives a simulated VM restart of all cluster nodes (including master), but I cannot connect to it from outside:
Post VM restart, I'm getting the following error from my python mqclient:
socket.gaierror: [Errno -2] Name or service not known
A few remarks here:
Yes, I did the port-forward(s) as per the instructions from the "helm status" command:
The readiness probe works fine:
curl -sS -f --user user:<my_pwd> 127.0.0.1:15672/api/healthchecks/node
{"status":"ok"}
rabbitmqctl to rabbitmq-server connectivity from inside the container works fine too:
kubectl exec rabbitmq-dev-default-0 -- rabbitmqctl list_queues
warning: the VM is running with native name encoding of latin1 which may cause Elixir to malfunction as it expects utf8. Please ensure your locale is set to UTF-8 (which can be verified by running "locale" in your shell)
Timeout: 60.0 seconds ...
Listing queues for vhost / ...
name messages
hello 11
From the moment I used kubectl port-forward to the pod instead of the service, connectivity to the rabbitmq server was restored:
kubectl port-forward --namespace default pod/rabbitmq-dev-default-0 5672:5672
$ python send.py
[x] Sent 'Hello World!'

Cannot get fabric8 to start on local development machine (OSX or Linux)

I'm trying to give fabric8 a shot, but I'm having issues getting it to start on a local machine running minikube and VirtualBox (I've attempted this on Linux and OSX). I'm able to get all but one of the pods to start (after manually increasing minikube's VM RAM to 8GB). The exposecontroller won't start and is giving me the following error in the logs:
I0415 14:29:43.431944 1 exposecontroller.go:47] Using build: '2.3.2'
F0415 14:29:43.492059 1 exposecontroller.go:66] failed to create new strategy: failed to create node port expose strategy: failed to list nodes: nodes is forbidden: User "system:serviceaccount:fabric8:exposecontroller" cannot list nodes at the cluster scope
Here are the commands I'm running:
minikube start --cpus=5 --disk-size=50g --memory=8000
curl -sS http://get.fabric8.io/download.txt | bash
gofabric8 start
I also tried creating an OAuth secret via GitHub (using bogus IP address info for the redirect URL), but this doesn't make sense to me because I don't have a domain... Then I ran these:
minikube start --vm-driver=xhyve --cpus=5 --disk-size=50g --memory=8000
minikube addons enable ingress
gofabric8 deploy --package system -n fabric8
That resulted in the exposecontroller working, but then additional pods (keycloak, for example) were created and failed to start.
I've spent hours trying to get this to work and am about to give up. The documentation on GitHub differs from fabric8's site documentation and I just can't get it to work. If someone is able to help, I would greatly appreciate it.
Note:
I've attempted to follow the instructions here:
http://fabric8.io/guide/getStarted/gofabric8.html
Additionally, I attempted to follow this:
https://github.com/fabric8io/fabric8-platform/blob/master/INSTALL.md