Unable to communicate between pods on Kubernetes on Rancher using Google Cloud

I have created orderer and cli pods. When I go to the cli shell and create a channel, it is not able to connect to the orderer.
Error: failed to create deliver client: orderer client failed to connect to orderer:7050: failed to create new connection: context deadline exceeded
The orderer port, i.e. 7050, is open. When I go to the orderer shell and run telnet localhost 7050 it connects, but when I specify the pod's IP it does not work.
I am using Google Cloud for deployment. I have also added firewall rules for ingress and egress for all IPs and all ports.
Any help will be much appreciated.

I was missing this environment variable:
ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
After adding this variable, it worked.
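For anyone hitting the same thing, a minimal way to set it, assuming the orderer runs as a Deployment named orderer (that name is an assumption, not from the original setup):
# Deployment name "orderer" is an assumption; adjust to your manifest.
kubectl set env deployment/orderer ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
This makes the orderer bind to all interfaces instead of only localhost, which is why telnet localhost 7050 worked inside the pod while connections to the pod IP timed out.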

Related

Can't connect to GKE cluster with kubectl, getting timeout

I executed the following command:
gcloud container clusters get-credentials my-noice-cluter --region=asia-south2
and that command runs successfully. I can see the relevant config with kubectl config view
But when I run any kubectl command, I get a timeout:
❯ kubectl get pods -A -o wide
Unable to connect to the server: dial tcp <some noice ip>:443: i/o timeout
If I create a VM in GCP and use kubectl there, or use GCP's Cloud Shell, it works, but it does not work on our local laptops and PCs.
Some network info about our cluster:
Private cluster: Disabled
Network: default
Subnet: default
VPC-native traffic routing: Enabled
Pod address range: 10.122.128.0/17
Service address range: 10.123.0.0/22
Intranode visibility: Enabled
NodeLocal DNSCache: Enabled
HTTP Load Balancing: Enabled
Subsetting for L4 Internal Load Balancers: Disabled
Control plane authorized networks: office (192.169.1.0/24)
Network policy: Disabled
Dataplane V2: Disabled
I also have firewall rules to allow HTTP/S:
❯ gcloud compute firewall-rules list
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
default-allow-http default INGRESS 1000 tcp:80 False
default-allow-https default INGRESS 1000 tcp:443 False
....
If it works from your VPC and not from outside, it's because you created a private GKE cluster. The master is only reachable through the private IP or through the authorized networks.
Speaking of authorized networks, you have one authorized: office (192.169.1.0/24). Sadly, you registered a private IP range from your office network, not the public IP used to reach the internet.
To solve that, go to a site that shows you your public IP. Then update the authorized networks for your cluster with that IP/32, and try again.
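As a sketch, the gcloud command for that update, reusing the cluster name and region from the question; 203.0.113.25/32 is a placeholder for your actual public IP:
# Replace 203.0.113.25 with the public IP your office uses to reach the internet.
gcloud container clusters update my-noice-cluter \
    --region=asia-south2 \
    --enable-master-authorized-networks \
    --master-authorized-networks=203.0.113.25/32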
If it works from a GCP VM but not from your local machine, it's either related to the GCP firewall or your GKE cluster does not have a public IP.
First check whether your cluster IP is public; if it is, you need to add a firewall rule that allows traffic over HTTPS (port 443). You can do it with the gcloud tool or via the GCP Console under "Firewall -> Create Firewall Rule".
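A sketch of such a rule with gcloud; the rule name allow-kubectl-https and the source range are examples, not from the original question:
# Rule name and source range are examples; adjust to your environment.
gcloud compute firewall-rules create allow-kubectl-https \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:443 \
    --source-ranges=203.0.113.25/32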

Remote kubernetes server unreachable

I tried to install Kubernetes with Docker Desktop. However, as soon as I type
kubectl get nodes
I get a "Remote kubernetes server unreachable" error:
I0217 23:42:56.224000 26220 versioner.go:56] Remote kubernetes server unreachable
Unable to connect to the server: dial tcp 172.28.112.98:6443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
Any ideas on how to fix this?
Have you got more than one context in your kubeconfig?
You can check this with kubectl config get-contexts.
If necessary, change your context to Docker Desktop Kubernetes using kubectl config use-context docker-desktop.
Is it possible that you tried minikube at some point and it left a stale cluster/context in your .kube\config?
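A quick session sketch; the output below is illustrative only, assuming a leftover minikube context is currently selected:
❯ kubectl config get-contexts
CURRENT   NAME             CLUSTER          AUTHINFO         NAMESPACE
*         minikube         minikube         minikube
          docker-desktop   docker-desktop   docker-desktop
❯ kubectl config use-context docker-desktop
Switched to context "docker-desktop".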
Configure Access to Multiple Clusters
Go to your HOME/.kube directory and check the config file. There is a possibility that the server mentioned there is old or not reachable.
You can copy the new config file (from the remote server, or from the config directory of tools like k3s) and add it to, or replace, your HOME/.kube/config file.
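For the k3s case, a minimal sketch, assuming the server is reachable as remote-server (the host name and user are placeholders):
# k3s keeps its kubeconfig at /etc/rancher/k3s/k3s.yaml on the server.
scp user@remote-server:/etc/rancher/k3s/k3s.yaml ~/.kube/config
# That file points at https://127.0.0.1:6443; rewrite it to the remote host.
sed -i 's/127.0.0.1/remote-server/' ~/.kube/config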
The same error may also happen when you switch from a local k8s cluster to a remote cluster that requires a VPN connection and you are not connected to the VPN.

Can't connect to my private cluster using kubectl (Unable to connect to the server: dial tcp )

I have 2 clusters with the same config. Until yesterday I was able to connect to both of them, but today I cannot connect to one of them. All apps are working fine; I just get this error when using kubectl:
Unable to connect to the server: dial tcp IP:443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because the connected host has failed to respond
I tried scaling my cluster up and down from the interface (the scaling works, but I still can't use kubectl).
Can someone help, please?
It was a network conflict with the Docker bridge network. Basically, my GKE endpoint fell inside Docker's default bridge network range, so when I installed Docker on my deployment machine (to automatically build and push images) I lost the connection to this Kubernetes cluster.
So all I had to do was disable the bridge network, and I got my connection back to this cluster.
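For reference, an alternative to disabling the bridge outright is to move docker0 off the conflicting range; a sketch, assuming the clash was with Docker's default 172.17.0.0/16 and that 10.200.0.0/24 is free in your network:
# Move the docker0 bridge to a range that does not overlap the GKE endpoint.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "bip": "10.200.0.1/24"
}
EOF
sudo systemctl restart docker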

Kubectl doesn't allow viewing resources in the cluster from a node in GCP

I have a problem that I can't solve. I have a k8s cluster on GCP. I can use kubectl from the shell that opens directly to the cluster, but when I use kubectl from a node I get "The connection to the server localhost:8080 was refused - did you specify the right host or port?".
Also, when I use ./kube/config it works for about 5 minutes, and then fails again.
Maybe someone who uses GCP can help me. Thank you.
irvifa: I use the Kubernetes cluster that GCP provides. It works when I connect directly from Cloud Shell, but when I connect from an instance, kubectl shows the client version, but for the server it reports the error "The connection to the server localhost:8080 was refused - did you specify the right host or port?"
If you didn't use COS (Container-Optimized OS), you need to run
gcloud container clusters get-credentials "CLUSTER NAME"
By default, COS will get the credentials when you access the node.
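For completeness, get-credentials also needs the cluster's location and writes the entry into ~/.kube/config, which is exactly what kubectl was missing when it fell back to localhost:8080; the names below are placeholders:
gcloud container clusters get-credentials CLUSTER_NAME --zone=ZONE
kubectl get pods    # should now reach the cluster instead of localhost:8080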

Connecting RabbitMQ | Pika | Kubernetes

I have a Kubernetes cluster running. Inside it I have a Python application running as a pod.
This app talks to RabbitMQ, which is also one of the pods of the same cluster, so the whole connection is done using internal IPs.
I have a service for RabbitMQ, so I am providing the service IP for the connection. It looks something like this (with pika):
import pika

# Connect to the RabbitMQ Service (QHost holds the service IP) on port 80.
credentials = pika.PlainCredentials(rabbituser, rabbitpass)
connection = pika.BlockingConnection(
    pika.ConnectionParameters(host=QHost, port=80, credentials=credentials))
channel = connection.channel()
As you can see, the connection I am trying to make is to port 80, which is open in the k8s service.
The service should redirect it to the respective pod, which is RabbitMQ, but it is trying to connect to port 5672.
That means the error occurs because it is trying to connect to service-ip:5672, which is not there.
I want it to forward the request to the RabbitMQ pod where the service is pointed.
I hope that I have explained the main things. Please ask if more details are required.
Thanks
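For reference, the usual way to make a Service that listens on port 80 forward to RabbitMQ's 5672 is a targetPort mapping; a minimal sketch, with the service name and pod label assumed, applied via kubectl:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq            # assumed Service name
spec:
  selector:
    app: rabbitmq           # assumed pod label
  ports:
    - name: amqp
      port: 80              # what clients dial: service-ip:80
      targetPort: 5672      # what the RabbitMQ container listens on
EOF
With port 80 mapped to targetPort 5672 like this, the pika code above can keep port=80; without such a mapping, the client would have to dial the container port 5672 directly.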