Redis-cli cannot connect from a vnet to a Redis Cluster installed in AKS on another vnet (bitnami/redis-cluster) - redirect

So we installed a Redis cluster via helm after changing the following two settings on AKS (Azure k8s):
- service.beta.kubernetes.io/azure-load-balancer-internal: "true"
- type: LoadBalancer
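For reference, these two settings map onto the chart's service values roughly like this (a sketch of the values override; the exact key layout depends on the bitnami/redis-cluster chart version):

service:
  type: LoadBalancer
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"

helm install test-dev-redis-cluster bitnami/redis-cluster -f values.yaml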
Now we have a cluster/svc like the one below:
NAME                     TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
test-dev-redis-cluster   LoadBalancer   10.0.16.157   10.10.1.34    6379:30340/TCP   32d
From anywhere in the vnet where the AKS cluster is installed, we can connect using the external IP and everything is fine. (The "external" IP is actually just a private IP in that vnet.)
Now the problem: we need to access this Redis Cluster from another Azure vnet, and there we run into problems due to redirects. Of course, in order to connect from another vnet, we need to expose this cluster via a Private Link service and add a private endpoint in the connecting vnet. The Bitnami helm chart handles all the necessary configuration of the AKS internal load balancer.
Connecting with redis-cli via the private endpoint (10.2.3.41 is the IP of the private endpoint):
redis-cli -h 10.2.3.41 -a LcMFNNdlAnIM -c
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
10.2.3.41:6379> get one
"111111"
10.2.3.41:6379> get two
-> Redirected to slot [2127] located at 10.10.1.28:6379
Could not connect to Redis at 10.10.1.28:6379: Connection timed out
As you can see, the first key returns fine, but the second key hangs.
This is because redis-cli is connected to one node of the cluster: if the data lives on the node it is connected to, it gets the data and simply returns. But if the data is on another redis pod/node, the cluster redirects the client to 10.10.1.28, which is in the other vnet. The redis-cli in our vnet has no route to that IP; it only knows the IP of the private endpoint.
We validated this with network tracing: when redirected, the redis-cli client re-establishes a connection from itself to wherever it is redirected, and it cannot, because the two are in different vnets.
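A quick way to see the same thing without a packet trace is to ask the cluster which node addresses it advertises (password elided):

redis-cli -h 10.2.3.41 -a <password> cluster nodes

Each output line contains a node address of the form ip:port@busport; in our case these are the vnet-internal 10.10.1.x addresses, which is exactly where the MOVED redirects point the client.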
I see there are options like hostNetwork in the chart. We tried changing it to true, but the cluster won't even start this way.
hostNetwork: true
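(For completeness, roughly how we tried it, using the top-level hostNetwork value the chart exposes:)

helm upgrade test-dev-redis-cluster bitnami/redis-cluster --reuse-values --set hostNetwork=true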
I saw this thread, but we cannot figure out what to change in the helm chart to make it work.
https://github.com/Grokzen/docker-redis-cluster/issues/30
Any help would be really appreciated!

Related

Access Kubernetes applications via localhost from the host system

Is there any way, other than port-forwarding, to access the apps running inside my K8s cluster via http://localhost:port from my host operating system?
For example
I am running a minikube setup to practise K8s, and I deployed three pods along with their services, choosing three different service types: ClusterIP, NodePort and LoadBalancer.
For ClusterIP, I can use the port-forward option to access my app via localhost:port, but the problem is that I have to leave that command running, and if for some reason it is disrupted, the connection is dropped. So is there any alternate solution here?
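(A simple workaround sketch, assuming a service named my-service: restart the port-forward in a loop whenever it exits.)

while true; do
  kubectl port-forward service/my-service 8080:80
  echo "port-forward exited, restarting..." >&2
  sleep 1
done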
For NodePort, I can only access this via the minikube node IP, not via localhost; therefore, if I have to access it remotely, I won't have a route to that node IP address.
For LoadBalancer, this is not a valid option as I am running minikube on my local system, not in the cloud.
Please let me know if there is any other solution to this problem. The reason I am asking is that when I deploy the same application via docker compose, I can access all these services via localhost:port, and I can even call them via VM_IP:port from other systems.
Thanks,
-Rafi

kind cluster how to access a service using loadbalancer

I am deploying a k8s cluster locally using Kind. The image gets deployed ok, and when I view the list of services I see the following.
The service I'm trying to access is chatt-service, and if you notice, its EXTERNAL-IP is pending. I know minikube has a command which makes this accessible, but how do I do it on a Kind cluster?
For the LoadBalancer service type you will not be able to get a public IP, because you're running it locally; you would need to run it in a cloud provider that provisions the LB for you (like an ALB on AWS, or a LoadBalancer on DigitalOcean). However, you can access this service locally using kubectl port-forward:
kubectl port-forward service/chatt-service 3002:3002
There are some additional options for making LoadBalancer work under a Kind cluster (while port forwarding is the simplest way):
https://kind.sigs.k8s.io/docs/user/loadbalancer/
First way:
You can also expose pods and services using extra port mappings,
meaning you manually set ports in cluster-config.yaml, as in the sketch below.
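A sketch of such a cluster-config.yaml, mapping a host port to a NodePort (the port numbers are illustrative; the service must be of type NodePort with a nodePort matching containerPort):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30950   # the service's nodePort inside the cluster
    hostPort: 3002         # the port exposed on the host machine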
And maybe a second way (though not actually a solution for LoadBalancer):
You may want to check out the Ingress Guide as a cross-platform workaround.

I am not able to expose a service in a kubernetes cluster to the internet

I have created a simple hello world service in my kubernetes cluster. I am not using any cloud provider and have created it in a simple Ubuntu 16.04 server from scratch.
I am able to access the service inside the cluster but now when I want to expose it to the internet, it does not work.
Here is the yml file - deployment.yml
And this is the result of the command - kubectl get all:
Now when I try to access the external IP with the port in my browser, i.e. 172.31.8.110:8080, it does not work.
NOTE: I also tried the NodePort service type, but then it does not provide any external IP to me; the state remains pending under the EXTERNAL-IP column when I do kubectl get services.
How do I resolve this?
I believe you might have a mix of networking problems tied together.
First of all, 172.31.8.110 belongs to a private network and is not routable via the Internet. So make sure that the location you are trying to browse from can reach the destination (i.e. is on the same private network).
As a quick test you can make an ssh connection to your master node and then check if you can open the page:
curl 172.31.8.110:8080
In order to expose it to the Internet, you need to use a public IP for your master node, not an internal one. Then update your Service's externalIPs accordingly.
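A minimal sketch of such a Service (the name, selector and public IP are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  selector:
    app: hello-world
  ports:
  - port: 8080
    targetPort: 8080
  externalIPs:
  - <public-ip-of-master>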
Also make sure that your firewall allows network connections from the public Internet to port 8080 on the master node.
In any case, I suggest you use this configuration for testing purposes only, as it is generally a bad idea to use the master node for service exposure: it puts extra networking load on the master and widens the security surface. Use something like an Ingress controller (Nginx or another) plus an Ingress resource instead.
One option is also to do SSH local port forwarding.
ssh -L <local-port>:<private-ip-on-your-server>:<remote-port> <ip-of-your-server>
So in your case for example:
ssh -L 8888:172.31.8.110:8080 <ip-of-your-ubuntu-server>
Then you can simply point your browser at http://localhost:8888 to access the site. (A SOCKS proxy for localhost:8888 is only needed if you use dynamic forwarding with ssh -D instead of -L.)

Enable access to Kubernetes Dashboard without kubectl proxy

If I put the relevant config file in place and run kubectl proxy, it will allow me to access the Kubernetes dashboard through this URL:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
However if I try to access the node directly, without kubectl proxy, I will get a 403 Forbidden.
http://dev-master:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
Our kubernetes clusters are hidden inside a private network that users need to VPN into; furthermore, only some of us can talk to the master node of each of our clusters after authenticating to the VPN. As such, running kubectl proxy is a redundant step, and choosing the appropriate config file for each cluster is an additional pain, especially when we want to compare the state of different clusters.
What needs to be changed to allow "anonymous" HTTP access to the dashboard of these already-secured kubernetes master nodes?
You would want to set up a Service (either NodePort or LoadBalancer) for the dashboard pod(s) to expose it to the outside world (well, outside from the PoV of the cluster, which is still an internal network for you).
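For example, assuming the dashboard Service is named kubernetes-dashboard in the kube-system namespace (as in the URLs above), switching it to a NodePort could look like this:

kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
kubectl -n kube-system get svc kubernetes-dashboard   # note the assigned node port

The dashboard is then reachable at the node's IP on the assigned port from inside the VPN, without kubectl proxy.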

How to know the Kubernetes API server host from an app deployed in a pod in a cluster

I'd like to know if there is a way to identify the master IP address (API server host & port) from an application that I run in a pod. Say I create a kubernetes cluster A with master public IP address x.x.x.x, and I create a pod with my app (say Golang or J2EE) on a kubernetes minion belonging to that cluster A. From the app process running on the cluster A minion, we'd like to know the public IP address of that master (x.x.x.x).
I cannot find any environment variables for this.
Can anyone help?
The API service is exposed via a k8s service, and you would need to use the API to read that service's config. You can get access to the API through the service endpoint (the address is in an env variable), but you'll need the necessary creds to do anything useful. They're in a subdirectory of /var/run/secrets/kubernetes.io, depending on which service account your pod is running under. You can either access the endpoint directly over https, or you can run a proxy in your pod. See http://kubernetes.io/docs/user-guide/accessing-the-cluster/#accessing-the-api-from-a-pod.
An example of using curl to hit the API server is given in https://stackoverflow.com/a/30739416/191215.
The API endpoint you want is /api/v1/endpoints. Look for the item with a metadata.name of kubernetes. The external endpoint should be one of the subsets' addresses.
I believe this is true for all k8s implementations but have only tested it on GKE and from docker-machine.
The DNS name kubernetes should resolve to a virtual IP address that will route to the master so that you can hit the apiserver.
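Putting this together, a minimal in-pod sketch using the standard service-account mount paths and in-cluster env variables (the pod's service account must be allowed to read endpoints):

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
curl --cacert $CACERT -H "Authorization: Bearer $TOKEN" \
  https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api/v1/namespaces/default/endpoints/kubernetes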