Kubernetes: not able to access an outside service from my Kubernetes pod

I have a gateway running as a pod, exposed via a NodePort service on port 3XXXX on a server.
I am able to send traffic to this gateway.
But I am not able to forward traffic from this gateway pod to a service that is on a different server (this service is not a Kubernetes service).
The request times out.
I am also not able to ping that server from inside the pod.
I have whitelisted both servers with each other.
Please help me.
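A first step is usually to narrow down where the traffic is dropped. Note that ICMP ping is often blocked even when TCP works, so a TCP-level check is more informative than ping. A minimal troubleshooting sketch; the pod name, namespace, external address and port below are placeholders, and it assumes the gateway image ships a shell and curl:
kubectl exec -it <gateway-pod> -n <namespace> -- sh
# inside the pod: test TCP/HTTP reachability of the external server directly
curl -v --max-time 5 http://<external-server-ip>:<port>/
exit
# back on the cluster: check whether a NetworkPolicy restricts egress from the pod's namespace
kubectl get networkpolicy -n <namespace>
If the curl from inside the pod times out but the same curl works from the node itself, the problem is usually egress filtering (NetworkPolicy, CNI rules) or the external firewall whitelisting the node IP range incorrectly.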

Related

How should a Kubernetes cluster with an Ingress controller communicate with an external database?

I am planning to have a Kubernetes cluster for production, using Ingress for external requests.
I have an Elasticsearch database that is not going to be part of the Kubernetes cluster. A microservice in the Kubernetes cluster communicates with this database over HTTP (GET, POST, etc.).
Should I create another NodePort Service in order to communicate with the Elasticsearch database, or should it go through the Ingress controller since it is an HTTP request? If both are valid options, please let me know which is better to use and why.
Should I create another NodePort Service in order to communicate with
the Elasticsearch database, or should it go through the Ingress
controller since it is an HTTP request?
Neither is required. If your K8s cluster has public outbound access, the microservices will be able to send requests to the Elasticsearch database directly.
Ingress and egress traffic do not necessarily enter and leave the cluster at the same point in K8s.
A microservice in the Kubernetes cluster communicates with this
database over HTTP (GET, POST, etc.).
Maybe there is some misunderstanding: Ingress handles incoming requests only. When you run a microservice on Kubernetes, there is no guarantee that its outgoing (egress) HTTP requests leave the cluster the same way.
If your microservice is running on the K8s cluster, its outgoing requests will use the IP of the node on which the Pod is running as the source IP.
You can verify this quickly using the kubectl exec command:
kubectl exec -it <any-pod-name> -n <namespace> -- /bin/bash
Then, inside the pod, run:
curl https://ifconfig.me
The command above returns the IP from which requests leave your cluster; it will be the IP of the node on which your Pod is scheduled.
Extra
So you can manage Ingress for incoming traffic, and no extra config is required for egress traffic. But if you want to whitelist a single IP in the Elasticsearch database, then you have to set up a NAT gateway, so that all outgoing traffic from the K8s microservices leaves from a single IP (the NAT gateway's IP); this will be a different IP from the Ingress IP.
If you are on GCP, here is a Terraform example to set up the NAT gateway: https://registry.terraform.io/modules/GoogleCloudPlatform/nat-gateway/google/latest/examples/gke-nat-gateway
The diagram in the link above should give you an idea of the setup.
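No Service object is strictly needed for this egress traffic, but if you want a stable in-cluster DNS name for the external Elasticsearch endpoint (instead of hard-coding its address in the microservice), one common option is an ExternalName Service. A minimal sketch; the service name, namespace and hostname below are hypothetical:
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch            # in-cluster DNS name: elasticsearch.default.svc.cluster.local
  namespace: default
spec:
  type: ExternalName
  externalName: es.example.com   # hypothetical external Elasticsearch hostname
  # no selector and no cluster IP: CoreDNS simply returns a CNAME to externalName
The microservice can then call http://elasticsearch:9200 (assuming the default Elasticsearch port), and the external database address can be changed later without redeploying the application.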

In Istio service-to-service communication, is a Kubernetes Service required?

Hello, I'm new to Istio and currently learning about it.
As per my understanding, the Envoy proxy resolves the IP address of the destination instead of the kube-dns server, and sends traffic directly to a healthy Pod based on information received from the control plane.
So, is a Kubernetes Service still required to be set up if I'm using Istio?
Correct me if I'm wrong.
Thanks!
From the docs
In order to direct traffic within your mesh, Istio needs to know where
all your endpoints are, and which services they belong to. To populate
its own service registry, Istio connects to a service discovery
system. For example, if you’ve installed Istio on a Kubernetes
cluster, then Istio automatically detects the services and endpoints
in that cluster.
So a Kubernetes Service is needed for Istio to achieve service discovery, i.e. to know the Pod IPs. But the Kubernetes Service (L4) is not used for load balancing and routing traffic, because the L7 Envoy proxy does that in Istio.
From the docs.
A pod must belong to at least one Kubernetes service even if the pod
does NOT expose any port. If a pod belongs to multiple Kubernetes
services, the services cannot use the same port number for different
protocols, for instance HTTP and TCP.
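As an illustration, a Pod in the mesh is therefore still selected by a plain Kubernetes Service, along the lines of the sketch below (names and ports are hypothetical). Istio's protocol selection also expects the port to declare its protocol, either through a protocol-prefixed port name such as http, or through appProtocol:
apiVersion: v1
kind: Service
metadata:
  name: reviews                 # hypothetical service name
spec:
  selector:
    app: reviews                # must match the Pod labels
  ports:
  - name: http                  # protocol-prefixed name used by Istio for protocol selection
    port: 9080
    targetPort: 9080
Istio uses this Service for discovery (which Pods belong to "reviews"), while the actual request path is Envoy-to-Envoy rather than through kube-proxy's L4 load balancing.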

Connecting Kube cluster through proxy and clusterIP?

Various Google articles (example: this blog) state that this method (connecting to the Kube cluster through a proxy and ClusterIP) isn't suitable for a production environment, but is useful for development.
My question is: why is it not suitable for production? Why is connecting through a NodePort Service better than a proxy and ClusterIP?
Let's distinguish between three scenarios where connecting to the cluster is required.
1. Connecting to the Kubernetes API server
Connecting to the API server is required for administrative purposes. The users of your application have no business with it.
The following options are available:
Connect directly to the master IP via HTTPS.
kubectl proxy: use kubectl proxy to make the Kubernetes API available on your localhost (see the sketch below).
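A hedged sketch of what the proxy option looks like in practice; the service name and namespace are placeholders:
kubectl proxy --port=8001 &
# the API (and the proxy sub-resource of services) is now reachable only on this machine:
curl http://localhost:8001/version
curl http://localhost:8001/api/v1/namespaces/default/services/my-svc:80/proxy/
Because this endpoint lives on localhost of the machine running kubectl, it is convenient for admins and useless as a public entry point.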
2. Connecting external traffic to your applications running in the Kubernetes cluster. Here you want to expose your applications to your users. You'll need to configure a Service, and it can be of the following types:
NodePort: only accessible on the node IPs, on a port from the node-port range (30000-32767 by default).
ClusterIP: internal only. External traffic cannot hit a Service of type ClusterIP directly; it requires an Ingress resource and an Ingress controller to receive external traffic.
LoadBalancer: allows you to receive external traffic for one, and only one, Service.
Ingress: this isn't a type of Service, it is another type of Kubernetes resource. By configuring an NGINX Ingress, for example, you can route traffic to multiple ClusterIP Services with only one external LoadBalancer.
3. A developer needs to troubleshoot a pod/service: kubectl port-forward (see the port-forwarding sketch below). This requires kubectl to be configured on the system, hence it cannot be used by all users of the application.
As you can see from the explanation above, the proxy and port-forwarding options aren't viable for connecting external traffic to your applications, because they require kubectl installed and configured with a valid kubeconfig which grants access into your cluster.
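For completeness, a hedged sketch of the port-forwarding option (the service name and ports are placeholders). It only works from the machine where kubectl runs, which is exactly why it is a development and debugging tool rather than a way to expose an application:
kubectl port-forward svc/my-svc 8080:80
# in another terminal on the same machine:
curl http://localhost:8080/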

Kubernetes service exposed to host IP

I created a Kubernetes Service like this on my 4-node cluster:
kubectl expose deployment distcc-deploy --name=distccsvc --port=8080 --target-port=3632 --type=LoadBalancer
The problem is how to expose this service on an external IP. Without an external IP you cannot ping or reach this service endpoint from an outside network.
I am not sure if I need to change kube-dns or make some other kind of change.
Ideally I would like the service to be exposed on the host IP,
like http://localhost:32876.
Hypothetically, let's say
I have a 4-node VM cluster on which I am running, say, an nginx service, and I expose it as a LoadBalancer service. How can I access nginx using this service from the VM?
Let's say the service name is nginxsvc; is there a way I can do http://:8080? How will I get this here for my 4-node VM?
LoadBalancer does different things depending on where you deployed Kubernetes. If you deployed on AWS (using kops or some other tool), it will create an Elastic Load Balancer to expose the service. If you deployed on GCP it will do something similar (the Google terminology escapes me at the moment). These are separate machines in the cloud routing traffic to your service. If you're playing around in minikube, LoadBalancer doesn't really do anything; it falls back to a node port, with the assumption that the user understands minikube isn't capable of providing a true load balancer.
LoadBalancer is supposed to expose your service via a brand new IP address. This is what happens on the cloud providers: they provision a load balancer with a separate public address (GCP gives a static IP and AWS a DNS name). NodePort exposes the service on a port on every Kubernetes node. This isn't a workable solution for a general deployment, but it works fine while developing.
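To make this concrete: once the Service exists, the assigned ports can be read from kubectl, and the node port works even while the cloud load balancer (or minikube fallback) is still pending. A sketch with hypothetical output, reusing the distccsvc service and the 32876 node port mentioned in the question:
kubectl get svc distccsvc
# NAME        TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
# distccsvc   LoadBalancer   10.96.12.34   <pending>     8080:32876/TCP   1m
# from outside the cluster, via any node's IP and the node port:
curl http://<node-ip>:32876/
# once EXTERNAL-IP has been populated by the cloud provider:
curl http://<external-ip>:8080/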

Kubernetes - service exposed via NodePort not available on all nodes

I have an nginx service exposed via NodePort. According to the documentation, I should now be able to hit the service on $NODE_IP:$NODE_PORT for all my K8s worker IPs. However, I'm able to access the service via curl only on the node that hosts the actual nginx pod. Any idea why?
I did verify using netstat that kube-proxy is listening on $NODE_PORT on all the hosts. Somehow, the request is not being forwarded to the actual pod by kube-proxy.
This turned out to be an issue with the security group associated with the workers. I had opened only the ports in the --service-node-port-range. This was not enough: I was deploying nginx on port 80, and when a request hit a node that did not host the pod, kube-proxy forwarded it to the pod's IP on port 80 on another node, and that node-to-node traffic was being blocked by the security group.
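A quick way to confirm this kind of problem is to test the node port on each worker and the pod port across nodes. A sketch with placeholder addresses (the worker IPs, pod IP and the 30080 NodePort are hypothetical):
# node-port reachability on each worker; a blocked node typically shows 000 (timeout)
for ip in 10.0.1.11 10.0.1.12 10.0.1.13; do
  echo -n "$ip -> "; curl -m 2 -s -o /dev/null -w "%{http_code}\n" http://$ip:30080/
done
# pod-port reachability node-to-node (run from a worker that does NOT host the pod):
curl -m 2 -v http://<pod-ip>:80/
If the node port answers only on the node hosting the pod, the usual suspects are firewall/security-group rules on the pod or overlay network ports, or a Service with externalTrafficPolicy set so that only pod-hosting nodes respond.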