Connect pods via service name in GCP K8s - kubernetes

I have a number of Services running against Pods hosted within a cluster on Google Cloud K8s.
Service 1 is an Ingress - basic-ingress
Service 2 is a NodeJS API Gateway w/ 2 Pods - security-gateway-svc
Service 3 is a NodeJS API w/ 2 Pods - some-random-api-svc
and so on for services 4 / 5 / 6 etc.
My Ingress allows me to access exposed services via a subdomain, but I would like to move my external APIs behind my Gateway so I can handle auth etc. in the gateway.
What I'd like to do is allow security-gateway-svc to connect to some-random-api-svc without having to go via DNS or outside of my cluster.
I figured I could update my Ingress so all subdomains use the same service entry and let the Gateway figure out where the traffic should go.
I can configure this just fine locally, as everything runs on localhost and I specify a port, so it's fairly straightforward.
Is it possible, however, to expose pods to other pods within a cluster via the service name instead of an actual domain / DNS lookup?

Your service should be accessible within your cluster via the service name.
Point your gateway entry for each API to the service name.
Something like http://some-random-api-svc should work.
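For example, a minimal sketch of wiring the upstream into the gateway Deployment via an environment variable (API_UPSTREAM is a hypothetical name your gateway code would read; adjust the port to whatever some-random-api-svc exposes):
      containers:
        - name: security-gateway
          image: my-gateway:latest               # placeholder image
          env:
            - name: API_UPSTREAM                 # hypothetical variable read by the gateway code
              value: http://some-random-api-svc  # resolves via in-cluster DNS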

The easiest way to make pods reachable within your Kubernetes cluster is to use Services (see the Services documentation). For this you create a YAML block that creates an internal hostname bound by an Endpoints object to your pods. In addition, a selector allows you to bind one or multiple pods to that internal hostname. Here is an example:
---
apiVersion: v1
kind: Service
metadata:
  name: $YOUR_SERVICE_NAME
  namespace: $YOUR_NAMESPACE
  labels:
    app: $YOUR_SERVICE_NAME
spec:
  ports:
    - name: "8000"
      port: 8000
      targetPort: 8000
  selector:
    app: $YOUR_SERVICE_NAME
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $YOUR_SERVICE_NAME
  namespace: $YOUR_NAMESPACE
  labels:
    app: $YOUR_SERVICE_NAME
spec:
  replicas: 1
  selector:
    matchLabels:
      app: $YOUR_SERVICE_NAME
  template:
    metadata:
      labels:
        app: $YOUR_SERVICE_NAME  # must match spec.selector.matchLabels
    spec:
      containers:
        - name: $YOUR_SERVICE_NAME
          image: alpine:latest
      restartPolicy: Always  # pod-level field, not a container field
Finally, use the service name in your ingress controller route to redirect traffic to your api-gateway.
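A minimal sketch of such an Ingress (assuming the networking.k8s.io/v1 API, a host pattern of *.example.com, and that security-gateway-svc listens on port 80; whether wildcard hosts are supported depends on your ingress controller):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  rules:
    - host: "*.example.com"          # assumption: every subdomain routes to the gateway
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: security-gateway-svc
                port:
                  number: 80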

Kubernetes uses CoreDNS to perform in-cluster DNS resolution. By default, every Service is assigned a DNS name of the form <service-name>.<namespace>.svc.cluster.local (its FQDN). So your security-gateway-svc will be able to forward requests to some-random-api-svc via some-random-api-svc.<namespace>, without routing the traffic outside of Kubernetes. Keep in mind that you shouldn't interact with Pods directly, because Pods are ephemeral; always go through Services.
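A quick way to verify this from inside the cluster (a sketch; assumes the gateway image ships curl and that the API Service lives in the default namespace):
# resolve and call the API service by its FQDN from a gateway pod
kubectl exec -it <security-gateway-pod> -- curl -v http://some-random-api-svc.default.svc.cluster.local/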

Related

k8s communicate from one service to another

I have two services deployed on the same k8s (minikube) cluster. What is the URL/approach I should use for one service to communicate with another? I tried searching a bit on the web, but most results cover communicating with an external DB, which is not what I'm after. This is what my deployments look like. I am looking for the goclient to be able to communicate with goserver. I know I need to go through the service but am not sure what the URL should look like. And is this dynamically discoverable? In addition to this, if I expose goserver through ingress, will this change?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: goserver
  namespace: golang-ns
  labels:
    app: goserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: goserver
  template:
    metadata:
      labels:
        app: goserver
    spec:
      containers:
        - name: goserver
          image: goserver:1.0.0
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: goclient
  namespace: golang-ns
  labels:
    app: goclient
spec:
  replicas: 1
  selector:
    matchLabels:
      app: goclient
  template:
    metadata:
      labels:
        app: goclient
    spec:
      containers:
        - name: goclient
          image: goclient:1.0.0
          imagePullPolicy: Never
          ports:
            - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: goserver-service
  namespace: golang-ns
spec:
  selector:
    app: goserver
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: goclient-service
  namespace: golang-ns
spec:
  selector:
    app: goclient
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
  type: LoadBalancer
Note that the term Service can be quite ambiguous when used in the context of kubernetes.
Service in your question is used to denote one of your microservices, deployed as containerized applications, running in Pods, managed by 2 separate Deployments.
Service, as mentioned in David Maze's comment, refers to a specific resource type used for exposing your apps/microservices both inside and outside your Kubernetes cluster. This resource type is called a Service. But I assume you know that, as such Services are also included in your examples.
This is why I prefer to use the term microservice when I really want to call "a service" one of the apps (clients, servers, whatever...) deployed on my Kubernetes cluster. And yes, this is a really important distinction, as talking about communication from one Service to another Service (the Kubernetes resource type) doesn't make any sense at all. Your Pod can communicate with a different Pod via a Service that exposes this second Pod, but Services don't communicate with each other at all. I hope this is clear.
So in order to expose one of your microservices within your cluster and make it easily accessible to other microservices running on the same cluster, use a Service. But what you really need in your case is its simplest form. ❗There is no need for using the LoadBalancer type here. In your case you want to expose your Deployment named goserver to make it accessible by Pods from the second Deployment, named goclient, not by external clients sending requests from the public Internet.
Note that the LoadBalancer type that you used in your Services' yaml manifests has a completely different purpose - it is used for exposing your app to clients reaching it from outside your Kubernetes cluster, and is mainly applicable in cloud environments.
So again, what you need in your case is the simplest Service (often called ClusterIP, as it is the default Service type) which exposes a Deployment within the cluster. ⚠️ Remember that a ClusterIP Service also has load-balancing capabilities.
OK, enough of explanations, let's move on to the practical part. As I said, it's really simple and it can be done with one command:
kubectl expose deployment goserver --port 8080 --namespace golang-ns
Yes! That's all! It will create a Service named goserver (there is no reason to name it differently than the Deployment it exposes) which will expose the Pods belonging to the goserver Deployment within your Kubernetes cluster, making them easily accessible (and discoverable) via its DNS name.
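You can verify what was created (assuming kubectl access to the cluster):
kubectl get svc goserver -n golang-ns        # the new ClusterIP Service
kubectl get endpoints goserver -n golang-ns  # the goserver Pod IPs behind it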
If you prefer declarative Service definition, here it is as well:
apiVersion: v1
kind: Service
metadata:
name: goserver
namespace: golang-ns
spec:
selector:
app: goserver
ports:
- port: 8080
Your goclient Pods need only the Service name, i.e. goserver, to access the goserver Pods, as they are deployed in the same namespace (golang-ns). If you need to access them from a Pod deployed to a different namespace, you need to use <servicename>.<namespace>, i.e. goserver.golang-ns. You can also use the fully qualified domain name (FQDN) (see the official docs here):
my-svc.my-namespace.svc.cluster-domain.example
which in your case may look as follows:
goserver.golang-ns.svc.cluster.local
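For example, calling goserver from the goclient Deployment (a sketch; assumes the goclient image ships curl and that goserver answers HTTP on /):
kubectl exec -n golang-ns deploy/goclient -- curl -s http://goserver:8080/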
As to:
In addition to this, if I expose goserver through ingress, will this change?
❗Unless you want to expose your goserver to the external world, don't use Ingress, you don't need it.

Load distribution: All HTTP requests are getting redirected to a single pod in a k8s cluster

I have created a very simple Spring Boot application with only one REST service. This app is converted into a docker image ("springdockerimage:1") and deployed in the Kubernetes cluster with 3 replicas. The contents of my "Deployment" definition are as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: springapp
  labels:
    app: distributiondemo
spec:
  selector:
    matchLabels:
      app: distributiondemo
  replicas: 3
  template:
    metadata:
      labels:
        app: distributiondemo
    spec:
      containers:
        - name: spring-container
          image: springdockerimage:1
I have created a service for the above deployment as follows:
apiVersion: v1
kind: Service
metadata:
  name: springservice
  labels:
    app: distributiondemo
spec:
  selector:
    app: distributiondemo
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
      name: spring-port
      nodePort: 32000
  type: NodePort
After deploying both of the above YAML files (deployment and service), I noticed that everything was deployed as expected, i.e., 3 replicas were created and my service has 3 endpoints as well.
Since I am using minikube for my local testing, I am port-forwarding and accessing the application as kubectl port-forward deployment.apps/springapp 40002:8080.
But one thing I noticed is that all my HTTP requests are getting redirected to only one pod.
while true ; do curl http://localhost:40002/docker-java-app/test ;done
I am not getting where exactly I am doing it wrong. Any help would be appreciated. Thank you.
Load balancing might not work with port-forwarded ports, as port-forwarding may redirect traffic directly to a single pod (read more here). The K8s Service is the feature that gives you that load-balancing capability.
So you can try one of the options below instead:
Use http://your_service_dns_name:8080/docker-java-app/test
Use http://service_cluster_ip:8080/docker-java-app/test
Use http://any_host_ip_from_k8s_cluster:32000/docker-java-app/test
Options 1 and 2 work only if you are accessing those URLs from a host which is part of the K8s cluster. Option 3 just needs connectivity to the target host and port from wherever you are accessing the URL.
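For example, to exercise option 1 from inside the cluster, you can run a throwaway client Pod (a sketch; curlimages/curl is just one commonly used image for this, and the Service is assumed to be in the default namespace):
kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never -- \
  sh -c 'while true; do curl -s http://springservice:8080/docker-java-app/test; sleep 1; done'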

Disable Kubernetes replica set load balancing

I have a Kubernetes cluster of 3 nodes.
A sample deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
I do not have an ingress, but I do have an external load balancer that round-robins the traffic across 80.11.12.10, 80.11.12.11, and 80.11.12.12. So I set up my service like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
  externalIPs:
    - 80.11.12.10
    - 80.11.12.11
    - 80.11.12.12
The problem is that, due to the built-in Kubernetes Service load balancer, the traffic gets load-balanced twice. Aside from being unnecessary, this spoils connection persistence. Is there a way to force Kubernetes to forward traffic to the pod on the local machine for each node?
If you set service.spec.externalTrafficPolicy to the value Local, kube-proxy only proxies requests to local endpoints and does not forward traffic to other nodes.
kubectl patch svc servicename -p '{"spec":{"externalTrafficPolicy":"Local"}}'
If there are no local endpoints, packets sent to the node are dropped.
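The same setting expressed declaratively, applied to the nginx-service manifest above (a sketch):
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  externalTrafficPolicy: Local   # keep traffic on the node that received it
  ports:
    - protocol: TCP
      port: 80
  externalIPs:
    - 80.11.12.10
    - 80.11.12.11
    - 80.11.12.12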
For a ClusterIP-type Service you need to use Service Topology:
Service Topology enables a service to route traffic based upon the
Node topology of the cluster. For example, a service can specify that
traffic be preferentially routed to endpoints that are on the same
Node as the client, or in the same availability zone.
It's an alpha feature available from Kubernetes 1.17 which needs to be turned on by enabling the ServiceTopology feature gate.
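Once the feature gate is on, the preference is expressed via the (alpha) topologyKeys field on the Service, e.g. (a sketch; this field was later deprecated in favor of topology-aware routing):
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
  topologyKeys:
    - "kubernetes.io/hostname"   # prefer endpoints on the same node as the client
    - "*"                        # otherwise fall back to any endpoint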
kube-proxy can use IPVS to manage load balancing in recent k8s versions. ipvsadm can be used at the node level to choose which load-balancing algorithm you want IPVS to use. According to https://linux.die.net/man/8/ipvsadm, you can choose between 8 load-balancing algorithms: round-robin, weighted round-robin, least-connection, weighted least-connection, locality-based least-connection, locality-based least-connection with replication, destination hashing, and source hashing. For additional information, check the documentation for the --scheduler option of ipvsadm.
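For example (a sketch; run on a node, and only meaningful if kube-proxy actually runs in IPVS mode):
sudo ipvsadm -Ln   # list virtual servers and the scheduler currently in use
# kube-proxy selects the algorithm via its --ipvs-scheduler flag, e.g.:
#   kube-proxy --proxy-mode=ipvs --ipvs-scheduler=sh   # sh = source hashing, helps connection persistence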

Setting environment variables based on services in Kubernetes

I am running a simple app based on an API and a web interface in Kubernetes. However, I can't seem to get the API to talk to the web interface. In my local environment, I just define a variable API_URL in the web interface with e.g. localhost:5001 and the web interface correctly connects to the API. As the API and web interface run in different pods, I need to make them talk to each other via services in Kubernetes. So far, this is what I am doing, but without any luck.
I set up a deployment for the API:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: api
  template:
    metadata:
      labels:
        component: api
    spec:
      containers:
        - name: api
          image: gcr.io/myproject-22edwx23/api:latest
          ports:
            - containerPort: 5001
I attach a service to it:
apiVersion: v1
kind: Service
metadata:
  name: api-cluster-ip-service
spec:
  type: NodePort
  selector:
    component: api
  ports:
    - port: 5001
      targetPort: 5001
and then create a web deployment that should connect to this API:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      containers:
        - name: web
          image: gcr.io/myproject-22edwx23/web:latest
          ports:
            - containerPort: 5000
          env:
            - name: API_URL
              value: http://api-cluster-ip-service:5001
Afterwards, I add a service for the web interface + ingress etc., but that seems irrelevant to the issue. I am wondering whether setting API_URL to http://api-cluster-ip-service:5001 correctly picks up the host of the API.
Or can I not rely on Kubernetes resolving the appropriate DNS for the API, and should the web app call the API via the public internet?
If you want to check the API_URL variable value, simply run
kubectl exec -it web-deployment-pod env | grep API_URL
The kube-dns service listens for service and endpoint events from the Kubernetes API and updates its DNS records as needed. These events are triggered when you create, update or delete Kubernetes services and their associated pods.
kubelet sets each new pod's search option in /etc/resolv.conf
Still, if you want to make HTTP calls from one pod to another via a cluster Service, it is recommended to refer to the Service by its cluster DNS name as follows:
api-cluster-ip-service.default.svc.cluster.local
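Applied to the web Deployment above, the env entry could become (assuming the Service lives in the default namespace):
          env:
            - name: API_URL
              value: http://api-cluster-ip-service.default.svc.cluster.local:5001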
You should have the Service IP assigned to an env variable within your web pod, so there's no need to reinvent it:
sukhoversha@sukhoversha:~/GCP$ kk exec -it web-deployment-675f8fcf69-xmqt8 env | grep -i service
API_CLUSTER_IP_SERVICE_PORT=tcp://10.31.253.149:5001
API_CLUSTER_IP_SERVICE_PORT_5001_TCP=tcp://10.31.253.149:5001
API_CLUSTER_IP_SERVICE_PORT_5001_TCP_PORT=5001
API_CLUSTER_IP_SERVICE_PORT_5001_TCP_ADDR=10.31.253.149
Read more about DNS for Services here. A Service also defines environment variables naming the host and port.
If you want to use environment variables you can do the following:
Example in Python:
import os

# The variable names are derived from the Service name: "api-cluster-ip-service"
# becomes API_CLUSTER_IP_SERVICE_SERVICE_HOST / API_CLUSTER_IP_SERVICE_SERVICE_PORT.
API_URL = os.environ['API_CLUSTER_IP_SERVICE_SERVICE_HOST'] + ":" + os.environ['API_CLUSTER_IP_SERVICE_SERVICE_PORT']
Notice that the environment variable is based on your service name. If you want to check all environment variables available in a Pod:
kubectl get pods #get {pod name}
kubectl exec -it {pod_name} printenv
P.S. Be careful: a Pod gets its environment variables at creation time, so it will not see variables for Services created after the Pod itself.
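If that happens, recreating the Pods is enough to pick up the new variables, e.g.:
kubectl rollout restart deployment web-deployment   # requires kubectl >= 1.15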

Kubernetes to find Pod IP from another Pod

I have the following pods hello-abc and hello-def.
And I want to send data from hello-abc to hello-def.
How would pod hello-abc know the IP address of hello-def?
And I want to do this programmatically.
What's the easiest way for hello-abc to find out where hello-def is?
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-abc-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-abc
    spec:
      containers:
        - name: hello-abc
          image: hello-abc:v0.0.1
          imagePullPolicy: Always
          args: ["/hello-abc"]
          ports:
            - containerPort: 5000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-def-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-def
    spec:
      containers:
        - name: hello-def
          image: hello-def:v0.0.1
          imagePullPolicy: Always
          args: ["/hello-def"]
          ports:
            - containerPort: 5001
---
apiVersion: v1
kind: Service
metadata:
  name: hello-abc-service
spec:
  ports:
    - port: 80
      targetPort: 5000
      protocol: TCP
  selector:
    app: hello-abc
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: hello-def-service
spec:
  ports:
    - port: 80
      targetPort: 5001
      protocol: TCP
  selector:
    app: hello-def
  type: NodePort
Preface
Since you have defined a service that routes to each deployment, if you have deployed both services and deployments into the same namespace, you can in many modern kubernetes clusters take advantage of kube-dns and simply refer to the service by name.
Unfortunately, if kube-dns is not configured in your cluster (although that is unlikely), you cannot refer to it by name.
You can read more about DNS records for services here
In addition, Kubernetes features "Service Discovery", which exposes the ports and IPs of your services into any container which is deployed into the same namespace.
Solution
This means that, to reach hello-def, you can do so like this:
curl http://hello-def-service:${HELLO_DEF_SERVICE_SERVICE_PORT}
based on Service Discovery https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
Caveat: It's very possible that if the Service port changes, only pods created after the change in the same namespace will receive the new environment variables.
External Access
In addition, you can also reach your service externally since you are using the NodePort feature, as long as your NodePort range is accessible from outside.
This would require you to access your service by node-ip:nodePort
You can find out the NodePort which was randomly assigned to your service with kubectl describe svc/hello-def-service
Ingress
To reach your service from outside you should implement an ingress service such as nginx-ingress
https://github.com/helm/charts/tree/master/stable/nginx-ingress
https://github.com/kubernetes/ingress-nginx
Sidecar
If your 2 services are tightly coupled, you can include both in the same pod using the Kubernetes sidecar pattern. In this case, both containers in the pod share the same virtual network adapter and are accessible via localhost:$port (see the sketch after the link below).
https://kubernetes.io/docs/concepts/workloads/pods/pod/#uses-of-pods
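A minimal sketch of that layout (hypothetical combined Pod; each container reaches the other on localhost):
apiVersion: v1
kind: Pod
metadata:
  name: hello-combined
spec:
  containers:
    - name: hello-abc
      image: hello-abc:v0.0.1
      ports:
        - containerPort: 5000   # hello-def reaches this via localhost:5000
    - name: hello-def
      image: hello-def:v0.0.1
      ports:
        - containerPort: 5001   # hello-abc reaches this via localhost:5001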
Service Discovery
When a Pod is run on a Node, the kubelet adds a set of environment
variables for each active Service. It supports both Docker links
compatible variables (see makeLinkVariables) and simpler
{SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables, where the
Service name is upper-cased and dashes are converted to underscores.
Read more about service discovery here:
https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
You should be able to reach hello-def-service from pods in hello-abc via DNS as specified here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services
However, kube-dns or CoreDNS has to be configured/installed in your k8s cluster before DNS records can be utilized in your cluster.
Specifically, you should be able to reach hello-def-service via the DNS record http://hello-def-service for the service running in the same namespace as hello-abc-service.
And you should be able to reach hello-def-service running in another namespace other_namespace via the DNS record hello-def-service.other_namespace.svc.cluster.local.
If, for some reason, you do not have DNS add-ons installed in your cluster, you can still find the virtual IP of the hello-def-service via environment variables in hello-abc pods, as documented here.