Can I use multiple names for a Kubernetes service? - kubernetes

In Kubernetes, one communicates with a specific service X by making HTTP requests to http://X:9999, where X is the service name. I wonder, can one add multiple names, or aliases, that point to http://X:9999? I.e. can I forward/point http://Y:9999 to http://X:9999?

Answer
Yes, you can have multiple hostnames point to the same Pod(s).
You can achieve this by creating multiple Services with the same label selectors.
Background
A Service creates Endpoints pointing to Pod IPs based on its label selector.
Services match their selectors against Pod labels.
If multiple Services (with different names) have the same label selector, each of them will create Endpoints pointing to the same Pods.
Example
First service:
apiVersion: v1
kind: Service
metadata:
  name: nginx1
  namespace: nginx
spec:
  selector:
    app: nginx
  ...
Second service:
apiVersion: v1
kind: Service
metadata:
  name: nginx2
  namespace: nginx
spec:
  selector:
    app: nginx
  ...
Each Service will create an endpoint pointing to any Pods with the label app: nginx.
So you could hit the same Pods using nginx2:<port> and nginx1:<port>.
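To confirm that both Services resolve to the same Pods, you can compare their Endpoints (a quick sanity check, assuming the nginx namespace from the manifests above):
kubectl get endpoints nginx1 nginx2 -n nginx
Both Endpoints objects should list the same Pod IP addresses.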

Related

Kubernetes (GKE) names, labels, selectors, matchLabels in manifest files

I have a question about labels and names, in this example manifest file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
I can see that the name of the deployment is "nginx-deployment", and the pod name is "nginx"? Or is that the running container?
Then I see in the console that the pods have a hash attached to the end of their names; I believe this is the revision number?
I just want to tell the names apart from the labels and the matchLabels, so that, for example, I can use this service manifest to expose the pods with a certain label:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 80
Will this service expose all pods with the selector app: nginx?
Thanks
The Deployment has your specified name "nginx-deployment".
However, you did not define a Pod with a fixed name; you defined a template for the Pods managed by this Deployment.
The Deployment manages 3 Pods (because of your replicas: 3), so it will use the template to build these three Pods.
There will also be a ReplicaSet with a hash in its name, and this ReplicaSet will manage the Pods, but this is better seen by following the example linked below.
Since a Deployment can manage multiple Pods (like your example with 3 replicas) or needs a new Pod when updating them, it will not use exactly the name specified in the template, but will always append a hash value to keep the Pod names unique.
But then you would have the problem of load-balancing all Pods behind one Kubernetes Service, because they have different names.
This is why you define a label app: nginx in your template, so all 3 Pods will carry this label regardless of their names and any other labels set by Kubernetes.
The Service uses the selector to find the correct Pods. In your case it will search for them by the label app: nginx.
So yes, the Service will expose all 3 Pods of your Deployment and will load-balance traffic between them.
You can use --show-labels with kubectl get pods to see the names and the assigned labels.
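For example (an illustration; the exact Pod names and hash suffixes will differ in your cluster):
kubectl get pods --show-labels
A Pod might show up as nginx-deployment-66b6c48dd5-abcde with labels app=nginx,pod-template-hash=66b6c48dd5, where the pod-template-hash label is added automatically by Kubernetes.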
For a more complete example see:
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

k8s communicate from one service to another

I have two services deployed on the same k8s (minikube) cluster. What is the url/approach I should use for one service to communicate with another service? I tried searching a bit on the web, but most of the results are about communicating with an external DB, which is not what I'm after. This is what my deployments look like. I am looking for the goclient to be able to communicate with goserver. I know I need to go through the service but am not sure what the url should look like. And is this dynamically discoverable? In addition to this, if I expose goserver through ingress will this change?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: goserver
  namespace: golang-ns
  labels:
    app: goserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: goserver
  template:
    metadata:
      labels:
        app: goserver
    spec:
      containers:
      - name: goserver
        image: goserver:1.0.0
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: goclient
  namespace: golang-ns
  labels:
    app: goclient
spec:
  replicas: 1
  selector:
    matchLabels:
      app: goclient
  template:
    metadata:
      labels:
        app: goclient
    spec:
      containers:
      - name: goclient
        image: goclient:1.0.0
        imagePullPolicy: Never
        ports:
        - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: goserver-service
  namespace: golang-ns
spec:
  selector:
    app: goserver
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: goclient-service
  namespace: golang-ns
spec:
  selector:
    app: goclient
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8081
  type: LoadBalancer
Note that the term Service can be quite ambiguous when used in the context of kubernetes.
Service in your question is used to denote one of your microservices, deployed as containerized applications, running in Pods, managed by 2 separate Deployments.
Service, as mentioned in David Maze's comment, refers to a specific resource type which is used for exposing your apps/microservices both inside and outside your Kubernetes cluster. This resource type is called a Service. But I assume you already know that, as such Services are also included in your examples.
This is the reason why I prefer to use the term microservice if I really want to call one of the apps (clients, servers, whatever...) deployed on my Kubernetes cluster "a service". And yes, this is a really important distinction, as talking about communication from one Service to another Service (the Kubernetes resource type) doesn't make any sense at all. Your Pod can communicate with a different Pod via a Service that exposes this second Pod, but Services don't communicate with each other at all. I hope this is clear.
So in order to expose one of your microservices within your cluster and make it easily accessible for other microservices running on the same cluster, use a Service. But what you really need in your case is its simplest form. ❗There is no need for using the LoadBalancer type here. In your case you want to expose your Deployment named goserver to make it accessible by Pods from the second Deployment, named goclient, not by external clients sending requests from the public Internet.
Note that the LoadBalancer type that you used in your Services' yaml manifests has a completely different purpose - it is used for exposing your app to clients reaching it from outside your Kubernetes cluster, and is mainly applicable in cloud environments.
So again, what you need in your case is the simplest Service (often called ClusterIP, as it is the default Service type) which exposes a Deployment within the cluster. ⚠️ Remember that a ClusterIP Service also has load-balancing capabilities.
OK, enough of explanations, let's move on to the practical part. As I said, it's really simple and it can be done with one command:
kubectl expose deployment goserver --port 8080 --namespace golang-ns
Yes! That's all! It will create a Service named goserver (there is no reason to name it differently than the Deployment it exposes) which will expose the Pods belonging to the goserver Deployment within your Kubernetes cluster, making it easily accessible (and discoverable) via its DNS name.
If you prefer declarative Service definition, here it is as well:
apiVersion: v1
kind: Service
metadata:
  name: goserver
  namespace: golang-ns
spec:
  selector:
    app: goserver
  ports:
  - port: 8080
Your golang-client Pods only need the Service name, i.e. goserver, to access the goserver Pods, as they are deployed in the same namespace (golang-ns). If you need to access them from a Pod deployed to a different namespace, you need to use <servicename>.<namespace>, i.e. goserver.golang-ns. You can also use the fully qualified domain name (FQDN) (see the official docs here):
my-svc.my-namespace.svc.cluster-domain.example
which in your case may look as follows:
goserver.golang-ns.svc.cluster.local
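To sanity-check the connection from within the cluster, you can run something like the following (a hedged example; it assumes the goclient image contains wget, and that the request path is simply /):
kubectl exec -n golang-ns deploy/goclient -- wget -qO- http://goserver:8080/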
As to:
"In addition to this if I expose goserver through ingress will this change?"
❗Unless you want to expose your goserver to the external world, don't use Ingress, you don't need it.

Kubernetes: Only one service endpoint working

I've deployed my Django/React app into K8s and exposed both deployments as a service (ClusterIP).
Whenever I try to call the API service through its ClusterIP:8000, it sometimes refuses the connection. So I checked its endpoints and only one out of the three existing endpoints returns what I expect. I understand that when calling the ClusterIP, it redirects to one of those three endpoints.
Is there any way to 'debug' an incoming service request? Can I modify the number of existing endpoints (so I could limit it to the only working endpoint)? Is there any other way to maybe see logs of the service to find out why only one of the endpoints is working?
I was able to fix it:
I deployed a three-tier application (Django/React/DB) and used the same selector for every deployment, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-xxx-deployment
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
So when exposing this with "kubectl expose deployment/...", the resulting Service selected every Pod carrying that shared label. Since I have three deployments (DB/React/Django), three endpoints were created.
Changing the deployment .yaml like this fixed my error, and only one endpoint was created:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: myapp-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp-web
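To see which Pods actually back a Service, you can inspect its Endpoints (a hedged example; replace the Service name with whatever kubectl expose created in your cluster):
kubectl get endpoints <service-name> -o wide
kubectl describe service <service-name>
If unrelated Pods share the Service's selector labels, they will show up here as extra endpoints.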
Using ClusterIP:8000 does not seem right.
You could replace it with http://$(serviceName).$(namespace):8000/ to address the service correctly.

Connect pods via service name in GCP K8's

I have a number of Services running against Pods hosted within a cluster on Google Cloud K8's.
Service 1 is an Ingress - basic-ingress
Service 2 is a NodeJS API Gateway w/ 2 Pods - security-gateway-svc
Service 3 is a NodeJS API w/ 2 Pods - some-random-api-svc
and so on service 4 / 5 / 6 etc....
My Ingress allows me to access exposed services via a subdomain; however, I would like to move my external APIs behind my Gateway so I can handle auth etc. in the gateway.
What I'd like to do is allow security-gateway-svc to connect to some-random-api-svc without having to go via dns or outside of my cluster.
I figured I could update my ingress so all sub domains use the same service entry and allow the Gateway to figure out where the traffic should go.
I can configure this just fine locally as everything runs on localhost and I specify a port, so it's fairly straightforward.
Is it possible however to expose pods to other pods within a cluster via the service name instead of an actual domain / dns look up?
Your service should be accessible within your cluster via the service name.
Point your gateway entry for each api to the service name.
Something like http://some-random-api-svc should work.
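For example, in the gateway's Deployment you might point an upstream setting at the Service name instead of an external domain (a sketch only; the environment variable name is hypothetical):
env:
- name: SOME_RANDOM_API_URL
  value: "http://some-random-api-svc"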
The easiest way to make Pods reachable within your Kubernetes cluster is to use Services (see the Services documentation). For this you need to create a YAML block that will create an internal hostname, bound by an Endpoints object to your Pod. In addition, a selector will allow you to bind one or multiple Pods to that internal hostname. Here is an example:
---
apiVersion: v1
kind: Service
metadata:
  name: $YOUR_SERVICE_NAME
  namespace: $YOUR_NAMESPACE
  labels:
    app: $YOUR_SERVICE_NAME
spec:
  ports:
  - name: "8000"
    port: 8000
    targetPort: 8000
  selector:
    app: $YOUR_SERVICE_NAME
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $YOUR_SERVICE_NAME
  namespace: $YOUR_NAMESPACE
  labels:
    app: $YOUR_SERVICE_NAME
spec:
  replicas: 1
  selector:
    matchLabels:
      app: $YOUR_SERVICE_NAME
  template:
    metadata:
      labels:
        app: $YOUR_SERVICE_NAME
    spec:
      containers:
      - name: $YOUR_SERVICE_NAME
        image: alpine:latest
      restartPolicy: Always
Finally, use the service name in your ingress controller route to redirect traffic to your api-gateway.
Kubernetes uses CoreDNS to perform in-cluster DNS resolution. By default, all Services are assigned DNS names in the (FQDN) form of <service-name>.<namespace>.svc.cluster.local. So your security-gateway-svc will be able to forward requests to some-random-api-svc via some-random-api-svc.<namespace>, without routing the traffic outside of Kubernetes. Keep in mind that you shouldn't be interacting with pods directly, because pods are ephemeral; always go through Services.
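If you want to verify the in-cluster DNS resolution yourself, a quick check from a throwaway Pod could look like this (a hedged example; the busybox tag, the namespace placeholder, and the default cluster.local domain are assumptions):
kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- nslookup some-random-api-svc.<namespace>.svc.cluster.local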

How to stop all external traffic and allow only inter pod network call within namespace using network policy?

I'm setting up a namespace in my kubernetes cluster to deny any outgoing network calls, like http://company.com, but to allow inter-pod communication within my namespace, like http://my-nginx, where my-nginx is a Kubernetes service pointing to my nginx pod.
How can I achieve this using a network policy? The network policy below helps in blocking all outgoing network calls:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-egress
  namespace: sample
spec:
  policyTypes:
  - Egress
  podSelector: {}
How can I whitelist only the inter-pod calls?
Using Network Policies you can whitelist all pods in a namespace:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-egress-to-sample
  namespace: sample
spec:
  policyTypes:
  - Egress
  podSelector: {}
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: sample
As you probably already know, pods with at least one Network Policy applied to them can only communicate to targets allowed by any Network Policy applied to them.
Names don't actually matter. Selectors (namespaceSelector and podSelector in this case) only care about labels. Labels are key-value pairs associated with resources. The above example assumes the namespace called sample has a label of name=sample.
If you want to whitelist the namespace that my-nginx lives in, first you need to add a label to that namespace (if it doesn't already have one); a kubectl command for this is sketched after the snippet below. name is a good key IMO, and the value can be the name of the service, my-nginx in this particular case (note that : and / cannot appear in a label value, so use just my-nginx rather than http://my-nginx). Then, just using this in your Network Policies will allow you to target the namespace (either for ingress or egress):
- namespaceSelector:
    matchLabels:
      name: my-nginx
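Adding the label itself could look like this (a hedged example; the namespace name is a placeholder):
kubectl label namespace <your-namespace> name=my-nginx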
If you want to allow communication to a service called my-nginx, the service's name doesn't really matter. You need to select the target pods using podSelector, which should be done with the same label that the service uses to know which pods belong to it. Check your service to get the label, and use the key: value in the Network Policy. For example, for a key=value pair of role=nginx you should use
- podSelector:
    matchLabels:
      role: nginx
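To find out which labels your Service actually uses as its selector, you can query it directly (a hedged example using the my-nginx Service name from the question):
kubectl get service my-nginx -o jsonpath='{.spec.selector}'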
This can be done using the following combination of network policies:
# The same as yours
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-egress
  namespace: sample
spec:
  policyTypes:
  - Egress
  podSelector: {}
---
# allows connections to all pods in your namespace from all pods in your namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-namespace-egress
  namespace: sample
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels: {}
---
# allows connections from all pods in your namespace to all pods in your namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-namespace-internal
  namespace: sample
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels: {}
assuming your network policy implementation implements the full spec.
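To confirm the policies exist and target the right Pods, you can list and inspect them (standard kubectl commands, using the sample namespace from above):
kubectl get networkpolicy -n sample
kubectl describe networkpolicy deny-all-egress -n sample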
I am not sure if you can do it using a Kubernetes NetworkPolicy, but you can achieve this with Istio-enabled pods.
Note: first make sure that Istio is installed on your cluster. For installation, see the Istio documentation.
See this quote from Istio's documentation about egress traffic:
"By default, Istio-enabled services are unable to access URLs outside of the cluster because the pod uses iptables to transparently redirect all outbound traffic to the sidecar proxy, which only handles intra-cluster destinations."
Also, you can whitelist domains outside of the cluster by adding a ServiceEntry and a VirtualService to your cluster; see the "Configuring the external services" example in the Istio documentation.
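As a rough sketch of what whitelisting an external host could look like (hedged; the exact apiVersion depends on your Istio release, and company.com on port 443 is just the example from the question):
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: company-com
  namespace: sample
spec:
  hosts:
  - company.com
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS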
I hope it can be useful for you.