Kubernetes service - How to differentiate identical target ports

I have two different Deployments creating two different Pods, each of which spins up a container serving a different purpose. Coincidentally, the port exposed by both of those containers is 8080.
I created a single Service with two ports, 8080 and 8081 (type=LoadBalancer), to expose both of those Deployments. When I hit the LoadBalancer URL I get a response from container 1, and after hitting refresh a few times I get a response from container 2. This behavior is the same on both ports.
I know that changing the port exposed in the Dockerfile of one of those containers would solve this problem, but just out of curiosity, as a newbie to Kubernetes: is there a different approach to handle this scenario?

You could use an Ingress. Here is an example.
Instead of creating one Service for both Pods, create one Service per Pod and make sure the selector labels are different for each. Set the Service type to NodePort. Then create an Ingress with rules like the following (a sketch of the matching Services is shown after the rules):
spec:
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
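For completeness, the two Services behind those rules might look roughly like this; the labels are assumptions, and both Services forward to the containers' shared port 8080 while keeping them apart via the selectors:
apiVersion: v1
kind: Service
metadata:
  name: tea-svc
spec:
  type: NodePort
  selector:
    app: tea          # must match only the first Deployment's Pods
  ports:
  - port: 80
    targetPort: 8080  # both containers listen on 8080; the selector keeps them apart
---
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
spec:
  type: NodePort
  selector:
    app: coffee       # must match only the second Deployment's Pods
  ports:
  - port: 80
    targetPort: 8080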
Now, there are many Ingress solutions out there. Ingress in Kubernetes is just a networking spec: a data model that represents your networking logic. The various ingress controllers take that spec and implement the logic with their given solution. Here is a link to the docs for the NGINX ingress controller: https://www.nginx.com/products/nginx/kubernetes-ingress-controller/

Related

How to use ingress so that services can talk to each other?

On AWS EKS, I have three pods in a cluster, each of which has been exposed by a Service. The problem is that the Services cannot communicate with each other, as discussed here: Error while doing inter pod communication on EKS. It has not been answered yet, but further searching suggested it can be done through Ingress. I am confused as to how to do it. Can anybody help?
Code:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: test
  name: ingress-test
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: server-service
            port:
              number: 8000
My server-service has APIs like /api/v1/getAll, /api/v1/updateAll, etc.
So, what should I write in path? And what should I do for a database service?
And say in the future I create another microservice and expose another Service with APIs like /api/v1/showImage and /api/v1/deleteImage: will I have to write all the paths in the Ingress, or is there another way to make it work?
An Ingress is a really good solution to expose both a frontend and a backend at the same domain with different paths (but, reading your other question, it will be of no help in exposing the database).
With that said, you don't have to write all the paths in the Ingress (unless you want to); you can instead use pathType: Prefix, as is already the case in your example.
Let me link you to the Examples section of the documentation, which explains how this works really well. Basically, you can add a rule with:
path: /api
pathType: Prefix
In order to expose your backend under /api and all the child paths.
The case where a second backend wants to be exposed under /api, same as the first one, is much more complex. If two Pods want to be exposed at the same paths, you will probably need to structure the subpaths in a way that differentiates them.
For example:
Backend A
/api/v1/foo/listAll
/api/v1/foo/save
/api/v1/foo/delete
Backend B
/api/v1/bar/listAll
/api/v1/bar/save
/api/v1/bar/delete
Then you could expose one under the prefix /api/v1/foo (pathType: Prefix) and the other under /api/v1/bar (pathType: Prefix), for example with rules like the following.
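As a rough sketch (the service names backend-a and backend-b, and the port, are assumptions), the corresponding rules in the networking.k8s.io/v1 syntax of your example could look like:
- path: /api/v1/foo
  pathType: Prefix
  backend:
    service:
      name: backend-a
      port:
        number: 8000
- path: /api/v1/bar
  pathType: Prefix
  backend:
    service:
      name: backend-b
      port:
        number: 8000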
As another alternative, you may want to expose the backends at paths different from what they actually expect, using a rewrite-target rule.
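For example, with the NGINX ingress controller (other controllers have their own rewrite mechanisms, and the path, service name and port below are illustrative), a capture-group rewrite strips the public prefix before the request reaches the backend:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-example
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /images(/|$)(.*)           # public prefix
        pathType: ImplementationSpecific
        backend:
          service:
            name: image-service          # hypothetical backend
            port:
              number: 8000
A request to /images/api/v1/showImage would then reach image-service as /api/v1/showImage.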

When to use NodePort, ClusterIP, LoadBalancer, Headless as backend to Ingress?

The following example would expose the services externally. So why are NodePort/LoadBalancer allowed in this context; wouldn't that be redundant?
rules:
- host: lab.example.com
  http:
    paths:
    - path: /service-root
      backend:
        serviceName: clusterip-svc
        servicePort: 8080
    - path: /service-one
      backend:
        serviceName: nodeport-svc
        servicePort: 8080
    - path: /service-two
      backend:
        serviceName: headless-svc
        servicePort: 8080
Is there any particular advantage of using NodePort, ClusterIP, LoadBalancer, or Headless as back-end to Ingress?
Services are a way to define a logical set of Pods and a policy to access them. Pods are ephemeral resources, so Services make it possible to connect to them regardless of their IP addresses. They usually use selectors to do so. There are different types of Services in Kubernetes, and these are the main differences.
ClusterIP is the default type of Service. It exposes the Service on a cluster-internal IP and makes it available only from within the cluster.
NodePort exposes the Service on each Node's IP at a static port. This option also creates a ClusterIP Service, to which the NodePort routes.
LoadBalancer goes a step further and exposes the Service externally using a cloud provider's load balancer. The NodePort and ClusterIP resources are created automatically.
Follow this link to get more information about different ServiceTypes.
And there are Headless Services. You would use these when you don't need load-balancing and a single Service IP. You can follow this section in documentation for further clarification.
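As a minimal illustration (names, labels, and ports below are placeholders), the manifests differ mainly in the type field, and a headless Service is simply one with clusterIP set to None:
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: ClusterIP          # or NodePort / LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-headless
spec:
  clusterIP: None          # headless: no virtual IP, DNS returns the Pod IPs directly
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080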
Answering your question - it depends on your use-case, you might find different advantages using these Services.
So why is NodePort/LB allowed in this context, would not that be redundant?
Mostly, no one does it that way; the Ingress is usually kept as the single point of entry to the cluster. Behind the Ingress, mostly ClusterIP Services will be running, but it's up to you how you want to set it up.
Is there any particular advantage of using NodePort, ClusterIP, LoadBalancer, or Headless as back-end to Ingress?
I am not aware of any particular advantage, as each type has its own benefits. The Ingress will forward the traffic to whatever Service you provide; it doesn't check the Service type.
NodePort and LoadBalancer can be useful for specific requirements.

Difference between Kubernetes Service and Ingress

I want to create a load balancer for 4 HTTP server pods.
I have one MySQL pod too.
Everything works fine: I have created a LoadBalancer Service for HTTP, and another Service for MySQL.
I have read that I should create an Ingress too, but I do not understand what an Ingress is, because everything already works with Services.
What is the value-add of an Ingress?
Thanks
Since you have a single service serving HTTP, your current solution using the LoadBalancer Service type works fine. Imagine you have multiple HTTP-based services that you want to make externally available on different routes. You would have to create a LoadBalancer Service for each of them, and by default you would get a different IP address for each. Instead you can use an Ingress, which sits in front of these services and does the routing.
Example ingress manifest:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /cart
        backend:
          serviceName: cart
          servicePort: 80
      - path: /payment
        backend:
          serviceName: payment
          servicePort: 80
Here you have two different HTTP services exposed by an Ingress on a single IP address. You don't need a LoadBalancer per service when using an Ingress.
A Service of type LoadBalancer relies on a third-party load balancer and IP provisioning mechanism that deals with getting Layer 3 (IP) traffic from outside to the Nodes on some high-numbered NodePort.
An Ingress relies on a third-party Ingress Controller to accept Layer 3 traffic, open it up to Layer 7 (e.g., terminate TLS) and do protocol-specific routing (e.g., by HTTP FQDN/path) to some other Service (probably of type ClusterIP) inside the cluster.
If all your services should be explicitly exposed without any further filtering or other options, a LoadBalancer and no Ingress might be the right choice, but LoadBalancers don't do much on their own: they just expose the Service to the outside world, with very little in the way of traffic shaping, A/B testing, etc.
However, if you want to put multiple services behind a single IP/VIP/certificate, or you want to direct traffic in unusual ways (based on headers, client type, percentage weighting, etc.), you'd probably want an Ingress (which itself would be exposed by a LoadBalancer Service).
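To illustrate that last point, a cluster often has just one LoadBalancer Service, sitting in front of the ingress controller pods, while application Services stay ClusterIP. The namespace, labels, and ports below are assumptions roughly matching a generic ingress-nginx installation:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # the controller Pods
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443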

Kubernetes (AKS) : Expose multiple ports of different service to common load balancer

I am setting up a Kubernetes cluster on Azure (using AKS) to host Elasticsearch, Kibana, a custom API, a UI, nginx, etc.
As I don't want a separate public IP per service, I need a way to set up a common load balancer/Ingress, add the port numbers there, and set up routing.
I tried the approach mentioned in this Stack Overflow question - How to expose multiple port using a load balancer services in kubernetes - but it didn't work out.
As there are clients of different technologies connecting to my cluster, I need to have a service per technology.
Basically I need to expose 9200, 5601 and 80, all on the same IP, but when accessing the IP with a given port the user must be redirected to the appropriate service.
Below is a sample configuration of what I am looking for.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
spec:
  rules:
  - host: myurl.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: elasticsearch
          servicePort: 9200
      - path: /
        backend:
          serviceName: kibana
          servicePort: 5602
Any thoughts?
For your issue, an Ingress is what you want. You can create all your Services as usual and expose the ports for each Service. Then create the Ingress with a public IP and create the Ingress routes that route access from the Ingress to your backend Services.
Take a look at the example in Create an ingress controller in Azure Kubernetes Service (AKS). It shows the steps that need to be done. And if you have more questions, please let me know.
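As a sketch of what that can look like once an ingress controller is installed (the paths, annotations, and controller below are assumptions, and Kibana/Elasticsearch may additionally need their base-path settings adjusted), each backend gets its own path instead of two identical / paths:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: myurl.domain.com
    http:
      paths:
      - path: /elasticsearch(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: elasticsearch
            port:
              number: 9200
      - path: /kibana(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: kibana
            port:
              number: 5601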
I just finished doing this on a mail server project using HAProxy ingress controller (https://github.com/helm/charts/tree/master/incubator/haproxy-ingress) in TCP mode. Works a treat. Working config can be found at https://github.com/funkypenguin/docker-mailserver/blob/fa9bd9c9ed9b66aa6ee1c36ca19a73c558682f24/helm-chart/docker-mailserver/values.yaml#L300
Sorry, I posted another similar question. I was finally able to solve the issue with simple tagging.
No extra tools/code required.
Please refer to my post for the answer: Kubernetes: Expose multiple services internally & externally

Kubernetes services for different application tracks

I'm working on an application deployment environment using Kubernetes where I want to be able to spin up copies of my entire application stack based on a Git reference for the primarily web application, e.g. "master" and "my-topic-branch". I want these copies of the app stack to coexist in the same cluster. I can create Kubernetes services, replication controllers, and pods that use a "gitRef" label to isolate the stacks from each other, but some of the pods in the stack depend on each other (via Kubernetes services), and I don't see an easy, clean way to restrict the services that are exposed to a pod.
There a couple ways to achieve it that I can think of, but neither are ideal:
Put each stack in a separate Kubernetes namespace. This provides the cleanest isolation, in that there are no resource name conflicts and the applications can have the DNS hostnames for the services they depend on hardcoded, but seems to violate what the documentation says about namespaces†:
It is not necessary to use multiple namespaces just to separate slightly different resources, such as different versions of the same software: use labels to distinguish resources within the same namespace.
This makes sense, as putting the app stacks in different resources would negate all the usefulness of label selectors. I'd just name the namespace with the Git reference and all the other resources wouldn't need to be filtered at all.
Create a copy of each service for each copy of the application stack, e.g. "mysql-master" and "mysql-my-topic-branch". This has the advantage that all the app stacks can coexist in the same namespace, but the disadvantage of not being able to hardcode the DNS hostname for the service in the applications that need them, e.g. having a web app target the hostname "mysql" regardless of which instance of the MySQL Kubernetes service it actually resolves to. I would need to use some mechanism of injecting the correct hostname into the pods or having them figure it out for themselves somehow.
Essentially what I want is a way of telling Kubernetes, "Expose this service's hostname only to pods with the given labels, and expose it to them with the given hostname" for a specific service. This would allow me to use the second approach without having to have application-level logic for determining the correct hostname to use.
What's the best way to achieve what I'm after?
[†] http://kubernetes.io/v1.1/docs/user-guide/namespaces.html
The documentation on putting different versions in different namespaces is a bit incorrect, I think. It is actually the point of namespaces to separate things completely like this. You should put a complete version of each "track" or deployment stage of your app into its own namespace.
You can then use hardcoded service names - "http://myservice/" - as DNS will by default resolve to the local namespace.
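To make that concrete, a Pod running in namespace dev-1 can reach a Service in its own namespace by the short name, while reaching another namespace requires a qualified name (the names below are illustrative):
# inside a Pod in namespace dev-1:
#   http://myservice/        -> resolves to myservice.dev-1.svc.cluster.local
#   http://myservice.dev-2/  -> resolves to myservice.dev-2.svc.cluster.local
env:
- name: BACKEND_URL
  value: "http://myservice/"   # same-track service, no namespace hardcoded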
For ingresses I have copied my answer here from the GitHub issue on cross-namespace ingresses.
You should use the approach that our group is using for Ingresses.
Think of an Ingress not so much as a LoadBalancer, but just as a document specifying some mappings between URLs and Services within the same namespace.
An example, from a real document we use:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: dev-1
spec:
  rules:
  - host: api-gateway-dev-1.faceit.com
    http:
      paths:
      - backend:
          serviceName: api-gateway
          servicePort: 80
        path: /
  - host: api-shop-dev-1.faceit.com
    http:
      paths:
      - backend:
          serviceName: api-shop
          servicePort: 80
        path: /
  - host: api-search-dev-1.faceit.com
    http:
      paths:
      - backend:
          serviceName: api-search
          servicePort: 8080
        path: /
  tls:
  - hosts:
    - api-gateway-dev-1.faceit.com
    - api-search-dev-1.faceit.com
    - api-shop-dev-1.faceit.com
    secretName: faceitssl
We make one of these for each of our namespaces for each track.
Then, we have a single namespace with an Ingress Controller, which runs automatically configured NGINX pods. Another AWS load balancer points to these pods, which are exposed on a NodePort and run as a DaemonSet so that exactly one runs on every node in our cluster.
As such, the traffic is then routed:
Internet -> AWS ELB -> NGINX (on node) -> Pod
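A minimal sketch of the node-facing piece of that setup (namespace, labels, and node ports are assumptions): a NodePort Service in front of the controller DaemonSet, with the AWS load balancer forwarding to the node port on every node.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: ingress
spec:
  type: NodePort
  selector:
    app: nginx-ingress-controller   # matches the controller DaemonSet's Pod labels
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080                 # the external load balancer targets this port on each node
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443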
We keep the isolation between namespaces while using Ingresses as they were intended. It's not correct or even sensible to use one Ingress to hit multiple namespaces; it just doesn't make sense, given how they are designed. The solution is to use one Ingress per namespace, with a cluster-scoped ingress controller which actually does the routing.
All an Ingress is to Kubernetes is an object with some data on it. It's up to the Ingress Controller to do the routing.
See the document here for more info on Ingress Controllers.