I am experimenting with a service discovery scheme on Kubernetes. I have 20+ gRPC services that can be grouped and deployed as applications on Kubernetes. Each application serves several of these services from a common gRPC server. There is a Kubernetes Service to publish this gRPC port, and labels on those Services identify which gRPC services are served there.
For instance, I have an application APP1 serving gRPC services a, b, and c. There is a Service in front of APP1 connected to port 8000, with labels a, b, and c. So when a component in the cluster needs to connect to, say, service "b", it looks up the Services that have the label "b" and connects to port 8000 of one of them. This way, I can group the gRPC services in different ways, deploy them, and they all find each other.
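For illustration, this is roughly what such a Service might look like; the label keys (grpc-a etc.) and names are hypothetical:

```yaml
# Hypothetical Service for APP1. The grpc-* labels advertise which
# gRPC services are reachable through this endpoint.
apiVersion: v1
kind: Service
metadata:
  name: app1
  labels:
    grpc-a: "true"
    grpc-b: "true"
    grpc-c: "true"
spec:
  selector:
    app: app1
  ports:
    - name: grpc
      port: 8000
      targetPort: 8000
```

A client needing "b" could then list Services with `kubectl get svc -l grpc-b=true` (or the equivalent API call) and dial port 8000 on any of the results.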
I started thinking about an alternative approach. Instead of having one Service with labels per app, I want to have multiple Services (one for each gRPC service) with different names, all pointing at the same app:port. In this new scheme, APP1 would have three Services, a, b, and c, all connected to the same app:port. A client would simply look up the name "b" to find the gRPC server "b".
The question is: do you see any potential problems with having multiple Services with different names all connected to the same port of the same app? That is, addresses a:8000, b:8000, and c:8000 all pointing to APP1:8000.
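Sketched out (again with hypothetical names), the alternative scheme would look something like this, with the third Service omitted for brevity:

```yaml
# Two Services with different names, the same selector, and the same port
# (the third Service, c, would be identical apart from its name).
apiVersion: v1
kind: Service
metadata:
  name: a
spec:
  selector:
    app: app1
  ports:
    - name: grpc
      port: 8000
      targetPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: b
spec:
  selector:
    app: app1
  ports:
    - name: grpc
      port: 8000
      targetPort: 8000
```

A client would then simply dial b:8000.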
To be honest, I don't see any problem as long as your application identifies internally which of a:8000, b:8000, or c:8000 the client is trying to talk to. Essentially, you will bind to just a single port (8000) in the container. This is analogous to serving different HTTP endpoints per service, something like https://myendpoint:8000/a, https://myendpoint:8000/b, and https://myendpoint:8000/c.
Note that 8000 is the port inside the container; if you expose the Services outside the cluster (e.g. as NodePort), Kubernetes will pick a port on the node and forward that traffic to 8000 in the container.
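In manifest terms, the container side stays a single port no matter how many Services point at it; a minimal sketch with hypothetical names:

```yaml
# Hypothetical Deployment for APP1: one container, one gRPC port,
# regardless of how many Services point at it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - name: grpc-server
          image: example/app1:latest   # hypothetical image
          ports:
            - containerPort: 8000      # single port serving a, b, and c
```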
Related
I’m installing Jitsi on EKS, but I’m having trouble setting up the network. Jitsi runs two applications, each as its own deployment: one is a web (HTTP/HTTPS) application called Jitsi Meet, and the other is a UDP application called video-bridge. The Jitsi Meet web page tries to communicate with video-bridge using a unified domain (e.g. meet.example.io), so I am trying to expose the services of both applications to the outside world under the same domain.
I created an AWS Ingress resource, which provisions an ALB, and the ALB exposes Jitsi Meet.
To expose video-bridge, I created a LoadBalancer Service for UDP, which provisioned an NLB.
Since each load balancer is assigned a different endpoint, I do not know how to make the two load balancers accessible under a single domain.
So I tried to find out whether an EKS LoadBalancer-type Service can have two target groups with different protocols (HTTP and UDP) at the same time, but I couldn’t find an appropriate setting.
What I want to do is to use a single domain to route requests to port 80/443 to the jitsi-meet application, and requests to port 10000 to the video-bridge application. How can I set up the network like this?
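For reference, the UDP LoadBalancer Service described above would look roughly like this; the name and selector are made up, and the annotation is the standard in-tree one for requesting an NLB on EKS:

```yaml
# Roughly the UDP LoadBalancer Service described above (names and
# selector are made up; the annotation requests an NLB on EKS).
apiVersion: v1
kind: Service
metadata:
  name: video-bridge
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: video-bridge
  ports:
    - name: media
      protocol: UDP
      port: 10000
      targetPort: 10000
```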
Suppose that in my microservice architecture I have a microservice that receives API calls and sends the required RPCs to other microservices in order to respond to those calls. Let's call it server.
In order to be exposed to the outside world, I have a NodePort Service for this microservice, named after it (server).
Currently I am using RabbitMQ for my inter-service communications, and server is talking to other microservices via RMQ queues.
Now I want to deploy a service mesh and use gRPC for inter-service communication, so I need to create a K8s Service for the gRPC port of every microservice, named after the microservice (including server). However, a K8s Service named server already exists, and I would need to rename that NodePort Service in order to create its gRPC Service, but K8s doesn't let me change a Service's name. If I delete the NodePort Service and create another one with a new name, my application would be down for those couple of seconds.
The final question is: how can I rename this NodePort Service while keeping my application available to users?
You can do the following:
Create a brand new NodePort Service "server-renamed", with the same selectors and everything else as "server" (see the sketch below)
Change your microservices' config to use it and check that all is OK
Remove the "server" Service and recreate it with the new required specs.
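A minimal sketch of the first step; the selector and port are assumptions, to be copied from the existing "server" Service:

```yaml
# New NodePort Service with a different name but the same selector,
# so it fronts the same pods as the existing "server" Service.
apiVersion: v1
kind: Service
metadata:
  name: server-renamed
spec:
  type: NodePort
  selector:
    app: server      # assumption: copy the selector from the existing Service
  ports:
    - port: 8080     # assumption: copy the port(s) from the existing Service
      targetPort: 8080
```

Because both Services select the same pods, clients can be switched over gradually before the old Service is deleted.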
I have a service which runs a couple of verticles. The main verticle configures the remaining verticles and is also responsible for registering the service with Consul. However, my gRPC server cannot run on the same port as the main verticle. Does this mean I need to register each verticle as a separate service, or is there some way to use Consul to advertise the correct port for my gRPC server?
It seems like tags are the way to go. I can't find the link to the GitHub issue discussing single services with multiple ports, but that was the suggestion. Multiple ports per service is something being considered for a future API, but there are obstacles, such as the SRV entries, to contend with before this is possible.
By default, when you create a Service Fabric cluster manually using the Azure portal, you have to pick a node type name, which is tied to the VM size, etc. What is not shown in the GUI, however, is the application port range associated with this node type. The default application port range appears to be 20000 to 30000.
When you create a Service Fabric application using Visual Studio, the default port numbers are always less than 20000; the default is something like 8868.
When you deploy this service to the above cluster, everything works as expected. Let's ignore the LB port mapping for this discussion.
This raises the following questions:
1. Are we supposed to adjust the port numbers in our Visual Studio projects to something greater than 20000 (but less than 30000) so that they fall within the node type's application port range?
2. Obviously the service works without the change in (1). But are there any caveats to doing it the default way (i.e. without any port number changes)?
3. If service port numbers do not have to be in the range defined by the node type, then what is the purpose of the application port range in the node type?
The application port range is used when you let Service Fabric do service discovery and resolution. If you don't specify endpoint ports, Service Fabric automatically assigns endpoints from this application port range that you provide while creating the cluster. Every service in a Service Fabric cluster works based on an endpoint. Say you have multiple microservices but only need a few of them exposed with an HTTP(S) endpoint; you let Service Fabric decide the ports for the services that you don't want to expose. This port range also comes in handy when you want to open up traffic in firewalls or NSGs, since it gives you a known range to configure.
More details can be found here - https://azure.microsoft.com/en-gb/documentation/articles/service-fabric-connect-and-communicate-with-services/
Service Fabric provides a discovery and resolution service called the Naming Service. The Naming Service maintains a table that maps named service instances to the endpoint addresses they listen on.
When an endpoint resource is defined in the service manifest, Service Fabric assigns ports from the reserved application port range when a port isn't specified explicitly.
https://learn.microsoft.com/en-gb/azure/service-fabric/service-fabric-service-manifest-resources
So the application port range seems to be used only if you don't explicitly specify an endpoint port in the manifest.
I'm trying to wrap my head around how Kubernetes (k8s) utilises ports. Having read the API documentation as well as the available docs, I'm not sure how port mapping and port flow work.
Let's say I have three containers and an externally hosted database; my k8s cluster is three on-prem CoreOS nodes, and there is a software-defined load balancer in front of all three nodes that forwards traffic to them on ports 3306 and 10082.
Container A utilises incoming port 8080, needs to talk to Containers B and C, but does not need external access. It is defined by Replication Controller A, which has 1 replica.
Container B utilises incoming port 8081 to talk to Containers A and C, but needs to access the external database on port 3306. It is defined by Replication Controller B, which has 2 replicas.
Container C utilises incoming port 8082, needs to talk to Containers A and B, and also needs external access on port 10082 for end users. It is defined by Replication Controller C, which has 3 replicas.
I have three services to abstract the replication controllers.
Service A selects Replication Controller A and needs to forward incoming traffic on port 9080 to port 8080.
Service B selects Replication Controller B and needs to forward incoming traffic on ports 9081 and 3306 to ports 8081 and 3306 (sketched below, after the goals).
Service C selects Replication Controller C and needs to forward incoming traffic on port 9082 to port 8082.
I have one endpoint for the external database, configured on port 3306 with an IPv4 address.
Goals:
Services need to abstract Replication Controller ports.
Service B needs to be reachable from an external system on port 3306 on all nodes.
Service C needs to be reachable from an external system on port 10082 on all nodes.
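Taking the description above at face value (the answer below questions whether Service B really serves 3306), a hypothetical manifest for Service B with its two ports might look like:

```yaml
# Hypothetical manifest for Service B as described: one Service, two ports.
apiVersion: v1
kind: Service
metadata:
  name: service-b
spec:
  selector:
    app: app-b        # assumed label on the pods behind Replication Controller B
  ports:
    - name: api
      port: 9081
      targetPort: 8081
    - name: mysql
      port: 3306
      targetPort: 3306
```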
With that:
When would I use each of the types of ports; i.e. port, targetPort, nodePort, etc.?
Thanks for the very detailed setup, but I still have some questions.
1) When you say "Container" {A,B,C} do you mean Pod? Or are A, B, C containers in the same Pod?
2) "Container B utilises incoming port 8081 to talk to Container A and C" - What do you mean that it uses an INcoming port to talk to other containers? Who opens the connection, to whom, and on what destination port?
3) "needs to access the external database on port 3306" but later "needs to be able to be reached from an external system on port 3306" - Does B access an external database or is it serving a database on 3306?
I'm confused on where traffic is coming in and where it is going out in this explanation.
In general, you should avoid thinking in terms of nodes and you should avoid thinking about pods talking to pods (or containers to containers). You have some number of Services, each of which is backed by some number of Pods. Client pods (usually) talk to Services. Services receive traffic on a port and send that traffic to the corresponding targetPort on Pods. Pods receive traffic on a containerPort.
None of that requires hostPorts or nodePorts. The last question is which of these Services need to be accessed from outside the cluster, and what your environment is capable of with respect to load-balancing.
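As a concrete sketch tying those terms together, here is what Service C from the setup above might look like; the selector and the nodePort value are hypothetical (nodePorts default to the 30000-32767 range):

```yaml
# Hypothetical manifest for Service C, naming each kind of port.
apiVersion: v1
kind: Service
metadata:
  name: service-c
spec:
  type: NodePort
  selector:
    app: app-c           # assumed label on the pods behind Replication Controller C
  ports:
    - port: 9082         # port the Service itself receives traffic on
      targetPort: 8082   # containerPort the pods listen on
      nodePort: 30082    # assumed; the port opened on every node for external access
```

An external load balancer listening on 10082 could then forward to that nodePort on every node.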
If you can answer these, then I can come back for round 2 :)