Change Kubernetes Service Name Without Removing It

Suppose that in my microservice architecture I have a microservice that receives API calls and sends the required RPCs to other microservices in order to respond to the calls. Let's call it server.
In order to be exposed to the outside world, I have a NodePort Service for this microservice, named after the microservice itself (server).
Currently I am using RabbitMQ for my inter-service communication, and server talks to the other microservices via RMQ queues.
Now I want to deploy a service mesh and use gRPC for inter-service communication. So I need to create a K8s Service for the gRPC port of each of my microservices, named after the microservice (including server). However, a K8s Service named server already exists, and I would need to rename that NodePort Service in order to create its gRPC Service, but K8s doesn't let me change a Service's name. If I delete the NodePort Service and create another one with a new name, my application would be down for those couple of seconds.
The final question is: how can I rename this NodePort Service while keeping my application available to users?

You can do the following:
1. Create a brand new NodePort Service named "server-renamed", with the same selectors and everything else as "server" (a minimal sketch follows this list).
2. Change your microservices' config to use it and check that all is OK.
3. Remove the "server" Service and recreate it with the new required spec.

Related

How can a pod make an HTTP request to another service in k8s

I have built two services in a k8s cluster. How can they interact with each other? If I want to make an HTTP request from one service to another, I know I can't use localhost, but how do I know the host when I am coding?
Service objects are automatically exposed in DNS as <servicename>.<namespace>.svc.<clusterdomain>, where clusterdomain is usually cluster.local. The default resolv.conf allows relative lookups, so if the service is in the same namespace you can use just its name; otherwise use <servicename>.<namespace>.
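A minimal sketch from application code (Node 18+ with global fetch, written as TypeScript); the service name orders, namespace payments, port and path are all hypothetical:

// Call another in-cluster service by its DNS name.
async function getOrders(): Promise<unknown> {
  // From a pod in the same namespace, "http://orders:8080/orders" also works.
  const res = await fetch("http://orders.payments.svc.cluster.local:8080/orders");
  return res.json();
}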

Microservice structure using helm and kubernetes

We have several microservices (NodeJS-based applications) which need to communicate with each other, and two of them use Redis and PostgreSQL. Below are the names of my microservices. Each of them has its own SCM repository and Helm chart. The Helm version is 3.0.1. We have two environments, with one values.yaml per environment, and three nodes per cluster.
First of all, after an end user's action, the UI service is triggered and the request goes to the Backend. Depending on the end user's request, the Backend needs to communicate with any of the Market, Auth and API services. In some cases the API and Market microservices need to communicate with the Auth microservice as well.
UI --> Backend
Backend --> Market (uses PostgreSQL)
Backend --> Auth  (uses Redis)
Backend --> API
So my questions are,
What should we take care of so the microservices can communicate with each other? Is my-svc.<namespace>.svc.cluster.local enough to give to developers, or should we also set ENV variables in each pod?
Our microservices are NodeJS applications. How will developers handle this in the application source code? Do they just use the service name, if the answer to the first question is yes?
We'd like to expose our application via Ingress, using one host per environment. I guess Ingress should only be enabled for the UI microservice, am I correct?
What is the best way to test that the services can communicate with each other?
kubectl get svc --all-namespaces
NAMESPACE   NAME                          TYPE
database    my-postgres-postgresql-helm   ClusterIP
dev         my-ui-dev                     ClusterIP
dev         my-backend-dev                ClusterIP
dev         my-auth-dev                   ClusterIP
dev         my-api-dev                    ClusterIP
dev         my-market-dev                 ClusterIP
dev         redis-master                  ClusterIP
ingress     ingress-traefik               NodePort
Two ways to perform Service Discovery in K8S
There are two ways to perform communication (service discovery) within a Kubernetes cluster.
Environment variable
DNS
DNS is the simplest way to achieve service discovery within the cluster, and it does not require setting any additional ENV variables for each pod.
At its simplest, a service within the same namespace is accessible via its service name: e.g. http://my-api-dev:PORT is reachable from all the pods within the dev namespace.
Standard Application Name and K8s Service Name
As a practice, you can give each application a standard name, e.g. my-ui, my-backend, my-api, etc., and use the same name to connect to the application.
That practice can even be applied when testing locally from a developer environment, with an entry in /etc/hosts such as
127.0.0.1 my-ui my-backend my-api
(The above has nothing to do with k8s; it is just a practice for letting applications reach each other by name in local environments.)
Also, on k8s, you may give each Service the same name as its application (try to avoid suffixes like -dev on service names to reflect the environment (dev, test, prod, etc.); use namespaces or separate clusters for that instead). That way, target application endpoints can be configured by service name in each application's configuration file, as sketched below.
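As a sketch of that convention, the Backend Deployment below reads its peers' endpoints from env variables whose values are plain service names; the variable names, image and ports are hypothetical:

# Hypothetical pod template excerpt from the Backend Deployment.
containers:
  - name: my-backend
    image: my-backend:1.0.0           # assumed image
    env:
      - name: AUTH_URL
        value: "http://my-auth:3000"  # service name resolves via cluster DNS
      - name: MARKET_URL
        value: "http://my-market:3000"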
Ingress is for services with external access
Ingress should only be enabled for services that require external access.
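For example, a minimal Ingress sketch that exposes only the UI service; the host and port are assumptions:

# Hypothetical Ingress: only the UI is reachable from outside.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ui
  namespace: dev
spec:
  rules:
    - host: ui.dev.example.com        # assumed host per environment
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-ui-dev
                port:
                  number: 80          # assumed port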
Custom Health Check Endpoints
Also, it is good practice to have a custom health check that verifies that all the applications a service depends on are running fine, which also verifies that the communication between the applications is working.
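A minimal sketch of such an endpoint in a NodeJS service (TypeScript with Express and Node 18+ global fetch); the dependency URL and ports are assumptions:

import express from "express";

const app = express();

// Health endpoint that also checks a downstream dependency.
app.get("/healthz", async (_req, res) => {
  try {
    const dep = await fetch("http://my-auth:3000/healthz"); // assumed dependency
    res.status(dep.ok ? 200 : 503).send(dep.ok ? "ok" : "dependency unhealthy");
  } catch {
    res.status(503).send("dependency unreachable");
  }
});

app.listen(3000);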

How to discover services deployed on kubernetes from the outside?

The User Microservice is deployed on kubernetes.
The Order Microservice is not deployed on kubernetes, but registered with Eureka.
My questions:
How can the Order Microservice discover and access the User Microservice through Eureka?
First let's take a look at the problem itself:
If you use an overlay network as the Kubernetes CNI (e.g. Flannel), the problem is that it creates an isolated network that is not reachable from the outside. With a network like that, one solution would be to move the Eureka server into Kubernetes, so that Eureka can reach both the service inside Kubernetes and the service outside of Kubernetes.
Another solution would be to tell Eureka where it can find the service instead of relying on auto-discovery, but for that you also need to make the service externally available with a Service of type NodePort, HostPort or LoadBalancer, or with an Ingress. I'm not sure it's possible, but section 11.2 in the following doc could be worth a look: Eureka Client Discovery.
The third solution would be to use a CNI that does not use an overlay network, like Romana, which makes services externally routable by default.
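A minimal sketch of the second option: a NodePort Service that makes the User Microservice reachable from outside the cluster so its address could be handed to Eureka; the label and ports are assumptions:

# Hypothetical NodePort Service: reachable as <any-node-ip>:30080.
apiVersion: v1
kind: Service
metadata:
  name: user-service-external
spec:
  type: NodePort
  selector:
    app: user-service    # assumed pod label
  ports:
    - port: 8080         # assumed container port
      targetPort: 8080
      nodePort: 30080    # assumed fixed node port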

How to connect different deployments in Kubernetes?

I have two back-end deployments, REST server and a database server, each running on some specific ports. The REST server internally calls a database server.
Now how do I refer to my database server deployment from my REST server deployment so that they can communicate with each other?
First, define a Service for your DB server; that will create a sort of load balancer (internal kube integration, based on iptables in most cases). With that, you will be able to refer to it by its service name or by an FQDN like mydbsvc.namespace.svc.cluster.local, which resolves to the Cluster IP of that load balancer.
Then it's just a matter of regular app config to point the REST server at your DB via mydbsvc, preferably by means of an env variable like, say, DB_HOST=mydbsvc set in your REST API deployment manifest (pod template envs).
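A minimal sketch of that, assuming the DB Service is called mydbsvc; the container name and image are hypothetical:

# Hypothetical pod template excerpt from the REST API Deployment.
containers:
  - name: rest-api
    image: rest-api:1.0.0    # assumed image
    env:
      - name: DB_HOST
        value: "mydbsvc"     # the DB Service name; cluster DNS resolves it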
Expose your deployments as Services, for example with kubectl expose ...
Connect/allow these to communicate by creating network policies (a sketch follows below).
The Service object (of the database) will give you a virtual (stable) IP. Depending on the type of Service, your REST code can call the DB via ClusterIP, ExternalName, ExternalIP or DNS.
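A minimal NetworkPolicy sketch that only lets the REST server pods reach the database pods; all labels and the port are assumptions:

# Hypothetical NetworkPolicy: only app=rest-server pods may reach
# app=database pods, and only on port 5432.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-rest-to-db
spec:
  podSelector:
    matchLabels:
      app: database            # assumed label on the DB pods
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: rest-server # assumed label on the REST pods
      ports:
        - port: 5432           # assumed DB port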

Frontend communication with API in Kubernetes cluster

Inside a Kubernetes cluster I am running 1 node with 2 deployments: a React front-end and a .NET Core app. I also have a LoadBalancer service for the front-end app. (All working: I can port-forward to see the backend deployment working.)
Question: I'm trying to get the front end and the API to communicate. I know I can do that with an external-facing load balancer, but is there a way to do that using the ClusterIPs and not have an external IP for the back end?
The reason we are interested in this is that it simply adds one more layer of security: by keeping the API vnet-only, we remove one more entry point.
If it helps, we are deploying in Azure with AKS. I know they have some weird deployment things sometimes.
Pods running on the cluster can talk to each other using a ClusterIP service, which is the default Service type. You don't need a LoadBalancer service to make two pods talk to each other. According to the docs on this topic:
ClusterIP exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.
As explained in the Discovery documentation, if both Pods (frontend and API) are running on the same namespace, the frontend just needs to send requests to the name of the backend service.
If they are running in different namespaces, the frontend needs to use a fully qualified domain name to be able to talk to the backend.
For example, if you have a Service called "my-service" in Kubernetes Namespace "my-ns" a DNS record for "my-service.my-ns" is created. Pods which exist in the "my-ns" Namespace should be able to find it by simply doing a name lookup for "my-service". Pods which exist in other Namespaces must qualify the name as "my-service.my-ns". The result of these name lookups is the cluster IP.
You can find more info about how DNS works on kubernetes in the docs.
The problem with this configuration is the assumption that the frontend app will be reaching the API from inside the cluster. It will not: my app runs in the client's browser, which cannot reach Services and pods in my cluster.
My cluster will need something like nginx or another external load balancer to allow my client-side API calls to reach my API; one common pattern is sketched below.
You could alternatively use your frontend app as your proxy, but that is highly inadvisable!
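One common pattern is a single externally exposed Ingress that serves the frontend and proxies /api to the internal backend; host, paths, service names and ports are all assumptions:

# Hypothetical Ingress: one public host; /api is routed to the backend,
# everything else to the frontend, so the backend needs no external IP.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
spec:
  rules:
    - host: app.example.com         # assumed host
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend-api   # assumed Service name
                port:
                  number: 8080      # assumed port
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend      # assumed Service name
                port:
                  number: 80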
I'm trying to get the front end and api to communicate
By API, if you mean the Kubernetes API server: first set up a service account and token for the front-end pod to communicate with the Kubernetes API server by following the steps here, here and here.
is there a way to do that using the clusterIPs and not have an external IP for the back end
Yes, this is possible, and it is more secure if external access is not needed for the service. A Service of type ClusterIP will not have an ExternalIP, and the pods can talk to each other using ClusterIP:Port within the cluster, as sketched below.
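A minimal sketch of such an internal-only Service for the backend; the label and port are assumptions:

# Hypothetical ClusterIP Service: no external IP is ever allocated.
apiVersion: v1
kind: Service
metadata:
  name: backend-api
spec:
  type: ClusterIP          # the default type
  selector:
    app: backend-api       # assumed pod label
  ports:
    - port: 8080           # assumed port
      targetPort: 8080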