Kubernetes: point domain to a local service

I'm new to Kubernetes and trying to point all requests for a domain to another local service.
Both applications are running in the same cluster, in different namespaces.
Example domains:
a.domain.com hosts the first app
b.domain.com hosts the second app
When I make a curl request from the first app to the second app (b.domain.com), it travels over the internet to the second app.
Normally I would point b.domain.com to localhost in /etc/hosts.
What do we do in this case in Kubernetes?
I was looking into Network Policies, but I'm not sure that is the correct approach.
Also, as I understand it, we could just call service-name.namespace:port from the first app, but I would like to keep the full URL.
Let me know if you need more details to help me solve this.

The way to do this is with the Kubernetes Gateway API. It is true that you could deploy your own implementation, since this is an open-source specification, but there are already plenty of solutions built on it, and it would be much easier to learn one of those instead.
For what you want, Istio would fit your needs. If your cluster is hosted in a cloud environment, you can take a look at Anthos Service Mesh, the managed version of Istio.
Finally, take a look at the blog post Welcome to the service mesh era: traffic management between services is one element of the service mesh paradigm, alongside others such as monitoring and logging.
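As a minimal sketch of the Gateway API approach (names are placeholders I've made up for illustration: a Gateway called shared-gateway in namespace gateway-infra, and a Service called second-app in namespace app-b), an HTTPRoute could send requests for b.domain.com to the in-cluster Service:

    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: second-app-route
      namespace: app-b
    spec:
      parentRefs:
      - name: shared-gateway        # hypothetical Gateway for the cluster
        namespace: gateway-infra
      hostnames:
      - "b.domain.com"              # requests for this host stay in-cluster
      rules:
      - backendRefs:
        - name: second-app          # hypothetical Service for the second app
          port: 80

Provided the first app resolves b.domain.com to the gateway's address (for example via cluster DNS), it can keep using the full URL while the route decides where the traffic actually goes.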

Related

Is it possible to deploy a NestJS microservice backend on a Kubernetes cluster?

Hello intelligent Stack Overflow people,
I am trying to deploy my microservice backend, developed with NestJS, on Kubernetes.
But I don't know how to do it, and I can't even find a tutorial that shows me how.
I found an article covering a similar case that uses Kafka as the event-streaming service.
https://limascloud.com/2022/03/22/nestjs-on-kubernetes-kubernetes-for-developers/
Instead of Kafka, I used the framework's native event-based communication described in the docs. It is a basic topic-based publish-subscribe mechanism.
Does that prohibit the use of Kubernetes? Do I need some kind of external communication software?
I am really confused at the moment and don't know if we made an error from the start.
I am the author of the post you mentioned. You should be able to use the event-streaming service, but it's a different scenario than the one I describe in the post.
In the post, the pods connect to a Kafka service running outside of the Kubernetes network, but in your scenario the pods need to connect to one another inside the Kubernetes network.
If you are planning to use two separate services, I would recommend using an external broker. If you plan to use the default mechanism, make sure to set the host and port configuration for one of the pods. Let's say api is only going to produce; then set its configuration to the worker's address and port. Let me know if it works. I would start by making it work in your local environment before going to Kubernetes.
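In Kubernetes, the usual way to give the api pod a stable address for the worker is a Service in front of the worker's Deployment. A minimal sketch, where the name worker, the label app: worker, and port 8877 are all placeholder assumptions:

    apiVersion: v1
    kind: Service
    metadata:
      name: worker
      namespace: default
    spec:
      selector:
        app: worker          # must match the labels on the worker's pods
      ports:
      - name: tcp-transport
        port: 8877           # port the NestJS TCP microservice listens on
        targetPort: 8877

The api pod can then point its client configuration at worker.default.svc.cluster.local:8877 instead of an individual pod's address, which stays stable as pods come and go.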

Deploying and exposing a microservice on OpenShift

I'm new to the k8s world and am using OpenShift 4.2.18. I want to deploy a microservice on it. What I need is one common IP, with each microservice accessible via a virtual path.
Like this,
https://my-common-ip/microservice1/
https://my-common-ip/microservice2/
https://my-common-ip/microservice3/
Service and deployment are OK. However, I'm confused by the other terms. Should I use Route or Ingress? Should I use VirtualService like in this link? I have also heard about HAProxy and Istio. What's the best way of doing this? I would appreciate it if you could provide some information about these terms.
Thanks in advance. Best regards
Route and Ingress are intended to achieve the same end. Originally Kubernetes had no such concept, and so the concept of a Route was developed in OpenShift, along with the bits for providing a load-balancing proxy, etc. In time it was seen as useful to have something like this in Kubernetes, so using OpenShift's Route as a starting point, Ingress was developed for Kubernetes. The Ingress version went for a more generic, rules-based system, so the way you specify them looks different, but the intent is to be able to do effectively the same thing. If you intend to deploy your application on multiple Kubernetes distributions at the same time, Ingress might be a good option.
VirtualService and Istio belong to a service mesh, which is not necessary for external access to an app. A service mesh brings complexity. Unless the capabilities offered by a service mesh are really needed for your use case, there is no reason to use one.
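As a hedged sketch of the Ingress option (service names, ports, and the hostname below are placeholders, and the sketch uses the current networking.k8s.io/v1 API; older clusters such as OpenShift 4.2 use networking.k8s.io/v1beta1, and OpenShift's router translates an Ingress into Routes), path-based routing could look like this:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: microservices
    spec:
      rules:
      - host: my-common-host.example.com    # placeholder hostname
        http:
          paths:
          - path: /microservice1
            pathType: Prefix
            backend:
              service:
                name: microservice1         # placeholder Service name
                port:
                  number: 8080              # placeholder port
          - path: /microservice2
            pathType: Prefix
            backend:
              service:
                name: microservice2
                port:
                  number: 8080

Note that each backend still receives the full request path, including the /microserviceN prefix, unless the router is configured to rewrite it.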

How are sockets or communication channels maintained in a distributed system?

I am new to distributed systems, and ran into this problem when I needed to deploy a gRPC service to Kubernetes (GKE). As far as I know, when a client initiates an RPC it creates a long-lasting HTTP/2 connection, and further calls are multiplexed on it. I would like to send/push notifications or similar messages to the client through this connection. If I deploy multiple pods, the connections are spread across them, and I am not sure of the best way to locate the instance holding the channel registered to a given client. A possible solution: as soon as a user initiates a connection, store a mapping from clientId to pod IP (or some other identifier) in a centralized service, so other pods can look up the right pod and forward the message to it. Is something like this advisable, or is there an existing solution for it? I am unfamiliar with this space, and any suggestion is highly appreciated.
Edit: (response to #mebius99)
While looking at deployment options I stumbled upon GKE; other cloud deployment options were limited because of my use of gRPC/HTTP2. Thanks for mentioning service discovery; that, or a service mesh, might be an option. With gRPC, a client maintains a long-lived connection to a single pod. So I want every pod to be able to query, based on a unique clientId (clients can make an initial register RPC call), which pod a client is connected to, so it can make use of that connection, and I also want a way for pods to forward messages between them. Something like this: when I get a registration call from a client, I update the central registry with the client and pod IP; any pod can then look it up and forward the message to the right pod, which in turn forwards it to the client through the existing streaming connection. You are guiding me in the right direction; please let me know if the above is possible in a container environment.
Thank you.
Another idea: you can use Envoy proxy.
If you are using GKE, these posts are helpful.
https://cloud.google.com/solutions/exposing-grpc-services-on-gke-using-envoy-proxy
https://github.com/GoogleCloudPlatform/grpc-gke-nlb-tutorial
I'd suggest starting from the Kubernetes Service concept and service discovery. External HTTP(S) Load Balancing should fit your needs.
In case you need something more sophisticated, Envoy proxy + Network Load Balancing could be a solution, as mentioned here.
It sounds like you want to implement some kind of pub-sub system.
You should first do a back-of-the-envelope calculation of the scale, such as how many clients and how many messages per second (for example, 10,000 connected clients each receiving one message per second means 10,000 deliveries per second).
Then you can choose whether to implement it yourself or pick an off-the-shelf system, such as https://doc.akka.io/docs/alpakka/current/google-cloud-pub-sub-grpc.html
I just want to add more explanations to the existing answers here.
Since requests in HTTP/2 are multiplexed (multiple requests can be active on the same connection at any point in time), all of a client's requests end up pinned to a single Kubernetes pod. Hence, we need to configure a service mesh to shift from connection-based balancing to request-based balancing. The Envoy proxy mentioned here is one example.
I'd recommend reading this good article from the Kubernetes blog: https://kubernetes.io/blog/2018/11/07/grpc-load-balancing-on-kubernetes-without-tears.
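One building block that article covers is a headless Service, which makes DNS return the individual pod IPs so a gRPC-aware client or proxy can balance requests per pod instead of pinning everything to one virtual IP. A minimal sketch, with placeholder name, labels, and port:

    apiVersion: v1
    kind: Service
    metadata:
      name: grpc-backend       # placeholder name
    spec:
      clusterIP: None          # headless: DNS returns the pod IPs directly
      selector:
        app: grpc-backend      # must match the backend pods' labels
      ports:
      - name: grpc
        port: 50051            # placeholder gRPC port
        targetPort: 50051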

REST communication between two microservices without a static IP in Amazon ECS

In this question, I managed to set up REST communication between two microservices using a user-defined bridge network in docker-compose.
Now I'm trying to do the same when hosting my microservices on AWS.
I could really use some pointers on how to achieve this, because I'm terribly lost.
I've tried following numerous tutorials, both written and on pluralsight, but none seem to be close enough to my use case.
My project architecture is as follows:
https://i.stack.imgur.com/vc6TX.png
And my project infrastructure should probably look like this:
https://i.stack.imgur.com/X73HA.png
Thanks
You can use an internal load balancer for each service and create DNS records for it for app-to-app communication. ECS also has a service discovery feature that is useful in this scenario.
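As a hedged sketch of the service discovery option (all resource names, the "local" namespace, and the Fargate assumption below are mine, not from the question), a CloudFormation template can register an ECS service in a private DNS namespace so another service can reach it at service-b.local:

    # Sketch: ECS service discovery via AWS Cloud Map (hypothetical names).
    Parameters:
      VpcId:
        Type: AWS::EC2::VPC::Id
      PrivateSubnets:
        Type: List<AWS::EC2::Subnet::Id>
      ClusterName:
        Type: String
      ServiceBTaskDef:
        Type: String              # ARN of an existing task definition

    Resources:
      PrivateNamespace:
        Type: AWS::ServiceDiscovery::PrivateDnsNamespace
        Properties:
          Name: local             # services resolve as <name>.local in the VPC
          Vpc: !Ref VpcId

      ServiceBDiscovery:
        Type: AWS::ServiceDiscovery::Service
        Properties:
          Name: service-b
          NamespaceId: !Ref PrivateNamespace
          DnsConfig:
            DnsRecords:
            - Type: A             # A records assume the awsvpc network mode
              TTL: 10

      ServiceB:
        Type: AWS::ECS::Service
        Properties:
          Cluster: !Ref ClusterName
          TaskDefinition: !Ref ServiceBTaskDef
          DesiredCount: 1
          LaunchType: FARGATE     # assumption; adjust to your launch type
          NetworkConfiguration:
            AwsvpcConfiguration:
              Subnets: !Ref PrivateSubnets
          ServiceRegistries:
          - RegistryArn: !GetAtt ServiceBDiscovery.Arn

The other microservice can then call http://service-b.local:<port> without any static IP, since Cloud Map keeps the DNS records in sync with the running tasks.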

Google Container Engine Subnets

I'm trying to isolate services from one another.
Suppose ops-human has a bunch of mysql stores running on Google Container Engine, and dev-human has a bunch of node apps running on the same cluster. I do NOT want dev-human to be able to access any of ops-human's mysql instances in any way.
Simplest solution: put both of these in separate subnets. How do I do such a thing? I'm open to other implementations as well.
The Kubernetes Network SIG has been working on the network isolation issue for a while, and there is an experimental API in Kubernetes 1.2 to support this. The basic idea is to provide network policy on a per-namespace basis. A third-party network controller can then react to changes to resources and enforce the policy. See the last blog post about this topic for the details.
EDIT: This answer is more about the open-source kubernetes, not for GKE specifically.
The NetworkPolicy resource is now available in GKE in alpha clusters (see latest blog post).
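As a sketch of what that looks like with the current NetworkPolicy resource (the namespace ops and the label app: mysql are placeholders), a policy can restrict the mysql pods to traffic from their own namespace, shutting out the node apps in the other namespace:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: mysql-same-namespace-only
      namespace: ops            # placeholder: namespace holding the mysql pods
    spec:
      podSelector:
        matchLabels:
          app: mysql            # placeholder: selects the mysql pods
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector: {}       # allow traffic only from pods in this namespace

Note that the policy is only enforced when the cluster runs a network plugin that supports NetworkPolicy.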