Kubernetes beginner: trigger simple actions between apps

Very simple question: I have a front-end app running in Kubernetes. I would like to create a containerized back-end app that would obviously also run in Kubernetes.
User actions in the frontend would need to trigger the execution of a command on the backend (echo success! for example). The UI also needs to know what the command's output was.
What is the best way to implement this in k8s?
Either through an internal Service, or the two apps could also live in the same Pod.
Perhaps there is some kind of messaging involved, with applications such as RabbitMQ?

That depends on your application and how you are planning to run it.
Some people host the frontend in a storage bucket and send HTTP requests from there to the backend, for example.
You can keep the frontend and backend in different Pods or in a single Pod.
For example, if you are using Node.js with Express, you can run the backend as a simple API service Pod and have it serve the frontend as well.
You can use the K8s Service name for internal communication instead of adding a message broker (RabbitMQ or Redis can also be used) unless your web app really needs one.
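For example, a minimal sketch of such a backend with Express (the Service name "backend", the port 3000, and the /run-command route are assumptions for illustration, not something from the question):

```typescript
// backend/server.ts - minimal sketch; Service name "backend", port 3000
// and the /run-command route are illustrative assumptions.
import express from "express";
import { execFile } from "child_process";

const app = express();

app.post("/run-command", (_req, res) => {
  // Run a fixed command and send its output back to the caller.
  execFile("echo", ["success!"], (err, stdout, stderr) => {
    if (err) {
      res.status(500).json({ error: stderr });
      return;
    }
    res.json({ output: stdout.trim() });
  });
});

app.listen(3000, () => console.log("backend listening on 3000"));
```

The frontend Pod (its server-side code, not the user's browser) could then call http://backend.<namespace>.svc.cluster.local:3000/run-command (or just http://backend:3000 from the same namespace) and show the returned output in the UI; the browser itself would reach the backend through an Ingress or via the frontend acting as a proxy.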
I would also recommend checking out: https://learnk8s.io/deploying-nodejs-kubernetes
GitHub repo of the application: https://github.com/learnk8s/knote-js/tree/master/01
Official example: https://kubernetes.io/docs/tutorials/stateless-application/guestbook/

Related

Is it possible to deploy a NestJS microservice backend on a Kubernetes cluster?

Hello intelligent Stack Overflow people,
I am trying to deploy my microservice backend, developed with NestJS, on Kubernetes.
But I don't know how to do it, or even find a tutorial that shows me how to.
I found an article talking about a similar case using Kafka as the event-streaming service.
https://limascloud.com/2022/03/22/nestjs-on-kubernetes-kubernetes-for-developers/
Instead of Kafka, I used the native event-based communication provided by the framework, as described in the docs. It is a basic topic-based publish-subscribe mechanism.
Does that prohibit the use of Kubernetes? Do I need to use some kind of external communication software?
I am really confused at the moment and don't know if we/I made an error from the start.
I am the author of the post you mentioned. You should be able to use the event-streaming service, but it's a different scenario than the one I describe in the post.
In the post, the pods connect to a Kafka service running outside of the Kubernetes network, but in your scenario, the pods need to be able to connect to one another inside the Kubernetes network.
If you are planning to use two separate services, I would recommend using an external broker. If you plan to use the default mechanism, make sure to set the host and port configuration for one of the pods. Let's say api is just going to produce, so set its configuration to the name and port of the worker. Let me know if it works. I would start by trying to make it work in your local environment before going to Kubernetes.
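As a rough sketch of that setup, assuming the default TCP transport and that the worker Pods sit behind a Service named "worker" on port 3001 (both names are illustrative):

```typescript
// worker/main.ts - the consumer listens on TCP inside its Pod.
import { NestFactory } from "@nestjs/core";
import { Transport, MicroserviceOptions } from "@nestjs/microservices";
import { AppModule } from "./app.module";

async function bootstrap() {
  const app = await NestFactory.createMicroservice<MicroserviceOptions>(AppModule, {
    transport: Transport.TCP,
    options: { host: "0.0.0.0", port: 3001 }, // listen on all interfaces in the Pod
  });
  await app.listen();
}
bootstrap();
```

And the api side, which only produces, points at the worker's DNS name:

```typescript
// api/app.module.ts - "worker" resolves through cluster DNS if a Service
// with that name fronts the worker Pods (assumption for this sketch).
import { Module } from "@nestjs/common";
import { ClientsModule, Transport } from "@nestjs/microservices";

@Module({
  imports: [
    ClientsModule.register([
      {
        name: "WORKER_SERVICE",
        transport: Transport.TCP,
        options: { host: "worker", port: 3001 },
      },
    ]),
  ],
})
export class ApiModule {}
```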

Kubernetes: point a domain to a local service

I'm new to Kubernetes and trying to point all requests for a domain to another local service.
Both applications are running in the same cluster, under different namespaces.
Example domains:
a.domain.com hosting the first app
b.domain.com hosting the second app
When I make a curl request from the first app to the second app (b.domain.com), it travels over the internet to the second app.
Usually what I would do is point b.domain.com to localhost in /etc/hosts.
What do we do in this case in Kubernetes?
I was looking into Network Policies, but I'm not sure if that is the correct approach.
Also, as I understand it, we could just call service-name.namespace:port from the first app, but I would like to keep the full URL.
Let me know if you need more details to help me solve this.
The way to do it is by using the Kubernetes Gateway API. It is true that you can deploy your own implementation since this is an open-source project, but there are already a lot of solutions built on it, and it would be much easier to learn how to use one of those instead.
For what you want, Istio would fit your needs. If your cluster is hosted in a Cloud environment, you can take a look at Anthos, which is the managed version of Istio.
Finally, take a look at the blog Welcome to the service mesh era, since the traffic management between services is one of the elements of the service mesh paradigm, among others like monitoring, logging, etc.

Launching tests in containers on Kubernetes

I'm building a test automation tool that needs to launch a set of tests and collect logs and results. My plan is to build a container with the necessary dependencies for the test framework and launch the tests in Kubernetes.
Is there any application that abstracts the complexity of managing the pod lifecycle and provides a simple API for this use case? Basically, my test scheduler needs to deploy a container in Kubernetes, launch a test, and collect the log files at the end.
I already looked at Knative and Kubeless - they seem complex and may over-complicate what I'm trying to do here.
Based on the information you provided, all I can recommend is the Kubernetes API itself.
You can create a pod with it, wait for it to finish, and gather the logs. If that's all you need, you don't need any other fancy applications. Here is a list of k8s client libraries.
If you don't want to use client libraries, you can always use the REST API.
If you are not sure how to use the REST API, run kubectl commands with the --v=10 flag for debug output, where you can see all requests between kubectl and the api-server, as a reference guide.
Kubernetes also provides detailed documentation for the k8s REST API.
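For example, a rough sketch with the official JavaScript client (@kubernetes/client-node); the namespace, Pod name and test command are illustrative, and method signatures differ a bit between client versions (this follows the older positional-argument style):

```typescript
import * as k8s from "@kubernetes/client-node";

const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const core = kc.makeApiClient(k8s.CoreV1Api);

async function runTest(image: string): Promise<string> {
  const name = "test-run";

  // 1. Create a Pod that runs the test container once and then exits.
  await core.createNamespacedPod("default", {
    apiVersion: "v1",
    kind: "Pod",
    metadata: { name },
    spec: {
      restartPolicy: "Never",
      containers: [{ name: "tests", image, command: ["npm", "test"] }],
    },
  });

  // 2. Poll until the Pod reaches a terminal phase.
  let phase = "";
  while (phase !== "Succeeded" && phase !== "Failed") {
    await new Promise((resolve) => setTimeout(resolve, 5000));
    const { body: pod } = await core.readNamespacedPod(name, "default");
    phase = pod.status?.phase ?? "";
  }

  // 3. Collect the logs, then clean up the Pod.
  const { body: logs } = await core.readNamespacedPodLog(name, "default");
  await core.deleteNamespacedPod(name, "default");
  return logs;
}
```

The same flow works with plain REST calls (POST /api/v1/namespaces/{namespace}/pods, then GET .../pods/{name} and .../pods/{name}/log) if you prefer not to pull in a client library.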
Try looking at https://microk8s.io/; it was built for those purposes.
And you can talk to the API server via the REST API, the same as in every k8s cluster.

Launch a specific Pod via API and connect from outside

I am currently designing a system where users should be able to start a simulation through a web portal and then connect to it with a gRPC client (amongst other things). After the user is finished, the simulation terminates. I want to run the whole system in a kind of microservice architecture in a Kubernetes cluster if possible. This is, however, my first time working with Kubernetes, and I am unsure if it is possible to achieve this.
As far as I could gather from reading the documentation and googling around, it seems like I should be able to launch a pod by calling POST /api/v1/namespaces/{namespace}/pods and make it available under the host IP by setting hostPort. However, what I don't know is how I would determine a free port on the node to deploy to, or let Kubernetes decide that (if hostPort is even the correct choice for this). After that it should be pretty straightforward: send the user the IP:Port to connect to, and they just plug that into their gRPC client.
Any suggestions on how to best achieve this?
Using hostPort is rather not recommended, so you'd be better off specifying a Service and accessing your Pod via that Service. In your case you can define a NodePort Service and let Kubernetes decide on the port. Then, fetch the assigned port using the Kubernetes API.
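A small sketch of that idea with @kubernetes/client-node (the Service name, selector and port are illustrative; signatures follow the older positional-argument style of the client):

```typescript
import * as k8s from "@kubernetes/client-node";

const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const core = kc.makeApiClient(k8s.CoreV1Api);

async function exposeSimulation(): Promise<number> {
  // Create a NodePort Service without pinning nodePort, so Kubernetes
  // picks a free port from its NodePort range (30000-32767 by default).
  await core.createNamespacedService("default", {
    apiVersion: "v1",
    kind: "Service",
    metadata: { name: "simulation-svc" },
    spec: {
      type: "NodePort",
      selector: { app: "simulation" },
      ports: [{ port: 9090 }],
    },
  });

  // Read the Service back to find out which nodePort was assigned;
  // <any-node-ip>:<nodePort> is what the user plugs into the gRPC client.
  const { body: svc } = await core.readNamespacedService("simulation-svc", "default");
  return svc.spec?.ports?.[0].nodePort ?? 0;
}
```

If each user gets their own simulation, the same call can create a uniquely named Pod and Service per session and delete both when the user is done.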

Notify containers of updated pods in Kubernetes

I have some servers I want to deploy in Kubernetes. The clients of those servers will also be in Kubernetes. Clients and servers can independently be deployed or scaled.
The clients must know the list of the servers (IPs). I have an HTTP endpoint on the clients to update the list of the servers while the clients are running (hot config reload).
All this is currently running outside of Kubernetes. I want to migrate to GCP.
What's the industry standard regarding pod updates and notifications? I want to get notified when servers are updated so I can call the endpoints on the clients to update the list of servers.
I can't use a LoadBalancer since the clients really need to call a specific server (the business logic is in the clients).
Thanks
The standard for calling a group of pods that offer a functionality is a Service. If you don't want the automated load balancing or the single IP address that regular Services provide, you should look into headless Services. Resolving a headless Service returns a list of DNS A records that point to the pods behind the Service. This list is automatically updated as pods become available/unavailable.
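A minimal sketch of the client side, assuming a headless Service named "servers" in the default namespace:

```typescript
import { promises as dns } from "dns";

// A headless Service (clusterIP: None) resolves to one A record per ready Pod.
async function listServerIps(): Promise<string[]> {
  return dns.resolve4("servers.default.svc.cluster.local");
}

// Example: refresh the client's server list periodically (simple polling
// instead of a push-based notification).
setInterval(async () => {
  const ips = await listServerIps();
  console.log("current servers:", ips);
}, 10_000);
```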
While I think modifying an existing script to just pull a list from a headless Service is much simpler, it might be worth mentioning CRDs (Custom Resource Definitions) as well.
You could build a custom controller that listens to service events and then posts the data from those events to an HTTP endpoint of another Service or Ingress. The custom resource would define which service to watch and where to post the results.
Though, this is probably a much heavier-weight solution than just having a sidecar / separate container in a pod polling the service for changes (which sounds closer to your existing model).
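For reference, a very rough sketch of that watcher idea with @kubernetes/client-node, assuming the servers sit behind a Service/Endpoints object named "servers" and the clients expose a hypothetical /reload-servers endpoint:

```typescript
import * as k8s from "@kubernetes/client-node";

const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const watch = new k8s.Watch(kc);

// Watch the Endpoints of the "servers" Service and notify the clients on
// every change (uses the global fetch available in Node 18+).
watch.watch(
  "/api/v1/namespaces/default/endpoints",
  {},
  async (_type: string, obj: k8s.V1Endpoints) => {
    if (obj.metadata?.name !== "servers") return;
    const ips = (obj.subsets ?? []).flatMap((s) => (s.addresses ?? []).map((a) => a.ip));
    // Hypothetical hot-reload endpoint exposed by the clients.
    await fetch("http://clients.default.svc.cluster.local:8080/reload-servers", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ servers: ips }),
    });
  },
  (err) => console.error("watch closed", err),
);
```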
I upvoted Alassane's answer as I think it is the correct first path to something like this before building a CRD.