What is the best practice for service discovery on the application layer in Kubernetes? - kubernetes

I have 3 applications.
A Gateway, ServiceA & ServiceB.
Each application sits in its own namespace. Whenever there is a push to the CI/CD server for any of the three, all get deployed based on the branch name.
Example:
Create a new branch (feature-1) in ServiceA repo.
Make and commit some changes
The build server builds and deploys the feature-1 branch with a unique service name to the Kubernetes cluster.
The build server looks in ServiceB and the Gateway for a feature-1 branch; if none is found, it defaults to develop. For the Gateway it creates a feature-1 branch from develop and deploys that one.
The gateway then needs to know the DNS URL of the ServiceA from feature-1 in order to be able to call it.
So my question is: how do I do service discovery at the application level using Kubernetes?

I think there are two ways to achieve that.
1) Query all services from the Kubernetes API server, with the API equivalent of kubectl get services --all-namespaces. Then you will need some logic for choosing the right service; for this you can use, for example, a label selector, the targetPort, or a specific ClusterIP. More details can be found in the documentation.
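As a sketch of option 1, this is roughly how the query could look with kubectl (the branch label is an assumption; you would filter on whatever label your CI/CD pipeline attaches to each deployed service):

```shell
# List all services across namespaces, the same data the API returns
kubectl get services --all-namespaces

# Narrow the result with a label selector, assuming the build server
# labels each deployed service with its branch name
kubectl get services --all-namespaces -l branch=feature-1

# Or hit the REST API directly (the endpoint kubectl uses under the hood)
kubectl get --raw /api/v1/services
```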
2) Put the application built from each branch into a new namespace, and let the services route within that namespace using their usual names, without requiring any application changes. More information in the documentation.
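With option 2, in-namespace DNS does the discovery for you. Assuming the gateway and a service named servicea are both deployed into a feature-1 namespace (names here are placeholders), the short name resolves within the namespace, while a fully qualified name can still reach the shared develop environment:

```shell
# From a pod in the feature-1 namespace, the short name resolves
# to the service in the same namespace
curl http://servicea/

# A fully qualified name still reaches a specific namespace,
# e.g. a fallback to the shared develop environment
curl http://servicea.develop.svc.cluster.local/
```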

Related

How can I set up workers in Kubernetes (infrastructure question)

I'm using Kubernetes and I would like to set up workers. One of my containers hosts an API using Flask; I have an algorithm in another container (same pod — I don't know if I should leave it there) and other scripts that are also in separate containers.
I want to link all of these: when I receive a request on the API, call the other containers depending on the request and get the result back.
I don't know how to do that with multiple containers and Kubernetes.
I've been using the RQ library for Python to parallelize until now, but that was on Heroku without Kubernetes (I'm migrating to Azure at the moment), and I don't know how it is managed behind the scenes.
Thank you.
Follow the reference below and set up a Kubernetes cluster using kubeadm.
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
Using the 'kubeadm join' command you should be able to add worker nodes to the master; the link above also has the steps for joining workers to the master.
If you are using Azure, you can try exploring AKS. It works out of the box; you just need to configure kubectl and you will be good to go.
Regarding deploying multiple microservices (APIs), you can deploy each microservice as a separate Kubernetes Deployment using kubectl and expose it with a Service. This way they can communicate with each other through the exposed endpoints (APIs) or a message queue.
Here is a quick guide you can refer to: https://dzone.com/articles/quick-guide-to-microservices-with-kubernetes-sprin
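The Deployment-plus-Service pattern above could be sketched like this (deployment names, images, and ports are placeholders for your Flask API and worker containers):

```shell
# Deploy each microservice as its own Deployment
kubectl create deployment api --image=myregistry/flask-api:latest
kubectl create deployment worker --image=myregistry/worker:latest

# Expose each Deployment with a Service so they can reach
# each other by name inside the cluster, e.g. http://worker
kubectl expose deployment api --port=80 --target-port=5000
kubectl expose deployment worker --port=80 --target-port=8000
```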
Typically you should use only one container per pod. Multiple containers per pod are possible, but they are typically used for sidecars, not for additional APIs.
You expose your services using Kubernetes Services; there is no need to run everything on a different port if you don't want to.
A minimal setup for typical web API calls would look something like this (if you expose your API service as a public LoadBalancer, you don't necessarily need an Ingress):
Client -> (Ingress) -> API service -> API deployment pod(s) -> internal services -> deployment pods.
You can access your internal services from within your cluster using http(s)://servicename[:custom-port]
On the other hand, if you simply use Flask to forward API calls to other services, you might want to replace it with an Ingress controller that does all the routing for you.
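For illustration, an Ingress that takes over such path-based routing might look like this (service names and paths are assumptions; the controller must already be installed in the cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-routing
spec:
  rules:
  - http:
      paths:
      - path: /algorithm          # route /algorithm to the algorithm service
        pathType: Prefix
        backend:
          service:
            name: algorithm-svc   # assumed internal service name
            port:
              number: 80
      - path: /                   # everything else goes to the API service
        pathType: Prefix
        backend:
          service:
            name: api-svc
            port:
              number: 80
```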

Kubernetes, Automatic Service fallback to another namespace

I have multiple environments represented by multiple namespaces in my kubernetes.
Each application has its service endpoints defined in each namespace.
We have three environments: dev, alpha, and beta (equivalent to dev, test, and stage). These environments are permanent, which means all the applications are running there.
Now in my team there are a few parallel developments happening, for which we are planning to create multiple environments per release, each containing only the few applications that are part of that release.
Let's take this example: I am building feature1, which impacts app1 and app2.
There are 10 other apps which are not impacted.
So for my development and the parallel testing, the majority of services have to point to the existing alpha or beta environment, and only app1 and app2 live in the new namespace.
I could achieve this by having an ExternalName mapping for all other services.
But if I have more than 100 services, managing the external endpoints in YAML feels very difficult.
Is there any way I can route all the traffic to another namespace (if no service with that name exists in the current one)?
Is there a way to set a global ExternalName for a whole namespace?
As far as I know, it is not possible to redirect traffic to a different namespace based on the existing pods or Services in the current namespace. You can select a destination for a Service only by changing its YAML configuration, and it is only possible to select pods in the same namespace.
You can simplify the deployment procedure by using Helm charts. Helm allows you to put variables into the YAML configuration of Deployments, Services, etc., and use a separate values file to substitute them during installation to the cluster. Here is a link to a blog post on using Helm to deploy to Kubernetes.
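For reference, one such ExternalName entry (names are placeholders) looks like the sketch below; with Helm you could generate one block per service from a list in your values file instead of maintaining 100 of them by hand:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app3                # same name the applications already use
  namespace: feature1       # your release namespace
spec:
  type: ExternalName
  # resolve the local name to the permanent environment's service
  externalName: app3.alpha.svc.cluster.local
```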

What is the general approach to structuring or modeling Istio policies around services in a repository?

Currently our GKE cluster consists of multiple services running in different namespaces, and the services communicate with each other too. We're using:
each service on a different git repo
each service repo containing: the source code, its Helm chart defining the app deployment and the infra surrounding it (Service, Istio Ingress/Egress Gateway, etc.), and its own CI/CD (Jenkinsfile).
Right now I also want to incorporate Istio security policies to enforce the security of svc-to-svc communication. I understand the basic concept of it. Now my question is which repo each service policy should go in.
For example: given that I have Service A (client) communicating with Service B (server). Istio has 3 different kinds of policy enforcement:
mesh wide policy
namespace wide policy
service specific policy
Since our GKE cluster is still in an early stage of using Istio and I want to spend little effort on central governance, I prefer to adopt service-specific policies, so each service owner can govern their policy too.
I am thinking of putting the Policy (service-specific policy) in each service repo that acts as a server. The reasoning behind this is that a Policy enforces the incoming traffic policy on the service (not the outgoing).
But what about the DestinationRule? From the article Istio provides here:
"To configure the client side, you need to set destination rules to use mutual TLS."
From the quote above, my understanding is that the DestinationRule is what enforces the client side (which has the Istio sidecar container). So the DestinationRule should be put in the client service repo (in the given case, the Service A repo).
But of course, on the server side (Service B repo), the team also wants to have certain load balancing and traffic splitting mechanisms (canary, stable, versioning, etc.), which can only be defined by a VirtualService and a DestinationRule.
Any thoughts about this? Does anyone have a general pattern/approach for designing these policy manifests (Istio YAML files) around services in a repository?
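To make the split concrete, a service-specific mTLS pair as described above might look like this sketch (Istio 1.x-era APIs; hosts and names are assumptions) — the Policy living in Service B's repo, the DestinationRule in Service A's repo:

```yaml
# Server side (Service B repo): require mTLS on incoming traffic
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: service-b-mtls
spec:
  targets:
  - name: service-b
  peers:
  - mtls: {}
---
# Client side (Service A repo): originate mTLS when calling Service B
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service-b-client
spec:
  host: service-b.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
```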

Create an ExternalName service to point to a route in another project in OpenShift

We are using the OpenShift V3 Enterprise product.
I would like to create an ExternalName-type service called serviceA in ProjectA, which will point to a route in ProjectB. Then I will create another route in ProjectA which points to the serviceA service.
Is this possible to do?
Thanks!!!
You don't need to involve a route; you can use the service name directly to connect to it. The only caveat is that you need to (as admin) set up a pod network between the two projects. This is better, as creating a route means the service will also be exposed outside of the OpenShift cluster and so be publicly accessible. You do not want that if these are internal services that you don't want exposed.
For details on pod networks see:
https://docs.openshift.com/container-platform/latest/admin_guide/managing_networking.html#admin-guide-pod-network
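Joining the two projects' pod networks is a one-off admin command (project names below are placeholders):

```shell
# As cluster admin, merge ProjectB's pod network with ProjectA's
oc adm pod-network join-projects --to=projectA projectB

# Pods in ProjectA can then reach the service in ProjectB directly,
# e.g. via <service-name>.projectB.svc.cluster.local
```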

Openshift deploy backend and frontend separately

I have an app on OpenShift. Currently the backend and frontend are mixed up in a single WAR archive.
However, I want to separate the frontend and backend into 2 projects and deploy them separately on the same node.
I haven't found anything in the OpenShift docs or on Google.
Is there any way to accomplish that, or do I need to deploy the backend and frontend separately onto 2 nodes?
You can use what is called a "node selector" to ensure a given object, in your case a deployment, is placed onto a specific node.
This behavior is outlined in the documentation.
As per your request, this would allow you to split up your deployment but still have it land on the same node. However, it is not a pattern I would recommend: relying on a specific node to exist is not very failure tolerant. Your frontend and backend should instead communicate via a Service, for example. This way, it doesn't matter where on your cluster the frontend or backend run.
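If you do pin both deployments to one node anyway, a nodeSelector on the pod template is the mechanism (the hostname label below is an assumption — any label present on the target node works):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      nodeSelector:
        kubernetes.io/hostname: node-1   # pins the pod to this node
      containers:
      - name: frontend
        image: myregistry/frontend:latest  # placeholder image
```

The recommended alternative is to drop the nodeSelector entirely and let the frontend call the backend through a Service (e.g. http://backend), which works regardless of node placement.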