I'm looking for a way of managing multiple projects with a common tech stack: nginx, php-fpm, mysql.
It will be a managed service provided by my company, which means customers won't deal with the cluster internals. Customers choose a plan so they can have more or fewer resources reserved. Think of it as a service like wordpress.com or ghost.io.
When a customer signs up, we reserve a set of nodes for them. The goal is that any customer can use resources left unused by other customers.
First attempt: namespaces per customer
namespace customer1:
nginx deploy and service
php-fpm deploy and service
mysql deploy and service
namespace customer2:
nginx deploy and service
php-fpm deploy and service
mysql deploy and service
But I think this division is too rigid to allow sharing unused resources between customers.
Second attempt: shared namespace, custom names per resource
namespace hive:
customer1.nginx deploy and service
customer1.php-fpm deploy and service
customer1.mysql deploy and service
customer2.nginx deploy and service
customer2.php-fpm deploy and service
customer2.mysql deploy and service
It looks better to me, but I think resources are still too tightly coupled to the customer.
The only things that define a project are a domain, a source code directory and a database (I'll deal with logs and other stuff later).
Are there any other approaches to think of the cluster as a kind of "compute fog"?
I think namespaces are the right tool for what you want. All namespaces can share the same physical resources; unless you reserve nodes for each namespace, there is no rigid resource division between namespaces in general usage.
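If you need per-plan guarantees on top of that, you can keep everything on shared nodes and cap each customer's namespace with a ResourceQuota. A minimal sketch, assuming a hypothetical customer1 namespace and made-up plan numbers:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: plan-quota
  namespace: customer1        # hypothetical per-customer namespace
spec:
  hard:
    requests.cpu: "2"         # roughly what the plan reserves
    requests.memory: 4Gi
    limits.cpu: "4"           # burst ceiling above the reservation
    limits.memory: 8Gi

Keeping requests below limits is what lets one customer's pods borrow capacity that another customer is not using at the moment.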
Related
I'm trying to figure out ways to automate k8s deployments in an EKS cluster. I'm trying to set up namespaces for each specific environment: one for dev, one for staging, and one for production. My production namespace is in a separate region and also in a separate cluster (dev & staging are in one cluster). I'm a little new to this concept, but does it make sense to have each application load balancer in its respective namespace? Is that common or best practice? Any ideas on automating deployments would be appreciated.
Hi Dave Michaels,
I assume there are two questions in your post above:
If we use a dedicated namespace in the same cluster (dev & staging setup), can we use a dedicated load balancer for each of these namespaces? Is this good practice?
Answer: Yes. As you are using the namespace concept for each environment in the same cluster, it is OK to create a dedicated load balancer (promise me you will use ingress :)) in each of these namespaces, as we need an easier way to access those environments. To be frank, I am not a fan of using namespaces for environments, because as your cluster grows and lots of microservices get added to it, you might want to use namespaces for another reason, e.g. a namespace per team or domain to get granular access rights. But I have seen teams use them for different environments successfully as well.
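As a sketch of the per-environment ingress idea (hostnames and service names here are just placeholders), each environment namespace gets its own Ingress object and the controller routes by host:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp
  namespace: staging              # one Ingress per environment namespace
spec:
  rules:
  - host: staging.example.com     # hypothetical per-environment hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp          # Service in the same namespace
            port:
              number: 80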
Can you suggest possibilities for automated Kubernetes deployments?
This is a large topic by itself.
As your microservices grow, you will have multiple Kubernetes manifests to handle. The first thing I would suggest is to use either a configuration manager like Kustomize or a package manager like Helm to separate variables from the actual manifests; this will help you automate deployments across environments (same cluster or different clusters). Coming to the actual deployment automation, if there is no existing CD in place I would suggest exploring tools that natively support Kubernetes and GitOps, like FluxCD or ArgoCD.
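For example, a common Kustomize layout keeps the shared manifests in a base and only the per-environment differences in overlays; a rough sketch with illustrative paths and names:

# overlays/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: staging              # everything in this overlay lands in the staging namespace
resources:
- ../../base                    # shared Deployments, Services, Ingress
patches:
- path: replicas-patch.yaml     # e.g. fewer replicas than production
images:
- name: myapp                   # hypothetical image name
  newTag: v1.4.2-rc1            # environment-specific tag

kubectl apply -k overlays/staging then deploys that environment, and a GitOps tool like ArgoCD or FluxCD can simply be pointed at the same overlay path.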
I am a newbie in Kubernetes.
I have 19 LAN servers with 190 machines.
Each of the 19 LANs has 10 machines and 1 exposed IP.
I have different websites/apps and their environments that are assigned to each LAN.
How do I manage my Kubernetes cluster and do setup/housekeeping?
I would like to have a single portal or manager to manage the websites and environments (dev, QA, prod) and keep them isolated.
Is that possible?
I only got a vague idea of what you want to achieve so here goes nothing.
Since Kubernetes has a lot of convenience tools for setting up a cluster on a public cloud platform, I'd suggest starting by going through "kubernetes-the-hard-way". It is a guide to setting up a cluster on Google Cloud Platform without any additional scripts or tools, but the instructions can be applied to a local setup as well.
Once you have an operational cluster, the next step should be to set up an ingress controller. This gives you the ability to use one or more exposed machines (with public IPs) as gateways for the services running in the cluster. I'd personally recommend Traefik; it has great support for HTTP and Kubernetes.
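Since only one machine per LAN has an exposed IP, the ingress controller itself is usually published through a Service reachable on node ports; a rough sketch (labels, namespace and ports are assumptions, not Traefik defaults):

apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: ingress
spec:
  type: NodePort              # reachable on every node's IP, including the exposed one
  selector:
    app: traefik              # assumes the Traefik pods carry this label
  ports:
  - name: web
    port: 80
    targetPort: 80
    nodePort: 30080           # forward the public IP's port 80 to this NodePort

On the machine with the public IP you would then forward ports 80/443 to that NodePort (or run the controller with hostNetwork on that node).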
Once you have the ingress controller set up, your cluster is pretty much ready to use. The process for deploying a service is really specific to the service's requirements, but the rule of thumb is to use a Deployment and a Service for stateless workloads, and a StatefulSet and a headless Service for stateful workloads that need peer discovery. This is obviously overgeneralized and has many exceptions.
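For the stateless case, the pair usually looks something like this (application name, image and ports are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: example/webapp:1.0     # placeholder image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  selector:
    app: webapp                       # routes to the Deployment's pods
  ports:
  - port: 80
    targetPort: 8080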
For managing different environments, you could split your resources into different namespaces.
As for the single portal to manage it all, I don't think that anything as such exists, but I might be wrong. Besides, depending on your workflow, you can create your own portal using the Kubernetes API but it requires a good understanding of Kubernetes itself.
We have several microservices (NodeJS-based applications) which need to communicate with each other, and two of them use Redis and PostgreSQL. Below are the names of my microservices. Each of them has its own SCM repository and Helm chart. The Helm version is 3.0.1. We have two environments and two values.yaml files, one per environment. We also have three nodes per cluster.
First of all, after an end user's action, the UI service is triggered and the request goes to the Backend. Depending on the end user's request, the Backend service needs to communicate with any of the Market, Auth and API services. In some cases the API and Market microservices need to communicate with the Auth microservice as well.
UI -->
  Backend -->
    Market --> uses PostgreSQL
    Auth   --> uses Redis
    API
So my questions are,
What should we take care of so the microservices can communicate with each other? Is the DNS name my-svc.my-namespace.svc.cluster.local enough to give to developers, or should we also specify it as an ENV variable in each pod?
Our microservices are NodeJS applications. How will developers handle this in the application source code? Should they use the service name, if the answer to the first question is yes?
We'd like to expose our application via ingress, using one host per environment. I guess ingress should only be enabled for the UI microservice, am I correct?
What is the best way to test that each service can communicate with the others?
kubectl get svc --all-namespaces
NAMESPACE   NAME                          TYPE
database    my-postgres-postgresql-helm   ClusterIP
dev         my-ui-dev                     ClusterIP
dev         my-backend-dev                ClusterIP
dev         my-auth-dev                   ClusterIP
dev         my-api-dev                    ClusterIP
dev         my-market-dev                 ClusterIP
dev         redis-master                  ClusterIP
ingress     ingress-traefik               NodePort
Two ways to perform Service Discovery in K8S
There are two ways to perform communication (service discovery) within a Kubernetes cluster.
Environment variable
DNS
DNS is the simplest way to achieve service discovery within the cluster.
And it does not require any additional ENV variable setting for each pod.
At its simplest, a service within the same namespace is accessible by its service name, e.g. http://my-api-dev:PORT is reachable from all the pods within the dev namespace.
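Cross-namespace calls just use the longer form service.namespace.svc.cluster.local. As an illustration of how this typically ends up in a manifest (the env variable names and ports are assumptions about your apps' configuration; the service names are taken from your kubectl output):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-backend-dev
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-backend
  template:
    metadata:
      labels:
        app: my-backend
    spec:
      containers:
      - name: my-backend
        image: example/my-backend:1.0     # placeholder image
        env:
        - name: AUTH_URL                  # same namespace: the short service name is enough
          value: http://my-auth-dev:3000
        - name: DATABASE_HOST             # different namespace: use the full DNS form
          value: my-postgres-postgresql-helm.database.svc.cluster.local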
Standard Application Name and K8s Service Name
As a practice, you can give each application a standard name, e.g. my-ui, my-backend, my-api, etc., and use the same name to connect to the application.
That practice can even be applied when testing locally from a developer environment, with an entry in /etc/hosts such as
127.0.0.1 my-ui my-backend my-api
(The above has nothing to do with k8s; it is just a practice for letting applications communicate by name in local environments.)
Also, on k8s, you may give the service the same name as the application (try to avoid suffixes like -dev in the service name to reflect the environment (dev, test, prod, etc.); use a namespace or a separate cluster for that instead). That way, target application endpoints can be configured with their service name in each application's configuration file.
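So instead of my-api-dev, the Service would just be my-api in every environment and the namespace carries the environment; a small sketch (the port is assumed):

apiVersion: v1
kind: Service
metadata:
  name: my-api          # same name in dev, test and prod...
  namespace: dev        # ...only the namespace differs
spec:
  selector:
    app: my-api
  ports:
  - port: 3000          # assumed application port
    targetPort: 3000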
Ingress is for services with external access
Ingress should only be enabled for services which require external access.
Custom Health Check Endpoints
Also, it is a good practice to have a custom health check that verifies all the applications you depend on are running fine, which will also verify that communication between the applications is working.
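In Kubernetes terms such a health check usually ends up wired into a readiness probe, so a pod only receives traffic while its dependencies respond; a sketch assuming a hypothetical /healthz endpoint (in practice this lives in the Deployment's pod template):

apiVersion: v1
kind: Pod
metadata:
  name: my-backend-probe-demo
spec:
  containers:
  - name: my-backend
    image: example/my-backend:1.0   # placeholder image
    ports:
    - containerPort: 3000
    readinessProbe:
      httpGet:
        path: /healthz              # assumed endpoint that also checks Redis/PostgreSQL/peer services
        port: 3000
      initialDelaySeconds: 5
      periodSeconds: 10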
I have multiple environments represented by multiple namespaces in my kubernetes.
Every application has its service endpoints defined in each namespace.
We have three environments: dev, alpha, and beta (equivalent to dev, test, and stage). These environments are permanent, which means all the applications are running there.
Now in my team there are a few parallel development efforts happening, for which we are planning to create multiple environments per release; each will contain only the few applications that are part of that release.
Let's take this example: I am building feature1, which impacts app1 and app2.
There are 10 other apps which are not impacted.
So for my development and the parallel testing, the majority of services have to point to the existing alpha or beta environment, and only app1 and app2 should run in the new namespace.
I could achieve this by having an ExternalName mapping for all other services.
But if I have more than 100 services, managing the external endpoints in a YAML file feels very difficult.
Is there any way I can route all the traffic to another namespace if no service with that name exists in the current one?
Is there something like a global ExternalName for a namespace?
As far as I know, it is not possible to redirect traffic to a different namespace based on whether pods or Services exist in the current namespace. You can change a Service's destination only by changing its YAML configuration, and a Service's selector can only match pods in the same namespace.
You can simplify the deployment procedure by using Helm charts. Helm allows you to put variables into the YAML configuration of Deployments, Services, etc., and use a separate values file to substitute them during installation into the cluster. Here is a link to a blog post on Using Helm to deploy to Kubernetes.
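For the ExternalName idea specifically, a Helm template can stamp out one passthrough Service per upstream app, with the per-release values file listing only the apps that stay in the shared environment (say featureNamespace: feature1 and passthroughServices: [app3, app4, app5]); a rough sketch with made-up value and file names:

# templates/passthrough-services.yaml
{{- range .Values.passthroughServices }}
apiVersion: v1
kind: Service
metadata:
  name: {{ . }}
  namespace: {{ $.Values.featureNamespace }}
spec:
  type: ExternalName
  # resolve the local name to the same service in the shared alpha namespace
  externalName: {{ . }}.alpha.svc.cluster.local
---
{{- end }}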
I have multiple teams, each team has a bunch of applications, and each application has different environments (DEV, STAGE, PROD). I am looking for a way to limit access using namespaces.
Say, each team will have their own namespace.
I don't want an application deployed in namespace A to access ConfigMaps from namespace B. But I do want applications deployed in namespace A to access REST applications deployed in namespace B (either through ingress or services).
Also, dev applications should not have visibility into STAGE applications.
But there are a few applications which will serve requests for both DEV and STAGE based on a tenantId in the request header.
What is the recommendation for creating namespace here?
Thanks
Namespaces automatically separate resources in the cluster. So if you create namespaces A and B, a ConfigMap created in namespace A will automatically be unavailable in namespace B.
If you want to restrict what users can do in your cluster, for example if you would like developers to be able to create resources in development but only view things in staging or production, I would take a look at using RBAC.
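A sketch of the RBAC part, giving a hypothetical developers group read-only access in a stage namespace (the group name would come from your identity provider):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-only
  namespace: stage
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "configmaps", "deployments"]
  verbs: ["get", "list", "watch"]       # view only, no create/update/delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developers-read-only
  namespace: stage
subjects:
- kind: Group
  name: developers                      # assumed group name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: read-only
  apiGroup: rbac.authorization.k8s.io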
If you want to separate access to and from applications at the network layer, I would suggest taking a look at Network Policies. For that you would need a networking solution that supports them, for example Project Calico.
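As a sketch of the NetworkPolicy part (the namespace labels are assumptions): allow pods in team B's namespace to receive traffic from team A's namespace while everything else stays blocked.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-team-a
  namespace: team-b              # applies to all pods in team B's namespace
spec:
  podSelector: {}                # select every pod in this namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          team: a                # assumes team A's namespace is labeled team=a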