I am working on migrating an existing AWS, Spring Boot-based system with 50+ independent repositories to Kubernetes. I am preparing a file containing naming conventions for artifacts, Docker images, and Kubernetes resources (e.g. Services, Deployments, ConfigMaps, Secrets, Ingresses, labels, etc.) to streamline the process. I am in a dilemma over whether to use a single file or separate files for defining the Kubernetes resources. I know both will work; however, I am inclined towards separate resource files for better version control and modularity.
I would appreciate your feedback on which should be preferred: a single file for all k8s resources, or a separate k8s specification file for each resource?
Go for separate resource files: they make the resources easier to manage while also keeping things modular. Also, Helm charts are now the preferred way to do most Kubernetes deployments, and they give you a better way to manage the resource files.
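For example, a layout with one directory per service and one file per resource might look like this (the service names are purely illustrative):

```
k8s/
  payment-service/
    deployment.yaml
    service.yaml
    configmap.yaml
    ingress.yaml
  order-service/
    deployment.yaml
    service.yaml
```

Each file can then be versioned, reviewed, and applied independently.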
Related
Kustomize secrets seem to work fine in a mono-repo scenario with all the deployment config together. How does one deal with microservices where each component is in its own repo? I could move the manifests together in a devops repo, but seems odd to separate the manifest from the respective component.
It depends a lot on how you manage your configuration. In my case, all my service repositories are basically bases (in Kustomize parlance). I don't include any Secrets in them.
My overall production or testing environment is an overlay that includes all the bases or overlays it needs; in this case, the bases and overlays are my services. I include the Secrets directly in my environment overlay.
At this point you have probably realized that you need a way to specify your Secret names, or some placeholder, in your bases or service repositories. There are a few solutions:
you could just patch every resource that references your Secrets, but that's a lot of work;
you can define a naming convention for your Secrets, so you know in advance what each Secret's name will be (that's the way I usually go about it).
If you use the Kustomize secret generator, you'll pretty much be stuck with the second solution, along the lines of the sketch below.
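As a rough sketch of that naming convention (the repository URL, Secret name, and literal value are all hypothetical), an environment overlay could pull in a service repository as a remote base and generate the Secret it expects:

```yaml
# env/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  # a service repo used as a remote base
  - github.com/acme/billing-service//deploy?ref=v1.2.0

secretGenerator:
  - name: billing-db          # the base references this exact name by convention
    literals:
      - password=changeme

generatorOptions:
  disableNameSuffixHash: true # keep the name stable so the convention holds
```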
We have a bunch of services that run off of the same Docker image: some long-running services, some cron jobs, and a webservice.
I'm wondering what the current best practice here is. I essentially want some basic templating for reusing an image and its config, keeping all of them at the same revision (so sirensoftitan-image:{gitsha1hash} is used without gitsha1hash being repeated everywhere).
Should I be using a helm chart? Kustomize? Some other type of yaml templating? I want something light with as little added complexity as possible.
I found Helm charts heavy compared to Kustomize. Give Kustomize a try; it is very simple and easy to use.
You can deploy the same template to different environments by adding new labels and prefixing the Deployment object names with an environment value, which gives you a unique naming convention per environment.
Moreover, it uses the YAML format, which makes it easy to learn and adopt.
All custom configuration goes into one YAML file, unlike Helm, where you manage multiple files. I personally like Kustomize because it is simple and flexible and, not least, comes from the Google community. Give it a try.
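As a minimal sketch of how this addresses the single-revision concern (the resource file names are assumptions; the image name is taken from the question), a per-environment kustomization.yaml can prefix names, add labels, and pin the image tag in one place:

```yaml
# kustomization.yaml for the dev environment
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namePrefix: dev-            # unique resource names per environment
commonLabels:
  environment: dev

resources:
  - deployment.yaml
  - cronjob.yaml
  - service.yaml

images:
  - name: sirensoftitan-image
    newTag: 3f8a9c1         # the git SHA, set once for every workload
```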
Say that I have 5 APIs that I want to deploy in a Kubernetes cluster. My question is simply: what is the best practice for storing the YAML files related to Kubernetes?
In projects I've seen online, the Kubernetes YAML files are just added to the API project itself. I wonder if it makes sense to decouple all Kubernetes-related files into an entirely separate "project" that is managed in VCS as a completely separate entity from the API projects themselves.
This question arises because I'm currently reading a book about Kubernetes and, on the topic of namespaces, considered that it might be a good idea to have separate namespaces per environment (DEV / UAT / PROD); it may then make sense to keep these files in a centralized "Kubernetes" project (unless it would be better to have a separate cluster per environment?).
Whether to put the YAML in the same repo as the app is a question that projects answer in different ways. You might want to put them together if you find that you often change both at the same time, or if you simply find it clearer to see everything in one place. You might separate them if you mostly work on the YAML separately, if you find it less cluttered, or if you want different visibility for it (e.g. different teams looking at it). If things get more sophisticated, you'll actually want to generate the YAML from templates and inject environment-specific configuration into it at deploy time (whether those environments are namespaces or clusters); see Best practices for storing kubernetes configuration in source control for more discussion on this.
From production k8s experience with CI/CD:
One cluster per environment, such as dev, stage, prod (optionally one per data centre)
One namespace per project
One git deployment repo per project
One branch in the git deployment repo per environment
Use ConfigMaps for non-secret configuration aspects (see the sketch after this list)
Use a secret-management solution to store and consume secrets
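As a minimal sketch of the ConfigMap point (the project, names, and keys are all hypothetical), a namespaced Deployment can pull its non-secret settings from a ConfigMap via envFrom:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: billing-config
  namespace: billing            # one namespace per project
data:
  DB_HOST: db.dev.internal
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: billing-api
  namespace: billing
spec:
  replicas: 1
  selector:
    matchLabels:
      app: billing-api
  template:
    metadata:
      labels:
        app: billing-api
    spec:
      containers:
        - name: billing-api
          image: billing-api:1.0.0
          envFrom:
            - configMapRef:
                name: billing-config   # all keys become environment variables
```

Secrets would be injected the same way via secretRef, but sourced from your secret-management solution rather than committed to the repo.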
I would like to know how you manage your services with Helm.
I already know that we are going to have more than 200 microservices. How can we manage them easily?
Each microservice with its own YAML files (deployment, service, ingress, values, etc.),
or several large YAML files (deployment, ingress, etc.) for all microservices, where I push a values YAML file with the specific params for each application?
I'd suggest aiming for an umbrella chart that includes lots of subcharts for the individual services. You can deploy each chart individually but using a single umbrella makes it easier to deploy the whole setup consistently to different environments.
Perhaps some microservices will be similar enough that you could use the same chart for them with different parameters (maybe including a Docker image parameter), but you'll have to work through them to see whether you can do that. You can include the same chart as a dependency multiple times within an umbrella chart to represent different services.
Ideally you also want a chart for a service to be individually deployable, so you can deploy and check that service in isolation. To do this you would give each chart its own resources, including its own Ingress. But you might decide that for the umbrella chart you prefer to disable the Ingresses in the subcharts and put in a single fan-out Ingress for everything; that comes down to what works best for you.
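As a sketch of the umbrella approach (the chart names, versions, and paths are hypothetical, and this assumes the Helm 3 Chart.yaml format), a single generic service chart can be included multiple times under different aliases:

```yaml
# Chart.yaml of the umbrella chart
apiVersion: v2
name: platform
version: 0.1.0
dependencies:
  - name: microservice              # a generic service chart...
    version: 0.1.0
    repository: file://../microservice
    alias: orders                   # ...reused under one alias
  - name: microservice
    version: 0.1.0
    repository: file://../microservice
    alias: payments                 # ...and again under another
    condition: payments.enabled     # lets you switch a subchart on/off per environment
```

Each alias then gets its own section in the umbrella's values file, so the two instances can diverge in image, replicas, ingress settings, and so on.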
Is there a way to create a tree of inheritance for Kubernetes deployments? I have a number of deployments which are similar but not identical. They share many ENV vars but not all. They all use the same image.
For example, I have a dev deployment that is configured almost identically to a production deployment, but has env vars pointing to a different database backend. I have a celery deployment that is configured the same as the production deployment; however, it has a different run command.
Helm is what many people are using for this. It lets you create templates for Kubernetes descriptors and pass parameters in to generate descriptors from the templates.
There are other tools out there that can be used to generate variations on Kubernetes deployment descriptors by injecting parameters into templates; Ansible is also popular. But Helm is closely connected with Kubernetes through the CNCF and its community, and there's a good selection of official charts available.
EDIT: If the aim is to enable different deployments (e.g. for dev and prod) using a single Docker image, then that's probably best handled with a single chart. You can create a different values file for each deployment and supply the chosen values file to helm install with the --values parameter. If there are parts of the chart that are only sometimes applicable, they can be wrapped in if conditions to turn them on/off, as in the sketch below.
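As a minimal sketch (the values keys celery.enabled, image.*, and databaseUrl are hypothetical), a template that is only rendered for some deployments could look like:

```yaml
# templates/celery-deployment.yaml
{{- if .Values.celery.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-celery
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Release.Name }}-celery
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-celery
    spec:
      containers:
        - name: celery
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          command: ["celery", "-A", "app", "worker"]  # the deployment-specific run command
          env:
            - name: DATABASE_URL
              value: {{ .Values.databaseUrl | quote }}
{{- end }}
```

Each environment then supplies its own values file, e.g. helm install myapp ./mychart --values values-dev.yaml (Helm 3 syntax).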
On the subject of inheritance specifically, there's an example in the Helm documentation of how to take another chart as a parent/dependency and override its values, and I created a chart earlier, which you can see on GitHub, that includes several other charts and overrides parts of all of them via the values.yml. It also shares some config between the included charts with globals. If you're looking to use a parent to reduce duplication rather than to join multiple apps, then it is possible to create a base/wrapper chart, but it may turn out to be better to just duplicate config.
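As a rough sketch of what such a values file can do (the chart and key names are hypothetical), the parent can override each subchart under its own key and share config under global:

```yaml
# values.yml of the parent/umbrella chart
global:
  imageRegistry: registry.example.com  # visible to every subchart as .Values.global.imageRegistry
orders:
  replicaCount: 2                      # overrides a default inside the "orders" subchart
payments:
  enabled: true
  ingress:
    enabled: false                     # e.g. switch off the subchart's own Ingress
```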
EDIT (180119): An alternative, Kustomize, may soon become available directly in kubectl.
You may also want to check out Kustomize. It provides some support for writing your YAML manifests in hierarchical form, so that you don't have to repeat yourself.
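As a minimal sketch of that hierarchy (the directory layout and patch file are hypothetical, and the base is assumed to contain the shared Deployment and Service), an overlay inherits the base and overrides only what differs:

```yaml
# overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base              # inherit everything defined in the shared base
patchesStrategicMerge:
  - env-patch.yaml          # override only the env vars that differ in dev
```

Running kustomize build overlays/dev then emits the merged manifests.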