Separate log sources for different namespaces for the same application name - Kubernetes

There is an AWS EKS cluster, and the datadog-agent is installed as described in the guideline.
I have a deployment "myapp" in two namespaces - "dev" and "qa".
But Datadog merges the logs from these two apps, and I can see only "myapp" under sources at https://datadoghq.eu/logs.
Is there a way to get the logs for each namespace separately?

You can simply try changing the name of the application in each namespace (since you are separating them anyway). This will make them appear as different sources.
If that is not an option, you can use tags to filter the incoming logs by namespace. You can read more about tags here.
That said, the merging is actually a useful feature for situations where you deploy an application spanning multiple namespaces and want aggregated logs, so it can be valuable as well.
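If renaming works for you, one way to do it without changing the application itself is to set a namespace-specific log source through Datadog's autodiscovery annotation on the pod template. A minimal sketch for the "dev" namespace (the image and source values are hypothetical):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        # Makes the Datadog agent report logs from this container under a
        # namespace-specific source instead of the shared "myapp"
        ad.datadoghq.com/myapp.logs: '[{"source":"myapp-dev","service":"myapp"}]'
    spec:
      containers:
        - name: myapp
          image: myorg/myapp:latest   # hypothetical image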

Related

Migrating resources from one OpenShift cluster to another

I have an OpenShift cluster and I want to move its resources to another cluster,
e.g. I have 40 Secrets, 20 ConfigMaps, and some other resources such as deployment configs and more.
Moving these Secrets and ConfigMaps manually is mind-numbingly tedious.
What is the best approach?
I would recommend trying out Monokle's Compare & Sync feature.
It allows you to visually compare the resources of two clusters and deploy resources from one to the other.
You can read more about how this works in the docs.
OpenShift has an "official" process for this called "Migration Toolkit for Containers (MTC)":
https://docs.openshift.com/container-platform/4.12/migration_toolkit_for_containers/about-mtc.html
Velero is also a great tool for your scenario. You can back up your namespaces with granularity over which objects are included, and restore them elsewhere with or without changes:
https://velero.io/docs/v1.10/migration-case/
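A minimal sketch of that flow with the Velero CLI (the backup name and namespace are hypothetical):
# On the source cluster: back up a single namespace
velero backup create myapp-backup --include-namespaces myapp
# On the target cluster, pointed at the same object storage: restore it
velero restore create --from-backup myapp-backup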
Follow these steps:
move secrets and config maps
move deployments
move services
move routes
As an example of how to carry out each step mentioned above, do the following for each of them:
1 - Login to the first cluster:
oc login --token="your-token-for-first-server" --server="your-first-server"
2 - Export your resources:
oc get -o yaml cm > configmaps.yaml
oc get -o yaml secrets > secrets.yaml
...
There are also some default ConfigMaps and Secrets which you don't need to copy; you can remove them from the files after exporting.
3 - Login to the second cluster:
oc login --token="your-token-for-second-server" --server="your-second-server"
If you forget this step, you may get an error saying the resource already exists, because you would still be applying to the first cluster - so be careful not to skip it.
4 - Load the resources into the second cluster:
oc create -f configmaps.yaml
oc create -f secrets.yaml
...
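Putting the login, export, and load steps together, a minimal sketch of the whole flow as a script (the tokens and server URLs are placeholders, and only ConfigMaps and Secrets are covered here):
# Export from the first cluster
oc login --token="your-token-for-first-server" --server="your-first-server"
for kind in cm secrets; do
  oc get -o yaml "$kind" > "$kind.yaml"
done

# Load into the second cluster
oc login --token="your-token-for-second-server" --server="your-second-server"
for kind in cm secrets; do
  oc create -f "$kind.yaml"
done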
There might be easier ways too, and there is a lot of information about this that is beyond my knowledge.
There are also some considerations you need to be aware of:
You may not need to move Pods; usually they are created and controlled by other resources such as deployment configs.
In some companies, databases are managed completely separately by DBA teams, so you may not need to change anything; but if your database lives within your cluster, you should consider moving its PV.
Using Helm charts or OpenShift templates can make this kind of task much easier.
You can include the templates in your GitLab CI/CD pipelines, change only the cluster URL, and everything is redeployed and up and running.
In the end, if you are migrating from version 3 to 4, this article might be helpful.

How to distribute n different configs to exactly n pods

I have a containerized daemon that I need to run one instance of for every thing. Each thing has a unique set of configs associated with it, but the container image is the same. The configs can be set simply as environment variables. I have a list of the configs, and I need to define the desired state as having exactly 1 pod running for each thing. What is the appropriate way to construct this in Kubernetes with or without Helm?
My understanding is that ReplicaSets and Deployments work on identical containers; in other words, they would all be spun up with the same environment variables? I understand that a StatefulSet may be able to represent this, but the daemons do not really need to hold state: they do not need persistent storage and can be killed at will, as long as another with the same configs comes up soon afterwards.
One clue I was given by somebody was to use Helmfile or Helm partials. That is the extent of what they told me. I have not yet investigated whether those are appropriate or not.
You are correct in saying that Deployments and ReplicaSets run identical containers, so the way I see it you have two options:
Deploy multiple deployments with different configs defined in the values file:
You can see an example here, where multiple configs are set in the values file and {{ range }} is used to iterate and create multiple deployments; a sketch of the pattern follows.
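A minimal sketch of this pattern, with a hypothetical values structure and names:
# values.yaml
daemons:
  - name: thing-a
    thingId: "a"
  - name: thing-b
    thingId: "b"

# templates/deployments.yaml - renders one single-replica Deployment per entry
{{- range .Values.daemons }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .name }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .name }}
  template:
    metadata:
      labels:
        app: {{ .name }}
    spec:
      containers:
        - name: daemon
          image: myorg/daemon:latest   # hypothetical image, identical for every thing
          env:
            - name: THING_ID           # hypothetical variable the daemon reads
              value: {{ .thingId | quote }}
{{- end }}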
Iterate over your configuration names/files using a scripting language of your choice and create a separate release for each configuration via the command line, for example: --set configName= (a sketch is shown below)
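For illustration, a minimal loop sketch for this option (the chart path, release prefix, and config names are hypothetical):
# One Helm release per config name
for name in thing-a thing-b thing-c; do
  helm upgrade --install "daemon-$name" ./daemon-chart --set configName="$name"
done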
Personally, I would go with the second option, since with multiple Helm releases you can use the Helm CLI to better understand what is running and its state. Also, any CRUD action you would like to perform would be less dangerous, since the deployments are decoupled.

Change the Spring Boot Admin registry unique ID

I have a requirement where my client applications have almost the same properties, and even the URL is the same since they are running behind a load balancer; the only difference is a particular set of environment properties.
Is it possible to register them uniquely based on that property?
I would say there are a few approaches.
One would be loading Environment Variables from a Kubernetes Secret.
The second would be using Helm (https://helm.sh/):
Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.
Charts are easy to create, version, share, and publish — so start using Helm and stop the copy-and-paste.
Explanation:
If you use the secret option, you would probably create two separate Secrets with the env variables that you need, and load them based on the app name; if the apps are set up in different namespaces, copy the Secret over to each one, as these resources do not work across namespaces.
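A minimal sketch of the secret option (the names and the distinguishing variable are hypothetical):
# Secret holding the environment-specific property
apiVersion: v1
kind: Secret
metadata:
  name: myapp-env
stringData:
  INSTANCE_NAME: myapp-dev   # the property that differs per client
---
# Pod loading all keys of the Secret as environment variables
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: myorg/myapp:latest   # hypothetical image
      envFrom:
        - secretRef:
            name: myapp-env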
If you use Helm, you will have to write your chart and put the env variables into values.yaml, or mix both approaches and load the Secret from inside Kubernetes.
This will work on Kubernetes; I do not know (based on your tags) if it's the same on OpenShift.
Please provide some samples of what you have already done and I'll provide more details.

What are the benefits of Helm?

Okay, you can easily install an application, but where is the benefit compared to plain .yaml files from Kubernetes?
Can someone give me an example where it is useful to use Helm and why plain Kubernetes is not sufficient?
Also a comparison of Helm and plain Kubernetes would be nice.
With Helm, a set of resources (read as Kubernetes manifests) logically define a release - and you need to treat this group of resources as a single unit.
A simple example of why this is necessary: imagine an application bundle that has, let's say, 10 Kubernetes objects in total. On the next release, due to changes in the app, 1 of the resources is no longer needed, so there are 9 objects in total. How would I roll out this new release? If I simply do kubectl apply -f new_release/, that wouldn't take care of deleting the 1 resource that is no longer needed. This means I cannot roll out upgrades without manual intervention. Helm takes care of this.
Helm also keeps a history of releases with their exact set of resources, so you can rollback to a previous release with a single command, in case things go wrong.
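For instance, a minimal sketch of that rollback flow (the release name "myapp" is hypothetical):
# List the recorded revisions of a release
helm history myapp
# Roll back to a specific earlier revision, e.g. revision 2
helm rollback myapp 2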
Also, one of the things you often need is templating your resources - imagine you want to deploy multiple instances of the same exact application. What would you do?
Kubernetes doesn't offer many options to tackle this problem. One solution is to use different namespaces: don't specify the namespace in the manifests, but give it in the command, such as kubectl apply -n my_namespace -f resources/. But what if you want to deploy two of these instances in the same namespace? Then you need some kind of name/label/selector templating, and Helm takes care of that (see the sketch below).
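As an illustration, a minimal sketch (the chart path and release names are hypothetical):
# Two instances of the same chart in the same namespace; the chart's templates
# derive object names from {{ .Release.Name }}, so the two releases don't collide
helm install instance-a ./mychart
helm install instance-b ./mychart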
These are some examples for the use cases that Helm addresses.

Provide default objects in Kubernetes namespaces

We have several namespaces in a Kubernetes (OpenShift) cluster and we want to provide some default objects in every namespace in an automated way. If for some reason one of these default objects is deleted (or changed), it should be recreated automatically.
In OpenShift this already happens with some ServiceAccounts and the Secrets for these accounts, but it is not easily extendable if I'm not mistaken. E.g. the 'builder' and 'deployer' ServiceAccounts are created and recreated automatically by OpenShift. We are looking for a similar solution that supports all types of objects.
Does anyone have an example of how to accomplish this? It is possible to watch etcd for changes and react to them by creating/modifying objects, but I can't seem to find a good example to start with.
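As a crude starting point, a minimal reconcile-loop sketch (the ./defaults/ directory of manifests is hypothetical, and a production solution would more likely be a proper operator/controller):
# Periodically re-apply the default objects to every namespace;
# kubectl apply recreates deleted objects and restores the fields the manifests define
while true; do
  for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
    kubectl apply -n "$ns" -f ./defaults/
  done
  sleep 60
done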