Big cluster config management: Kustomize or Jsonnet? - kubernetes

Currently I'm working with Kubeflow. It is quite a large setup with about 30 different deployments. The Kubeflow team's default manifests use the standard Kustomize tool to provide patches for different environments like on-prem, cloud, testing, etc.
However, I still feel quite overwhelmed and limited by all those configurations. The only way I can quickly navigate and manipulate variables for the whole project is to use my IDE's search and/or replace function (yes, I know Kustomize can do variables in each environment, but I want to do it across all deployments). This sucks, as it is not reversible once replaced. Another problem is that so many folders are simply named base, making it quite hard to find out where some original fields are defined. I also struggled when I wanted to combine just a few parts of different environments. For example, they provide three different environments for Dex auth: one using email, one using GitHub, and one using Google. I want my setup to have all three of them, but I cannot easily reuse the config from those environments. I had to create my own environment and dig through each of them to see what changes they made in their patches.
I have never tried setting up Jsonnet configurations (maybe with tools like Tanka) on a similar cluster. I'm seeing a few big companies using Jsonnet; the two that I know of are Grafana (they even created Tanka) and Databricks (they created their own Jsonnet compiler). What are the pros and cons of doing k8s configuration management in Jsonnet compared to Kustomize, currently the most popular choice? Is it worth learning and managing k8s config using Jsonnet (maybe with Tanka)?

What about ArgoCD?
Argo CD is a GitOps tool for managing projects like the one you describe.
It supports Kustomize, Jsonnet, and more.
It can manage your resources, show you what is deployed and whether it is in sync, let you edit your YAML files, and more.
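For illustration, a minimal Argo CD Application pointing at a Kustomize overlay (or a Jsonnet environment) might look like the sketch below; the repo URL, path, and names are placeholders, not part of your actual setup:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kubeflow              # placeholder application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/your-config-repo.git  # placeholder repo
    targetRevision: main
    path: overlays/my-setup   # a Kustomize overlay or Jsonnet environment directory (placeholder)
  destination:
    server: https://kubernetes.default.svc
    namespace: kubeflow
  syncPolicy:
    automated:
      prune: true             # remove resources that were deleted from Git
      selfHeal: true          # revert manual drift back to the Git state
Argo CD detects the tool from the contents of the path (a kustomization.yaml, Jsonnet files, plain manifests, or a Helm chart) and renders it accordingly.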

Related

How do I create a custom devcontainer?

I have been using devcontainers for a while, and I want to extend some of them.
For instance, I want to install all the linting tools etc. for various languages, and use a more personalised container as a starting point (compared to the Microsoft-hosted ones).
I would also like to host the containers on my own Docker Hub account, so I do not need to build all this stuff every time. There could also be the use case of using devcontainers for something other than the standard libraries.
I know I can just manually change the Docker image reference, but I would also like to integrate my changes into the plugin, so I can have my own repository show up as well, to get a native feeling.
I could not find any information on creating my own dev containers, only on extending existing ones. Is any of what I mention officially supported?
Edit: To sum all this up in one question: can I add devcontainers from my own repo, without merging them into https://github.com/microsoft/vscode-dev-containers ?
If you are using VS Code as your text editor, you can install the Remote Development extension pack, which allows you to add a template for a devcontainer to your project.
If you aren't using VS Code, you can use the templated version as the basis for your own. I created a template repository with the files needed for a Python project, which you can refer to as well.

Skaffold and multiple Sub Charts

Lately I have been experimenting with Skaffold with our Helm charts, and I am in a bit of a dilemma about whether our Helm chart / subcharts are compatible with Skaffold or not.
Our Helm charts look like the following:
my-helm-charts
+-charts
| +-project1
| +-project2
| +-project3
| +-project4
| +-infrastructure_kafka
| | +-charts
| |   +-kafka
| |   +-zookeeper
| +-infrastructure_cassandra
| +-infrastructure_elasticsearch
+-Charts.yaml
+-Values.yaml
The reason we chose to structure the Helm charts this way is that it lets us spin up extra stages for our project if necessary.
Now when I want to develop project2 with Google Cloud Code / Skaffold (which I have configured correctly and can start without problems in IntelliJ), I have to start the whole my-helm-charts.
That is actually OK, but the problem is that if I use Debug in Kubernetes, I have a feeling Google Cloud Code/Skaffold cannot really locate project2, and no debugging occurs.
My feeling is that Google Cloud Code/Skaffold is more oriented toward working with the following construct...
project2-helm
+-templates
+-Charts.yaml
+-Values.yaml
My subchart construct starts in Google Cloud Code/Skaffold without any exception, but I can't debug. Is it possible to achieve what I want with my structure, and if yes, how?
Or is it not possible at all...
Thx for answers...
We recently added a feature called config dependencies which might help here. It allows you to create more specific skaffold.yamls and then map them together with a "requires" field:
https://skaffold.dev/docs/design/config/#configuration-dependencies
Once you have the skaffold.yamls created and the right dependency mapping, you can run Skaffold with the -m flag to choose one slice of your services:
skaffold dev -m project3
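For a layout like the one in the question, that could look roughly like the sketch below: a hypothetical skaffold.yaml placed next to the project2 chart, where the module name, paths, and image are placeholders and the required path stands in for a directory containing another skaffold.yaml (for example the Kafka/Cassandra infrastructure pieces):
apiVersion: skaffold/v2beta29
kind: Config
metadata:
  name: project2                    # module name used with the -m flag
requires:
  - path: ./infrastructure          # placeholder path to a directory with another skaffold.yaml
build:
  artifacts:
    - image: project2               # placeholder image name
deploy:
  helm:
    releases:
      - name: project2
        chartPath: charts/project2  # placeholder path to the project2 subchart
With that in place, skaffold dev -m project2 builds and deploys only that module plus whatever it requires.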
Cloud Code support for modules is incoming.
Cloud Code for IntelliJ and Cloud Code for VS Code recently added preview-level support for deploying and debugging modules of a larger application that uses Skaffold. See more here: https://cloud.google.com/code/docs/intellij/skaffold-modules

prometheus-operator (helm chart) & alert manager

I have a query related to the prometheus-operator Helm chart & Alertmanager combination.
Currently we are using prometheus-operator helm chart:
https://github.com/helm/charts/tree/master/stable/prometheus-operator
and I wrote a simple rule in values.yml (this is just sample code) to generate an alert.
Further, I am using the Alertmanager config/routes/receivers to send alerts. It's working perfectly fine.
But as part of the real implementation, I may have a great many alert rules. Is there any way I can move all these rules into a separate rules file and configure the path (rule file path) in values.yml (under the additionalPrometheusRules section)?
I also saw kube-prometheus-stack & additionalPrometheusRulesMap (in values.yml):
https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml
But I didn't find any solution. Can anyone help me with this?
So Helm doesn't typically allow includes in values.yaml files. I have read that there's a way to do it, but it depends on how the chart is built, and upstream maintainers typically don't use templates that way, as far as I know (I could be wrong there, but I've never noticed it).
Your problem is exactly the same problem I've been trying to solve adequately, and I think I came up with something. It's not perfect, but it is better than having one huge monolithic values.yaml file.
Helm allows the operator to specify multiple values files using the pattern -f values1.yaml -f values2.yaml -f some-more-values.yaml, so I broke my values file up into multiple logically divided YAML files.
There might be gotchas, so be aware, but so far for this use-case, it seems to be working. I'm still testing things out. https://helm.sh/docs/helm/helm_install/
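Applied to the question above, the rule definitions could live in their own values file and be layered on with an extra -f flag, for example helm upgrade --install mon prometheus-community/kube-prometheus-stack -f values.yaml -f values-rules.yaml. A rough sketch of such a file, using the additionalPrometheusRulesMap key from kube-prometheus-stack mentioned in the question (the file name and the alert itself are made up):
# values-rules.yaml - hypothetical extra values file that holds only rule definitions
additionalPrometheusRulesMap:
  app-rules:                      # arbitrary key; names the generated rule group
    groups:
      - name: app.rules
        rules:
          - alert: InstanceDown   # sample alert only, adjust to your needs
            expr: up == 0
            for: 5m
            labels:
              severity: critical
            annotations:
              summary: "An instance has been down for 5 minutes"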
You can also add your own custom rules file using ConfigMaps. That way you can avoid over-alerting and get notified for specific alerts only.
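With prometheus-operator / kube-prometheus-stack, the usual equivalent of a stand-alone rules ConfigMap is a PrometheusRule custom resource applied separately from the chart. A minimal sketch follows; the names, namespace, and alert are examples, and the labels must match your Prometheus ruleSelector (check the chart values for your release):
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: my-custom-rules             # placeholder name
  namespace: monitoring             # placeholder; a namespace your Prometheus watches
  labels:
    release: kube-prometheus-stack  # example label; must match your ruleSelector
spec:
  groups:
    - name: custom.rules
      rules:
        - alert: HighPodRestartRate  # example alert
          expr: increase(kube_pod_container_status_restarts_total[15m]) > 5
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "A pod is restarting frequently"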

Using ClickOnce for multiple deployment configurations

I have a ClickOnce deployment that has different web service endpoints and strings that need to be changed in Settings.settings. Right now I only have to deal with one local development version being done in house and one version that I push out to the customer for their UAT. Now I need 4 versions of this application: in-house dev and testing, customer testing, and production. I also need these 4 deployments to be installable alongside each other. I have discovered that I can change the name (i.e. APP -- INTERNAL -- TEST, APP -- INTERNAL -- DEV, APP -- CUST -- TEST, APP -- CUST -- PROD) and that will allow them all to be installed alongside each other.

But having to remember every place a string needs to be changed in the various Settings.settings files of each build, swapping the endpoints, changing the application names, changing the certificate, and changing the deploy address and the URL for each different build is time consuming and cumbersome. Is there a way to just say "Publish internal test build" and have it do the right thing?

I was going to just write various mage scripts, but I don't think that gets me around having to mess with the Settings.settings stuff. I didn't write this application, nor do I maintain it, but I suppose I could go in and use some sort of conditional logic; the connection strings, for instance, are wired to reports, table adapters, etc. P.S. I hate ClickOnce
OK, for a useful answer and not a critique of my writing style: mage.exe is severely lacking in options for what it can and cannot do; it is also poorly documented and does not work as advertised. In order to accomplish what I wanted, I had to download sed for Windows and write .bat files to manually rename files to .deploy. I used sed to edit the manifest files, flip options on and off, and keep track of the different deployments. So, in short, write a batch file using mage.exe and sed, and have a very good understanding of the contents of a manifest file. Feel free to contact me and I can send scripts that will automate multiple ClickOnce deployments, add the .deploy extension, require a specific version number before startup, etc. None of these are possible using the tools MSFT provides.

Sharing a fabfile across multiple projects

Fabric has become my deployment tool of choice both for deploying Django projects and for initially configuring Ubuntu slices. However, my current workflow with Fabric isn't very DRY, as I find myself:
copying the fabfile.py from one Django project to another and
modifying the fabfile.py as needed for each project (e.g., changing the webserver_restart task from Apache to Nginx, configuring the host and SSH port, etc.).
One advantage of this workflow is that the fabfile.py becomes part of my Git repository, so between the fabfile.py and the pip requirements.txt, I have a recreatable virtualenv and deployment process. I want to keep this advantage, while becoming more DRY. It seems that I could improve my workflow by:
being able to pip install the common tasks defined in the fabfile.py and
having a fab_config file containing the host configuration information for each project and overriding any tasks as needed
Any recommendations on how to increase the DRYness of my Fabric workflow?
I've done some work in this direction with class-based "server definitions" that include connection info and can override methods to do specific tasks in a different way. Then my stock fabfile.py (which never changes) just calls the right method on the server definition object.