What is the difference between Helm and Kustomize?

I have been using Kubernetes and Helm for a while and have now come across Kustomize for the first time.
But what exactly is the difference between Kustomize and Helm?
Are both simply different solutions for bundling K8s elements such as services, deployments, and so on? Or does it make sense to use Helm and Kustomize together?

The best way to describe the differences is to refer to them as different types of deployment engines. One is a Templating Engine and one is an Overlay Engine.
So what are these? When you use a templating engine, you create a boilerplate version of your file. From there you abstract away contents with known filters, and within these abstractions you provide references to variables. These variables are normally kept in a separate file where you insert information specific to your environment. Then, at runtime, when you execute the templating engine, the templates are loaded into memory and all of the placeholders are replaced with their variable values.
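As a concrete sketch of the templating approach (the chart and value names here are made up for illustration, not taken from any real chart), a Helm template and its companion values file might look like:

```yaml
# templates/deployment.yaml -- the boilerplate with placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
---
# values.yaml -- the environment-specific variables
replicaCount: 3
image:
  repository: nginx
  tag: "1.25"
```

At render time (helm template or helm install), each {{ ... }} placeholder is replaced with the corresponding value from values.yaml or the command line.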
This differs from an overlay engine in a few nuanced ways, mostly in how information gets into configuration examples. Notice how I used the word examples there instead of templates. That was intentional, as Kustomize doesn't use templates. Instead, you create a kustomization.yaml file. This file points to two different things: your base and your overlays. At runtime, your base is loaded into memory, and any overlays that match are merged on top of your base configuration.
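A minimal sketch of that base/overlay layout (the file and directory names are illustrative):

```yaml
# base/kustomization.yaml -- the shared base
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/prod/kustomization.yaml -- a variant merged over the base
resources:
  - ../../base
patches:
  - path: replica-patch.yaml   # e.g. bumps spec.replicas for prod
```

Running kustomize build overlays/prod loads the base and merges the patch over it; no placeholders are ever substituted.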
The latter method allows you to scale your configurations to large numbers of variants more easily. Imagine maintaining 10,000 different sets of variables files for 10,000 different configurations. Now imagine maintaining a hierarchy of small, modular configurations that can be inherited in any combination or permutation. That greatly reduces redundancy and greatly improves manageability.
Another nuance worth noting is ownership of the projects. Helm is maintained by a third party, while Kustomize is developed directly by the Kubernetes team. In fact, Kustomize functionality is built into kubectl: you can build and apply a Kustomize project with kubectl apply -k DIR. However, the version of Kustomize embedded in the kubectl binary lags behind the standalone release and is missing some newer features.
There are a few other, more minor improvements in Kustomize that are still worth mentioning: it can reference bases from the internet or other non-standard paths; it supports generators that build configuration files for you from files and string literals; it supports robust, granular JSON patching; and it supports injecting metadata across configuration files.
The following links were added in the comments below for more comparisons:
https://medium.com/@alexander.hungenberg/helm-vs-kustomize-how-to-deploy-your-applications-in-2020-67f4d104da69
https://codeengineered.com/blog/2018/helm-kustomize-complexity/

Almost everything. Like asking what's the difference between Apache and Nginx :) They do vaguely similar jobs but quantifying the differences is kind of impossible.
The short version is that Helm is a template-driven system built on a decentralized model for chart sharing, while Kustomize is based on deep merges and other structured transforms of YAML data.
There are cases where using both is reasonable, such as feeding the output from helm template into kustomize for overlays.
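A sketch of that combination (the release name and directory layout are hypothetical):

```shell
# Render the chart to plain manifests, then let Kustomize overlay them.
helm template my-release ./mychart > base/rendered.yaml
# base/kustomization.yaml lists rendered.yaml as a resource;
# overlays/prod patches it before applying.
kubectl apply -k overlays/prod
```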

Both have their pros and cons; let's look at them side by side.
Helm is particularly useful for packaging, porting and installing apps that are well-defined, while Kustomize works best for modifying existing Kubernetes apps.
Since Kustomize and Helm each offer unique benefits, the best course of action is often to use the two tools side by side.

Related

How can I compare serverless with monolithic projects?

I need to develop a final project in college and I chose this topic as a goal. I would like to compare the impact of using serverless on the development of an application.
To do this, I thought I'd compare repositories that use monolithic and those that use serverless. At first, the idea would be to use only Python or JavaScript languages.
I would like to know if any of you have suggestions for tools to calculate software metrics, or ways to make it easier to find these types of repositories on GitHub. Currently, to find serverless projects, I'm looking for repositories that contain a serverless.yml file.
The idea would be to make a comparative study between these two types of architecture, calculating the differences and the benefits of using each one of them. For example, how to split code into atomic parts can impact code complexity as well as maintainability over time.
I'm still a little lost on how to proceed, any ideas or suggestions would be most welcome!

Kubernetes programming: CRD, controller and operator learning materials

I am learning Kubernetes, as more and more companies are building their infrastructure on it. As a DevOps and SRE guy, I have found it quite useful to use Custom Resources and Operators in Kubernetes to help users alleviate their burden when deploying new apps to the cluster, since I can summarize useful apps and define templates for them. These templates take the form of CRDs and Operators.
Question
I wonder if there are dedicated materials or courses on learning how to program my own CRD, controller, and operator, with a milder learning curve than reading source code. Preferably a series of courses.
Things I've already done:
I've downloaded the sample controller and learned to use it. However, I found that my understanding of controllers is still not enough to build one from scratch.
I've also searched for courses on Udemy. However, almost all of them are about how to build and operate k8s, which covers the basics I already know.
I've searched for material on Google and Medium. A lot of older materials are obsolete (e.g. the API group for Deployment was still extensions rather than apps) or not detailed enough.
I've also looked into the source code of popular operators on operatorhub.io. However, the learning curve for reading that source code is way too steep.
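The API-group obsolescence mentioned above is easy to spot in manifests; older tutorials show the first form, while anything targeting Kubernetes 1.16 or later requires the second:

```yaml
# Obsolete (removed in Kubernetes 1.16):
apiVersion: extensions/v1beta1
kind: Deployment
---
# Current:
apiVersion: apps/v1
kind: Deployment
```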

Can an artifact consist of a list of .jar files?

I read in a UML manual that when there are many .jar files, it is possible to list them in a single artifact box. However, I have not been able to verify this from other sources, and since Visual Paradigm does not formally allow it, I would like to know if my diagram is compliant with UML notation.
If this is correct, is there a rule for choosing the name of the artifact?
I'm also trying to figure out what manifestations are. Since I don't recognize actual components in my application, but only several layers that I wouldn't define as components, I can't even find manifestations. Is it possible that there are no manifestations in a web application?
The shortcut notation using «artifact» is ambiguous, because the notation refers to a single artifact with the name File.JAR, when in reality there are plenty of them. Moreover, the UML specification does not mention this possibility, so modelling tools shouldn't provide this feature.
However, UML provides a shortcut for deployed targets (such as nodes and execution environments), allowing you to write the list of deployed artifacts directly in the box of the node instead of drawing a lot of nested or related, space-consuming artifact symbols. The UML specification explicitly allows it:
DeployedTargets are shown as a perspective view of cube labeled with the name of the DeployedTarget shown prepended by a colon. System elements deployed on a DeployedTarget, and Deployments that connect them, may be drawn inside the perspective cube. Alternately, deployed system elements can be shown as a textual list of element names.
The UML specification provides several examples on pages 653 and 657.
P.S.: in addition to the UML specs, I've cross-checked UML Distilled, The UML User's Guide (2nd edition), and The UML Language Reference Manual (2nd edition). They are all consistent in this regard: they mention the possibility of listing deployments directly in a deployed target (the older books clarify that it goes in a compartment, i.e. after a separator line), and none of them present this possibility for artifact symbols.
It depends on how much you, not your tooling, care about UML compliance.
Broadly, the need for strict UML adherence varies: if you are using UML to generate code, documentation, etc., then yes, you need to adhere to the spec. Whereas if you are just trying to communicate ideas to other people, then, unless they are UML fanatics, they probably won't care as long as they can clearly understand what you're communicating.
The challenge for tools like Visual Paradigm and Sparx EA is that they need to be UML compliant. That means you get strict adherence whether you need it or not, unless you find a work-around that lets you communicate your ideas even if, from a UML standpoint, it's a little weird.
I just wanted to complete this with what the UML spec says about artifacts (p. 654):
An Artifact represents some (usually reifiable) item of information that is used or produced by a software development process or by operation of a system. Examples of Artifacts include model files, source files, scripts, executable files, database tables, development deliverables, word-processing documents, and mail messages.
(emphasis by me)
Now, whatever reifiable means exactly (roughly: something that can be made into a concrete thing), I think the term item of information is broad enough to cover anything that holds information, be it a bit, a sentence in a file, or a complete set of files.

Use of namespace in ROS plugins

I'm starting to work with ROS and plugins, and I would like to understand one thing: what is the use of different namespaces for the base class and the plugin class in ROS?
I can understand the utility of namespaces to differentiate similar nodes or topics used by different nodes, but I don't quite understand their use when we talk about plugins.
To be clear, why in the tutorial does the base class use the namespace polygon_base while the plugins use the namespace polygon_plugins?
thank you for your help
Initial disclaimer: plugins are chiefly used by libraries that need the extended features plugins provide. For the average ROS developer, there are usually better design choices.
The use of namespaces in ROS is to reduce confusion, especially with similarly and simultaneously operating things. For example, topic namespacing is (almost exclusively) used for running the same launch file multiple times, most commonly to launch/simulate multiple robots, or multiple sensors.
In the tutorial you mentioned (https://wiki.ros.org/pluginlib/Tutorials/Writing%20and%20Using%20a%20Simple%20Plugin), the only real use of namespaces is C++ namespaces. I think they used two different namespaces to disambiguate two similar kinds of code: the polygon_base code is not intended to be used as a plugin, it simply is the base. If it could be used as a plugin, it would be included in the polygon_plugins namespace.
In short, pick what is most clear and communicative as your naming schemes.

Inversion of control container

What are the most important features an IoC container should have? You can easily create a container in 15 lines of code, but what should it include to be "useful" in a project?
This is a pretty wide-open topic and highly subjective, but I will try to answer from a very pragmatic point of view. Given the projects that I have worked on and my experience with IoC, I would say that there are at least three biggies to look for in terms of usefulness.
Configuration - Any IoC that you use needs to have some central location that allows you to configure the behavior of that container. Whether that be a config file or a nice set of API calls that can be wrapped up in a global class somewhere, if the container isn't easily configurable then it is going to be a headache.
Lifetime Management - You really want a container that has the ability to allow for varied object lifetimes. You might want a certain object to always get a new IPersonCreator, but you only want one IPersonService in existence at any given time.
Automatic Dependency Injection - Ok, so Dependency Injection is the concept that IoC is built on top of, but you don't want to have to manage this yourself. The idea here is that if you ask for an IPersonCreator for the first time, it should resolve all its dependencies, and their dependencies, and so on, automatically.
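To make those three points concrete, here is a minimal container sketch in Python. It is illustrative only, not any real library's API; the PersonCreator and PersonService classes are hypothetical stand-ins for the IPersonCreator / IPersonService mentioned above. It shows programmatic configuration, singleton vs. transient lifetimes, and recursive constructor injection driven by type hints.

```python
import inspect

class Container:
    def __init__(self):
        self._registrations = {}   # interface -> (implementation, lifetime)
        self._singletons = {}      # cache for singleton instances

    def register(self, interface, implementation, lifetime="transient"):
        """Configuration: a central place that defines container behavior."""
        self._registrations[interface] = (implementation, lifetime)

    def resolve(self, interface):
        implementation, lifetime = self._registrations[interface]
        # Lifetime management: reuse the cached instance for singletons.
        if lifetime == "singleton" and interface in self._singletons:
            return self._singletons[interface]
        # Automatic dependency injection: inspect the constructor and
        # recursively resolve every type-annotated parameter.
        signature = inspect.signature(implementation.__init__)
        kwargs = {
            name: self.resolve(param.annotation)
            for name, param in signature.parameters.items()
            if name != "self" and param.annotation is not inspect.Parameter.empty
        }
        instance = implementation(**kwargs)
        if lifetime == "singleton":
            self._singletons[interface] = instance
        return instance

# Hypothetical services for demonstration.
class PersonCreator:
    pass

class PersonService:
    def __init__(self, creator: PersonCreator):
        self.creator = creator

container = Container()
container.register(PersonCreator, PersonCreator)                        # transient
container.register(PersonService, PersonService, lifetime="singleton")  # one instance

a = container.resolve(PersonService)
b = container.resolve(PersonService)
assert a is b                                 # singleton: same instance both times
assert isinstance(a.creator, PersonCreator)   # dependency injected automatically
```

Real containers add much more on top of this (scoped lifetimes, file-based configuration, cycle detection), but this is roughly the shape those three features take.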
Overall what you need depends on the project, but there are several containers out there that will suit your needs just fine.
In descending order of importance:
Allow at least setter and constructor injection;
Separate configuration from code;
Allow different styles of configuration (XML or annotations).
These will require more than 15 lines of code, but those seem key to me.