How exactly does it help if the recommended labels from Kubernetes 1.12 are added to Helm charts?
Since this question (as revealed in the comments) is about the application-related recommended labels prefixed with app.kubernetes.io, the appropriate place to look is the Kubernetes documentation on recommended labels. These labels serve to identify various Kubernetes objects (Pods, Services, ConfigMaps, etc.) as parts of a single application. Having a "common set of labels allows tools to work interoperably, describing objects in a common manner that all tools can understand". The idea is that you should be able to go into tools like the Kubernetes dashboard or a monitoring tool, see a list of applications, and then drill into the individual objects under each application. However, 1.12 was only released a month ago, so it will take time for the common labels to be adopted and for tools to offer support for querying based on them. Having the labels present in Helm charts is a step towards adoption.
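For illustration, here is roughly what those labels look like on an object deployed from a Helm chart; the values shown are placeholders, and in a real chart template they would typically be filled in from .Release.Name, .Chart.AppVersion and so on:

    metadata:
      labels:
        app.kubernetes.io/name: myapp              # the application's name
        app.kubernetes.io/instance: myapp-prod     # usually the Helm release name
        app.kubernetes.io/version: "1.2.3"         # usually the chart's appVersion
        app.kubernetes.io/part-of: my-platform     # the higher-level application
        app.kubernetes.io/managed-by: Helm         # the tool managing the object

With a consistent set like this, a dashboard or a label selector query on, say, app.kubernetes.io/instance can pull up every object belonging to one deployed application, regardless of which tool created it.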
First, I'm not sure this question is specific enough for Stack Overflow. Happy to remove or revise if someone has any suggestions.
We use Kubernetes to orchestrate our server side code, and have recently begun using Kustomize to modularize the code.
Most of our backend services fit nicely into that data model. For our main transactional system we have a base configuration that we overlay with tweaks for our development, staging, and different production flavors. This works really well and has helped us clean things up a ton.
We also use TensorFlow Serving to deploy machine learning models, each of which is trained and, at this point, deployed separately for each of our many clients. The only way these configurations differ is in the name and metadata annotations (e.g., we might have one called classifier-acme and another called classifier-bigcorp) and in the bundle of weights pulled from our blob storage (e.g., one would pull from storage://models/acme/classifier and another from storage://models/bigcorp/classifier). We also assign different namespaces to segregate development, production, etc.
From what I understand of the Kustomize system, we would need a different base and set of overlays for every one of our customers if we wanted to encode the entire state of our current cluster in Kustomize files. This seems like a huge number of directories, as we have many customers. If we have 100 customers and five different deployment environments, that's 500 directories with a kustomization.yaml file.
Is there a tool or technique to encode this repetition with Kustomize? Or is there another tool that will help us generate Kubernetes configurations in a more systematic and compact way?
You can have more complex overlay structures than just a straight matrix approach. For example, for one app you could have apps/foo-base, plus apps/foo-dev and apps/foo-prod which both have ../foo-base in their bases; those in turn are pulled in by overlays/us-prod, overlays/eu-prod, and so on.
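A rough sketch of that layout (app and directory names are just placeholders, and note that newer Kustomize versions fold bases into resources):

    # apps/foo-base/kustomization.yaml
    resources:
      - deployment.yaml
      - service.yaml

    # apps/foo-prod/kustomization.yaml
    bases:
      - ../foo-base
    patchesStrategicMerge:
      - prod-tweaks.yaml

    # overlays/us-prod/kustomization.yaml
    namespace: us-prod
    bases:
      - ../../apps/foo-prod

Each layer only declares what it changes, so the per-customer or per-region overlays can stay very small.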
But if every combination of customer and environment really does need its own settings, then you might indeed end up with a lot of overlays.
I am totally new to these two technologies (I know Docker and Kubernetes, by the way).
I haven't found much on the web about this comparison topic.
I have read that OpenShift is used by more companies, but that it is a nightmare to install, pricier, and that data loss can occur on upgrade.
But nothing else.
What should be the deciding factor for which one to use for Kubernetes cluster orchestration?
I currently work for Rancher. I've also been building Internet infrastructure since 1996 and owned an MSP for 14 years that built and managed Internet datacenters for large US media companies. I've been working with containers since 2014, and since then I've tried pretty much everything that exists for managing containers and Kubernetes.
"The deciding factor" varies by individual and organization. Many companies use OpenShift. Many companies use Rancher. Many companies use something else, and everyone will defend their solution because it fits their needs, or because of the psychological principle of consistency, which states that because we chose to walk a certain path, that path must be correct. More specifically, the parameters around the solution we chose must be what we need because that was the choice we made.
Red Hat's approach to Kubernetes management comes from OpenShift being a PaaS before it was ever a Kubernetes solution. By virtue of being a PaaS, it is opinionated, which means it's going to be prescriptive about what you can do and how you can do it. For many people, this is a great solution -- they avoid the "analysis paralysis" that comes from having too many choices available to them.
Rancher's approach to Kubernetes management comes from a desire to integrate cloud native tooling into a modular platform that still lets you choose what to do. Much like Kubernetes itself, it doesn't tell you how to do it, but rather gives fast access to the tooling to do whatever you want to do.
Red Hat's approach is to create large K8s clusters and manage them independently.
Rancher's approach is to unify thousands of clusters into a single management control plane.
Because Rancher is designed for multi-cluster management, it applies global configuration where it benefits the operator (such as authentication and identity management) but keeps tight controls on individual clusters and namespaces within them.
Within those security boundaries, Rancher gives developers access to clusters and namespaces, easy app deployment, monitoring and metrics, service mesh, and access to Kubernetes features without having to learn all about Kubernetes first.
But wait! Doesn't OpenShift give developers those things too?
Yes, but often with Red Hat-branded solutions that are modified versions of open source software. Rancher always deploys unadulterated versions of upstream software and adds management value to it from the outside.
The skills you learn using software with Rancher will transfer to using that same software anywhere else. That's not always the case with skills you learn while using OpenShift.
There are a lot of things in Kubernetes that are onerous to configure, independent of the value of using the thing itself. It's easy to spend more time fussing around with Kubernetes than you do using it, and Rancher wants to narrow that gap without compromising your freedom of choice.
What is it that you want to do, not only now, but in the future? You say that you already know Kubernetes, but something has you seeking a management solution for your K8s clusters. What are your criteria for success?
No one can tell you what you need to be successful. Not me, not Red Hat, not Rancher.
I chose to use Rancher and to work there because I believe that they are empowering developers and operators to hit the ground running with Kubernetes. Everything that Rancher produces is free and open source, and although they're a business, the vast majority of Rancher deployments make no money for Rancher.
This forces Rancher to create a product that has true value, not a product that they can convince other people to buy.
The proof is in the deployments - Red Hat has roughly 1,000 OpenShift customers, which means roughly 1,000 OpenShift deployments. Rancher has fewer paying customers than Red Hat, but Rancher has over 30,000 deployments that we know about.
You can be up and running with Rancher in under ten minutes, and you can import the clusters you already have and start working with them a few minutes later. Why not just take it for a spin and see if you like it?
I also invite you to join the Rancher Users Slack. There you will not only find a community of Rancher users, but also other people who compared Rancher and OpenShift and chose Rancher. They will be happy to help you with information that will lead you to feel confident about whatever choice you make.
This is similar to posts like Separate dev and prod Firebase environment.
I'm running into similar structuring issues. Unlike the other posts I've found, in my case it's GCP as a whole rather than just Firebase. In addition, I'm looking at separation (or not) of blue and green deployments along with the various environments.
The projects will be handling IoT data: mobile, field sensors/modules, and web (in the future). Currently everything is, unfortunately, in one project.
So I'm thinking of having three different projects for the staging, production, and test environments, with each project having both blue and green deployments (except perhaps test, but that's a different conversation).
Does GCP as a whole have documentation or recommendations about this? Or do you guys have any recommendations?
It's hard to answer because it depends a lot on your organization, your needs, and your way of working.
Here you can find a Google document about the resource hierarchy.
However, I have already seen some GCP customers use only one project for dev/UAT/prod, because they share the same K8s cluster and separate the environments with namespaces. That way, the cluster maintenance cost is paid only once for all the different stages of the project.
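A minimal sketch of that namespace-per-environment approach, with purely illustrative namespace names, is simply one namespace per stage in the shared cluster:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: dev
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: uat
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: prod

Workloads are then deployed into the namespace matching their stage, and RBAC rules and resource quotas can be scoped per namespace.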
About blue/green, it depends on which component you want to apply it to. If it's the website, App Engine or a global load balancer can do this. If it's about IoT Core or Pub/Sub, I'm afraid you have to manage it yourself or create two different projects for it.
Imagine I am developing a microservices-based application. The services will be deployed to Kubernetes with the Helm package manager. Some microservices end up having pretty similar YAML configuration, while others differ quite a bit. What is the best practice for this? I have a few options:
Use a generic chart, pass in different configuration through a values.env.yaml per microservice, and deploy each one under a different release name.
Create a chart for every single microservice, no matter how similar they are in terms of configuration.
This is an opinion question, so I'll answer with an opinion.
For option 1 (a single generic chart):
Upside: You would only have to change a few values in your values.yaml depending on the microservice, and the file would be easier to maintain. Your Helm charts repo may not grow as fast.
Downside: It will be harder to create your _helpers.tpl file, for example. That file will grow rapidly, and it could get confusing for the people creating microservices to understand it.
For option 2 (a chart per microservice):
Upside: Separation of your microservices as you scale to hundreds. Developers can work only on their own microservice's deployment.
Downside: File sprawl, with too many files everywhere, and your Helm charts repo can grow rapidly. There is also a risk of large code duplication.
The more general practice for the official Helm charts is option 2, but then again, every one of those charts is for a very different application.
As @Rico mentioned, this is an opinion question. Here is my opinion:
I think it is a good idea to start with one chart that fits all. But when you have to add very specific things for only a few services with special requirements, you should create another chart. The idea is pretty similar to Monolith First when it comes to microservices.
In my company we have one Chart for ~30 Services. They have very similar needs, therefore the template files aren't too complex and the _helpers file has only around 50 lines. We are very happy with this solution, because you only need a few lines of values.yaml to prepare your service for operation.
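To give an idea of what "a few lines of values.yaml" can look like with a shared chart, here is a hypothetical per-service values file; the keys depend entirely on what your own templates expose:

    # values for one service, deployed as its own release of the shared chart
    nameOverride: payments
    replicaCount: 2
    image:
      repository: registry.example.com/payments
      tag: "1.4.2"
    service:
      port: 8080

Each microservice then becomes its own release of the same chart, installed with its own small values file.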
I'd like to know what your best practices (or just your practices) are for managing your Helm chart versions.
I wonder what the best way is to deal with application versioning, continuous integration/delivery, and chart packaging.
Today I have many microservices that live their own lives. Each one has its own lifecycle and its own versioning in its own git repository.
Besides that, we chose to have one git repository for all our charts.
Now, we have two choices:
Each time a microservice changes, a new Docker image is built and a new version of the chart is created too (with just the tag(s) of the Docker image(s) that changed in the values.yaml file).
Or, even if a microservice changes, we don't create a new version of the chart. The default value of the Docker tag in the chart is set to "default", and when we want to upgrade the chart we have to use the --set image.tag=vx.x.x flag (see the sketch below).
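As a sketch, the two options differ roughly like this (version numbers and value keys are purely illustrative):

    # Option 1: the chart is re-released whenever the image changes
    # Chart.yaml
    version: 1.42.0        # bumped on every microservice release
    # values.yaml
    image:
      tag: "v3.7.1"        # pinned to the image that was just built

    # Option 2: the chart version only tracks chart changes
    # values.yaml
    image:
      tag: "default"       # overridden at deploy time, e.g.
                           # helm upgrade my-release ./my-chart --set image.tag=v3.7.1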
The benefit of the first approach, from the "ops" point of view, is that at any time we know which version of each chart (and each Docker image) is running on the cluster. The drawback is that over time we will end up with many, many versions of each chart where only the Docker tag changed.
On the other hand, the benefit of the second approach is that the only thing that makes the chart version change is a modification of the chart code itself, not an application change. It drastically reduces the "uselessly" high version numbers of each chart. The drawback is that we have to override the Docker tags at install/upgrade time, and we lose visibility into which versions are running on the cluster (useful in case of a disaster recovery plan).
So, what are your practices? Perhaps a hybrid approach?
Thank you for your help
I think this is a choice that comes down to the needs of your project. An interesting comparison is the current versioning strategy of the public charts in the Kubernetes charts repo and the current default versioning strategy of Jenkins-X.
The public charts only get bumped when a change is made to the chart. This could be bumping the default image tag that the chart points to, but each time it is an explicit action requiring a PR and review, plus a decision on whether it is a major, minor, or patch version bump.
In a Jenkins-X cluster, the default behaviour is that when you make a change to the code of one of your microservices, its chart version is automatically bumped whether or not the chart itself changes. The chart in the source repo refers to a snapshot, but it is auto-deployed under an explicit version, and that version gets referenced in the environments it is deployed to via a pipeline. The chart refers to a draft/dev tag of the image in the source, and that is also automatically replaced with an explicit version during the flow.
The key difference, I think, is that Jenkins-X is geared towards a highly automated CI/CD flow with particular environments in the pipeline. Its approach makes sense for handling frequent deployments of changes. The public charts are aimed at reusability and at giving a stable experience across a hugely wide range of environments and situations through public contributions. So the strategy there is aimed more at visibility and ease of understanding for changes that you'd expect to be less frequent by comparison.