I'm overwhelmed with the number of options in Kubernetes.
There is a typical (the most commonly suggested) way of deploying microservices: you create a Deployment YAML that specifies which image to pull, the number of replicas, the app's listening ports, and so on. Then you might create more YAMLs to expose the app, for example a Service YAML.
Helm charts are said to provide an easier way, giving you a preconfigured package. However, after installing a couple of apps from Bitnami I see that some have a Deployment YAML and some don't; instead, some are represented as Pod YAMLs. I've read that bare Pod YAMLs are not ideal in a production environment.
What should be used when I just need to deploy a couple of apps on a node, in the easiest (yet not naive) way possible?
Deployments do the ops job for you while you drink coffee. What I mean is that a Deployment ensures the desired state defined in your manifest is maintained automatically (on a best-effort basis). So if a Pod crashes, the Deployment brings it back up without human intervention.
With a plain Pod YAML, by contrast, you have to make sure yourself that the Pod stays up (if that is needed).
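A minimal sketch of such a Deployment (the image, names, and replica count here are illustrative, not from the question):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 2                 # the controller keeps two Pods running at all times
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: nginx:1.25   # illustrative image
              ports:
                - containerPort: 80

If one of those Pods dies, the Deployment's ReplicaSet recreates it; a bare Pod manifest gives you no such guarantee.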
If you have deployed a production-grade app, you know that running it requires a lot more than just a Deployment. You may need to create Secrets, ConfigMaps, Services, Deployments, etc. This is where Helm lends a helping hand by combining all the required descriptors into one deployable package, which makes it simple to maintain the state of the whole app as a single unit.
As for the Helm chart that ships a Pod YAML rather than a Deployment: it really depends on the use case. It may rely on an "operator" that handles the ops part for you.
Helm is the recommended way of deploying to Production.
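As a sketch of what that "one deployable package" can look like (names are illustrative), a chart bundling the objects mentioned above is just a directory of templates installed as a single unit:

    mychart/
      Chart.yaml
      values.yaml
      templates/
        deployment.yaml
        service.yaml
        configmap.yaml
        secret.yaml

    # installed, upgraded, and rolled back as one unit:
    helm install my-app ./mychart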
I noticed an inconsistency between kubeadm upgrade plan and the version skew support policy.
For example, say I want to upgrade a k8s cluster from 1.17 to 1.18.
So I execute kubeadm upgrade plan on one control-plane node, and kubeadm upgrades the API Server, Controller Manager, Scheduler, and other components on that node at the same time.
But according to the policy, I should upgrade all API Servers to 1.18 first:
"The kube-apiserver instances these components communicate with are at 1.18 (in HA clusters in which these control plane components can communicate with any kube-apiserver instance in the cluster, all kube-apiserver instances must be upgraded before upgrading these components)"
So, does kubeadm execute the upgrade in the wrong order, or is this order a compromise between the policy and ease of use (or maybe an implementation issue)?
A bit above in the docs, it is specified that:
"kube-controller-manager, kube-scheduler, and cloud-controller-manager must not be newer than the kube-apiserver instances they communicate with. They are expected to match the kube-apiserver minor version, but may be up to one minor version older (to allow live upgrades)."
Later edit: Oh, I see, the issue is that the control plane components on the upgraded control-plane node will be newer than the kube-apiserver on the not-yet-upgraded nodes. I've personally never had this issue, as I always configure control plane components to connect to the kube-apiserver on the same node. I guess it's a kubeadm compromise, as you suggested.
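For reference, a sketch of the usual kubeadm flow on an HA cluster (the target version here is illustrative):

    # on the first control-plane node
    kubeadm upgrade plan
    kubeadm upgrade apply v1.18.0
    # on each remaining control-plane node, one at a time
    kubeadm upgrade node
    # kubelet and kubectl packages are upgraded separately on every node afterwards

kubeadm upgrade apply upgrades all control plane components on that node together, which is exactly the skew the question describes.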
I am interested in generating docker-compose.yaml files from Helm charts. Is there a good way or tool to do this?
I realize that this is in the opposite direction from what most people are doing. Why I want to do this:
Our production systems run Kubernetes via Helm charts. We've got a full-blown k8s and Helm setup already; no need to use a tool like Kompose to get us there. The question is how to convert Helm to docker-compose, not the other way around.
We want our Helm charts to be the single authoritative source of container configuration. They are able to describe a superset of what docker-compose can.
Running a set of services via Helm on a development machine is more time- and resource-consuming than running the same set of services via docker-compose. We do not want to slow development down by having engineers run everything through Helm/k8s.
We do not want to maintain two sets of configurations.
Can anybody recommend how to do this, or suggest a different solution to the time/resources issue encountered on development machines?
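One possible approach, sketched under the assumption that the conversion step is a script you write yourself (no off-the-shelf Helm-to-Compose tool is assumed here, and the release, chart path, and values file names are illustrative): render the chart to plain manifests with helm template, then post-process the output.

    # render the chart with dev values into plain Kubernetes manifests
    helm template my-release ./charts/my-app -f values-dev.yaml > rendered.yaml
    # a custom script (hypothetical, to be written) would then read each Deployment in
    # rendered.yaml and emit a docker-compose service with its image, ports and env vars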
I'm trying to decide between using Kubernetes and AWS ECS. From what I have seen, Kubernetes seems to have broader adoption, although the learning curve is a bit steep. The only comparison I found was AWS-ECS vs Kubernetes, which is a bit old. I would appreciate any feedback on this.
Disclaimer: this answer is fully opinionated, so take it with care! :)
BTW, you may be asking yourself the wrong question: does your business really need to manage a non-fully-managed Kubernetes cluster?
If not, and you still need Kubernetes functionality, it's wise to consider a fully managed Kubernetes offering such as EKS, AKS, and so on, according to your IaaS of choice. This lets you use Kubernetes' superpowers without vendor lock-in, unlike a CaaS solution such as Elastic Container Service.
But if you just need one specific feature (like container autoscaling), you may be better served by the IaaS vendor's own solution: everything depends on your needs and your business, and since no further details were provided, this discussion cannot be entirely impartial.
UPDATE: based on your latest comment, I would definitely suggest you go with Kubernetes, for a number of reasons:
It's a FOSS project with a strong community, committed to delivering new, vendor/provider-agnostic technology.
It's backed by the CNCF, part of the Linux Foundation.
Kubernetes avoids tying you to a vendor-specific solution, making an eventual migration painless.
It simplifies the local development environment for developers, using Minikube, K3s, or the Kubernetes bundled with Docker Desktop: no more pain handling multiple Docker Compose files that differ from the production setup.
It lets you adopt a truly cloud-native approach to application development and delivery (which doesn't mean your legacy applications cannot run on Kubernetes, quite the opposite!).
I saw a presentation some time ago of a company that based their infrastructure on ECS. One of the conclusions was that things would have been easier if they had used Kubernetes (e.g. with EKS).
The main reason is that the community and tooling around Kubernetes is much bigger than around ECS. You can just find much more tools, talents, custom solutions, books, conferences, and other resources about Kubernetes than about ECS. This makes your life in the end easier when you start implementing things.
I'm currently using Kubernetes and I came across Helm.
Let's say I don't like the idea of "infecting" my Kubernetes cluster with a process that is not related to my applications, but I would gladly accept it if it proved beneficial.
So I did some research, but I still can't find anything I can't easily do with my YAML descriptors and kubectl, so for now I can't see a use for it except, maybe, for handling different environments.
For example (taking these from guides I read):
You can easily install an application, e.g. helm install nginx -> I just add an nginx image to my Deployment descriptor, done.
Repositories -> I already have Docker ones (where I pull my images from).
You can easily helm rollback in case of a release failure -> I just change the image version back to the previous one in my Kubernetes descriptor, easy.
What bothers me is that, at the level of commands, it's pretty much the same effort (helm upgrade vs. kubectl apply), as sketched below.
In exchange, I get a lot of boilerplate from keeping the directory structure Helm wants, and I feel like I'm losing the control I have with plain deployment descriptors... what am I missing?
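To make that comparison concrete, a sketch with illustrative release, chart, and file names:

    # Helm path
    helm upgrade my-release ./mychart
    helm rollback my-release 1            # roll back to revision 1 on failure
    # plain-kubectl path
    kubectl apply -f deployment.yaml
    kubectl rollout undo deployment/my-app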
Your question is totally understandable. For small and simple deployments the benefit is not actually that great, but when a deployment is very complex, Helm helps a lot.
Imagine you have a couple of squads developing microservices for a company. If you can make a chart that works for most of them, the deployment of each microservice differs only by the image and the resources required. This way you get a standardized deployment that is easier for all developers.
Another use case is deploying applications that require a lot of moving parts. For example, if you want to deploy a Grafana server on Kubernetes, you're probably going to need at least a Deployment and a ConfigMap, then a Service that matches the Deployment, and if you want to expose it to the internet, an Ingress too.
So one relatively simple application would require four different YAMLs that you would have to configure manually and double-check. Instead, you could do a simple helm install and reuse a configuration that someone has already made, sometimes even by the company that created the application.
There are a lot of other use cases, but these two are the ones that I would say are the most common.
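For the Grafana example above, that single helm install could look roughly like this (the repository URL and value name are taken from the upstream Grafana chart and may differ for the chart you actually use):

    # add the chart repository once, then install the prepackaged chart
    helm repo add grafana https://grafana.github.io/helm-charts
    helm install my-grafana grafana/grafana --set ingress.enabled=true

One command, instead of hand-writing the Deployment, ConfigMap, Service, and Ingress yourself.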
Here are three suggestions of ways Helm can be useful:
Your continuous deployment system routinely produces new builds and wants to send them to the Kubernetes cluster. You can use templating to specify the image name and tag in a Deployment, and then run helm upgrade ... --set tag=201907211931 to request a specific tag (see the sketch below).
You might have various service-specific controls like the log level or external database hostnames. The Helm values mechanism gives a uniform way to specify them, without having to know the details of the Kubernetes YAML files.
There is a repository of pre-packaged application charts, so if you want replicated PostgreSQL with in-cluster persistent storage, that's already built for you and you can just depend on it, rather than figuring out the right combination of StatefulSets and PersistentVolumeClaims yourself.
You can combine these in interesting (and potentially complex) ways: use an in-cluster database for developer testing but use a cloud-hosted and backed-up database for production, for example, and compute the database host name based on what combination of settings are provided.
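A minimal sketch of the first suggestion (the value name tag follows the example above; the registry and release names are illustrative):

    # templates/deployment.yaml (excerpt)
        containers:
          - name: myapp
            image: "registry.example.com/myapp:{{ .Values.tag }}"

    # the CD system then requests a specific build:
    helm upgrade my-release ./mychart --set tag=201907211931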
There are, of course, alternative ways to do all of these things. Kustomize in particular can change the image value fairly straightforwardly, and is notable for having been included in the kubectl tool since Kubernetes 1.14 (see also Declarative Management of Kubernetes Objects Using Kustomize in the Kubernetes documentation). The "operator" pattern gives an alternate path to install software in your cluster, but even more so than Helm you're trusting an arbitrary program with API access.
We have multiple (20+) services running inside Docker containers, managed using Kubernetes. These services include databases, streaming pipelines, and custom applications. We want to make this product available as an on-premises solution so that it can be installed easily, a one-click-installation sort of thing, hiding all the complexity of the infrastructure.
What would be the best way of doing this? Currently we have scripts managing this but as we move into production there will be frequent upgrades and it will become more and more complex to manage all the dependencies.
I am currently looking into Helm and am wondering if I am exploring in the right direction. Any guidance would be really helpful. Thanks.
Helm seems like the way to go, but what you need to think about, in my opinion, is how you will deliver updates to your software. For example, will you provide a single 'version' of your whole stack that translates into a particular combination of infra setup and microservice versions, or will you allow your customers to upgrade individual microservices as they are released? You can have one huge Helm chart for everything, or you can use, like I do in most cases, an "umbrella" chart that contains subcharts for all the microservices, etc.
My usual setup contains a subchart for every service; service names are then correctly namespaced, so they can be referenced as .Release.Name-subchart[-optional]. When I need to upgrade, I just upgrade the whole chart with something like --reuse-values --set subchart.image.tag=v1.x.x, which gives granular control over each service's version (see the sketch below). I also gate each subchart's resources with if .Values.enabled so I can individually enable/disable each subchart's resources.
The ugly side of this is that if you want to release a single-service upgrade, you still have to run the whole umbrella chart, leaving more surface for some kind of error. On the other hand, it gives you the ability to deploy the whole solution with one command (the default tags are :latest, so a clean install always installs the latest published versions, which then get updated with tagged releases).
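A sketch of the umbrella layout described above (product and service names are illustrative):

    my-product/                 # umbrella chart
      Chart.yaml
      values.yaml               # per-subchart overrides live under service-a:, service-b:, ...
      charts/
        service-a/              # one subchart per microservice
        service-b/

    # inside each subchart's templates, resources are wrapped in:
    # {{- if .Values.enabled }} ... {{- end }}

    # upgrade a single service while reusing everything else:
    helm upgrade my-product ./my-product --reuse-values --set service-a.image.tag=v1.2.3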