Selecting between Kubernetes vs AWS ECS [closed]

I'm trying to decide between using Kubernetes and AWS ECS. From what I have seen, Kubernetes seems to have broader adoption, although the learning curve is a bit steep. The only comparison I found was AWS ECS vs Kubernetes, which is a bit old. I would appreciate any feedback on this.

Disclaimer: this answer is fully opinionated, so take it with care! :)
BTW, you're asking yourself the wrong question: does your business really need to manage a Kubernetes cluster that is not fully managed?
If not, and you need Kubernetes functionality, it's wise to consider a fully managed Kubernetes offering such as EKS, AKS, and so on, depending on your IaaS of choice. This lets you use Kubernetes' superpowers without vendor lock-in, unlike a CaaS solution such as Elastic Container Service.
But if you only need specific functionality (like container autoscaling), you may be better served by the IaaS vendor's own solution: everything depends on your needs and your business, and since no further details were provided, this discussion can't be entirely impartial.
UPDATE: based on your latest comment, I would definitely suggest you go fully with Kubernetes, for a number of reasons:
It's a FOSS project with a strong community, committed to delivering new technology in a vendor/provider-agnostic way.
It's backed by the CNCF, which is part of the Linux Foundation.
Kubernetes avoids binding you to a vendor-specific solution, making an eventual migration far less painful.
It simplifies the local development environment for developers, who can just use Minikube, k3s, or the Kubernetes bundled with Docker Desktop: no more pain from maintaining multiple Docker Compose files that drift from the production setup (a minimal sketch follows this list).
It lets you adopt a truly cloud-native approach to developing and delivering applications (which doesn't mean your legacy applications can't run on Kubernetes; quite the opposite!).
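
On the local-development point above, a minimal sketch of what that looks like in practice (the manifest paths are just illustrative):

    # start a small local cluster (k3s or Docker Desktop's Kubernetes work similarly)
    minikube start

    # apply the same manifests you ship to production
    kubectl apply -f k8s/deployment.yaml -f k8s/service.yaml

    # watch the pods come up
    kubectl get pods -w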

I saw a presentation some time ago from a company that based its infrastructure on ECS. One of their conclusions was that things would have been easier if they had used Kubernetes (e.g. with EKS).
The main reason is that the community and tooling around Kubernetes are much bigger than around ECS. You can simply find far more tools, talent, ready-made solutions, books, conferences, and other resources for Kubernetes than for ECS. That ultimately makes your life easier when you start implementing things.


Is there a need to run containers on local/dev machine? [closed]

I want to explore options for the development process (web API + worker services), with deployment to Azure Container Apps in mind.
In particular, I'm wondering: is there any reason to run containers on developers' machines, or should the apps be run and unit tested locally without containers, with containers used only in the CI/CD pipeline?
In that case, integration tests would also be performed only in the CI/CD pipeline.
What's also important is that different devs on the team may have different machines (Windows, macOS, Linux), and we want a unified dev process for everyone.
What is a typical development flow?
This is mostly opinion-based and depends on how well debugging works for your specific stack. For example, I work with Blazor WebAssembly and most of the time I debug in containers, because my application is hosted in Podman; however, when I'm investigating a client-side issue, containers are not convenient because debugging doesn't work properly there.
With containers, you are about as close to shipping your dev machine to the cloud as you can get.
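
To make the unified, OS-independent setup from the question concrete, one sketch is a single Compose file that both Docker and Podman can run on Windows, macOS, and Linux (service names and paths below are hypothetical):

    # docker-compose.yml (runs with `docker compose up` or `podman-compose up`)
    services:
      webapi:
        build: ./src/WebApi      # hypothetical web API project
        ports:
          - "8080:8080"
      worker:
        build: ./src/Worker      # hypothetical worker service
        depends_on:
          - webapi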

Helm charts vs deployments [closed]

I'm overwhelmed with the number of options in Kubernetes.
There is a typical (most commonly suggested) way of deploying microservices: you create a Deployment YAML that specifies which image to pull, the number of replicas, the app's listening ports, etc. Then you might create more YAML files to expose the app, for example a Service.
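
For reference, a minimal sketch of that typical pair of manifests might look like this (names and image are placeholders):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: nginx:1.25        # which image to pull
              ports:
                - containerPort: 80    # the app's listening port
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      selector:
        app: my-app
      ports:
        - port: 80
          targetPort: 80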
Helm charts, it is said, provide an easier way by giving you a preconfigured package. However, after installing a couple of apps from Bitnami, I see that some have a Deployment YAML and some don't; instead, some are represented as Pod YAMLs. I've read that bare Pod YAMLs are not ideal in a production environment.
What should I use when I just need to deploy a couple of apps on a node in the easiest (yet not naive) way possible?
Deployments do the Ops job for you while you drink coffee. What I mean by this is that a Deployment ensures that the desired state defined in your manifest is maintained automatically (on a best-effort basis). So, if a pod crashes, the Deployment will bring it back up without human intervention.
However, if you use a bare Pod YAML to deploy an application, you have to make sure yourself that the pod stays up (if that's required).
If you have deployed a production-grade app, you know that running it takes a lot more than just a Deployment: you may need to create Secrets, ConfigMaps, Services, Deployments, and so on. This is where Helm lends a helping hand by combining all the required descriptors into one deployable package, which makes it simple to maintain the state of the whole app as a single unit.
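
As a rough sketch of what that package looks like (chart and release names are placeholders):

    helm create mychart                  # scaffolds a chart, roughly:
    # mychart/
    #   Chart.yaml                       # chart metadata
    #   values.yaml                      # default configuration values
    #   templates/                       # deployment.yaml, service.yaml, etc.
    helm install my-release ./mychart    # installs all descriptors as one unit
    helm upgrade my-release ./mychart    # upgrades the whole app together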
As for a Helm chart that ships a Pod YAML rather than a Deployment, it really depends on the use case; it may come with an "operator" that handles the Ops part for you.
Helm is the recommended way of deploying to Production.

does kubeadm upgrade play against version skew support policy [closed]

I noticed an inconsistency between kubeadm upgrade plan and the version skew support policy.
For example, I want to upgrade a k8s cluster from 1.17 to 1.18.
To do that, I execute kubeadm upgrade plan on one control plane node, and kubeadm upgrades the API Server, Controller Manager, Scheduler, and other components at the same time.
But according to the policy, I should upgrade all API Servers to 1.18 first:
"The kube-apiserver instances these components communicate with are at 1.18 (in HA clusters in which these control plane components can communicate with any kube-apiserver instance in the cluster, all kube-apiserver instances must be upgraded before upgrading these components)"
So, does kubeadm execute the upgrade in the wrong order, or is this order a compromise between the policy and ease of use (or maybe an implementation issue)?
A bit above that in the docs, it's specified that:
"kube-controller-manager, kube-scheduler, and cloud-controller-manager must not be newer than the kube-apiserver instances they communicate with. They are expected to match the kube-apiserver minor version, but may be up to one minor version older (to allow live upgrades)."
Later edit: Oh, I see, the issue is that the control plane components on the upgraded control plane node will be newer than the kube-apiserver on the not-yet-upgraded nodes. I've personally never hit this issue, as I always configure control plane components to connect to the kube-apiserver on the same node. I guess it's a kubeadm compromise, as you suggested.
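
For reference, a sketch of the documented kubeadm flow, which upgrades one control plane node at a time and is where the brief skew discussed above can appear (the version is just the example from the question):

    # on the first control plane node
    kubeadm upgrade plan
    kubeadm upgrade apply v1.18.0

    # on each remaining control plane node
    kubeadm upgrade node

    # afterwards, upgrade kubelet/kubectl on every node, restart the kubelet,
    # and drain/uncordon worker nodes one by one with kubectl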

What Google's Anthos with Kubernetes can do and how does it fits in Google Cloud Platform? [closed]

I understand from the official docs that Anthos is built on Kubernetes/Istio/Knative, but where does Anthos fit in Google Cloud Platform?
Can it act as a configuration manager for application auto-deployment, provisioning, etc.?
Does it provide support for language-specific builds on the fly?
With Anthos you can basically manage multiple Kubernetes clusters across multiple clouds (Amazon, Google, Azure) and on-prem. It can help you maintain a hybrid environment and move your infrastructure from on-prem to the cloud in a predictable way, fully or partially.
You can use Anthos Config Management to create a common configuration for your clusters. You can use ClusterSelectors to apply configurations to subsets of clusters.
Configuration can include Istio service mesh, pod security policies, or quota policies.
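
For example, a ClusterSelector plus an annotated config might look roughly like this (labels, names, and the quota resource are illustrative):

    kind: ClusterSelector
    apiVersion: configmanagement.gke.io/v1
    metadata:
      name: selector-env-prod
    spec:
      selector:
        matchLabels:
          environment: prod
    ---
    # this config is applied only to clusters matched by the selector above
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: prod-quota
      annotations:
        configmanagement.gke.io/cluster-selector: selector-env-prod
    spec:
      hard:
        pods: "100"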
From a security perspective, you can manage your policies using Anthos Policy Controller, enforcing pod security policies, with the advantage of being able to test constraints before enforcing them.

Spring Cloud Netflix vs Kubernetes [closed]

I am trying to make a final choice between Spring Cloud Netflix, Kubernetes, and Swarm for building our microservices environment. They are all very cool, which makes the choice hard.
I'll describe briefly which kinds of problems I want to solve.
I couldn't find a good way to design an API gateway (not a simple load balancer) with Kubernetes or Swarm, which is why I want to use Zuul. But on the other hand, the API gateway must use service discovery, which in the case of Kubernetes or Swarm is embedded in the orchestrator. With Kubernetes I could use its Spring Cloud integration, but then I would have both server-side discovery and client-side discovery inside Kubernetes, which I think is overkill.
I'm wondering whether anyone has experience with these and has any suggestions.
Thanks.
Kubernetes and Docker Swarm are container orchestration tools.
Spring Cloud is a collection of tools to build microservices/streaming architectures.
There is a bit of overlap, such as service discovery, gateway, or configuration services. But you could use Spring Cloud without containers and deploy the jars yourself, without needing Kubernetes or Swarm.
So you'll have to choose between Kubernetes and Swarm for orchestrating your containers, if you use containers at all.
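
For example, on the service-discovery overlap: in Kubernetes a Service gives each backend a stable DNS name, so a gateway or client calls that name directly instead of querying a Eureka-style registry (names below are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: orders
    spec:
      selector:
        app: orders
      ports:
        - port: 80
          targetPort: 8080
    # other pods can now reach it at http://orders (or
    # http://orders.<namespace>.svc.cluster.local) with no discovery client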
Comparison: https://dzone.com/articles/deploying-microservices-spring-cloud-vs-kubernetes