I'm a newbie to K3s (and Kubernetes in general). I have a K3s cluster, and I'm having trouble setting up Multus CNI with it; I can't find enough information to understand how to adapt it to K3s.
Everything I try with Multus runs into some compatibility or version error. The latest error, which I can't seem to get rid of, is the following:
(combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "30a7b175ea81ea5e735f212291bc867b43bddc737caa4614b09e6d118c5ff2b0": plugin type="multus" name="multus-cni-network" failed (add): [default/l2sm-operator-deployment-bc447d545-7kk4g:cbr0]: error adding container to network "cbr0": unsupported CNI result version "1.0.0"
But I'm not even using CNI version 1.0.0, so I'm getting very confused and don't know what to change in order to fix it.
Has anyone ever worked with Multus CNI in a K3s cluster before? Do you have any documentation that could help me? Is it even possible?
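For context, the "unsupported CNI result version" message generally points at a mismatch between the cniVersion declared in a CNI config and the spec versions the plugin binaries on the node actually support. Below is a minimal debugging sketch, assuming a default K3s layout; note that K3s keeps its CNI config outside the usual /etc/cni/net.d, and the exact file name (00-multus.conf) is an assumption that may differ on your install:

    # On a K3s node, list the CNI configs that Multus and the flannel (cbr0) delegate use:
    sudo ls /var/lib/rancher/k3s/agent/etc/cni/net.d
    sudo cat /var/lib/rancher/k3s/agent/etc/cni/net.d/00-multus.conf
    # If the delegate config declares "cniVersion": "1.0.0" but the plugin
    # binaries on the node are too old to produce spec 1.0.0 results, pinning
    # the config back to a version both sides support is a common workaround:
    #   "cniVersion": "0.3.1"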
Related
Let's say I have a cluster installed via Kubespray, and I reset the cluster with kubeadm reset. Now I have to initialize the cluster again, but I don't have access to images, packages, or install binaries; I assume everything is already on my cluster machine. Is it possible to run Kubespray with specific tags, or just some roles, to initialize the cluster and install the apps from /etc/kubernetes, where the YAML manifests are located?
You can initialize your k8s cluster with kubeadm.
The command for that is kubeadm init.
More information on how to use kubeadm to initialize your cluster, and which configuration options you might want to pass to the command, can be found in the official k8s documentation.
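For illustration, a typical invocation looks something like this (the pod CIDR is just an example value; adjust it to whatever your network plugin expects):

    # Run on the node that should become the control plane:
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16
    # Then set up kubectl access for your user, as the init output suggests:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config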
I noticed some inconsistency between kubeadm upgrade plan and version skew support policy.
For example, I want to upgrade a k8s cluster from 1.17 to 1.18.
So I run kubeadm upgrade plan and then kubeadm upgrade apply on one control plane node, and kubeadm upgrades the API Server, Controller Manager, Scheduler, and other components on that node at the same time.
But according to the policy, I should first upgrade all API Servers to 1.18:
"The kube-apiserver instances these components communicate with are at 1.18 (in HA clusters in which these control plane components can communicate with any kube-apiserver instance in the cluster, all kube-apiserver instances must be upgraded before upgrading these components)"
So, does kubeadm execute the upgrade in the wrong order, or is this order a compromise between the policy and ease of use (or maybe an implementation issue)?
A bit above in the docs it's specified that
"kube-controller-manager, kube-scheduler, and cloud-controller-manager must not be newer than the kube-apiserver instances they communicate with. They are expected to match the kube-apiserver minor version, but may be up to one minor version older (to allow live upgrades)."
Later edit: Oh, I see, the issue is that the control plane components on the upgraded control plane node will be newer than the kube-apiserver instances on the not-yet-upgraded nodes. I've personally never had this issue, as I always configure the control plane components to connect to the kube-apiserver on the same node. I guess it's a kubeadm compromise, as you suggested.
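For reference, the documented upgrade sequence looks roughly like this (the exact patch version is a placeholder; package-manager steps for kubeadm/kubelet are omitted):

    # On the first control plane node:
    kubeadm upgrade plan              # shows available target versions and what will change
    sudo kubeadm upgrade apply v1.18.0
    # On each remaining control plane node:
    sudo kubeadm upgrade node
    # Finally, upgrade the kubelet and kubectl packages and restart the
    # kubelet on every node, draining each node before you do so.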
I've searched the internet but I haven't found clear answers.
Kops is for production-grade clusters and is vendor agnostic, and I get that, but what are the differences compared to eksctl?
Also, most of the articles I found are a year+ old, and with the speed the K8s ecosystem moves at, they might be outdated.
eksctl is specifically meant to bootstrap clusters on Amazon's managed Kubernetes service (EKS). With EKS, Amazon takes responsibility for managing your Kubernetes master nodes (at an additional cost).
kops is a Kubernetes installer. It will install Kubernetes on any type of node (e.g. an Amazon EC2 instance or a local virtual machine), but you will be responsible for maintaining the master nodes (and the complexity that comes with that).
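To make the difference concrete, here is roughly what bootstrapping looks like with each tool (the cluster names, region, zones, and S3 state bucket are placeholders):

    # eksctl: EKS creates and manages the control plane for you.
    eksctl create cluster --name my-cluster --region us-east-1 --nodes 3

    # kops: you provision (and later maintain) the control plane yourself;
    # the cluster state lives in an S3 bucket that you own.
    export KOPS_STATE_STORE=s3://my-kops-state-bucket
    kops create cluster --name=my-cluster.example.com --zones=us-east-1a
    kops update cluster --name=my-cluster.example.com --yes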
I'm trying to decide between using Kubernetes and AWS ECS. From what I have seen, Kubernetes seems to have broader adoption, although the learning curve is a bit high. The only comparison I found, AWS ECS vs Kubernetes, is a bit old. I would appreciate any feedback on this.
Disclaimer: this answer is fully opinionated, so take it with care! :)
BTW, you're asking yourself the wrong question: does your business need to manage a non-fully-managed Kubernetes cluster?
If not, and you need Kubernetes functionality, it's wise to consider adopting a fully managed Kubernetes offering like EKS, AKS, and so on, according to your IaaS of choice. This will let you use Kubernetes' superpowers without vendor lock-in, unlike other CaaS solutions such as Elastic Container Service.
But if you just need a few specific features (like container autoscaling), you may be better off with your IaaS vendor's own solutions: everything depends on your needs and your business, and since no further details have been provided, this discussion can't be fully impartial.
UPDATE: based on your latest comment, I would definitely suggest you go fully with Kubernetes, for a number of reasons:
It's a FOSS project with a strong community, committed to delivering new technologies in a vendor/provider-agnostic way.
It's backed by the CNCF, a branch of the Linux Foundation.
Kubernetes doesn't bind you to a vendor-specific solution, making an eventual migration painless.
It simplifies the local development environment for developers, using Minikube, K3s, or the Kubernetes bundled with Docker Desktop: no more pain handling multiple Docker Compose files that differ from the production setup (see the sketch after this list).
It lets you adopt the true cloud-native approach to application development and delivery (but this doesn't mean your legacy applications cannot run on Kubernetes; quite the opposite!).
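As a sketch of the local-development point above, assuming Minikube is installed and your production manifests live under a k8s/ directory (a hypothetical path), local development can reuse them directly:

    # Start a local single-node cluster:
    minikube start
    # Apply the very same manifests you deploy to production:
    kubectl apply -f k8s/
    # Get a local URL for one of your services (my-service is a placeholder):
    minikube service my-service --url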
I saw a presentation some time ago by a company that based their infrastructure on ECS. One of the conclusions was that things would have been easier if they had used Kubernetes (e.g. with EKS).
The main reason is that the community and tooling around Kubernetes are much bigger than around ECS. You can find many more tools, talents, custom solutions, books, conferences, and other resources for Kubernetes than for ECS. This ultimately makes your life easier when you start implementing things.
After weeks of working fine, the deployment stopped working and fails with the message:
"Error from server (BadRequest): cannot trigger a deployment for "xxxx" because it contains unresolved images"
This is an on-premises OpenShift 3.5 cluster, and the very same deployment works just fine from the web console. oc get events does not return anything, and raising the log level did not help me either. Could it be related to the network setup? DNS and firewall are the only changes in the meantime that I am aware of, but I would like to know how to investigate this from the OpenShift perspective.
Late update: it turned out that one of the nodes was having connectivity problems with the registry. Incidentally, when run from the web console, the builder pod got assigned to a working node; when run from the command line, to the failing one. Investigating the builds helped; the commands below show the kind of checks that were useful.
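For future readers hitting the same "unresolved images" message, a few oc commands help narrow it down (xxxx stands for the image stream / build config name from the error):

    # Check whether the image stream tag the deployment references actually resolved:
    oc get is
    oc describe is xxxx
    # Look at recent builds and their logs for registry or connectivity errors:
    oc get builds
    oc logs build/xxxx-1
    # Compare which nodes the failing and working builder pods landed on:
    oc get pods -o wide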