Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 1 year ago.
Let's say I have a cluster installed via Kubespray. I reset the cluster with kubeadm reset. Now I have to initialize the cluster again, but I don't have access to the images, packages, or install binaries. I assume everything is still on my cluster machine. Is it possible to run Kubespray with certain tags, or just some roles, to init the cluster and install apps from /etc/kubernetes, where the YAML files are located?
You can initialize your Kubernetes cluster with kubeadm.
The command for that is kubeadm init.
More information on how to use kubeadm to initialize your cluster, and which configuration options you might want to pass to the command, can be found in the official Kubernetes documentation.
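A minimal sketch of what that can look like. The flag values below are examples, not taken from your environment; pinning --kubernetes-version is worth noting for your case, since it stops kubeadm from trying to resolve the "stable" version label online, which matters on a node without internet access:

```shell
# Sketch only: flag values are placeholder examples, adjust to your environment.
# --kubernetes-version pins the control-plane version so kubeadm does not try
# to resolve "stable" online (useful on an air-gapped node).
KUBEADM_CMD="kubeadm init \
  --kubernetes-version=v1.28.0 \
  --pod-network-cidr=10.244.0.0/16 \
  --control-plane-endpoint=my-lb.example.com:6443"

# Print the command instead of running it, so this sketch is side-effect free.
echo "$KUBEADM_CMD"
```

If the container images are already present on the node (e.g. left over from the previous install), kubeadm will use them instead of pulling.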
I'm a newbie to K3s (and Kubernetes in general). I have a K3s cluster, and I'm having trouble setting up Multus CNI with it; I can't find enough information to understand how to adapt it to K3s.
Everything I do with Multus CNI runs into some compatibility or version error. The last error, which I can't seem to get rid of, is the following:
(combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "30a7b175ea81ea5e735f212291bc867b43bddc737caa4614b09e6d118c5ff2b0": plugin type="multus" name="multus-cni-network" failed (add): [default/l2sm-operator-deployment-bc447d545-7kk4g:cbr0]: error adding container to network "cbr0": unsupported CNI result version "1.0.0"
But I'm not even using CNI version 1.0.0, so I'm very confused and don't know what to change in order to fix it.
Has anyone ever worked with Multus CNI in a K3s cluster before? Do you have any documentation that could help me? Is it even possible?
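Not a verified fix, but "unsupported CNI result version" errors generally point at a mismatch between the cniVersion declared in the CNI config chain and the versions the installed Multus binary supports, so the cniVersion fields are worth inspecting. A sketch of what to look at, using a hypothetical conflist written to /tmp; on k3s the real conflists usually live under /var/lib/rancher/k3s/agent/etc/cni/net.d/ (path is an assumption, verify on your node):

```shell
# Hypothetical example conflist; the real one on a k3s node is usually under
# /var/lib/rancher/k3s/agent/etc/cni/net.d/ (an assumption - check your node).
cat > /tmp/00-multus.conflist <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "multus-cni-network",
  "plugins": [
    { "type": "multus", "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig" }
  ]
}
EOF

# The delegate config (flannel's cbr0 in the error above) must not declare a
# cniVersion newer than what the installed multus binary was built against;
# "1.0.0" requires a relatively recent Multus release.
grep '"cniVersion"' /tmp/00-multus.conflist
```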
I've seen some resources and videos about deploying from Azure DevOps Services to on-premises servers/VM instances.
What about doing the opposite? By the opposite, I mean having an on-premises Azure DevOps Server 2019 and wanting to deploy to, say, an AWS-hosted VM.
Is there any convenient way to make an agent on the AWS side communicate with my Azure DevOps Server behind my company's firewall, etc.? As I understood it, there is no way to do that, as the agents are clients registering for build/release jobs/tasks to run...
Did I get it right?
Is there a hint on how to do that?
I will give the AWS Toolkit for Azure DevOps, available on the Marketplace, a try; it should do the job.
Thanks to @Hugh Lin - MSFT!
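For what it's worth, self-hosted agents only ever make outbound HTTPS connections to the server and poll for jobs, so the AWS VM just needs outbound access to your Azure DevOps Server; no inbound firewall hole is required. A sketch of registering an agent on the VM (the server URL, pool, and agent name are placeholders, and <PAT> stays a placeholder for your personal access token):

```shell
# Sketch: registering a self-hosted agent on the AWS VM. The URL, pool, and
# agent name are placeholders; config.sh ships with the agent package you
# download from your Azure DevOps Server.
AGENT_CONFIG="./config.sh \
  --unattended \
  --url https://devops.example.com/DefaultCollection \
  --auth pat --token <PAT> \
  --pool Default \
  --agent aws-vm-01"

# Printed rather than executed so the sketch has no side effects.
echo "$AGENT_CONFIG"
```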
I noticed some inconsistency between kubeadm upgrade plan and the version skew support policy.
For example, say I want to upgrade a k8s cluster from 1.17 to 1.18.
So I need to execute kubeadm upgrade plan on one control plane node, and kubeadm will upgrade the API Server, Controller Manager, Scheduler, and other components at the same time.
But according to the policy, I should upgrade all API Servers to 1.18 first:
"The kube-apiserver instances these components communicate with are at 1.18 (in HA clusters in which these control plane components can communicate with any kube-apiserver instance in the cluster, all kube-apiserver instances must be upgraded before upgrading these components)"
So, does kubeadm execute the upgrade plan in the wrong order, or is this order a compromise between the policy and ease of use (or maybe an implementation issue)?
A bit above in the docs it's specified that
"kube-controller-manager, kube-scheduler, and cloud-controller-manager must not be newer than the kube-apiserver instances they communicate with. They are expected to match the kube-apiserver minor version, but may be up to one minor version older (to allow live upgrades)."
L.E.: Oh, I see, the issue is that control plane components on the upgraded control plane node will be newer than kube-apiserver on the not-yet-upgraded nodes. I've personally never had this issue, as I always configure control plane components to connect to kube-apiserver on the same node. I guess it's a kubeadm compromise, as you suggested.
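The quoted rule can be sketched as a toy check, purely as an illustration of the policy, not anything kubeadm itself runs (versions are "major.minor" strings, and only the minor version is compared):

```shell
# Toy check of the skew rule quoted above: kube-controller-manager and
# kube-scheduler may be at most one MINOR version older than kube-apiserver,
# and never newer. Arguments are "major.minor" strings, e.g. "1.18".
skew_ok() {
  api_minor="${1#*.}"    # minor version of kube-apiserver
  comp_minor="${2#*.}"   # minor version of the component
  diff=$((api_minor - comp_minor))
  [ "$diff" -ge 0 ] && [ "$diff" -le 1 ]
}

skew_ok 1.18 1.17 && echo "1.17 controller-manager with 1.18 apiserver: allowed"
skew_ok 1.17 1.18 || echo "1.18 controller-manager with 1.17 apiserver: not allowed"
```

The second case is exactly the situation above: the upgraded node's components at 1.18 talking to a not-yet-upgraded 1.17 kube-apiserver violates the rule.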
I understand from the official docs that Anthos is built on Kubernetes/Istio/Knative, but where does Anthos fit in the Google Cloud Platform?
Can it act as a configuration manager for application auto-deployment, provisioning, etc.?
Does it provide support for language-specific builds on the fly?
With Anthos you can manage multiple Kubernetes clusters across multiple clouds (Amazon, Google, Azure) and on-prem. It can help you maintain a hybrid environment and move your infrastructure from on-prem to cloud, fully or partially, in a predictable way.
You can use Anthos Config Management to create a common configuration for your clusters. You can use ClusterSelectors to apply configurations to subsets of clusters.
Configuration can include Istio service mesh, pod security policies, or quota policies.
From a security perspective, you can manage your policies using Anthos Policy Controller, enforcing PodSecurityPolicies, with the advantage of testing constraints before enforcing them.
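As a sketch, a ClusterSelector manifest looks roughly like this (the label key and value are made-up examples; check the Anthos Config Management reference for the exact schema before using it):

```shell
# Hypothetical ClusterSelector manifest: selects clusters labelled
# environment=prod so configs can be applied only to that subset.
# The label key/value are made-up examples.
cat > /tmp/prod-selector.yaml <<'EOF'
apiVersion: configmanagement.gke.io/v1
kind: ClusterSelector
metadata:
  name: selector-env-prod
spec:
  selector:
    matchLabels:
      environment: prod
EOF

grep 'kind: ClusterSelector' /tmp/prod-selector.yaml
```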
I've searched the internet but I haven't found clear answers.
Kops is for production-grade clusters and is vendor agnostic, and I get that, but what are the differences compared to eksctl?
Also, most of the articles I found are a year+ old, and with the speed the K8s ecosystem moves, they might be outdated.
eksctl is specifically meant to bootstrap clusters using Amazon's managed Kubernetes service (EKS). With EKS, Amazon takes responsibility for managing your Kubernetes master nodes (at an additional cost).
kops is a Kubernetes installer. It will install Kubernetes on any type of node (e.g. an Amazon EC2 instance or a local virtual machine), but you will be responsible for maintaining the master nodes (and the complexity that comes with that).
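A side-by-side sketch of the two CLIs; the cluster names, region, zones, node count, and state bucket are all placeholder values, and the commands are printed rather than executed:

```shell
# eksctl: one command provisions an EKS cluster (managed control plane).
EKSCTL_CMD="eksctl create cluster --name demo --region eu-west-1 --nodes 2"

# kops: you bring your own state store (an S3 bucket here) and end up with
# self-managed master nodes on plain EC2 instances.
KOPS_CMD="kops create cluster --name demo.example.com \
  --state s3://my-kops-state --zones eu-west-1a"

# Printed instead of run, so the sketch has no side effects.
echo "$EKSCTL_CMD"
echo "$KOPS_CMD"
```

The visible difference in the commands mirrors the point above: kops needs its own state store and gives you master nodes to maintain, while eksctl hands the control plane to AWS.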