I am implementing continuous integration and continuous deployment (CI/CD) using Ansible, Docker, Jenkins and Kubernetes. I have already created a Kubernetes cluster with 1 master and 2 worker nodes using Ansible and Kubespray. I also have 30 to 40 microservice applications, so I need to create that many Services and Deployments.
My Confusion
When using the Kubernetes package manager Helm, do I need to initialize my charts on the master node or on the base machine from which I deployed my Kubernetes cluster?
If I initialize it on the master node, can I use kubectl over SSH to deploy to the remote worker nodes?
If I initialize it outside the Kubernetes cluster nodes, can I use the kubectl command to deploy into the Kubernetes cluster?
Your confusion seems to lie in the configuration and interactions of the Helm components. This explanation provides a good graphic of the relationships.
If you are using the traditional Helm/Tiller configuration, Helm will be installed locally on your machine and, assuming you have the correct kubectl configuration, you can "initialize" your cluster by running helm init to install Tiller into your cluster. Tiller will run as a deployment in kube-system, and has the RBAC privileges to create/modify/delete/view the chart resources. Helm will automatically manage all the API objects for you, and the kube-scheduler will schedule the pods to all your nodes accordingly. You should not be directly interacting with your master and nodes via your console.
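As a concrete sketch of that flow with Helm v2, run entirely from your local machine (the chart path and release name below are placeholders):

    kubectl config current-context                        # confirm which cluster kubectl points at
    helm init                                             # installs Tiller into the kube-system namespace
    kubectl get deployment tiller-deploy -n kube-system   # Tiller runs here as a deployment
    helm install ./my-service-chart --name my-service     # Tiller creates the chart's API objects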
In either configuration, you would always be making the Helm deployment from your local machine with kubectl access to your cluster.
Hope this helps!
If you are looking for a way to run the Helm client inside your Kubernetes cluster, check out the concept of the Helm Operator.
I would also recommend looking into the term "GitOps": a set of practices that combines Git with Kubernetes and makes Git the source of truth for your declarative infrastructure and applications.
There are two great OSS projects out there that implement GitOps best practices:
flux (uses the Helm Operator; see the HelmRelease sketch after this list)
Jenkins X (uses Helm as part of its release pipeline; check out this session on YouTube to see it in action)
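To give a feel for the Helm Operator approach, here is a minimal HelmRelease sketch (the repository URL, chart path and values are placeholders; the operator watches such resources and performs the Helm release for you):

    apiVersion: helm.fluxcd.io/v1
    kind: HelmRelease
    metadata:
      name: my-app
      namespace: default
    spec:
      releaseName: my-app
      chart:
        git: https://github.com/example/charts   # placeholder chart repository
        ref: master
        path: charts/my-app
      values:
        replicaCount: 2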
Related
I am new to Spinnaker. I want to set up Spinnaker in my lab to test some pipeline deployments to K8s. I have gone through a lot of videos and websites teaching how to set up Spinnaker using Helm, Hal, the Operator, etc. The steps and requirements are quite complex, and I am struggling to decide which method I should take.
For my lab environment, I have 3 VMs running CentOS (bare metal) and have built a Kubernetes cluster on them (1 master and 2 worker nodes). Now I want to set up Spinnaker to test microservice deployments on this k8s cluster in an easy and quick way.
Some of my doubts
If I choose Spinnaker on the Kubernetes cluster, do I need to set up another new k8s cluster, or can I use the same cluster that already has several microservices running on it?
If I choose Spinnaker on a VM, I guess I need to spin up a new Ubuntu machine instead of setting it up on my existing CentOS machines?
Any suggestion is welcome. Thanks!
I am trying to create a Kubernetes cluster with 1 master and 2 worker nodes using the tool kubeadm on my on-premises machines. I am following the official Kubernetes documentation for forming the cluster at the following URL:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
After installing the runtimes and completing the "before you begin" prerequisite steps, I found that the first step of forming the cluster in the document is "Create load balancer for kube-apiserver".
My Doubt
When I created a single-master, three-worker-node cluster using the Kubespray tool, I did not create any separate load balancer for it. So now that I am following the kubeadm tool, do I actually need to create the load balancer to form the cluster?
Why do the two tools show different approaches? I did not create a load balancer when using the Kubespray tool, and now I am trying to create a cluster with the kubeadm tool.
Speaking of load balancer creation during a Kubernetes deployment with kubeadm: it depends on your setup. It is not mandatory to set up a load balancer. Your cluster will still work, but without load balancing it is going to be hard to qualify this cluster as HA.
In a single-master setup such as yours, the master node manages the etcd database, API server, controller manager and scheduler, along with the worker nodes. However, if that single master node fails, the cluster loses its control plane and is effectively lost.
Learn more here: kubernetes-ha-kubeadm.
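As an illustration of the difference (a hedged sketch; LOAD_BALANCER_DNS and LOAD_BALANCER_PORT are placeholders for an endpoint you would have to provide yourself):

    # Single-master cluster, comparable to what Kubespray gave you: no load balancer needed.
    kubeadm init

    # HA control plane: give every node a stable endpoint in front of the kube-apiservers,
    # so control-plane nodes can be added or lost behind it.
    kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs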
Kubeadm covers the life-cycle management needs of Kubernetes clusters, including self-hosted layouts, dynamic discovery services, and so on. Kubespray is more about generic configuration management, initial clustering, and bootstrapping.
Kubespray is a good choice when you are either familiar with Ansible or want the ability to switch between multiple platforms. If your priority is tight integration with the unique features offered by the supported clouds, and you plan to stick with your provider, kops may be a better option.
Deploying a load balancer is up to the user and is not covered by the Ansible roles in Kubespray. By default, Kubespray only configures a non-HA endpoint, which points to the access_ip or IP address of the first server node in the kube-master group. It can also configure clients to use endpoints for a given load balancer type. You can find more information here: kubespray-lb.
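For example, if you do provision an external load balancer yourself, Kubespray can be pointed at it through inventory variables (a sketch based on Kubespray's HA documentation; the file path, domain, address and port are placeholders):

    # inventory/mycluster/group_vars/all/all.yml
    apiserver_loadbalancer_domain_name: "lb.example.local"
    loadbalancer_apiserver:
      address: 10.0.0.100   # placeholder VIP in front of the kube-apiservers
      port: 8443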
Here is a comparison of Kubernetes deployment tools: Kubernetes Deployment Tools.
I want to set up my Hyperledger blockchain application in a Kubernetes cluster.
I don't want to encourage questions like this, but here are some steps that could possibly help you:
Ensure your application runs correctly locally on Docker.
Construct your Kubernetes configuration files. What you will need:
A deployment or a statefulset for each of your peers.
A statefulset for the couchdb for each of your peers.
A deployment or a statefulset for each of your orderers.
One service per peer, orderer and couchdb, to allow them to communicate (see the Service sketch after this list).
A job that creates and joins the channels.
A job that installs and instantiates the chaincode.
Generated crypto-material and network-artifacts.
Kubernetes Secrets or persistent volumes that hold your crypto-material and network-artifacts.
An image of your dockerized application (I assume you have some sort of server using an SDK to communicate with the peers) uploaded on a container registry.
A deployment that uses that image and a service for your application.
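As a minimal illustration of one of those per-peer Services (a sketch; the name and selector are placeholders, and 7051 is the conventional Fabric peer port):

    apiVersion: v1
    kind: Service
    metadata:
      name: peer0-org1
    spec:
      selector:
        app: peer0-org1   # must match the labels on your peer pod
      ports:
        - name: grpc
          port: 7051
          targetPort: 7051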
Create a Kubernetes cluster either locally or on a cloud provider and install the kubectl CLI on your computer.
Apply the configuration files (e.g. kubectl apply -f peerDeployment.yaml) to your cluster in this order (see the sketch after this list):
Secrets
Peers, couchdbs, orderers (deployments, statefulsets and services)
Create channel jobs
Join channel jobs
Install and instantiate chaincode job
Your application's deployment and service
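Put together, the apply sequence might look like this (a hedged sketch; all file and job names are placeholders for your own manifests):

    kubectl apply -f secrets/                            # crypto-material and network-artifacts
    kubectl apply -f peers/ -f couchdb/ -f orderers/     # deployments/statefulsets plus their services
    kubectl apply -f jobs/create-channel.yaml
    kubectl wait --for=condition=complete job/create-channel --timeout=300s
    kubectl apply -f jobs/join-channel.yaml
    kubectl wait --for=condition=complete job/join-channel --timeout=300s
    kubectl apply -f jobs/chaincode.yaml                 # install and instantiate the chaincode
    kubectl wait --for=condition=complete job/chaincode --timeout=600s
    kubectl apply -f app/                                # your application's deployment and service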
If everything was configured correctly, you should have a running HLF platform in your Kubernetes cluster. It goes without saying that you have to research each step to understand what you need to do. And to experiment, a lot.
I cannot find any articles answering the question: is it safe/right to deploy Spinnaker to the same Kubernetes cluster that Spinnaker will manage? I mainly mean production, HA deployments.
I think the architectures of Spinnaker and Kubernetes complement each other very well, and running Spinnaker in the same K8s cluster it is managing is definitely safe.
As per your comment on #mdirkse's answer, there is a codelab, which is official Spinnaker documentation, that explains how to create a set of basic pipelines for deploying code from a GitHub repo to a production Kubernetes cluster in the form of a Docker container.
In this documentation, it specifically states the following:
We will be deploying Spinnaker to the same Kubernetes cluster it will be managing. ...
Not sure if this is exactly what you are looking for though.
I'm not sure about "right", but I'd definitely say that it is safe to run Spinnaker on the same Kubernetes cluster that it manages, if you set it up right. Kubernetes (and Docker) gives you all the tools you need to properly separate Spinnaker from the other things running on the cluster (namespaces, quotas, node affinities, etc.). Indeed, the whole point of Kubernetes is to be able to easily run software in an HA/fault-tolerant way, and since Spinnaker consists of a collection of stateless microservices, it really plays to the strengths of k8s.
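For instance, one common way to carve out that separation is a dedicated namespace with a ResourceQuota (a sketch; the quota numbers are arbitrary placeholders to be sized for your cluster):

    apiVersion: v1
    kind: Namespace
    metadata:
      name: spinnaker
    ---
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: spinnaker-quota
      namespace: spinnaker
    spec:
      hard:
        requests.cpu: "8"       # placeholder ceiling for Spinnaker's microservices
        requests.memory: 16Gi
        limits.cpu: "16"
        limits.memory: 32Gi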
I have installed Kubernetes with minikube, which gives a single-node cluster.
There is a YAML file to deploy the controller manager, but it is showing:
Back-off restarting failed container Error syncing pod
Can someone solve this issue?
The link for the YAML file is here: https://github.com/kubernetes/kubernetes.github.io/blob/master/docs/admin/high-availability/kube-controller-manager.yaml
The Kubernetes controller manager is a core component of Kubernetes and is already running in every Kubernetes cluster, usually in the form of a standalone pod managed by the Kubernetes addon manager. Minikube uses localkube, which integrates the controller manager together with the other Kubernetes core components in a single binary to simplify the setup of single-node clusters for testing purposes. If you want to change options of the integrated controller manager or other components, use the --extra-config option of minikube start.
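For example (a hedged sketch; the exact key syntax depends on your minikube version, and the flag value here is just an illustration):

    # Pass a kube-controller-manager flag through minikube's integrated components
    minikube start --extra-config=controller-manager.horizontal-pod-autoscaler-sync-period=10s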
The example you linked is a custom deployment of the controller manager used for highly available multi-master clusters. If you want to test this you need to set up your cluster manually, minikube is not the right tool for this.