The requirement is that each new QA build should create a new Kubernetes cluster (a new environment altogether), which is then destroyed after QA is completed.
So it is not a federated setup.
I am using kops on AWS to create the cluster.
Do I need to create another 'bootstrap' instance for creating the new cluster? My guess is that I can just change the cluster name in the command and it will create a new cluster, e.g. kops create cluster --zones=<zones> <some-other-name>.
So the question is: what does kubectl get all return then - the objects of all clusters, consolidated?
When I do kubectl apply -f ., how does kubectl know which cluster to apply to?
How do I specify the cluster name when installing things like Helm?
You should set the context to your cluster, something like this; once it is set, all your kubectl commands will run in the context of that cluster.
kubectl config use-context my-cluster-name
Refer to this link for more details.
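For the per-build QA flow from the question, a minimal end-to-end sketch could look like the following; the cluster name, zone, S3 state-store bucket and chart path are placeholders rather than anything from the question, and the Helm line uses Helm 3 syntax:

export KOPS_STATE_STORE=s3://my-kops-state-store   # hypothetical bucket

# 1. Create a throwaway cluster with a unique name for this build
kops create cluster --zones=us-east-1a qa-build-42.k8s.local --yes

# 2. kops writes the new cluster's credentials into ~/.kube/config and
#    switches the current context to it; you can also do it explicitly
kops export kubecfg qa-build-42.k8s.local
kubectl config use-context qa-build-42.k8s.local

# 3. Everything now targets that cluster; you can also be explicit per command
kubectl apply -f . --context qa-build-42.k8s.local
helm install my-release ./chart --kube-context qa-build-42.k8s.local

# 4. Tear the environment down after QA is done
kops delete cluster qa-build-42.k8s.local --yes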
I have a Kubernetes cluster installed using 5 virtual machines: 1 for the Kubernetes master, and the others for the workers.
I tried the RBAC method: I created a kubeconfig file to give access to only one namespace. When I test it using kubectl --kubeconfig developper.kubeconfig get pods, it works, but if this user doesn't use the --kubeconfig flag, he can simply run kubectl get all --all-namespaces (for example) and access all the cluster resources. So this is not an effective method for what I need.
Do you have any other solutions please?
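For reference, the namespace-scoped RBAC described above boils down to a Role and RoleBinding along the lines of the sketch below; the user name "developper" and namespace "dev" are only placeholders. The restriction is enforced by the API server, so it only holds if the user authenticates with his own credentials and is never handed the admin kubeconfig that sits on the master.

# Hypothetical names: user "developper", namespace "dev"
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dev-edit
  namespace: dev
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "deployments"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-edit-binding
  namespace: dev
subjects:
- kind: User
  name: developper
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-edit
  apiGroup: rbac.authorization.k8s.io
EOF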
While creating a cluster, kops gives us a set of arguments to configure the images used for the master and node instances, like the following, as mentioned in the kops documentation for the create cluster command: https://github.com/kubernetes/kops/blob/master/docs/cli/kops_create_cluster.md
--image string Set image for all instances.
--master-image string Set image for masters. Takes precedence over --image
--node-image string Set image for nodes. Takes precedence over --image
Suppose I forgot to add these parameters when I created the cluster; how can I edit the cluster and update these things?
When running kops edit cluster, the cluster configuration opens up as YAML... but where should I add these things in there?
Is there a complete kops cluster YAML that I can refer to in order to modify my cluster?
You would need to edit the instance group after the cluster is created to add/edit the image name.
kops get ig
kops edit ig <ig-name>
After the update is done for all masters and nodes, perform
kops update cluster <cluster-name>
kops update cluster <cluster-name> --yes
and then perform rolling-update or restart/stop 1 instance at a time from the cloud console
kops rolling-update cluster <cluster-name>
kops rolling-update cluster <cluster-name> --yes
In another terminal, run kops validate cluster <cluster-name> to validate the cluster.
There are other flags we can use as well while performing the rolling update.
There are other parameters as well which you can add, update, or edit in the instance group - take a look at the documentation for more information.
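For orientation, inside the editor the image sits under the instance group's spec; a minimal sketch follows, where the image string is only an illustrative value, not a recommendation:

kops edit ig nodes --name <cluster-name>

# The relevant part of the InstanceGroup manifest looks roughly like:
#   spec:
#     image: kope.io/k8s-1.12-debian-stretch-amd64-hvm-ebs-2019-05-13
#     machineType: t2.medium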
Found a solution for this question. My intention was to update a huge number of instance groups in one shot for a cluster. Editing each instance group one by one is a lot of work.
Run kops get <cluster name> -o yaml > cluster.yaml,
edit it there, then run kops replace -f cluster.yaml.
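Putting it together, a rough sketch of that one-shot flow; the update and rolling-update steps from the answer above are still needed for the change to actually reach the instances:

kops get <cluster-name> -o yaml > cluster.yaml
# edit cluster.yaml, e.g. set spec.image on each InstanceGroup
kops replace -f cluster.yaml
kops update cluster <cluster-name> --yes
kops rolling-update cluster <cluster-name> --yes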
I am trying to delete the entire Kubernetes cluster that I created for my CI/CD pipeline R&D. So, to delete the cluster and everything, I ran the following commands:
kubectl config delete-cluster <cluster-name>
kubectl config delete-context <Cluster-context>
To make sure that the cluster was deleted, I built the Jenkins pipeline job again, and found that it is still deploying with the updated changes.
When I run the command "kubectl config view", I get the following result:
docker@mildevdcr01:~$ kubectl config view
apiVersion: v1
clusters: []
contexts: []
current-context: kubernetes-admin@cluster.local
kind: Config
preferences: {}
users: []
docker@mildevdcr01:~$
Still, my Spring Boot microservice is being deployed to the cluster with the updated changes.
I created the Kubernetes cluster using the Kubespray tool, which I got from GitHub:
https://github.com/kubernetes-incubator/kubespray.git
What do I need to do to delete everything that I created for this Kubernetes cluster? I need to remove everything, including the master node.
If you set up your cluster using Kubespray, you ran the whole installation using Ansible, so to delete the cluster you have to use Ansible too.
You can also reset the entire cluster for a fresh installation:
$ ansible-playbook -i inventory/mycluster/hosts.ini reset.yml
Remember to keep the “hosts.ini” updated properly.
You can remove nodes from your cluster one by one, simply by adding the specific node to the [kube-node] section in the inventory/mycluster/hosts.ini file (your hosts file) and running the command:
$ ansible-playbook -i inventory/mycluster/hosts.ini remove-node.yml
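A rough sketch with a hypothetical host name node5; note that recent Kubespray versions also expect the node to be passed explicitly as an extra variable, so check the docs for the version you have checked out:

# inventory/mycluster/hosts.ini (excerpt)
# [kube-node]
# node1
# node2
# node5

ansible-playbook -i inventory/mycluster/hosts.ini remove-node.yml -e node=node5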
KubeSpray documentation: kubespray.
Useful articles: kubespray-steps, kubespray-ansible.
Okay, so for a Kubespray CI/CD pipeline it's a little more complicated than just deleting the cluster context. You have to actively delete other items on each node and run reset.yml for etcd.
Sometimes just running reset.yml is enough for your pipeline, since it resets the cluster back to its initial state, but if that is not enough you have to delete Docker, kubelet, the repositories, /etc/kubernetes and many other directories on the nodes to get a clean deployment. In that case it's almost always easier to just provision new nodes in your pipeline using Terraform and the vSphere (vRA) API.
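If re-running reset.yml in a pipeline is enough for your case, it can be made non-interactive; the reset_confirmation extra variable skips the confirmation prompt, but verify the variable name against the Kubespray version you have checked out:

ansible-playbook -i inventory/mycluster/hosts.ini reset.yml -e reset_confirmation=yes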
I created a cluster using kops. It worked fine and the cluster is healthy. I can see my nodes using kubectl and have created some deployments and services. I tried adding a node using "kops edit ig nodes" and got an error "cluster not found". Now I get that error for all kops commands:
kops validate cluster
Using cluster from kubectl context: <clustername>
cluster "<clustername>" not found
So my question is: where does kops look for clusters, and how do I configure it to see my cluster?
My KOPS_STATE_STORE environment variable had gotten messed up. I corrected it to point to the correct S3 bucket and everything is fine.
export KOPS_STATE_STORE=s3://correctbucketname
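kops resolves cluster definitions from the state store rather than from kubeconfig, so once the variable points at the right bucket you can confirm it sees the cluster again; a quick check might look like this:

kops get clusters
kops validate cluster <clustername>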
kubectl and kops read the configuration file from the following location.
When the cluster is created, the configuration is saved into the user's
$HOME/.kube/config
I have attached a link for further insight; for instance, if you have another config file you can export it: kube-config
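For example, pointing kubectl at a different config file for the current shell might look like the following; the file path is just an illustration:

export KUBECONFIG=$HOME/.kube/qa-cluster-config   # hypothetical path
kubectl config current-context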
I am working with kube-aws by CoreOS to generate a CloudFormation script and deploy it as part of my stack.
I would like to upgrade my kubernetes cluster to a newer version.
I don't mind creating a new cluster, but what I do mind is recreating all the deployments/services etc...
Is there any way to take the configurations and replace/transfer them to the new cluster? Maybe copy the entire etcd data? Will that help?
Use kubectl get --export=true on all the resources that you want to move into a new cluster and then restore them that way.
kubectl get <pods,services,deployments,whatever> --export=true --all-namespaces=true
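A rough sketch of moving one namespace's resources this way; the context and namespace names are placeholders, and note that --export was deprecated and later removed in newer kubectl releases, so this assumes an older client, as in the command above:

# Dump resources from one namespace of the old cluster (repeat per namespace)
kubectl --context old-cluster -n my-app get deployments,services,configmaps \
  --export=true -o yaml > my-app.yaml

# Re-create them in the same namespace on the new cluster
kubectl --context new-cluster create namespace my-app
kubectl --context new-cluster -n my-app apply -f my-app.yaml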