As in the title: I want to clone (create a copy of) an existing cluster.
If it's not possible to copy/clone a Google Container Engine cluster, then how do I clone a Kubernetes cluster?
If that's not possible, is there a way to dump the whole cluster config?
Note:
I try to modify the cluster's config by calling:
kubectl apply -f some-resource.yaml
But nothing stops me or another employee from modifying the cluster by running:
kubectl edit service/resource
or by setting properties directly through command-line kubectl calls.
I'm using a bash script from the CoreOS team, with small adjustments, that works pretty well. By default it excludes the kube-system namespace, but you can adjust this if you need to. You can also add or remove the resources you want to copy.
# Dump the listed resource types from every namespace except kube-system,
# stripping cluster-specific fields with jq so the output can be re-applied.
# Note: --export was deprecated in kubectl 1.14 and removed in 1.18; on newer
# versions drop the flag (the jq filter already removes the same fields).
for ns in $(kubectl get ns --no-headers | cut -d " " -f1); do
  if [ "$ns" != "kube-system" ]; then
    kubectl --namespace="${ns}" get --export -o=json svc,rc,rs,deployments,cm,secrets,ds,statefulsets,ing | \
    jq '.items[] |
      select(.type!="kubernetes.io/service-account-token") |
      del(
        .spec.clusterIP,
        .metadata.uid,
        .metadata.selfLink,
        .metadata.resourceVersion,
        .metadata.creationTimestamp,
        .metadata.generation,
        .status,
        .spec.template.spec.securityContext,
        .spec.template.spec.dnsPolicy,
        .spec.template.spec.terminationGracePeriodSeconds,
        .spec.template.spec.restartPolicy
      )' >> "./my-cluster.json"
  fi
done
To restore it on another cluster, execute kubectl create -f ./my-cluster.json.
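One caveat: the dump above does not include the Namespace objects themselves, so create any missing namespaces on the target cluster first. A minimal sketch (staging and production are hypothetical namespace names):
# create the target namespaces, ignoring "already exists" errors
for ns in staging production; do
  kubectl create namespace "$ns" 2>/dev/null || true
done
kubectl create -f ./my-cluster.json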
You can now create/clone an existing cluster. On the Clusters page, click Create cluster and choose an existing cluster to copy its settings. But remember, this will not clone the API resources; you may have to use a third-party tool such as Velero to back up and restore them (see the sketch after the links below).
Here are some useful links:
Cluster Creation
Velero
Medium Article on How to use Velero
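For completeness, a minimal sketch of the Velero route, assuming Velero is already installed in both clusters and they share a backup storage location (my-backup is a hypothetical name):
# on the source cluster: back up everything except kube-system
velero backup create my-backup --exclude-namespaces kube-system
# on the target cluster: restore from that backup
velero restore create --from-backup my-backup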
I have followed the getting started instructions here: https://linkerd.io/2/getting-started/
Please see the command below:
kubectl kustomize kustomize/deployment | \
linkerd inject - | \
kubectl apply -f -
emojivoto is now installed and accessible, as I expected.
How can I remove emojivoto? This appears to work:
kubectl delete -f https://run.linkerd.io/emojivoto.yml
However, is it possible to do this without using an online resource?
This is of course possible: the mentioned YAML consists of multiple object definitions, for example namespaces and service accounts.
Each of them can be deleted using kubectl delete <type> <name>.
Since all objects are created in the emojivoto namespace, it is possible to remove everything by just deleting the namespace: kubectl delete namespace emojivoto.
The other option is to save the YAML file locally and use kubectl delete -f <file> instead.
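For the second option, a minimal sketch: fetch the manifest once and keep it, then delete from the local copy; or, since the install went through kustomize, delete from the same source:
# save the manifest locally (one-time), then delete the objects it defines
curl -sL https://run.linkerd.io/emojivoto.yml -o emojivoto.yml
kubectl delete -f emojivoto.yml
# alternatively, regenerate the manifests used at install time and delete those
kubectl kustomize kustomize/deployment | kubectl delete -f -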
I am running a Kubernetes cluster on bare metal with three nodes.
I have applied a couple of YAML files for different services.
Now I would like to tidy up the cluster and clean out some orphaned kube objects.
To do that, I need to understand the set of pods or other entities which use or refer to a certain ServiceAccount.
For example, I can dig into the ClusterRoleBinding of, say, the admin-user and investigate it:
kubectl get clusterrolebinding admin-user
But is there a good combination of kubectl options to find all the usages/references of some ServiceAccount?
You can list all RoleBindings and ClusterRoleBindings whose first subject is a given service account with the following command:
kubectl get rolebinding,clusterrolebinding --all-namespaces -o jsonpath='{range .items[?(@.subjects[0].name=="YOUR_SERVICE_ACCOUNT_NAME")]}[{.roleRef.kind},{.roleRef.name}];{end}' | tr ";" "\n"
You just need to replace YOUR_SERVICE_ACCOUNT_NAME with the one you are investigating.
I tested this command on my cluster and it works.
Let me know if this solution helped you.
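Since the question also asks about pods, a similar jsonpath sketch that lists the pods referencing the service account (the spec.serviceAccountName field is populated by the API server even when unset in the manifest, defaulting to "default"):
kubectl get pods --all-namespaces -o jsonpath='{range .items[?(@.spec.serviceAccountName=="YOUR_SERVICE_ACCOUNT_NAME")]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}'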
Take a look at this project. After installing it via Homebrew or krew, you can use it to find a service account and look at its roles, scope, and source. It does not tell you which pods are referring to it, but it is still a useful tool.
rbac-lookup serviceaccountname --output wide --kind serviceaccount
I have a script that deploys my application to my kubernetes cluster. However, if my current kubectl context is pointing at the wrong cluster, I can easily end up deploying my application to a cluster that I did not intend to deploy it to. What is a good way to check (from inside a script) that I'm deploying to the right cluster?
I don't really want to hardcode a specific kubectl context name, since different developers on my team have different conventions for how to name their kubectl contexts.
Instead, I'd like something more like if $(kubectl get cluster-name) != "expected-cluster-name" then error.
#!/bin/bash
# compare the current kubectl context with the expected name
if [ "$(kubectl config current-context)" != "your-cluster-name" ]
then
  echo "Do some error!!!" >&2
  exit 1  # 'return' only works inside a function; a script should 'exit'
fi
echo "Do some kubectl command"
The script above gets the current context name and matches it against your desired cluster name; on a mismatch it prints an error and exits, otherwise it runs the desired kubectl command. Note that kubectl config current-context returns the context name, not the cluster name, so this still depends on each developer's naming convention.
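Since the question notes that context names vary per developer, a variant that checks the cluster name recorded in the current kubeconfig context may be more robust; a minimal sketch (expected-cluster-name is a placeholder):
# --minify restricts the view to the current context's entries
current_cluster=$(kubectl config view --minify -o jsonpath='{.clusters[0].name}')
if [ "$current_cluster" != "expected-cluster-name" ]; then
  echo "Wrong cluster: $current_cluster" >&2
  exit 1
fi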
For each cluster, run kubectl cluster-info once to see what the IP/host of the master is - that should be stable for the cluster and not vary with the name in the kubectl context (which developers might set differently). Then capture it in the script with export MASTERA=<HOST/IP>, where that's the master of cluster A. The script can then do:
kubectl cluster-info | grep -q "$MASTERA" && echo "on MASTERA"
Or use an if-else:
if kubectl cluster-info | grep -q "$MASTERA"; then
  echo "on $MASTERA"
else
  exit 1
fi
I've got 3 completely distinct pods:
kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}'
kubernetes-bootcamp-5c69669756-5rh9t
queenly-seahorse-mysql-6dc964999c-h4w54
wordpress-mysql-bcc89f687-hs677
but they seem to share the same env vars. E.g.
kubectl exec "kubernetes-bootcamp-5c69669756-5rh9t" env | grep MYSQL
QUEENLY_SEAHORSE_MYSQL_PORT_3306_TCP=tcp://10.98.170.14:3306
QUEENLY_SEAHORSE_MYSQL_SERVICE_PORT_MYSQL=3306
QUEENLY_SEAHORSE_MYSQL_PORT_3306_TCP_ADDR=10.98.170.14
QUEENLY_SEAHORSE_MYSQL_SERVICE_HOST=10.98.170.14
QUEENLY_SEAHORSE_MYSQL_SERVICE_PORT=3306
QUEENLY_SEAHORSE_MYSQL_PORT=tcp://10.98.170.14:3306
QUEENLY_SEAHORSE_MYSQL_PORT_3306_TCP_PORT=3306
QUEENLY_SEAHORSE_MYSQL_PORT_3306_TCP_PROTO=tcp
and then on a completely different, unrelated pod (but on the same node):
kubectl exec "queenly-seahorse-mysql-6dc964999c-h4w54" env | grep MYSQL
MYSQL_ROOT_PASSWORD=<redact>
MYSQL_PASSWORD=<redact>
MYSQL_USER=
MYSQL_DATABASE=
QUEENLY_SEAHORSE_MYSQL_PORT=tcp://10.98.170.14:3306
QUEENLY_SEAHORSE_MYSQL_PORT_3306_TCP=tcp://10.98.170.14:3306
QUEENLY_SEAHORSE_MYSQL_PORT_3306_TCP_PORT=3306
QUEENLY_SEAHORSE_MYSQL_PORT_3306_TCP_ADDR=10.98.170.14
QUEENLY_SEAHORSE_MYSQL_SERVICE_PORT=3306
QUEENLY_SEAHORSE_MYSQL_SERVICE_HOST=10.98.170.14
QUEENLY_SEAHORSE_MYSQL_SERVICE_PORT_MYSQL=3306
QUEENLY_SEAHORSE_MYSQL_PORT_3306_TCP_PROTO=tcp
MYSQL_MAJOR=5.7
MYSQL_VERSION=5.7.14-1debian8
Any explanation why?
FWIW, I'm clearly exec'ing into 2 different pods. E.g.
kubectl exec "queenly-seahorse-mysql-6dc964999c-h4w54" env | grep HOSTNAME
HOSTNAME=queenly-seahorse-mysql-6dc964999c-h4w54
kubectl exec "kubernetes-bootcamp-5c69669756-5rh9t" env | grep HOSTNAME
HOSTNAME=kubernetes-bootcamp-5c69669756-5rh9t
All the Kubernetes Service environment variables are shared across a namespace. This is by design, so that pods can find a specific service if they need to.
There have been discussions about how to disable them, and newer releases added a per-pod switch (see the sketch below).
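On Kubernetes 1.13 or newer you can set spec.enableServiceLinks: false on a pod to stop the service variables from being injected; a minimal sketch (the pod name and image are just illustrative):
# opt a single pod out of service env var injection (Kubernetes 1.13+)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-service-links-demo
spec:
  enableServiceLinks: false
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
EOF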
I deleted my comment and am adding this as an answer. I realized that the "QUEENLY_SEAHORSE_MYSQL_xxxx" env vars have been added by Kubernetes for a service named "queenly-seahorse-mysql" - see https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables
I'm trying to provision/deprovision a service instance/binding from my cloud provider (IBM Cloud Private). Currently, there is a bug: if the service is not deprovisioned in ICP, I'm left with an orphan service instance in my ICP environment which I can't delete, even with the force option.
They provide a workaround solution of:
kubectl edit ServiceInstance <service-instance-name>
kubectl edit ServiceBinding <service-binding-name>
then delete the line:
...
finalizers:
- kubernetes-incubator/service-catalog
...
and the orphan service instance/binding will then get deleted properly. I'm wondering how to automate this process from the bash CLI (live edit + delete line + save + exit), or whether there is an alternative way.
I'm not sure how this works with the ServiceInstance and ServiceBinding specifically, but you can use kubectl patch to update objects in place. As an example:
kubectl patch ServiceInstance <service-instance-name> -p '{"metadata":{"finalizers":null}}' --type=merge
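Putting it together, a minimal sketch that clears the finalizers on both objects and then deletes them (the <...> names are the placeholders from the question):
#!/bin/bash
# clear the blocking finalizers, then delete the now-unblocked objects
kubectl patch ServiceInstance <service-instance-name> -p '{"metadata":{"finalizers":null}}' --type=merge
kubectl patch ServiceBinding <service-binding-name> -p '{"metadata":{"finalizers":null}}' --type=merge
kubectl delete ServiceInstance <service-instance-name> --ignore-not-found
kubectl delete ServiceBinding <service-binding-name> --ignore-not-found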
kubectl patch is one way. You can also use a jq/kubectl one-liner:
kubectl get ServiceInstance <service-instance-name> -o=json | \
jq '.metadata.finalizers = null' | kubectl apply -f -