I'm trying to provision and deprovision service instances/bindings from my cloud provider (IBM Cloud Private). Currently there is a bug: if the service is not deprovisioned in ICP, I'm left with an orphaned service instance in my ICP environment that I can't delete, even with the force option.
They provide a workaround:
kubectl edit ServiceInstance <service-instance-name>
kubectl edit ServiceBinding <service-binding-name>
then delete the following lines:
...
finalizers:
- kubernetes-incubator/service-catalog
...
and the orphaned service instance/binding then gets deleted properly. I'm wondering how to automate this process from the bash CLI (live edit + delete line + save + exit), or any alternative way.
I'm not sure how this works with the ServiceInstance and ServiceBinding specifically, but you can use kubectl patch to update objects in place. As an example:
kubectl patch ServiceInstance <service-instance-name> -p '{"metadata":{"finalizers":null}}' --type=merge
kubectl patch is one way. You can also use a jq/kubectl one-liner.
kubectl get ServiceInstance <service-instance-name> -o=json | \
jq '.metadata.finalizers = null' | kubectl apply -f -
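To fully automate the workaround (patch away the finalizers, then delete), here is a minimal bash sketch; the script name and argument handling are assumptions, not part of the original workaround:
#!/usr/bin/env bash
# Usage: ./clean-orphan.sh <service-instance-name> <service-binding-name>
set -euo pipefail
instance="$1"
binding="$2"
# Strip the finalizers so the API server can complete the deletion
kubectl patch ServiceBinding "$binding" --type=merge -p '{"metadata":{"finalizers":null}}'
kubectl patch ServiceInstance "$instance" --type=merge -p '{"metadata":{"finalizers":null}}'
# Delete the now-unblocked objects (skip them if already gone)
kubectl delete ServiceBinding "$binding" --ignore-not-found
kubectl delete ServiceInstance "$instance" --ignore-not-found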
I've started experimenting with Argocd as part of my cluster setup and set it up to watch a test repo containing some yaml files for a small application I wanted to use for the experiment. While getting to know the system a bit, I broke the repo connection and instead of fixing it I decided that I had what I wanted, and decided to do a clean install with the intention of configuring it towards my actual project.
I pressed the button in the web UI for deleting the application, which got stuck. I then read that adding spec.syncPolicy.allowEmpty: true and removing the metadata.finalizers declaration from the application yaml file should let the deletion finish. This did not allow me to remove the application resource.
I then ran an uninstall command with the official manifests/install.yaml as an argument, which cleaned up most resources installed, but left the application resource and the namespace. Command: kubectl delete -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
I have tried the --force flag with kubectl delete application NAME, and the --cascade=orphan flag, on the application resource as well as on the argocd namespace itself. Now I have both of them stuck at Terminating without getting any further.
Now I'm properly stuck, as I can't reinstall Argocd in any way I know of while the resources and namespace are marked for deletion, and I'm at my wits' end as to what else I can try in order to get rid of the dangling application resource.
Any and all suggestions as to what to look into are much appreciated.
If your problem is that the namespace cannot be deleted, the following two solutions may help you:
Check which resources are stuck in the deletion process, delete those resources, and then delete the namespace.
Edit the argocd namespace object, check whether there is a finalizers field in its spec, and delete that field together with its contents; see the sketch below.
Hopefully this helps.
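For the second option, a minimal sketch (assuming the stuck namespace is argocd and jq is available) that clears the namespace finalizers via the namespace's finalize subresource:
kubectl get namespace argocd -o json | \
jq '.spec.finalizers = []' | \
kubectl replace --raw "/api/v1/namespaces/argocd/finalize" -f -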
I've found that the following commands...
kubectl api-resources --verbs=list --namespaced -o name | \
xargs -n 1 kubectl get --show-kind \
--ignore-not-found -n <namespace>
kubectl api-resources -n <namespace> | grep argo | grep ...
...help greatly to identify the resources that are "stuck".
Then you have to either use some awk to generate delete commands, or use delete --all, to "prune" the resources; see the sketch below. If some get stuck, you have to resort to editing them to remove the finalizers so that they can then be deleted.
It can get ugly, but awk and printf combinations can help.
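A minimal sketch along those lines (assuming the stuck namespace is argocd; the awk/printf step prints one delete command per resource before piping them to a shell):
kubectl api-resources --verbs=list --namespaced -o name | \
xargs -n 1 kubectl get -o name --ignore-not-found -n argocd | \
awk '{printf "kubectl delete -n argocd %s\n", $1}' | \
sh
For anything still stuck after that, strip its finalizers so the deletion can complete, e.g.:
kubectl patch <type>/<name> -n argocd --type=merge -p '{"metadata":{"finalizers":null}}'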
Is there a way to find the history of commands applied to a Kubernetes cluster by kubectl?
For example, I want to know whether the last applied command was
kubectl apply -f x.yaml
or
kubectl apply -f y.yaml
You can use the kubectl apply view-last-applied command to find the last applied configuration:
➜ ~ kubectl apply view-last-applied --help
View the latest last-applied-configuration annotations by type/name or file.
The default output will be printed to stdout in YAML format. One can use -o option to change output format.
Examples:
# View the last-applied-configuration annotations by type/name in YAML.
kubectl apply view-last-applied deployment/nginx
# View the last-applied-configuration annotations by file in JSON
kubectl apply view-last-applied -f deploy.yaml -o json
[...]
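Under the hood, kubectl apply stores this data in the kubectl.kubernetes.io/last-applied-configuration annotation on each object, so you can also read it directly; for example (assuming a deployment named nginx):
kubectl get deployment nginx -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'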
To get the full history from the beginning of the cluster's creation, you should use audit logs, as already mentioned in the comments by @Jonas.
Additionally, if you adopt GitOps you can keep all your cluster state under version control, which lets you trace back all the changes made to your cluster.
I have followed the getting started instructions here: https://linkerd.io/2/getting-started/
Please see the command below:
kubectl kustomize kustomize/deployment | \
linkerd inject - | \
kubectl apply -f -
emojivoto is now installed and accessible, as I expected.
How can I remove emojivoto? This appears to work:
kubectl delete -f https://run.linkerd.io/emojivoto.yml
However, is it possible to do this without using an online resource?
This is of course possible: the mentioned yaml consists of multiple object definitions,
for example namespaces and service accounts.
Each of them can be deleted using kubectl delete <type> <name>.
Since all objects are created in the namespace emojivoto, it is possible to remove everything by just removing the namespace: kubectl delete namespace emojivoto.
The other option is to save the yaml file locally and use kubectl delete -f <file> instead.
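A short sketch of both options (assumes curl is available for the one-time download):
# Option 1: save the manifest locally once, then delete from the local copy
curl -sL https://run.linkerd.io/emojivoto.yml -o emojivoto.yml
kubectl delete -f emojivoto.yml
# Option 2: everything lives in the emojivoto namespace, so deleting it suffices
kubectl delete namespace emojivoto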
I am running a Kubernetes cluster on bare metal with three nodes.
I have applied a couple of yaml files for different services.
Now I would like to bring some order to the cluster and clean up some orphaned kube objects.
To do that I need to understand the set of pods or other entities which use or refer to a certain ServiceAccount.
For example, I can dig into the ClusterRoleBinding of, say, the admin-user and investigate it:
kubectl get clusterrolebinding admin-user
But is there a good combination of kubectl options to find all the usages/references of a given ServiceAccount?
You can list all resources using a service account with the following command:
kubectl get rolebinding,clusterrolebinding --all-namespaces -o jsonpath='{range .items[?(@.subjects[0].name=="YOUR_SERVICE_ACCOUNT_NAME")]}[{.roleRef.kind},{.roleRef.name}];{end}' | tr ";" "\n"
You just need to replace YOUR_SERVICE_ACCOUNT_NAME with the name you are investigating. Note that this filter only inspects the first subject of each binding.
I tested this command on my cluster and it works.
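Since the question also asks about pods, a similar jsonpath filter over pods can list everything that runs under a given service account (a sketch along the same lines; replace the name as above):
kubectl get pods --all-namespaces -o jsonpath='{range .items[?(@.spec.serviceAccountName=="YOUR_SERVICE_ACCOUNT_NAME")]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}'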
Let me know if this solution helped you.
Take a look at the rbac-lookup project. After installing it via homebrew or krew, you can use it to find a service account and look at its roles, scope, and source. It does not tell you which pods are referring to it, but it is still a useful tool.
rbac-lookup serviceaccountname --output wide --kind serviceaccount
As in the title: I want to clone (create a copy of) an existing cluster.
If it's not possible to copy/clone a Google Container Engine cluster, then how do I clone a Kubernetes cluster?
If that's not possible, is there a way to dump the whole cluster config?
Note:
I try to modify the cluster's configs by calling:
kubectl apply -f some-resource.yaml
But nothing stops me or another employee from modifying the cluster by running:
kubectl edit service/resource
Or from setting properties via command-line kubectl calls.
I'm using a bash script from the CoreOS team, with small adjustments, that works pretty well. By default it excludes the kube-system namespace, but you can adjust this if needed, and you can add or remove the resource types you want to copy. One caveat: kubectl get --export, which the script uses, was deprecated in kubectl 1.14 and removed in 1.18, so on newer clusters drop that flag and rely on the jq del(...) filter to strip the server-generated fields.
for ns in $(kubectl get ns --no-headers | cut -d " " -f1); do
  if [ "$ns" != "kube-system" ]; then
    # Export the interesting resource types and strip the cluster-specific
    # fields (UIDs, timestamps, status, ...) so they can be re-created elsewhere.
    # Service-account token secrets are skipped, since they are regenerated anyway.
    kubectl --namespace="${ns}" get --export -o=json svc,rc,rs,deployments,cm,secrets,ds,statefulsets,ing | \
    jq '.items[] |
      select(.type!="kubernetes.io/service-account-token") |
      del(
        .spec.clusterIP,
        .metadata.uid,
        .metadata.selfLink,
        .metadata.resourceVersion,
        .metadata.creationTimestamp,
        .metadata.generation,
        .status,
        .spec.template.spec.securityContext,
        .spec.template.spec.dnsPolicy,
        .spec.template.spec.terminationGracePeriodSeconds,
        .spec.template.spec.restartPolicy
      )' >> "./my-cluster.json"
  fi
done
To restore it on another cluster, you have to execute kubectl create -f ./my-cluster.json
You can now create/clone an existing cluster.
On the Clusters page, click Create Cluster and choose an existing cluster to copy from. But remember, this will not clone the API resources; you may have to use a third-party tool such as Velero to help you back up and restore those resources.
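A minimal Velero sketch (assumes Velero is already installed in both clusters and pointed at the same object storage; the backup name is arbitrary):
# In the source cluster: back up all namespaces
velero backup create my-cluster-backup --include-namespaces '*'
# In the target cluster: restore from that backup
velero restore create --from-backup my-cluster-backup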
Here are some useful links
Cluster Creation
Velero
Medium Article on How to use Velero