Using kubectl to remove a service without using an online resource

I have followed the getting started instructions here: https://linkerd.io/2/getting-started/
Please see the command below:
kubectl kustomize kustomize/deployment | \
linkerd inject - | \
kubectl apply -f -
emojivoto is now installed and accessible, as I expected.
How can I remove emojivoto? This appears to work:
kubectl delete -f https://run.linkerd.io/emojivoto.yml
However, is it possible to do this without using an online resource?

This is of course possible: the YAML in question consists of multiple object definitions, for example namespaces and service accounts.
Each of them can be deleted individually with kubectl delete <type> <name>.
Since all of the objects are created in the emojivoto namespace, you can remove everything at once by deleting that namespace: kubectl delete namespace emojivoto.
The other option is to save the YAML file locally and use kubectl delete -f <file> instead, as sketched below.
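For example, a minimal offline workflow (a sketch; it assumes curl is available and that you keep the manifest around from install time):
# Save the manifest locally once and install from the local copy
curl -sL https://run.linkerd.io/emojivoto.yml > emojivoto.yml
kubectl apply -f emojivoto.yml
# Later, remove everything the file defines without touching the network
kubectl delete -f emojivoto.yml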

Related

Find the history of commands applied to the kubernetes cluster

Is there a way to find the history of commands applied to the Kubernetes cluster by kubectl?
For example, I want to know whether the last applied command was
kubectl apply -f x.yaml
or
kubectl apply -f y.yaml
You can use the kubectl apply view-last-applied command to find the last applied configuration:
kubectl apply view-last-applied --help
View the latest last-applied-configuration annotations by type/name or file.
The default output will be printed to stdout in YAML format. One can use -o option to change output format.
Examples:
# View the last-applied-configuration annotations by type/name in YAML.
kubectl apply view-last-applied deployment/nginx
# View the last-applied-configuration annotations by file in JSON
kubectl apply view-last-applied -f deploy.yaml -o json
[...]
To get the full history from the beginning of the cluster's life, you should use audit logs, as already mentioned in the comments by @Jonas.
Additionally, if you adopt GitOps, you can keep all of your cluster state under version control, which lets you trace back every change made to your cluster.
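Even without audit logs, you can compare a local manifest against the live cluster state before applying it, which helps reconstruct what changed (a sketch; x.yaml is the file from the question):
# Show what would change if x.yaml were applied now
kubectl diff -f x.yaml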

How to write Kubernetes annotations to the underlying YAML files?

I am looking to apply existing annotations on a Kubernetes resource to the underlying YAML configuration files. For example, this command will successfully find all pods with a label of "app=helloworld" or "app=testapp" and annotate them with "xyz=test_anno":
kubectl annotate pods -l 'app in (helloworld, testapp)' xyz=test_anno
However, this only applies the annotations to the running pods and doesn't change the YAML files. How do I force those changes to the YAML files so they're permanent, either after the fact or as part of kubectl annotate to start with?
You could use the kubectl patch command with a little trick:
kubectl patch $(kubectl get pods -l 'app in (helloworld, testapp)' -o name) -p '{"metadata":{"annotations":{"xyz":"test_anno"}}}'
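Note that this still only changes the live objects. To make the result permanent, you can export the annotated definitions back to a local file (a sketch using the same label selector; annotated-pods.yaml is a name chosen here):
# Dump the live, annotated pod definitions to a local YAML file
kubectl get pods -l 'app in (helloworld, testapp)' -o yaml > annotated-pods.yaml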

Location of a kubernetes objects definition file

How to find the location of a kubernetes object's definition file.
I know the name of a Kubernetes deployment and want to make some changes directly to its definition file instead of using kubectl edit deployment <deployment-name>.
The object definitions are stored internally in Kubernetes in replicated storage that's not directly accessible. Even if you did change an object definition there, you would still need to trigger the rest of the update sequence Kubernetes runs when an object changes.
Typical practice is to keep the Kubernetes YAML files in source control. You can then edit these locally and use kubectl apply -f to send them to the cluster. If you don't have them, you can run commands like kubectl get deployment depl-name -o yaml to get them out, and then check the results into your source control repository.
If you really want to edit YAML definitions in an imperative, non-reproducible way, kubectl edit is the most direct thing you can do.
You could execute kubectl get deployment <deployment-name> -o yaml to get the deployment definition in a yaml format (or -o json to get in a json format), save that to a file, edit the file and apply the changes.
As a step-by-step guide:
Run kubectl get deployment deployment-name -o yaml > deployment-name.yaml
Edit and save the deployment-name.yaml using the editor of your preference
Run kubectl apply -f deployment-name.yaml to apply the changes
It's all stored in etcd:
Nodes
Namespaces
ServiceAccounts
Roles and RoleBindings, ClusterRoles / ClusterRoleBindings
ConfigMaps
Secrets
Workloads: Deployments, DaemonSets, Pods, …
Cluster’s certificates
The resources within each apiVersion
The events that bring the cluster to its current state
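If you really want to see the raw stored definitions, you can read them straight out of etcd (a sketch for a kubeadm-style cluster; the certificate paths, the default namespace, and the deployment name are assumptions that vary by distribution):
# Read a deployment's stored definition directly from etcd
ETCDCTL_API=3 etcdctl \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/deployments/default/<deployment-name>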

How to view the manifest file used to create a Kubernetes resource?

I have K8s deployed on an EC2-based cluster.
There is an application running in the deployment, and I am trying to figure out the manifest files that were used to create the resources.
Deployment, service, and ingress files were used to create the app setup.
I tried the following command, but I'm not sure it's the correct one, as it also returns a lot of extra data like lastTransitionTime, lastUpdateTime, and status:
kubectl get deployment -o yaml
What is the correct command to view the manifest yaml files of an existing deployed resource?
There is no specific way to do that. You should store your source files in source control like any other code. Think of it like decompiling: you can do it, but what you get back is not the same as what you put in. That said, check for the last-applied annotation; if you used kubectl apply, it holds a JSON version of a more original-ish manifest, though probably still with some defaulted fields.
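If the object was created with kubectl apply, you can recover that version with the view-last-applied subcommand shown earlier (<deployment-name> is a placeholder):
kubectl apply view-last-applied deployment/<deployment-name>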
You can try using the --export flag, but it is deprecated (and removed in recent kubectl versions), so it may not work:
kubectl get deployment -o yaml --export
Refer: https://github.com/kubernetes/kubernetes/pull/73787
KUBE_EDITOR="cat" kubectl edit secrets rook-ceph-mon -o yaml -n rook-ceph 2>/dev/null >user.yaml

Remove Kubernetes service-catalog finalizer with the CLI

I'm trying to provision/deprovision a service instance/binding from my cloud provider (IBM Cloud Private). Currently there is a bug: if the service is not deprovisioned in ICP, I am left with an orphan service instance in my ICP environment which I can't delete, even with the force option.
They provide a workaround solution of:
kubectl edit ServiceInstance <service-instance-name>
kubectl edit ServiceBinding <service-binding-name>
then delete the line:
...
finalizers:
- kubernetes-incubator/service-catalog
...
and the orphan service instance/binding will get deleted properly. I'm wondering how to automate this process with the CLI (live edit + delete line + save + exit), or whether there is an alternative way.
I'm not sure how this works with the ServiceInstance and ServiceBinding specifically, but you can use kubectl patch to update objects in place. As an example:
kubectl patch ServiceInstance <service-instance-name> -p '{"metadata":{"finalizers":null}}' --type=merge
kubectl patch is one way. You can also use a jq/kubectl one-liner:
kubectl get ServiceInstance <service-instance-name> -o=json | \
jq '.metadata.finalizers = null' | kubectl apply -f -
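For completeness, the same removal can be expressed as a JSON patch that drops the finalizers field entirely (a sketch; run the equivalent against the ServiceBinding as well if needed):
kubectl patch ServiceInstance <service-instance-name> \
  --type=json -p='[{"op":"remove","path":"/metadata/finalizers"}]'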