How to remove pod spec file from CocoaPods / Specs - swift

I created a public pod, and now I want to delete the pod spec because of some security concerns.

pod trunk delete PODNAME VERSION
Care must be taken while doing this: if anyone is depending on your spec file, deleting it will break their project.
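As a sketch (the pod name and version below are hypothetical), and noting that trunk also offers a less destructive deprecate subcommand:

# Permanently remove one published version (hypothetical pod/version)
pod trunk delete MyPod 1.0.2

# Less destructive alternative: mark the whole pod as deprecated,
# so existing users keep resolving it but see a deprecation warning
pod trunk deprecate MyPod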

Related

k8s partial default recovery via yaml repo?

I am new to Kubernetes and wondering if there is a universal repo from which I can fetch and apply the default yaml configurations in case I accidentally delete them. For example, if I accidentally delete a default resource like an apiservice, networking or other ..., could I restore it simply by running:
kubectl apply -f <k8s-resource-repo-uri>
And if there is such a repo, how do I identify the necessary yaml for the relevant Kubernetes resource version?
P.S.
I know that I can back up the whole etcd and restore it in case of issues, but I am wondering if there is a universal yaml repo location from which I can partially restore specific default resources.
For example all the resources I get with:
kubectl get apiservices
or
kubectl api-resources
I can delete them, but can I restore them easily from some universal place?
For example I can do:
kubectl delete apiservices.apiregistration.k8s.io v1.networking.k8s.io
But I don't know how to recover afterwards. Any ideas?
Most of the critical kube-system pods tend to be "static pods".
This means that they get restored automatically if you happen to just kubectl delete them.
https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/
For most other resources, e.g. your workload, it's good practice to have your own git repo with all the yaml files, so that you can redeploy it in case it's ever lost.
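As an illustration, assuming a kubeadm-provisioned cluster (where the directory below is the default), the static pod manifests live on the control-plane node itself:

# On a kubeadm control-plane node, static pod manifests live here by default:
ls /etc/kubernetes/manifests
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

# Deleting the API object only removes the mirror pod; the kubelet
# recreates it from the manifest file:
kubectl -n kube-system delete pod kube-scheduler-<node-name>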

Can I see a rollout in more detail?

I was doing a practice exam on the website killer.sh and ran into a question I feel I solved in a hacky way. Given a deployment that has had a bad rollout, revert to the last revision that didn't have any issues. If I check a deployment's rollout history, for example with the command:
kubectl rollout history deployment mydep
I get a small table with version numbers, and "change-cause" commands. Is there any way to check the changes made to the deployment's yaml file for a specific revision? Because I was stumped in figuring out which specific revision didn't have the error inside of it.
Behind the scenes, a Deployment creates a ReplicaSet for each revision; the REVISION you see in kubectl rollout history deployment mydep is recorded on the ReplicaSet in its deployment.kubernetes.io/revision annotation, so you can look at and diff the old ReplicaSets associated with the Deployment.
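For example (the deployment name mydep and the app=mydep label are assumptions here), you can print the pod template recorded for a single revision, inspect the old ReplicaSets yourself, and then roll back to a known-good revision:

# Show the pod template stored for a specific revision
kubectl rollout history deployment mydep --revision=2

# Inspect the ReplicaSets behind the deployment to diff them yourself
kubectl get rs -l app=mydep -o yaml

# Roll back to the chosen revision
kubectl rollout undo deployment mydep --to-revision=2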
On the other hand, being an eventually-consistent system, kubernetes has no notion of "good" or "bad" state, so it can't know what was the last successful deployment, for example; that's why deployment tools like helm, kapp etc. exist.
Kubernetes does not store more than what is necessary for it to operate, and most of the time that is just the desired state, because Kubernetes is not a version control system.
This is precisely why you need a version control system coupled with tools like helm or kustomize, where you store the deployment yamls and apply them to the cluster with each new version of the software. This helps in going back in history to dig out details when things break.
You can record the command that last changed the deployment with the --record option. When you use --record, the executed command is stored as the change-cause in the deployment's metadata.annotations. You will not see it in your local yaml file, but when you export the deployment as yaml you will notice the change.
Use the --record option like below:
kubectl create deployment <deployment name> --image=<someimage> --dry-run=client -o yaml > testdeployment.yaml
kubectl create -f testdeployment.yaml --record
or
kubectl set image deployment/<deploymentname> imagename=newimagename:newversion --record
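To see the recorded change-cause on the exported object (the deployment name here is a stand-in):

# The recorded command shows up as an annotation on the exported object
kubectl get deployment mydep -o yaml | grep change-cause
#   kubernetes.io/change-cause: kubectl set image deployment/mydep imagename=newimagename:newversion --record=true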

resetting restartCount of pods with static names

For monitoring purposes, I want to rely on a pod's restartCount. However, I cannot seem to do that for certain apps, as restartCount is not reset even after rebooting the whole node the pod is scheduled to run on.
Usually, restarting a pod resets this, unless the pod name of the restarted pod is the same (e.g. true for etcd, kube-controller-manager, kube-scheduler and kube-apiserver).
For those cases, there is a long-running minor issue as well as the idea to use kubectl patch.
To sum up the info there, kubectl edit will not allow you to change anything in status. Unfortunately, neither does e.g.
kubectl -n kube-system patch pod kube-controller-manager-some.node.name --type='json' -p='[{"op": "replace", "path": "/status/containerStatuses/0/restartCount", "value": 14}]'
The Pod "kube-controller-manager-some.node.name" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
So, has anyone found a workaround?
Thanks!
Robert
This seems to be quite an old issue (2017). Take a look here.
I believe the intended solution was to implement unique UIDs for static pods. The issue got reopened a few days ago as another GitHub issue, and the change hasn't been implemented to this day.
I have found a workaround for it: change the static pod manifest file, e.g. by adding some random annotation to the pod.
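A minimal sketch of that workaround, assuming a kubeadm-style node where the manifests sit in /etc/kubernetes/manifests (the annotation key and value below are made up; any change to the file works):

# On the node, edit the static pod manifest; the kubelet watches this
# directory and recreates the pod with a fresh status (restartCount back
# to 0) whenever the file changes.
sudo vi /etc/kubernetes/manifests/kube-controller-manager.yaml
#   metadata:
#     annotations:
#       restart-bump: "1"    # made-up annotation; bump the value each time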
Let me know if it was helpful.

Accidentally deleted Kubernetes namespace

I have a Kubernetes cluster on Google Cloud. I accidentally deleted a namespace which had a few pods running in it. Luckily, the pods are still running, but the namespace is in the Terminating state.
Is there a way to restore it back to active state? If not, what would the fate of my pods running in this namespace be?
Thanks
A few interesting articles about backing up and restoring Kubernetes cluster using various tools:
https://medium.com/@pmvk/kubernetes-backups-and-recovery-efc33180e89d
https://blog.kubernauts.io/backup-and-restore-of-kubernetes-applications-using-heptios-velero-with-restic-and-rook-ceph-as-2e8df15b1487
https://www.digitalocean.com/community/tutorials/how-to-back-up-and-restore-a-kubernetes-cluster-on-digitalocean-using-heptio-ark
https://www.revolgy.com/blog/kubernetes-in-production-snapshotting-cluster-state
I guess they may be more useful in the future than in your current situation. If you don't have any backup, unfortunately there isn't much you can do.
Please note that all of those articles use namespace deletion to simulate a disaster scenario, so you can imagine the consequences of such an operation. The results may not be visible immediately, and you may see your pods running for some time, but eventually namespace deletion removes all cluster resources in the given namespace, including LoadBalancers and PersistentVolumes. It may take some time, and some resources may not be deleted right away because they are still used by another resource (e.g. a PersistentVolume by a running Pod).
You can try running this script to dump all your resources that are still available to yaml files; however, some modification may be needed, as you will no longer be able to list objects belonging to the deleted namespace. You may need to add the --all-namespaces flag to list them.
You may also try to dump any resource which is still available manually. If you can still see some resources like Pods, Deployments, etc., and kubectl get works on them, you may try to save their definition to a yaml file:
kubectl get deployment nginx-deployment -o yaml > deployment_backup.yaml
Once you have your resources backed up you should be able to recreate your cluster more easily.
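As a rough sketch of such a dump-everything loop (one file per resource type; the file naming and error handling are simplistic):

# Dump every listable resource type to its own yaml file
for kind in $(kubectl api-resources --verbs=list -o name); do
  kubectl get "$kind" --all-namespaces -o yaml > "backup-${kind}.yaml" 2>/dev/null
done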
Back up most resource configurations regularly:
kubectl get all --all-namespaces -o yaml > all-deploy-resources.yaml
but this does not include all resources.
Another way is to use Ark/Velero:
https://github.com/vmware-tanzu/velero (Backup and migrate Kubernetes applications and their persistent volumes https://velero.io)
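A minimal sketch of the Velero workflow, assuming Velero is already installed in the cluster and the namespace is called my-namespace (hypothetical):

# Back up a single namespace
velero backup create my-backup --include-namespaces my-namespace

# Restore it later from that backup
velero restore create --from-backup my-backup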

Kubernetes deployments: Editing the 'spec' of a pod's YAML file fails

The env element added in spec.containers of a pod using the K8s dashboard's Edit doesn't get saved. Does anyone know what the problem is?
Is there any other way to add environment variables to pods/containers?
I get this error when doing the same by editing the file using nano:
# pods "EXAMPLE" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `containers[*].image` or `spec.activeDeadlineSeconds`
Thanks.
Not all fields can be updated. This fact is sometimes mentioned in the kubectl explain output for the object (and the error you got lists the fields that can be changed, so the others probably cannot):
$ kubectl explain pod.spec.containers.env
RESOURCE: env <[]Object>
DESCRIPTION:
     List of environment variables to set in the container. Cannot be updated.

     EnvVar represents an environment variable present in a Container.
If you deploy your Pods using a Deployment object, then you can change the environment variables in that object with kubectl edit since the Deployment will roll out updated versions of the Pod(s) that have the variable changes and kill the older Pods that do not. Obviously, that method is not changing the Pod in place, but it is one way to get what you need.
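For instance, with a hypothetical deployment named my-app, a single command changes an environment variable at the Deployment level and triggers a rollout:

# Updates the Deployment's pod template and rolls out new Pods
kubectl set env deployment/my-app LOG_LEVEL=debug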
Another option for you may be to use ConfigMaps. If you use the volume plugin method for mounting the ConfigMap and your application is written to be aware of changes to the volume and reload itself with new settings on change, it may be an option (or at least give you other ideas that may work for you).
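A minimal sketch of that volume-mount approach, with made-up names (app-config, my-app) and a stand-in image; files under /etc/app are updated in place when the ConfigMap changes (as long as no subPath is used), so an app that watches them can reload:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config          # hypothetical name
data:
  settings.conf: |
    log_level=debug
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app              # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels: {app: my-app}
  template:
    metadata:
      labels: {app: my-app}
    spec:
      containers:
      - name: app
        image: nginx        # stand-in image
        volumeMounts:
        - name: config
          mountPath: /etc/app
      volumes:
      - name: config
        configMap:
          name: app-config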
We cannot edit the env variables, resource limits, or service account of a pod that is running live.
But we can definitely edit/update the image name, tolerations, active deadline seconds, etc.
However, the "deployment" can be easily edited, because the "pod" is a child template of the deployment specification.
In order to "edit" the running pod with the desired changes, the following approach can be used:
extract the pod definition to a file, make the necessary changes, delete the existing pod, and create a new pod from the edited file:
kubectl get pod my-pod -o yaml > my-new-pod.yaml
vi my-new-pod.yaml
kubectl delete pod my-pod
kubectl create -f my-new-pod.yaml
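The delete-and-recreate pair above can also be collapsed into one step (not the answerer's exact commands, but equivalent):

# Deletes the existing pod and recreates it from the edited file in one step
kubectl replace --force -f my-new-pod.yaml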
Not sure about others, but when I edited the pod YAML from the Google Kubernetes Engine workloads page, I got the same error. When I retried after some time, it worked.
It feels like some update was going on at the same time; I tried to edit the YAML quickly and apply the changes, and it worked.