List all the kubernetes resources related to a helm deployment or chart - kubernetes

I deployed a Helm chart using helm install, and afterwards I want to see whether the pods/services/configmaps related to just this deployment have come up or failed. Is there a way to see this?
Using kubectl get pods and grepping for the name works, but it does not show the services and other resources that got deployed with this Helm chart.

helm get manifest RELEASE_NAME
helm get all RELEASE_NAME
https://helm.sh/docs/helm/helm_get_manifest/
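For example, to check the live status of every object the release rendered, you can pipe the manifest back into kubectl, which accepts a manifest stream on stdin via -f - (a minimal sketch; my-release is a hypothetical release name):
helm get manifest my-release | kubectl get -f -
This prints the current state of each rendered object (Deployments, Services, ConfigMaps, etc.) in one pass.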

If you are using Helm 3:
To list all resources managed by Helm, use a label selector with the label app.kubernetes.io/managed-by=Helm:
$ kubectl get all --all-namespaces -l='app.kubernetes.io/managed-by=Helm'
To list all resources managed by Helm that are part of a specific release (replace release-name):
kubectl get all --all-namespaces -l='app.kubernetes.io/managed-by=Helm,app.kubernetes.io/instance=release-name'
Update:
Label keys may vary over time; follow the official documentation for the latest labels.
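To double-check which Helm labels a given object actually carries (useful when chart versions differ), you can print the labels directly; my-release and default are placeholder names:
kubectl get deploy -n default -l app.kubernetes.io/instance=my-release --show-labels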

I couldn't find anywhere that gave me what I wanted, so I wrote this one-liner using yq. It prints out all objects in Kind/name format. You might get some blank lines if any manifests contain nothing but comments.
helm get manifest $RELEASE_NAME | yq -N eval '[.kind, .metadata.name] | join("/")' - | sort
Published here: https://gist.github.com/bioshazard/e478d118fba9e26314bffebb88df1e33
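For a typical chart the output looks something like this (illustrative only; names depend on the chart):
Deployment/my-release-nginx
Service/my-release-nginx
ServiceAccount/my-release-nginx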

By issuing:
kubectl get all -n <namespace> | grep ...
You will only query for the following resources:
pod
service
daemonset
deployment
replicaset
statefulset
job
cronjob
I encourage you to follow this article for more explanation:
Studytonight.com: How to list all resources in a Kubernetes namespace
Using the example from the above link you can query the API for all resources by issuing:
kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind -l LABEL=VALUE --ignore-not-found -o name
This command queries the API for all the resource types in the cluster and then queries each resource type separately by label.
You can create resources in a Helm chart with labels and then query the API by specifying -l LABEL=VALUE.
EXAMPLE
Assuming that you provisioned the following Helm chart:
$ helm install awesome-nginx stable/nginx-ingress
(This chart is deprecated, but it serves for example purposes.)
You can query the API for all resources with:
kubectl api-resources --verbs=list -o name | xargs -n 1 kubectl get --show-kind -l release=awesome-nginx --ignore-not-found -o name
where:
LABEL <- release
VALUE <- awesome-nginx (release name)
After that you should be able to see:
endpoints/awesome-nginx-nginx-ingress-controller
endpoints/awesome-nginx-nginx-ingress-default-backend
pod/awesome-nginx-nginx-ingress-controller-86b9c7d9c7-wwr8f
pod/awesome-nginx-nginx-ingress-default-backend-6979c95c78-xn9h2
serviceaccount/awesome-nginx-nginx-ingress
serviceaccount/awesome-nginx-nginx-ingress-backend
service/awesome-nginx-nginx-ingress-controller
service/awesome-nginx-nginx-ingress-default-backend
deployment.apps/awesome-nginx-nginx-ingress-controller
deployment.apps/awesome-nginx-nginx-ingress-default-backend
replicaset.apps/awesome-nginx-nginx-ingress-controller-86b9c7d9c7
replicaset.apps/awesome-nginx-nginx-ingress-default-backend-6979c95c78
podmetrics.metrics.k8s.io/awesome-nginx-nginx-ingress-controller-86b9c7d9c7-wwr8f
podmetrics.metrics.k8s.io/awesome-nginx-nginx-ingress-default-backend-6979c95c78-xn9h2
rolebinding.rbac.authorization.k8s.io/awesome-nginx-nginx-ingress
role.rbac.authorization.k8s.io/awesome-nginx-nginx-ingress
You can modify the output by changing the -o parameter.
Additional resources:
Github.com: Kubectl get all does not list all resources in a namespace #151
Stackoverflow.com: Questions: Listing all resources in a namespace

helm status RELEASE_NAME
This command shows the status of a named release. The status consists of:
last deployment time
k8s namespace in which the release lives
state of the release (can be: unknown, deployed, uninstalled, superseded, failed, uninstalling, pending-install, pending-upgrade or pending-rollback)
list of resources that this release consists of, sorted by kind
details on last test suite run, if applicable
additional notes provided by the chart
Usage: helm status RELEASE_NAME [flags]
Official docs
Also note that Helm places some well-known labels/annotations on the resources it manages, see here. You can use them with kubectl get ... -l ...
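Newer Helm 3 releases can also print the resource list directly from helm status (flag availability depends on your Helm version; my-release is a placeholder name):
helm status my-release --show-resources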

kubectl get all -n <namespace> | grep <helm chart keyword, e.g. kibana, elasticsearch>
This should list all resources created by the helm chart in a particular namespace.

Related

Is there a way to list all resources created by a specific operator and their status?

I use config connector https://cloud.google.com/config-connector/docs/overview
I create gcp resources with CRDs that config connector provides:
kind: IAMServiceAccount
kind: StorageBucket
etc
Now what I'd really like is to be able to get a simple list of each resource and its status (whether it was created successfully or not), where each resource is a single line, something like: kind, name, status, etc.
Is there a way with kubectl to get a list of all resources that were created by an operator like this? I suppose I could manually label all these resources and select them with a label, but I really don't want to do that.
Edit
Per the comment I could do this, but I'm curious if there is a less unwieldy command:
kubectl get crds --selector cnrm.cloud.google.com/managed-by-kcc=true \
-o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | xargs -n 1 \
kubectl get -Ao jsonpath='{range .items[*]}{"Kind: "}{.kind}{" Name: "}{.metadata.name}{" Status: "}{.status.conditions[].status}{" Reason: "}{.status.conditions[].reason}{"\n"}{end}' --ignore-not-found
I've done a bit of research on this topic and I found 2 possible solutions to retrieve all the resources that were created by config-connector:
the $ kubectl api-resources way
the $ kubectl get-all/ketall way, with labels (please see the explanation, as it's not installed by default)
The discussion referencing a similar issue can be found here:
Github.com: Kubernetes: kubectl: Issue 151
$ kubectl api-resources
As pointed out in the comment I made, you can use the following expression:
kubectl get crds --selector cnrm.cloud.google.com/managed-by-kcc=true -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | xargs -n 1 kubectl get --ignore-not-found
Dissecting this solution:
kubectl get crds --selector cnrm.cloud.google.com/managed-by-kcc=true
retrieves the Custom Resource Definitions that have a matching selector
-o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
uses jsonpath to retrieve only the value stored in the .metadata.name key (the name of each CRD)
| xargs -n 1 kubectl get
pipes the output to xargs, which runs $ kubectl get <RESOURCE> for each CRD retrieved by the previous command
--ignore-not-found
does not display a message about missing resources
This command can also be altered to suit specific needs, as shown in the question.
A side note!
A similar command is referenced in the github link I pasted above:
Github.com: Kubernetes: kubectl: Issues 151: Comment 402003022
$ kubectl get-all/ketall
The above commands can be used to retrieve all of the resources in the cluster. They are not available in default kubectl and need additional configuration.
More information about the installation can be found on this github page:
Github.com: Corneliusweig: Ketall
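If you use the krew plugin manager, ketall is available as the get-all plugin (assuming krew itself is already installed):
kubectl krew install get-all
kubectl get-all -n <namespace>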
Using the approach described in the official Kubernetes documentation:
Labels are intended to be used to specify identifying attributes of objects
Kubernetes.io: Docs: Concepts: Overview: Working with objects: Labels
You can label the resources created by config connector (I know that you would like to avoid it) and look for these resources like:
$ kubectl get-all -l look=here
NAME NAMESPACE AGE
storagebucket.storage.cnrm.cloud.google.com/config-connector-bucket config-connector 135m
storagebucket.storage.cnrm.cloud.google.com/config-connector-bucket-test config-connector 13s
These resources have the label .metadata.labels.look=here added to their definitions.
Additional resources:
Cloud.google.com: Config Connector: Docs: How to: Getting Started
Thenewstack.io: Tutorial use google config connector to manage a gcp cloud sql database
There is also a way suggested in GCP config-connector docs:
kubectl get gcp
from https://cloud.google.com/config-connector/docs/how-to/monitoring-your-resources#listing_all_resources

kubernetes get values from already deployed pod/daemonset

Someone before me deployed a DaemonSet and a ConfigMap. Is it possible for me to somehow get the values they used? Something like kubectl edit <name>, but the edit view also contains some temporary data (the pod name with random characters, etc.). What command would I need to use to get the pure values used in those deployments/daemonsets?
kubectl get -o yaml can be used to get the values of a k8s resource's manifest, along with the metadata and status of the resource. Its output has the following three sections:
metadata
spec, with the values of the k8s resource's manifest
status, with the resource's status
How about kubectl get --export?
kubectl get --export had bugs like this and this, and --export was removed in k8s v1.18 per this link.
$ kubectl get --export
Flag --export has been deprecated, This flag is deprecated and will be removed in future.
...
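So, to recover the values someone else deployed, reading the live objects back as YAML is usually enough. A minimal sketch, where my-daemonset, my-config and kube-system are hypothetical names:
kubectl get daemonset my-daemonset -n kube-system -o yaml
kubectl get configmap my-config -n kube-system -o yaml
Everything the original author set lives under spec (or data for the ConfigMap); metadata and status are filled in by the cluster.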

Uninstall istio (all components) completely from kubernetes cluster

I installed istio using these commands:
VERSION = 1.0.5
GCP = gcloud
K8S = kubectl
#$(K8S) apply -f istio-$(VERSION)/install/kubernetes/helm/istio/templates/crds.yaml
#$(K8S) apply -f istio-$(VERSION)/install/kubernetes/istio-demo-auth.yaml
#$(K8S) get pods -n istio-system
#$(K8S) label namespace default istio-injection=enabled
#$(K8S) get svc istio-ingressgateway -n istio-system
Now, how do I completely uninstall it, including all containers/ingress/egress etc. (everything installed by istio-demo-auth.yaml)?
Thanks.
If you used istioctl, it's pretty easy:
istioctl x uninstall --purge
Of course, it would be easier if that command were listed in istioctl --help...
Reference: https://istio.io/latest/docs/setup/install/istioctl/#uninstall-istio
Based on their documentation here, you can generate all specs as a YAML file, then pipe it to a simple kubectl delete operation:
istioctl manifest generate <your original installation options> | kubectl delete -f -
here's an example:
istioctl manifest generate --set profile=default | kubectl delete -f -
A drawback of this approach, though, is that you have to remember all the options you used when you installed istio, which might be quite hard, especially if you enabled specific components.
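One way to avoid that problem is to save the generated manifest at install time, so you can later delete exactly what you applied. A sketch, where istio-installed.yaml is an arbitrary file name:
istioctl manifest generate --set profile=default > istio-installed.yaml
kubectl apply -f istio-installed.yaml
# later:
kubectl delete -f istio-installed.yaml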
If you have installed istio using helm's chart, you can uninstall it easily
First, list all installed charts:
helm list -n istio-system
NAME NAMESPACE REVISION UPDATED STATUS
istiod istio-system 1 2020-03-07 15:01:56.141094 -0500 EST deployed
and then delete/uninstall the chart. With Helm 3 the syntax is:
helm uninstall -n istio-system istiod
...
(The --purge flag was Helm 2 syntax, as in helm delete --purge istio-init; it no longer exists in Helm 3.)
Check their website for more information on how to do this.
If you already installed istio using istioctl or helm in its own separate namespace, you can easily delete that namespace completely, which will in turn delete all resources created inside it.
kubectl delete namespace istio-system
Just run kubectl delete for the files you applied.
kubectl delete -f istio-$(VERSION)/install/kubernetes/istio-demo-auth.yaml
You can find this in docs as well.
If you have installed it as described, then you will need to delete it in the same way.
kubectl delete -f istio-$(VERSION)/install/kubernetes/helm/istio/templates/crds.yaml
kubectl delete -f istio-$(VERSION)/install/kubernetes/istio-demo-auth.yaml
Then you would manually delete the istio folder, and the istioctl binary, if you moved it anywhere.
IMPORTANT: Deleting a namespace is a super comfortable way to clean up, but it doesn't work for all scenarios. In this situation, if you delete the namespace only, you are leaving all the permissions and credentials intact. Now, say you want to update Istio, and the Istio team has made some security changes in their RBAC rules but has not changed the name of the object. You would deploy the new yaml file, and it would throw an error saying the object (for example a clusterrolebinding) already exists. If you don't pay attention to what that error was, you can end up with the worst type of error: no error message, but something goes wrong.
Cleaning up Istio is a bit tricky, because of all the things it adds: CustomResourceDefinitions, ConfigMaps, MutatingWebhookConfigurations, etc. Just deleting the istio-system namespace is not sufficient. The safest bet is to use the uninstall instructions from istio.io for the method you used to install.
Kubectl: https://istio.io/docs/setup/kubernetes/install/kubernetes/#uninstall
Helm: https://istio.io/docs/setup/kubernetes/install/helm/#uninstall
When performing these steps, use the version of Istio you are attempting to remove. So if you are trying to remove Istio 1.0.2, grab that release from istio.io.
Don't forget to disable the injection:
kubectl delete -f istio-$(VERSION)/install/kubernetes/helm/istio/templates/crds.yaml
kubectl delete -f istio-$(VERSION)/install/kubernetes/istio-demo-auth.yaml
kubectl label namespace your-namespace istio-injection=disabled
Using the profile you used at installation time (demo, for example), run the following command:
istioctl manifest generate --set profile=demo | kubectl delete -f -
After a normal istio uninstall (depending on whether istio was installed via helm or istioctl), the following steps can be performed:
Check if anything still exists in the istio-system namespace; if so, delete it manually, and also remove the istio-system namespace itself.
Check if there is a sidecar associated with any pod (sometimes sidecars don't get cleaned up after a failed uninstallation):
kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.namespace}{"\t"}{..image}{"\n\n"}{end}' -A | grep 'istio/proxyv' | grep -v istio-system
Get the CRDs that are still in use and remove the associated resources:
kubectl get crds | grep 'istio.io' | cut -f1-1 -d "." | xargs -n1 -I{} bash -c " echo {} && kubectl get --all-namespaces {} -o wide && echo -e '---'"
Delete all the CRDs:
kubectl get crds | grep 'istio.io' | xargs -n1 -I{} sh -c "kubectl delete crd {}"
Edit the labels back (optional):
kubectl label namespace <namespace-name> istio-injection=disabled
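To verify the cleanup, the following should all come back empty (a quick sanity check, not an official procedure):
kubectl get crds | grep 'istio.io'
kubectl get mutatingwebhookconfigurations,validatingwebhookconfigurations | grep istio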
Just delete the ns
k delete ns istio-system
Deleting CRDs without needing to find the helm charts:
kubectl delete crd -l chart=istio
If you installed via helm template, you can use these commands:
For CRDs:
$ helm template ${ISTIO_BASE_DIR}/install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl delete -f -
$ kubectl delete crd $(kubectl get crd | grep istio | awk '{print $1}')
For Deployments, Namespaces, and other resources:
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system\
--values install/kubernetes/helm/istio/values-istio-demo.yaml \
--set global.controlPlaneSecurityEnabled=true \
--set global.mtls.enabled=true | kubectl delete -f -

How to delete all resources from Kubernetes one time?

Include:
Daemon Sets
Deployments
Jobs
Pods
Replica Sets
Replication Controllers
Stateful Sets
Services
...
If there is a replicationcontroller, when I delete some deployments they will regenerate. Is there a way to bring Kubernetes back to its initial state?
Method 1: To delete everything from the current namespace (which is normally the default namespace) using kubectl delete:
kubectl delete all --all
all refers to all resource types such as pods, deployments, services, etc. --all is used to delete every object of that resource type instead of specifying it by name or label.
To delete everything from a certain namespace you use the -n flag:
kubectl delete all --all -n {namespace}
Method 2: You can also delete a namespace and re-create it. This will delete everything that belongs to it:
kubectl delete namespace {namespace}
kubectl create namespace {namespace}
Note (thanks @Marcus): all in kubernetes does not refer to every kubernetes object; admin-level resources (limits, quota, policy, authorization rules) are not included. If you really want to make sure everything is deleted, it's better to delete the namespace and re-create it. Another way to do that is to use kubectl api-resources to get all resource types, as seen here:
kubectl delete "$(kubectl api-resources --namespaced=true --verbs=delete -o name | tr "\n" "," | sed -e 's/,$//')" --all
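If you want to preview what would be removed before committing, recent kubectl versions support a client-side dry run:
kubectl delete all --all -n {namespace} --dry-run=client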
A Kubernetes Namespace would be the perfect option for you. You can easily create a namespace resource:
kubectl create -f custom-namespace.yaml
where custom-namespace.yaml contains:
apiVersion: v1
kind: Namespace
metadata:
  name: custom-namespace
Now you can deploy all of the other resources (Deployments, ReplicaSets, Services, etc.) in that custom namespace.
If you want to delete all of these resources, you just need to delete the custom namespace. By deleting the custom namespace, all of the other resources will be deleted as well. Without it, a ReplicaSet might create new pods when existing pods are deleted.
To work with a namespace, you need to add the --namespace flag to k8s commands.
For example:
kubectl create -f deployment.yaml --namespace=custom-namespace
you can list all the pods in custom-namespace.
kubectl get pods --namespace=custom-namespace
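If you'd rather not pass --namespace on every command, you can make it the default for your current kubectl context:
kubectl config set-context --current --namespace=custom-namespace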
You can also delete Kubernetes resources with the help of labels attached to them. For example, suppose the label below is attached to all resources:
metadata:
  name: label-demo
  labels:
    env: dev
    app: nginx
Now just execute the commands below.
Deleting resources using the app label:
$ kubectl delete pods,rs,deploy,svc,cm,ing -l app=nginx
Deleting resources using the environment label:
$ kubectl delete pods,rs,deploy,svc,cm,ing -l env=dev
You can also try kubectl delete all --all --all-namespaces
all refers to all resource types
--all refers to all objects of those types, including uninitialized ones
--all-namespaces means across all namespaces
First back up your namespace resources, then delete all resources found with the get all command:
kubectl get all --namespace={your-namespace} -o yaml > {your-namespace}.yaml
kubectl delete -f {your-namespace}.yaml
Nevertheless, some resources may still exist in your cluster.
Check with:
kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found --namespace {your-namespace}
If you really want to COMPLETELY delete your namespace, go ahead with:
kubectl delete namespace {your-namespace}
(tested with Client v1.23.1 and Server v1.22.3)
If you want to delete all the K8s resources under a namespace, the easiest way would be to delete the entire namespace.
kubectl delete ns <name-space>
kubectl delete deploy,service,job,statefulset,pdb,networkpolicy,prometheusrule,cm,secret,ds -n namespace -l label
kubectl delete all --all
to delete all the resources in the current namespace.
After deleting all resources, k8s will relaunch the default services for the cluster.

Get YAML for deployed Kubernetes services?

I am trying to deploy my app to Kubernetes running in Google Container Engine.
The app can be found at: https://github.com/Industrial/docker-znc.
The Dockerfile is built into an image on Google Container Registry.
I have deployed the app in Kubernetes via the + button. I don't have the YAML for this.
I have inserted a Secret in Kubernetes for the PEM file required by the app.
How do I get the YAML for the Deployment, Service and Pod created by Kubernetes by filling in the form?
How do I get the Secret into my Pod for usage?
To get the yaml for a deployment (service, pod, secret, etc):
kubectl get deploy deploymentname -o yaml
How do I get the YAML for the Deployment, Service and Pod created by Kubernetes by filling in the form?
kubectl get deployment,service,pod yourapp -o yaml --export
Answering @Sinaesthetic's question:
any idea how to do it for the full cluster (all deployments)?
kubectl get deploy --all-namespaces -o yaml --export
The problem with this method is that export doesn't include the namespace. So if you want to export many resources at the same time, I recommend doing it per namespace:
kubectl get deploy,sts,svc,configmap,secret -n default -o yaml --export > default.yaml
Unfortunately kubernetes still doesn't support a true get all command, so you need to manually list the types of resources you want to export. You can get a list of resource types with
kubectl api-resources
The same issue is discussed at kubernetes GitHub issues page and the user "alahijani" made a bash script that exports all yaml and writes them to single files and folders.
Since this question ranks well on Google and since I found that solution very good, I reproduce it here.
Bash script exporting yaml to sub-folders:
for n in $(kubectl get -o=name pvc,configmap,serviceaccount,secret,ingress,service,deployment,statefulset,hpa,job,cronjob)
do
    mkdir -p $(dirname $n)
    kubectl get -o=yaml --export $n > $n.yaml
done
Another user, "acondrat", made a script that does not use directories, which makes it easy to do a kubectl apply -f later.
Bash script exporting yaml to current folder:
for n in $(kubectl get -o=name pvc,configmap,ingress,service,secret,deployment,statefulset,hpa,job,cronjob | grep -v 'secret/default-token')
do
    kubectl get -o=yaml --export $n > $(dirname $n)_$(basename $n).yaml
done
The last script does not include service accounts.
Now that --export is deprecated, to get the output from your resources in the 'original' format (cleaned up, without the metadata describing the current object state) you can do the following using yq v4.x:
kubectl get <resource> -n <namespace> <resource-name> -o yaml \
| yq eval 'del(.metadata.resourceVersion, .metadata.uid, .metadata.annotations, .metadata.creationTimestamp, .metadata.selfLink, .metadata.managedFields)' -
Syntax for downloading YAML from Kubernetes:
kubectl get [resource type] -n [namespace] [resource name] -o yaml > [new file name]
Create a YAML file from a running pod:
kubectl get po -n nginx nginx-deployment-755cfc7dcf-5s7j8 -o yaml > podDetail.yaml
Create a ReplicaSet YAML file from running replicasets:
kubectl get rs -n nginx -o yaml > latestReplicaSet.yaml
Create a Deployment YAML file from running deployments:
kubectl get deploy -n nginx -o yaml > latestDeployment.yaml
It's also possible to use the view-last-applied command, e.g.:
kubectl apply view-last-applied services --all > services.yaml
which will return all the manifests applied to create services. You can also target a specific k8s resource using the services/resource-name notation.
If you need to get a 'clean' export, removing the annotations added by Kubernetes, there's an open-source project that does that by piping the output of kubectl get: https://github.com/itaysk/kubectl-neat.
It removes the timestamp metadata, etc.
kubectl get pod mypod -o yaml | kubectl neat
kubectl get pod mypod -oyaml | kubectl neat -o json
Use this command to get yaml format of your service
kubectl get service servicename -n <namespace> -o yaml
You can put it in some file also
kubectl get service servicename -n <namespace> -o yaml > service.yaml
The following code will extract all your K8s definitions at once and place them in individual folders below the current folder.
for OBJ in $(kubectl api-resources --verbs=list --namespaced -o name)
do
    for DEF in $(kubectl get --show-kind --ignore-not-found $OBJ -o name)
    do
        mkdir -p $(dirname $DEF)
        kubectl get $DEF -o yaml \
            | yq eval 'del(.metadata.resourceVersion, .metadata.uid, .metadata.annotations, .metadata.creationTimestamp, .metadata.selfLink, .metadata.managedFields)' - > $DEF.yaml
    done
done
You can store the output of a deployed Kubernetes service using the command below:
kubectl get svc <service-name> -n <your-namespace> -o yaml > svc-output.yaml
For deployments -
kubectl get deploy <deployment-name> -n <your-namespace> -o yaml > deploy-output.yaml
For Pod -
kubectl get pod <pod-name> -n <your-namespace> -o yaml > pod-output.yaml
You can get your secret details using the command below:
kubectl get secret <secret-name> -n <your-namespace> -o yaml
In order to use the secret, update your deployment using the command below:
kubectl edit deploy <deployment-name> -n <your-namespace>
Under your pod template, add the following.
This goes under the pod's containers section, to mount the secret volume into the container:
volumeMounts:
- name: foo
  mountPath: "/etc/foo"
  readOnly: true
This goes inside the pod template section of the deployment:
volumes:
- name: foo
  secret:
    secretName: mysecret
For the 2nd question regarding the secret, this is from the k8s documentation; see https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets for more info.
Create a secret or use an existing one. Multiple pods can reference the same secret.
Modify your Pod definition to add a volume under spec.volumes[]. Name the volume anything, and have a spec.volumes[].secret.secretName field equal to the name of the secret object.
Add a spec.containers[].volumeMounts[] to each container that needs the secret. Specify spec.containers[].volumeMounts[].readOnly = true and spec.containers[].volumeMounts[].mountPath to an unused directory name where you would like the secrets to appear.
Modify your image and/or command line so that the program looks for files in that directory. Each key in the secret data map becomes the filename under mountPath.
I have used this and it works fine.
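Putting those steps together, here is a minimal Pod sketch; mysecret is assumed to already exist, and all other names are arbitrary:
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
Each key in mysecret then appears as a file under /etc/foo inside the container.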
As mentioned above, --export is one option to get the manifest corresponding to Kubernetes objects.
But --export is considered buggy, and there was a proposal to deprecate it.
Currently the better option is to use -o yaml or -o json and remove the unnecessary fields.
The main difference is that --export is expected to remove cluster-specific settings (e.g. the cluster service IP of a k8s service), but it has been found to be inconsistent in this regard.
All services
kubectl get service --all-namespaces -o yaml > all-service.yaml
All deployments
kubectl get deploy --all-namespaces -o yaml > all-deployment.yaml
We can get the contents associated with any kind from a Kubernetes cluster through the command line, provided you have read access.
kubectl get <kind> <kindname> -n <namespace> -o <yaml or json>
For example, if you want to export a deployment from a namespace, follow the commands below:
kubectl get deploy mydeploy -n mynamespace -o yaml > mydeploy.yaml
kubectl get deploy mydeploy -n mynamespace -o json > mydeploy.json
To back up all deployments in a namespace as YAML (not a specific deployment):
kubectl get deployments -n <namespace> -o yaml > deployments.yaml
To back up all services in a namespace as YAML (not a specific service):
kubectl get services -n <namespace> -o yaml > services.yaml
Enjoy.
To get the YAML for a currently running deployment on kubernetes, you can run this command:
kubectl get deployment <deployment_name> -o yaml
To generate YAML for a deployment you can run the imperative command:
kubectl create deployment <deployment_name> --image=<image_name> -o yaml
To generate and export the deployment without creating it, you can run the imperative command with a client-side dry run:
kubectl create deployment <deployment_name> --image=<image_name> --dry-run=client -o yaml > example.yaml
kubectl -n <namespace> get <resource type> <resource Name> -o yaml
With the command above, any resource defined in Kubernetes can be exported in YAML format.
You can try the kube-dump bash script.
With this utility, you can save Kubernetes cluster resources as pure yaml manifests without unnecessary metadata.
GitHub repository
Review of the utility in blog page
We can get the YAML for deployed resources using the commands below:
kubectl get <resource type> -o yaml
OR
kubectl get <resource type> <resource name> -o yaml
Example:
kubectl get deploy nginx -o yaml
The commands above print YAML output.
If you want to store the output in a file, you can use the command below:
kubectl get pod nginx -o yaml > nginx-pod.yaml
The command above redirects the output to nginx-pod.yaml in your current directory.
If you need to view and edit the file, use:
kubectl edit service servicename
You can get the yaml files of the resources using this command
kubectl -n <namespace> get <resource type> <resource Name> -o yaml
To get the secret into your pod, use something like this (SECRET_VALUE is an arbitrary environment variable name):
env:
- name: SECRET_VALUE
  valueFrom:
    secretKeyRef:
      name: secret_name
      key: key_name
or
envFrom:
- secretRef:
    name: secret_name
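To confirm the value made it into the container, you can exec in and inspect the environment; mypod is a placeholder pod name:
kubectl exec mypod -- printenv SECRET_VALUE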
This is only a minor difference from @Janos Lenart's answer!
kubectl get deploy deploymentname -o yaml > outputFile.yaml will do the job.
I know this question is old, but hopefully someone will find this helpful.
We can try the command below to fetch an export of a kind from all namespaces:
kubectl get <kind> --all-namespaces --export -o yaml