Why is kubectl describe secret not working? - kubernetes

I created a secret in kubernetes using the command below -
kubectl create secret generic -n mynamespace test --from-file=a.txt
Now I try to view it using the commands below, but am unsuccessful -
kubectl describe secrets/test
kubectl get secret test -o json
This is the error I get in either case -
Error from server (NotFound): secrets "test" not found
What could be the cause? I am using GCP for the Kubernetes setup. Could the GCP free trial be the cause?

Try to access the secret in the namespace it was created in:
kubectl -n mynamespace describe secrets/test
kubectl -n mynamespace get secret test -o json

You created your secret in a specific namespace rather than the default one; when you run kubectl describe without -n, it is bound to the default namespace.
A good thing to know when you use a specific namespace for your secret is that it can only be referenced by pods in that same namespace.
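Once you are in the right namespace, you can also read the secret's contents back; a minimal sketch, assuming the key kept the source file name a.txt (dots in jsonpath keys must be escaped):
# print the decoded contents of the a.txt key of the secret
kubectl -n mynamespace get secret test -o jsonpath='{.data.a\.txt}' | base64 --decode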

Related

How to list applied Custom Resource Definitions in kubernetes with kubectl

I recently applied this CRD file
https://raw.githubusercontent.com/jetstack/cert-manager/release-0.11/deploy/manifests/00-crds.yaml
With kubectl apply to install this: https://hub.helm.sh/charts/jetstack/cert-manager
I think I managed to apply it successfully:
[xetra11@x11-work configuration]$ kubectl apply -f ./helm-charts/certificates/00-crds.yaml --validate=false
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
But now I would like to "see" what I just applied here. I have no idea how to list those definitions or for example remove them if I think they will screw up my cluster somehow.
I was not able to find any information to that here: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#preparing-to-install-a-custom-resource
Use kubectl get customresourcedefinitions, or the shorter kubectl get crd.
You can then use kubectl describe crd <crd_name> to get a description of the CRD. And of course kubectl get crd <crd_name> -o yaml to get the complete definition of the CRD.
To remove you can use kubectl delete crd <crd_name>.
Custom Resources are like any other native Kubernetes resource.
All the basic kubectl CRUD operations work fine for CRDs. So just use any of the below commands.
kubectl get crd <name of crd>
kubectl describe crd <name of crd>
kubectl get crd <name of crd> -o yaml
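For example, against one of the cert-manager CRDs created in the question (note that deleting a CRD also deletes every custom object of that kind):
kubectl describe crd certificates.cert-manager.io
kubectl delete crd certificates.cert-manager.io   # removes the definition and all Certificate objects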
First, you can list all your CRDs with kubectl get crd, for example:
$ kubectl get crd
NAME                                                         CREATED AT
secretproviderclasses.secrets-store.csi.x-k8s.io             2022-07-06
secretproviderclasspodstatuses.secrets-store.csi.x-k8s.io    2022-07-06
This is the list of available CRD definitions. Take the name of one and run kubectl get <crd_name> to get a list of applied resources of that CRD. For example:
$ kubectl get secretproviderclasses.secrets-store.csi.x-k8s.io
NAME       AGE
azure-kv   5d
Note: Use -A to target all namespaces or -n <namespace>
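For example, to list every SecretProviderClass in the cluster regardless of namespace:
kubectl get secretproviderclasses.secrets-store.csi.x-k8s.io -A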
You may arrive here confused about why you see your CRDs in kubectl get api-resources, e.g. this Istio Telemetry resource:
kubectl api-resources --api-group=telemetry.istio.io
NAME          SHORTNAMES   APIVERSION                    NAMESPACED   KIND
telemetries   telemetry    telemetry.istio.io/v1alpha1   true         Telemetry
but then attempting to kubectl describe them yields an error like
kubectl describe crd Telemetry.telemetry.istio.io
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "Telemetry.telemetry.istio.io" not found
or
kubectl describe crd telemetry.istio.io/v1alpha1
error: there is no need to specify a resource type as a separate argument when passing arguments in resource/name form (e.g. 'kubectl get resource/<resource_name>' instead of 'kubectl get resource resource/<resource_name>')
That's because you must use the plural form of the full name of the CRD. See kubectl get crd for the names, e.g.:
$ kubectl get crd | grep -i telemetry
telemetries.telemetry.istio.io 2022-03-21T08:49:29Z
So kubectl describe crd telemetries.telemetry.istio.io will work for this CRD.
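Alternatively, kubectl can print the plural.group form directly:
kubectl api-resources --api-group=telemetry.istio.io -o name
telemetries.telemetry.istio.io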
List the CRDs (no namespace, as CRDs are cluster-scoped):
kubectl get crds
Describe the CRD:
kubectl describe crd challenges.acme.cert-manager.io

kubectl create doesn't seem to do anything

I am running the command
kubectl create -f mypod.yaml --namespace=mynamespace
as I need to specify the environment variables through a configMap I created and specified in the mypod.yaml file. Kubernetes returns
pod/mypod created
but kubectl get pods doesn't show it in my list of pods, and I can't access it by name, as if it does not exist. However, if I try to create it again, it says that the pod is already created.
What may cause this, and how would I diagnose the problem?
By default, kubectl commands operate in the default namespace. But you created your pod in the mynamespace namespace.
Try one of the following:
kubectl get pods -n mynamespace
kubectl get pods --all-namespaces
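If you work in mynamespace most of the time, you can also make it the default for your current context, so that a plain kubectl get pods finds the pod; a minimal sketch:
# make mynamespace the default namespace for the active kubeconfig context
kubectl config set-context --current --namespace=mynamespace
kubectl get pods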

How to deploy an application in GKE from a public CI server

I'm trying to deploy an application in a GKE 1.6.2 cluster running ContainerOS but the instructions on the website / k8s are not accurate anymore.
The error that I'm getting is:
Error from server (Forbidden): User "circleci@gophers-slack-bot.iam.gserviceaccount.com"
cannot get deployments.extensions in the namespace "gopher-slack-bot".:
"No policy matched.\nRequired \"container.deployments.get\" permission."
(get deployments.extensions gopher-slack-bot)
The repository for the application is available here.
Thank you.
I had a few breaking changes in the past with using the gcloud tool to authenticate kubectl to a cluster, so I ended up figuring out how to auth kubectl to a specific namespace independent of GKE. Here's what works for me:
On CircleCI:
setup_kubectl() {
    echo "$KUBE_CA_PEM" | base64 --decode > kube_ca.pem
    kubectl config set-cluster default-cluster --server=$KUBE_URL --certificate-authority="$(pwd)/kube_ca.pem"
    kubectl config set-credentials default-admin --token=$KUBE_TOKEN
    kubectl config set-context default-system --cluster=default-cluster --user=default-admin --namespace default
    kubectl config use-context default-system
}
And here's how I get each of those env vars from kubectl.
kubectl get serviceaccounts -n $namespace -o json
The service account will contain the name of its secret. In my case, with the default namespace, it's
"secrets": [
{
"name": "default-token-655ls"
}
]
Using that name, I get the contents of the secret:
kubectl get secrets $secret_name -o json
The secret will contain ca.crt and token fields, which match the $KUBE_CA_PEM and $KUBE_TOKEN in the shell script above.
Finally, use kubectl cluster-info to get the $KUBE_URL value.
Once you run setup_kubectl on CI, your kubectl utility will be authenticated to the namespace you're deploying to.
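Putting those lookups together, a minimal sketch of deriving the three variables (assumes $secret_name holds the token secret name found via the service account above; dots in jsonpath keys must be escaped):
# ca.crt stays base64-encoded here because setup_kubectl decodes it itself
KUBE_CA_PEM=$(kubectl get secret "$secret_name" -o jsonpath='{.data.ca\.crt}')
# the token is stored base64-encoded in the secret, so decode it for --token
KUBE_TOKEN=$(kubectl get secret "$secret_name" -o jsonpath='{.data.token}' | base64 --decode)
# API server URL of the current context (an alternative to parsing kubectl cluster-info)
KUBE_URL=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')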
In Kubernetes 1.6 and GKE, we introduced role-based access control. The authors of your tool need to give the service account the ability to get deployments (along with probably quite a few other permissions) as part of its account creation.
https://kubernetes.io/docs/admin/authorization/rbac/
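A minimal sketch of granting that permission with RBAC, assuming the namespace and service-account identity from the error above (your tool probably needs more verbs and resources than these):
kubectl create role deployment-reader --verb=get,list,watch --resource=deployments -n gopher-slack-bot
kubectl create rolebinding deployment-reader-binding --role=deployment-reader \
    --user=circleci@gophers-slack-bot.iam.gserviceaccount.com -n gopher-slack-bot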

Get YAML for deployed Kubernetes services?

I am trying to deploy my app to Kubernetes running in Google Container Engine.
The app can be found at: https://github.com/Industrial/docker-znc.
The Dockerfile is built into an image on Google Container Registry.
I have deployed the app in Kubernetes via the + button. I don't have the YAML for this.
I have inserted a Secret in Kubernetes for the PEM file required by the app.
How do I get the YAML for the Deployment, Service and Pod created by Kubernetes by filling in the form?
How do I get the Secret into my Pod for usage?
To get the yaml for a deployment (service, pod, secret, etc):
kubectl get deploy deploymentname -o yaml
How do I get the YAML for the Deployment, Service and Pod created by Kubernetes by filling in the form?
kubectl get deployment,service,pod yourapp -o yaml --export
Answering @Sinaesthetic's question:
any idea how to do it for the full cluster (all deployments)?
kubectl get deploy --all-namespaces -o yaml --export
The problem with this method is that export doesn't include the namespace. So if you want to export many resources at the same time, I recommend doing it per namespace:
kubectl get deploy,sts,svc,configmap,secret -n default -o yaml --export > default.yaml
Unfortunately Kubernetes still doesn't support a true "get all" command, so you need to manually list the types of resources you want to export. You can get a list of resource types with
kubectl api-resources
The same issue is discussed on the Kubernetes GitHub issues page, and the user "alahijani" made a bash script that exports all YAML and writes it to individual files and folders.
Since this question ranks well on Google and since I found that solution very good, I reproduce it here.
Bash script exporting yaml to sub-folders:
for n in $(kubectl get -o=name pvc,configmap,serviceaccount,secret,ingress,service,deployment,statefulset,hpa,job,cronjob)
do
    mkdir -p $(dirname $n)
    kubectl get -o=yaml --export $n > $n.yaml
done
Another user "acondrat" made a script that do not use directories, which makes it easy to make a kubectl apply -f later.
Bash script exporting yaml to current folder:
for n in $(kubectl get -o=name pvc,configmap,ingress,service,secret,deployment,statefulset,hpa,job,cronjob | grep -v 'secret/default-token')
do
    kubectl get -o=yaml --export $n > $(dirname $n)_$(basename $n).yaml
done
The last script does not include service accounts.
Now that --export is deprecated, to get output for your resources in the 'original' format (cleaned up, without the unnecessary metadata that describes the current object state) you can do the following using yq v4.x:
kubectl get <resource> -n <namespace> <resource-name> -o yaml \
| yq eval 'del(.metadata.resourceVersion, .metadata.uid, .metadata.annotations, .metadata.creationTimestamp, .metadata.selfLink, .metadata.managedFields)' -
Syntax for downloading YAML from Kubernetes:
kubectl get [resource type] -n [namespace] [resource Name] -o yaml > [New file name]
Create a yaml file from a running pod:
kubectl get po -n nginx nginx-deployment-755cfc7dcf-5s7j8 -o yaml > podDetail.yaml
Create a replicaset yaml file from the running replicasets:
kubectl get rs -n nginx -o yaml > latestReplicaSet.yaml
Create a deployment yaml file from the running deployments:
kubectl get deploy -n nginx -o yaml > latestDeployment.yaml
It's also possible to use the view-last-applied command, e.g.
kubectl apply view-last-applied services --all > services.yaml
which will return all the manifests applied to create the services. You can also target a single Kubernetes resource using the resource-type/resource-name form.
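For example, for a single service (my-service is a hypothetical name):
kubectl apply view-last-applied service/my-service > my-service.yaml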
If you need a 'clean' export, removing the annotations added by Kubernetes, there's an open-source project that does that by piping the output of kubectl get - https://github.com/itaysk/kubectl-neat.
It removes the timestamp metadata, etc.
kubectl get pod mypod -o yaml | kubectl neat
kubectl get pod mypod -oyaml | kubectl neat -o json
Use this command to get the YAML for your service:
kubectl get service servicename -n <namespace> -o yaml
You can also save it to a file:
kubectl get service servicename -n <namespace> -o yaml > service.yaml
The following code will extract all your K8s definitions at once and place them on individual folders below the current folder.
for OBJ in $(kubectl api-resources --verbs=list --namespaced -o name)
do
for DEF in $(kubectl get --show-kind --ignore-not-found $OBJ -o name)
do
mkdir -p $(dirname $DEF)
kubectl get $DEF -o yaml \
| yq eval 'del(.metadata.resourceVersion, .metadata.uid, .metadata.annotations, .metadata.creationTimestamp, .metadata.selfLink, .metadata.managedFields)' - > $DEF.yaml
done
done
You can store the output of a deployed Kubernetes service using the command below -
kubectl get svc <service-name> -n <your-namespace> -o yaml > svc-output.yaml
For deployments -
kubectl get deploy <deployment-name> -n <your-namespace> -o yaml > deploy-output.yaml
For a Pod -
kubectl get pod <pod-name> -n <your-namespace> -o yaml > pod-output.yaml
You can get your secret details using the command below -
kubectl get secret <secret-name> -n <your-namespace> -o yaml
In order to use the secret, update your deployment using the command below -
kubectl edit deploy <deployment-name> -n <your-namespace>
Under your pod template, add the below.
This goes under the pod's containers section, to mount the secret volume into the container:
volumeMounts:
- name: foo
  mountPath: "/etc/foo"
  readOnly: true
This goes inside the pod template section of the deployment:
volumes:
- name: foo
  secret:
    secretName: mysecret
For the 2nd question regarding the secret, this is from the k8s documentation; see https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets for more info.
Create a secret or use an existing one. Multiple pods can reference the same secret.
Modify your Pod definition to add a volume under spec.volumes[]. Name the volume anything, and have a spec.volumes[].secret.secretName field equal to the name of the secret object.
Add a spec.containers[].volumeMounts[] to each container that needs the secret. Specify spec.containers[].volumeMounts[].readOnly = true and spec.containers[].volumeMounts[].mountPath to an unused directory name where you would like the secrets to appear.
Modify your image and/or command line so that the program looks for files in that directory. Each key in the secret data map becomes the filename under mountPath.
I have used this and it works fine.
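Putting those steps together, a minimal Pod sketch (the pod and container names are just examples; foo, /etc/foo and mysecret match the snippets above):
apiVersion: v1
kind: Pod
metadata:
  name: mypod            # example name
spec:
  containers:
    - name: mycontainer  # example name
      image: nginx       # placeholder image
      volumeMounts:
        - name: foo
          mountPath: "/etc/foo"
          readOnly: true
  volumes:
    - name: foo
      secret:
        secretName: mysecret
Each key in the secret data map then appears as a file under /etc/foo.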
Like mentioned above, "--export" is one option to get the manifest corresponding to Kubernetes objects.
But "--export" is considered buggy and there is a proposal to deprecate it.
Currently the better option is to do "-o yaml" or "-o json" and remove the unnecessary fields.
The main difference is that "--export" is expected to remove the cluster-specific settings (e.g. the cluster service IP of a k8s Service), but it has been found to be inconsistent in this regard.
All services
kubectl get service --all-namespaces -o yaml > all-service.yaml
All deployments
kubectl get deploy --all-namespaces -o yaml > all-deployment.yaml
We can get the contents associated with any kind from a Kubernetes cluster through the command line if you have read access.
kubectl get <kind> <kindname> -n <namespace> -o <yaml or json>
For example, if you want to export a deployment from a namespace, use the commands below -
kubectl get deploy mydeploy -n mynamespace -o yaml > mydeploy.yaml
kubectl get deploy mydeploy -n mynamespace -o json > mydeploy.json
To back up all deployments in a namespace (not a specific deployment):
kubectl get deployments -n <namespace> -o yaml > deployments.yaml
To back up all services in a namespace (not a specific service):
kubectl get services -n <namespace> -o yaml > services.yaml
enjoy it.
To get YAML for current running deployment on kubernetes, you can run this command:
kubectl get deployment <deployment_name> -o yaml
To generate YAML for a deployment you can run the imperative command.
kubectl create deployment <deployment_name> --image=<image_name> -o yaml
To generate and export the deployment without creating it, you can run the imperative command with a client-side dry run.
kubectl create deployment <deployment_name> --image=<image_name> --dry-run=client -o yaml > example.yaml
kubectl -n <namespace> get <resource type> <resource Name> -o yaml
With the command above, any resource defined in Kubernetes can be exported in YAML format.
You can try the kube-dump bash script.
With this utility, you can save Kubernetes cluster resources as a pure yaml manifest without unnecessary metadata.
GitHub repository
Review of the utility in a blog post
We can get the yaml for deployed resources using the commands below.
kubectl get <resource type> -o yaml
OR
kubectl get <resource type> <resource name> -o yaml
Example:
kubectl get deploy nginx -o yaml
The above commands will give you YAML output.
If you want to store the output in a file, you can use the command below.
kubectl get pod nginx -o yaml > nginx-pod.yaml
The above command will redirect the output to nginx-pod.yaml in your current directory.
If you need to view and edit the live resource, use:
kubectl edit service servicename
You can get the yaml files of the resources using this command
kubectl -n <namespace> get <resource type> <resource Name> -o yaml
To get the secret into your pod, use something like this (MY_ENV_VAR is a placeholder name for the environment variable):
env:
- name: MY_ENV_VAR   # any variable name; the original snippet omitted this required field
  valueFrom:
    secretKeyRef:
      name: secret_name
      key: key_name
or
envFrom:
- secretRef:
    name: secret_name
This is only a minor difference from @Janos Lenart's answer!
kubectl get deploy deploymentname -o yaml > outputFile.yaml will do
I know it is too old to answer, but hopefully, someone will find it helpful.
We can try the command below to fetch an export of a kind from all namespaces -
kubectl get <kind> --all-namespaces --export -o yaml

Can the "kubernetes" service be moved from the "default" namespace to the "kube-system" namespace?

The kubernetes service is in the default namespace. I want to move it to the kube-system namespace, so I did it as follows:
kubectl get svc kubernetes -o yaml > temp.yaml
This generates temp.yaml using current kubernetes service information. Then I changed the value of namespace to kube-system in temp.yaml. Lastly, I ran the following command:
kubectl replace -f temp.yaml
But I got the error:
Error from server: error when replacing "temp.yaml": service "kubernetes" not found
I think there is no service named kubernetes in the kube-system namespace.
Can anyone tell me how to do this?
Name and namespace are immutable on objects. When you try to change the namespace, replace looks for the service in the new namespace in order to overwrite it. You should be able to do kubectl create -f ... to create the service in the new namespace.
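A minimal sketch of that approach (server-assigned fields must be stripped before the API server will accept the new object):
kubectl get svc kubernetes -o yaml > temp.yaml
# edit temp.yaml: set metadata.namespace to kube-system and delete
# metadata.resourceVersion, metadata.uid, metadata.creationTimestamp and spec.clusterIP
kubectl create -f temp.yaml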
The kubernetes service is special and lives in the default namespace; too many things assume that for it to be changed safely.