oc get deployment is returning No resources found - kubernetes

"oc get deployment" command is returning "No resources Found" as the result.
Even if I put an option of assigning or defining the namespace using -n as the option to above command, I am getting the same result.
Whereas, I am getting the correct result of oc get pods command.
Meanwhile, the oc version is
oc - v3.6.0
kubernetes - v1.6.1
openshift - v3.11.380

Check that you are connected to the correct Kubernetes environment (especially if you're running more than one).
If that is correct, then either you don't have any deployments at all, or the deployments are in a different namespace than you think.
Try listing all deployments across all namespaces:
oc get deployments -A
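To first confirm which cluster and project you are pointed at, a minimal sketch using standard oc subcommands:
# Which API server and user is oc currently talking to?
oc whoami --show-server
oc whoami
# Which project (namespace) is active, and which ones exist?
oc project
oc projects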

There are other objects that create pods, such as a StatefulSet or a DaemonSet. Since this is OpenShift, my feeling is that the pods were created by a DeploymentConfig, which is a popular way to create applications on OpenShift.
Either way, you can find out which object owns a pod by looking at its ownerReferences metadata. This command should work:
oc get pod <podname> -o yaml | grep -A 6 ownerReferences
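If the owner turns out to be a ReplicationController managed by a DeploymentConfig, a short sketch of the follow-up (dc and rc are the standard OpenShift short names):
oc get deploymentconfig    # or the short form: oc get dc
oc get rc                  # the ReplicationControllers a DeploymentConfig spawns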

Related

Kubernetes NetworkPolicy - Is there a way to identify which NetworkPolicies are applied to Pods

We have 3-4 different NetworkPolicies in our namespace, applied based on pod selectors. Is there any way, from the pod side, to know which NetworkPolicy is applied to it?
If a pod selector is used, there is a simple way:
kubectl get pod -l "$( \
  kubectl get networkpolicy <netpolicy-name> \
    -o jsonpath='{.spec.podSelector.matchLabels}' | \
  jq -r 'to_entries | map("\(.key)=\(.value)") | join(",")' \
)"
This reads the policy's pod selector and uses it as the label selector to list the matching pods.
Any way from the pod side?
There is nothing on the pod side you can check. I read somewhere that kubectl describe pod <pod-name> could show NetworkPolicies, but when I tested it, nothing was shown (at least on minikube).
So you can use the command above, or describe the NetworkPolicy itself to get its pod selector and work from there:
kubectl describe networkpolicies <name of policy>
The output of kubectl get networkpolicy also displays the pod selector.
After that you can use kubectl get pod -l key=value to list the pods affected.
You can automate this with a bash script/function, as sketched below.
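A minimal sketch of such a function (the name pods_for_netpol is made up here; it assumes jq is installed):
# Usage: pods_for_netpol <networkpolicy-name> [namespace]
pods_for_netpol() {
  local policy="$1" ns="${2:-default}"
  local selector
  # Turn the policy's podSelector.matchLabels into key=value,key=value
  selector=$(kubectl get networkpolicy "$policy" -n "$ns" \
    -o jsonpath='{.spec.podSelector.matchLabels}' |
    jq -r 'to_entries | map("\(.key)=\(.value)") | join(",")')
  kubectl get pod -n "$ns" -l "$selector"
}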
I would also recommend checking "kubectl np-viewer", a kubectl plugin. It has what you are asking for out of the box.
kubectl np-viewer -p <pod-name> prints the network policy rules affecting a specific pod in the current namespace.

How to describe the entire cluster (running nodes and basic per-node information, as we get with kubectl describe nodes) in Kubernetes maintenance?

We have kubectl describe nodes.
Likewise, do we have any command, like the one below, to describe cluster information?
kubectl describe cluster
"Kubectl describe <api-resource_type> <api_resource_name> "command is used to describe a specific resources running in your kubernetes cluster, Actually you need to verify different components separately as a developer to check your pods, nodes services and other tools that you have applied/created.
If you are the cluster administrator and you are asking about useful command to check the actual kube-system configuration it depends on your k8s cluster type for example if you are using "kubeadm" package to initialize k8s cluster on premises you can check and change the default cluster configuration using this command :
kubeadm config print init-defaults
After initializing your cluster, all main control-plane configuration files, a.k.a. manifests, are located in /etc/kubernetes/manifests (and they are watched in real time: change anything there and the cluster will redeploy that component automatically).
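For instance, on a kubeadm control-plane node (a sketch assuming a default kubeadm install; exact file names may vary by version):
ls /etc/kubernetes/manifests
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
# The cluster-wide kubeadm configuration is also stored in a ConfigMap:
kubectl get configmap kubeadm-config -n kube-system -o yaml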
Useful kubectl commands:
For cluster info (API server and DNS endpoints), run:
kubectl cluster-info
Either way, you can list all api-resources and check them one by one using these commands:
kubectl api-resources (list all api-resource names and types)
kubectl get <api_resource_name> (specific to your cluster)
kubectl explain <api_resource_name> (explain the resource object with docs link)
For extra info you can add specific flags, for example:
kubectl get nodes -o wide
kubectl get pods -n <specific-name-space> -o wide
kubectl describe pods <pod_name>
...
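If you want a single pass over everything, a rough sketch built from the commands above (it can be slow on large clusters):
for r in $(kubectl api-resources --verbs=list --namespaced -o name); do
  echo "== $r =="
  kubectl get "$r" --all-namespaces --ignore-not-found
done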
For more information about the kubectl command line, check the kubectl cheat sheet.

Is there a way to list all resources created by a specific operator and their status?

I use config connector https://cloud.google.com/config-connector/docs/overview
I create gcp resources with CRDs that config connector provides:
kind: IAMServiceAccount
kind: StorageBucket
etc
Now what I'd really like is to get a simple list of each resource and its status (whether it was created successfully or not), where each resource is a single line: kind, name, status, etc.
Is there a way with kubectl to get a list of all resources that were created by an operator like this? I suppose I could manually label all these resources and select on that label, but I really don't want to do that.
Edit
Per the comment I could do this, but I'm curious whether there is a less unwieldy command:
kubectl get crds --selector cnrm.cloud.google.com/managed-by-kcc=true \
-o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | xargs -n 1 \
kubectl get -A -o jsonpath='{range .items[*]}{"Kind: "}{.kind}{" Name: "}{.metadata.name}{" Status: "}{.status.conditions[0].status}{" Reason: "}{.status.conditions[0].reason}{"\n"}{end}' --ignore-not-found
I've done a bit of research on this topic and found 2 possible solutions to retrieve all the resources created by config-connector:
$ kubectl api-resources way
$ kubectl get-all/ketall way with labels (please see the explanation as it's not installed by default)
The discussion that is referencing similar issue can be found here:
Github.com: Kubernetes: kubectl: Issue 151
$ kubectl api-resources
As pointed out in the comment I made, you can use the following expression:
kubectl get crds --selector cnrm.cloud.google.com/managed-by-kcc=true -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | xargs -n 1 kubectl get --ignore-not-found
Dissecting this solution:
kubectl get crds --selector cnrm.cloud.google.com/managed-by-kcc=true
retrieve the Custom Resource Definitions that have a matching label selector
-o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
use jsonpath to retrieve only the value stored in the .metadata.name key (the name of each CRD)
| xargs -n 1 kubectl get
pipe the output to xargs and run $ kubectl get <RESOURCE> for each CRD retrieved by the previous command
--ignore-not-found
do not display a message about missing resources
This command could also be altered to suit specific needs, as shown in the question.
A side note!
A similar command is referenced in the GitHub issue linked above:
Github.com: Kubernetes: kubectl: Issues 151: Comment 402003022
$ kubectl get-all/ketall
The above commands can be used to retrieve all of the resources in the cluster. They are not available in kubectl by default and need additional setup.
More reference about the installation can be found in this github page:
Github.com: Corneliusweig: Ketall
Using the approach described in the official Kubernetes documentation:
Labels are intended to be used to specify identifying attributes of objects
Kubernetes.io: Docs: Concepts: Overview: Working with objects: Labels
You can label the resources created by Config Connector (I know that you would like to avoid this) and then look for those resources like so:
$ kubectl get-all -l look=here
NAME                                                                        NAMESPACE          AGE
storagebucket.storage.cnrm.cloud.google.com/config-connector-bucket        config-connector   135m
storagebucket.storage.cnrm.cloud.google.com/config-connector-bucket-test   config-connector   13s
These resources have the label look=here (i.e. .metadata.labels.look=here) added to their definitions.
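If you'd rather not edit the YAML by hand, a quick sketch of adding such a label from the CLI (the resource and namespace names are just the ones from the example output above):
kubectl label storagebucket config-connector-bucket look=here -n config-connector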
Additional resources:
Cloud.google.com: Config Connector: Docs: How to: Getting Started
Thenewstack.io: Tutorial use google config connector to manage a gcp cloud sql database
There is also a way suggested in the GCP Config Connector docs:
kubectl get gcp
from https://cloud.google.com/config-connector/docs/how-to/monitoring-your-resources#listing_all_resources

kubernetes to openshift equivalent command

In Kubernetes I have this command:
kubectl create deployment nginx --image=ewoutp/docker-nginx-curl -n web
What should I run if I want to create this inside an OpenShift cluster?
I tried this:
oc create deployment nginx --image=ewoutp/docker-nginx-curl -n web
I am getting this error:
no matches for extensions/, Kind=Deployment
Can someone help me?
It might indicate that your OpenShift cluster is not running. Check oc status to view the status of your current project. If it is not running, you should create a new project.
If your cluster is running, you can run oc create deployment nginx --image=ewoutp/docker-nginx-curl -n web -o yaml to verify whether the apiVersion is correct. The currently used version is apps/v1. If it is incorrect, you can save the output to a file and edit it to match the current version.
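A sketch of that save-and-edit flow (--dry-run=client is assumed to be available in your oc version; older clients use plain --dry-run):
oc create deployment nginx --image=ewoutp/docker-nginx-curl -n web \
  --dry-run=client -o yaml > nginx-deployment.yaml
# Check/fix the apiVersion (it should be apps/v1), then:
oc apply -f nginx-deployment.yaml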

Kubectl using command to get cluster status

I need to create a shell script which examines the cluster status.
I saw that kubectl describe nodes provides lots of data.
I can output it to JSON and then parse it, but maybe that's overkill.
Is there a simple way, with a kubectl command, to get the status of the cluster? Just whether it's up / down.
The least expensive way to check if you can reach the API server is kubectl version. In addition kubectl cluster-info gives you some more info.
In addition to Michael's answer: that would only tell you about the API server or master and internal services like KubeDNS etc., but not the nodes.
It depends on your need and definition of "status" here. You could run kubectl cluster-info followed by kubectl get nodes and check the STATUS column for all nodes, using parsing tools like awk, jq, or kubectl's own -o jsonpath option to verify that all nodes are ready, as sketched below.
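A minimal sketch of that readiness check, using only kubectl's jsonpath (no external tools):
# Print each node's name and its Ready condition status (True/False/Unknown)
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'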
The command below displays the health of the scheduler, the controller manager, and etcd:
kubectl get cs
The command below lists the pods of Kubernetes core components: etcd, the controller manager, the scheduler, kube-proxy, CoreDNS, and the network plugin. All of those pods should be running for Kubernetes to be considered healthy:
kubectl get pod -n kube-system
Finally, deploy one front-end and one back-end pod and verify inter-pod communication to ensure that the cluster is up and working correctly; a rough sketch follows.
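A hypothetical smoke test along those lines (the image and pod names are illustrative, not from the original answer):
# Run a throwaway backend pod and wait for it to become Ready
kubectl run backend --image=nginx --restart=Never
kubectl wait --for=condition=Ready pod/backend --timeout=60s
# Fetch a page from it out of a throwaway frontend pod, then clean up
kubectl run frontend --image=busybox --restart=Never --rm -it -- \
  wget -qO- http://$(kubectl get pod backend -o jsonpath='{.status.podIP}')
kubectl delete pod backend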
Below are commands to get the cluster status, depending on what you need:
To get information about where your Kubernetes master, CoreDNS, and kubernetes-dashboard are running, use:
kubectl cluster-info
To get detailed information for further debugging and diagnosing cluster problems, use kubectl cluster-info dump.
To get the health status of the core control-plane components, use kubectl get componentstatus or kubectl get cs.
To show detailed information about a node, use kubectl describe node <node>.