How to get creator information about a particular resource from the kubectl get or describe command - kubernetes

I want to know who created a pod, from the kubectl get or describe command.
Can I insert this field into the Pod, or into a Custom Resource Definition?
Please help.
When I access the Kubernetes cluster as user 'Alice' and create the pod 'sample-pod', I want to see the following when I run the 'kubectl get pod sample-pod' command:
NAME         READY   STATUS    RESTARTS   AGE   CREATEUSER
sample-pod   1/1     Running   0          10s   Alice

The information about authenticated Kubernetes users is not part of the Pod core schema in the API reference, so it is not possible to retrieve user data from the Pod specification. What you can do instead is use ServiceAccounts that are granted access to the Kubernetes API via the RBAC mechanism: create a dedicated ServiceAccount in the namespace for each specific user, and have that user create resources with it. Since the ServiceAccount name is propagated through the .spec.serviceAccountName field of the relevant object, you can surface it with the custom-columns= option of kubectl get:
kubectl get pods -o custom-columns=NAME:.metadata.name,SERVICE_ACCOUNT:.spec.serviceAccountName
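For instance, a minimal sketch of that per-user ServiceAccount setup (the names alice and dev and the nginx image are placeholders, only sample-pod comes from the question):
# sketch only: one ServiceAccount per human user, referenced by the Pod
apiVersion: v1
kind: ServiceAccount
metadata:
  name: alice
  namespace: dev
---
apiVersion: v1
kind: Pod
metadata:
  name: sample-pod
  namespace: dev
spec:
  serviceAccountName: alice   # shows up in the SERVICE_ACCOUNT custom column above
  containers:
  - name: app
    image: nginx
With this in place, the custom-columns command above lists sample-pod next to alice, which is as close to a CREATEUSER column as the Pod schema allows.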

kubectl get pods -n <namespace> -o yaml will give you the pod definitions. You can edit them and reuse them for deploying pods.
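For example (the namespace and file name are arbitrary):
# dump the pod definition, edit it, then recreate a pod from it
kubectl get pod sample-pod -n default -o yaml > sample-pod.yaml
# edit sample-pod.yaml, stripping server-generated fields such as resourceVersion, uid and status, then:
kubectl apply -f sample-pod.yaml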

Related

How do you retrieve pod logs by labelSelector when using the k8s HTTP API?

I would like to collect the logs from one or more related pods using a labelSelector and the kubernetes HTTP API. However, I don't see any way to do this without first knowing all the pods names, e.g.
{{baseUrl}}/api/v1/namespaces/:namespace/pods/:name/log?container=<container>&follow=true&insecureSkipTLSVerifyBackend=true&limitBytes=<bytes>&pretty=true&previous=true&sinceSeconds=<seconds>&tailLines=<lines>&timestamps=true
Is this possible, or should I mount a container with kubectl and use that to get the logs I want?
I can get the logs using kubectl like so:
kubectl logs -l job=myjob -n test -c main
I would assume there is a similar way to retrieve logs by labelSelector using the API.
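One possible sketch, not a confirmed API shortcut (the bearer token, API server address, and jq usage are assumptions; the namespace test, label job=myjob, and container main come from the question): the log endpoint addresses a single pod, so list the matching pods first and then fetch each pod's log.
# list pods by labelSelector, then read each pod's log
TOKEN="<bearer token>"
BASE="https://<api-server>"
NS="test"
for POD in $(curl -sk -H "Authorization: Bearer $TOKEN" \
    "$BASE/api/v1/namespaces/$NS/pods?labelSelector=job%3Dmyjob" \
    | jq -r '.items[].metadata.name'); do
  curl -sk -H "Authorization: Bearer $TOKEN" \
    "$BASE/api/v1/namespaces/$NS/pods/$POD/log?container=main"
done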

oc get deployment is returning No resources found

"oc get deployment" command is returning "No resources Found" as the result.
Even if I put an option of assigning or defining the namespace using -n as the option to above command, I am getting the same result.
Whereas, I am getting the correct result of oc get pods command.
Meanwhile, the oc version is
oc - v3.6.0
kubernetes - v1.6.1
openshift - v3.11.380
Check whether you are connected to the correct Kubernetes environment (especially if you're running more than one).
If that is correct, then either you don't have any deployments at all, or the deployments are in a different namespace than you think.
Try listing the deployments in all namespaces:
oc get deployments -A
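To double-check which cluster and project you are actually pointed at, a quick sketch with standard oc commands:
# show the API server you are logged in to and the current project
oc whoami --show-server
oc project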
There are other objects that create pods, such as a StatefulSet or a DaemonSet. Since this is OpenShift, my feeling is that the pods were created by a DeploymentConfig, which is a popular way to create applications there.
In any case, you can check which object owns the pods by looking at the ownerReferences in the pod's metadata. This command should work:
oc get pod <podname> -o yaml | grep ownerReferences -A 6
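If you prefer not to grep, a jsonpath variant of the same check should work too (the pod name is a placeholder):
# prints the kind/name of whatever owns the pod, e.g. ReplicationController/myapp-1
oc get pod <podname> -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'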

How to get another pod's name from its IP?

I have a pod that exposes a port (Server). Other pods (Clients) can communicate with it.
The server can find the remote IP and port on a socket (when a client connects to it). I am looking for a way to get the client's pod name from its IP and port.
I saw a bunch of questions/answers about getting pod names via kubectl. However, I am not sure whether I can do kubectl from within a cluster itself.
I am trying to figure out what is available for something running on the cluster. It's ok if it requires some special privileges. It's more complicated if it requires authentication.
List all the Pods with the List Pods API operation and parse the JSON response for the podIP field (e.g. with jq or some other JSON parsing tool) to find the JSON object of the Pod that has your desired IP address. Then, extract the metadata.name field from this JSON object to get the name of the Pod.
You can do this by either directly using the Kubernetes API (e.g. with curl) or with kubectl (e.g. kubectl get pods -o json | jq ...). In any case, you must include in this request the ServiceAccount token of the ServiceAccount used by the Pod from which you are issuing the request (if you use the Kubernetes API directly, as a Bearer token in the Authorization header, and if you use kubectl with the --token command-line flag).
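Put together, a minimal in-cluster sketch (the client IP value is hypothetical; the token, CA certificate, and namespace paths are the standard in-pod ServiceAccount mounts):
# look up the pod name that owns a given IP, from inside the cluster
CLIENT_IP="10.42.0.17"   # hypothetical IP taken from the accepted socket
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
NS=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
curl -s --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  "https://kubernetes.default.svc/api/v1/namespaces/$NS/pods" \
  | jq -r --arg ip "$CLIENT_IP" '.items[] | select(.status.podIP == $ip) | .metadata.name'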
Regarding authorisation, you need a Role allowing the list verb on the pods resource and a RoleBinding that binds this Role to the ServiceAccount that your Pod is using (by default, Pods use a ServiceAccount named default in their namespace, but you can specify a custom ServiceAccount with the serviceAccountName field of the Pod).
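As a sketch, the Role and RoleBinding described above could look like this (the names and the default namespace/ServiceAccount are placeholders):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader            # hypothetical name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding    # hypothetical name
  namespace: default
subjects:
- kind: ServiceAccount
  name: default               # the ServiceAccount your Pod runs as
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io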
Whatever can be done via kubectl can be done via direct requests to the API server as well. All you need is a proper ServiceAccount set up for your pod. Once you have it, you can use plenty of client libraries dedicated to communicating with the Kubernetes API server.

Openshift containers running in privileged mode

Being absolutely new to OpenShift, I'm curious how I can check whether any of the running containers are running in "privileged" mode (OpenShift v4.6). Digging through the documentation and searching the net, I could only find information about SCCs, which is great and all, but I didn't find anything about this (apart from an older version of OpenShift, where oc get pods, or some similar command, used to show whether a pod was running with such privileges).
By default, pods use the restricted SCC. The pod's SCC is determined by the user/ServiceAccount and/or group. You also have to consider that the ServiceAccount may or may not be bound to a role that grants use of additional SCCs.
To find out what SCC a pod runs under:
oc get pod $POD_NAME -o yaml | grep openshift.io/scc
The following commands can also be useful:
# get pod's SA name
oc get pod $POD_NAME -o yaml | grep serviceAccount:
# list service accounts that can use a particular SCC
oc adm policy who-can use scc privileged
# list users added by the oc adm policy command
oc get scc privileged -o yaml
# check roles and role bindings of your SA
# you need to look at rules.apiGroups: security.openshift.io
oc get rolebindings -o wide
oc get role $ROLE_NAME -o yaml
An OpenShift project comes with three service accounts by default: builder, default, and deployer.
The containers you deploy to that namespace are assigned the "default" service account, which is the one bound to the "restricted" SCC.
You can find more here: https://www.openshift.com/blog/managing-sccs-in-openshift
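To see those defaults for yourself (the project name is a placeholder):
# list the default service accounts in a project
oc get serviceaccounts -n <project>
# inspect the restricted SCC that the default SA falls under
oc describe scc restricted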

Kubectl using command to get cluster status

I need to create a shell script which examines the cluster status.
I saw that kubectl describe nodes provides lots of data.
I can output it as JSON and then parse it, but maybe that's overkill.
Is there a simple way, with a kubectl command, to get the status of the cluster? Just whether it's up or down.
The least expensive way to check whether you can reach the API server is kubectl version. In addition, kubectl cluster-info gives you some more information.
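A minimal sketch of such a script, assuming "up" simply means that kubectl version can reach the API server (it exits non-zero when it cannot):
#!/bin/sh
# prints "cluster up" if the API server responds, otherwise "cluster down"
if kubectl version >/dev/null 2>&1; then
  echo "cluster up"
else
  echo "cluster down"
  exit 1
fi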
In addition to Michael's answer: that only tells you about the API server or master and internal services like KubeDNS, but not about the nodes.
It depends on your need and your definition of "status" here. You could run kubectl cluster-info followed by kubectl get nodes and check the STATUS column for all nodes, using parsing tools like awk, jq, or kubectl's own -o jsonpath option, to verify that all nodes are ready.
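For example, a jsonpath sketch that prints each node's name next to its Ready condition:
# prints "<node>   True" for every node whose Ready condition is satisfied
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'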
The command below displays the health of the scheduler, the controller manager, and etcd:
kubectl get cs
The command below lists Kubernetes core components such as etcd, the controller manager, the scheduler, kube-proxy, CoreDNS, and the network plugin. All of those pods should be running for Kubernetes to be considered healthy.
kubectl get pod -n kube-system
Finally, deploy one front-end and one back-end Pod and verify inter-pod communication to ensure the cluster is up and working correctly, as sketched below.
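A throwaway sketch of that last check (the image names and the backend name are arbitrary):
# start a backend pod and expose it inside the cluster
kubectl run backend --image=nginx --port=80
kubectl expose pod backend --port=80
# run a one-off client pod and fetch the backend's default page
kubectl run frontend --image=busybox --rm -it --restart=Never -- wget -qO- http://backend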
Below are the commands to get cluster status based on requirements:
To see where your Kubernetes master, CoreDNS, and kubernetes-dashboard are running, use
kubectl cluster-info
To get detailed information to further debug and diagnose cluster problem, use kubectl cluster-info dump
To get only the health status of the control plane components, use kubectl get componentstatuses or kubectl get cs
To show detailed information about a node, use kubectl describe node <node>