Openshift containers running in privileged mode - kubernetes

Being absolutely new to OpenShift, I'm curious how I can check whether any of the running containers are running in "privileged" mode (OpenShift v4.6). Digging through the documentation and searching the net, I could only find information regarding SCCs, which is great and all, but I didn't find anything about this (apart from an older version of OpenShift, where oc get pods, or some similar command, used to show whether a pod was running with such privileges).

By default, pods use the restricted SCC. The pod's SCC is determined by the user/service account and/or group. You also have to consider that an SA may or may not be bound to a Role, which can set the list of available SCCs.
To find out what SCC a pod runs under:
oc get pod $POD_NAME -o yaml | grep openshift.io/scc
The following commands can also be useful:
# get pod's SA name
oc get pod $POD_NAME -o yaml | grep serviceAccount:
# list service accounts that can use a particular SCC
oc adm policy who-can use scc privileged
# list users added by the oc adm policy command
oc get scc privileged -o yaml
# check roles and role bindings of your SA
# you need to look at rules.apiGroups: security.openshift.io
oc get rolebindings -o wide
oc get role $ROLE_NAME -o yaml
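To sweep the whole cluster rather than one pod, here is a sketch (it relies on the openshift.io/scc annotation and standard jsonpath; verify against your cluster version):
# list every pod together with its SCC annotation
oc get pods -A -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,SCC:.metadata.annotations.openshift\.io/scc'
# list pods whose containers explicitly set privileged: true
oc get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{" "}{.spec.containers[*].securityContext.privileged}{"\n"}{end}' | grep true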

An OpenShift project comes with 3 service accounts by default: builder, default, and deployer.
The containers you deploy to that namespace are assigned the "default" service account, which is the one bound to the "restricted" SCC.
You can find more here: https://www.openshift.com/blog/managing-sccs-in-openshift
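For example (the project name "myproject" is just a placeholder), you can confirm this with:
# list the service accounts a fresh project gets by default
oc get serviceaccounts -n myproject
# inspect what the restricted SCC actually allows
oc describe scc restricted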

oc get deployment is returning No resources found

"oc get deployment" command is returning "No resources Found" as the result.
Even if I put an option of assigning or defining the namespace using -n as the option to above command, I am getting the same result.
Whereas, I am getting the correct result of oc get pods command.
Meanwhile, the oc version is
oc - v3.6.0
kubernetes - v1.6.1
openshift - v3.11.380
Check that you are connected to the correct Kubernetes environment (especially if you're running more than one).
If that is correct, then I guess either you don't have any deployments at all, or the deployments are in a different namespace than you think.
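A quick way to verify the active context (standard kubectl config subcommands, which oc supports as well):
# show which cluster/context you are currently talking to
oc config current-context
# list all configured contexts
oc config get-contexts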
Try out listing all deployments:
oc get deployments -A
There are other objects that create pods, such as StatefulSets or DaemonSets. Because this is OpenShift, my feeling is that the pods were created by a DeploymentConfig, which is a popular way to create applications there.
Anyway, you can find out which object owns the pods by looking at the pod's ownerReferences metadata. This command should work:
oc get pod <podname> -o yaml | grep -A 6 ownerReferences
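For context: on OpenShift a DeploymentConfig creates a ReplicationController, which in turn owns the pods. So if the owner reference points at a ReplicationController, check for DeploymentConfigs (a sketch, assuming the standard DeploymentConfig workflow):
# list DeploymentConfigs across all namespaces
oc get deploymentconfigs -A
# or the short form
oc get dc -A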

How to describe the entire cluster (nodes running and basic information for individual nodes, which we get with kubectl describe nodes) in Kubernetes maintenance?

kubectl describe nodes?
Likewise, do we have any command like the one below to describe cluster information?
kubectl describe cluster
"Kubectl describe <api-resource_type> <api_resource_name> "command is used to describe a specific resources running in your kubernetes cluster, Actually you need to verify different components separately as a developer to check your pods, nodes services and other tools that you have applied/created.
If you are the cluster administrator and you are asking about useful commands to check the actual kube-system configuration, it depends on your k8s cluster type. For example, if you used the "kubeadm" package to initialize the k8s cluster on premises, you can check and change the default cluster configuration using this command:
kubeadm config print init-defaults
After initializing your cluster, all main server configuration files, a.k.a. manifests, are located in /etc/kubernetes/manifests (and they are watched in real time: change anything and the cluster will redeploy it automatically).
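For illustration, on a kubeadm control-plane node (assuming you have shell access to the node) the static pod manifests typically look like this:
# static pod manifests picked up by the kubelet
ls /etc/kubernetes/manifests
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml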
Useful kubectl commands:
For cluster info (API server and DNS endpoints), run:
kubectl cluster-info
Either way, you can list all api-resources and check them one by one using these commands:
kubectl api-resources (list all api-resource names and types)
kubectl get <api_resource_name> (specific to your cluster)
kubectl explain <api_resource_name> (explain the resource object, with a docs link)
For extra info, you can add specific flags, for example:
kubectl get nodes -o wide
kubectl get pods -n <specific-name-space> -o wide
kubectl describe pods <pod_name>
...
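For a rough cluster-level health check you can also query the API server's health endpoints directly (a sketch; the /readyz endpoint is available on reasonably recent Kubernetes versions):
# verbose readiness report from the API server
kubectl get --raw='/readyz?verbose'
# deprecated, but still shows control-plane component status on older clusters
kubectl get componentstatuses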
For more information about the kubectl command line, check the kubectl cheat sheet.

Is there a way to list all resources created by a specific operator and their status?

I use config connector https://cloud.google.com/config-connector/docs/overview
I create gcp resources with CRDs that config connector provides:
kind: IAMServiceAccount
kind: StorageBucket
etc
Now what I'd really like is a simple list of each resource and its status (whether it was created successfully or not), where each resource is a single line: kind, name, status, etc.
Is there a way with kubectl to get a list of all resources that were created by an operator like this? I suppose I could manually label all these resources and select with a label, but I really don't want to do that.
Edit
Per the comment, I could do this, but I'm curious whether there is a less unwieldy command:
kubectl get crds --selector cnrm.cloud.google.com/managed-by-kcc=true \
-o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | xargs -n 1 \
kubectl get -Ao jsonpath='{range .items[*]}{"Kind: "}{.kind}{" Name: "}{.metadata.name}{" Status: "}{.status.conditions[0].status}{" Reason: "}{.status.conditions[0].reason}{"\n"}{end}' --ignore-not-found
I've done a bit of research on this topic and found 2 possible solutions to retrieve all the resources that were created by config-connector:
$ kubectl api-resources way
$ kubectl get-all/ketall way with labels (please see the explanation as it's not installed by default)
The discussion that is referencing similar issue can be found here:
Github.com: Kubernetes: kubectl: Issue 151
$ kubectl api-resources
As pointed out in the comment I made, you can use the following expression:
kubectl get crds --selector cnrm.cloud.google.com/managed-by-kcc=true -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | xargs -n 1 kubectl get --ignore-not-found
Dissecting this solution:
kubectl get crds --selector cnrm.cloud.google.com/managed-by-kcc=true
retrieve the Custom Resource Definitions that have a matching selector
-o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
use the jsonpath to retrieve only the value stored in .metadata.name key (get the name of the crd)
| xargs -n 1 kubectl get
pipe the output to xargs and use each CRD retrieved by the previous command to run $ kubectl get <RESOURCE>
--ignore-not-found
do not display a message about missing resource
This command could also be altered to suit the specific needs as it's shown in the question.
A side note!
A similar command is referenced in the GitHub link I pasted above:
Github.com: Kubernetes: kubectl: Issues 151: Comment 402003022
$ kubectl get-all/ketall
The above commands can be used to retrieve all of the resources in the cluster. They are not part of default kubectl and need additional configuration.
More reference about the installation can be found in this github page:
Github.com: Corneliusweig: Ketall
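If you already use krew, the plugin is distributed under the name get-all (this assumes krew is set up; see the ketall README for other install methods):
kubectl krew install get-all
kubectl get-all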
Using the approach described in the official Kubernetes documentation:
Labels are intended to be used to specify identifying attributes of objects
Kubernetes.io: Docs: Concepts: Overview: Working with objects: Labels
You can label the resources created by config connector (I know that you would like to avoid it) and look for these resources like:
$ kubectl get-all -l look=here
NAME                                                                        NAMESPACE          AGE
storagebucket.storage.cnrm.cloud.google.com/config-connector-bucket        config-connector   135m
storagebucket.storage.cnrm.cloud.google.com/config-connector-bucket-test   config-connector   13s
These resources have the label look=here (.metadata.labels) added to their definitions.
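A minimal sketch of how such a label could be attached in the first place (the bucket name and label value are just the ones from the example output above):
# attach the label to an existing config-connector resource
kubectl label storagebucket config-connector-bucket look=here -n config-connector
# then select on it
kubectl get-all -l look=here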
Additional resources:
Cloud.google.com: Config Connector: Docs: How to: Getting Started
Thenewstack.io: Tutorial use google config connector to manage a gcp cloud sql database
There is also a way suggested in the GCP config-connector docs:
kubectl get gcp
from https://cloud.google.com/config-connector/docs/how-to/monitoring-your-resources#listing_all_resources

How to get creator information about particular resource from kubectl get or describe command

I want to know who created a pod, from the kubectl get or describe command.
Can I insert this field into the Pod, or into a Custom Resource Definition?
Please help.
When I access the Kubernetes cluster as user 'Alice' and create pod 'sample-pod', I want to see the following when I run 'kubectl get pod sample-pod':
NAME         READY   STATUS    RESTARTS   AGE   CREATEUSER
sample-pod   1/1     Running   0          10s   Alice
I assume that information about the authenticated Kubernetes user is not included in the general API reference for the Pod core schema, so it's not possible to retrieve user data from the Pod specification. However, you might try to use service accounts granted access to the Kubernetes API via the RBAC mechanism: create a dedicated SA in the namespace for each specific user and use it for all Kubernetes resource creation. Since the SA name is propagated through the .spec.serviceAccountName field of the relevant object, you can surface it using the custom-columns= option of kubectl get:
kubectl get pods -o custom-columns=NAME:.metadata.name,SERVICE_ACCOUNT:.spec.serviceAccountName
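As a sketch of the per-user service account idea (the names "alice" and "sample-pod" are made up for illustration):
# create a service account representing the user
kubectl create serviceaccount alice
# run a pod under that service account
kubectl run sample-pod --image=nginx --overrides='{"spec":{"serviceAccountName":"alice"}}'
# the service account then shows up in the custom-columns listing above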
kubectl get pod <podname> -n <namespace> -o yaml will give you the pod definition. You can edit it and use it for deploying pods.

Kubernetes ABAC Policies for Groups and Users?

Currently, I have an ABAC policy that gives "system:authenticated" all access. K8s starts up fine when I have this defined, but if I remove it, K8s doesn't start up. I'm trying to find out what namespaces, service accounts, groups, users, etc. are being used on my K8s cluster so I can define a specific set of users/groups in the ABAC policy.
How can I get the groups and users in the K8s cluster? I'm using
"kubectl --namespace=kube-system get serviceaccounts"
to get the serviceaccounts... but where are the groups and users defined?
For Groups you might try (an example for "system:masters"):
kubectl get clusterrolebindings -o json | jq -r '.items[] | select(.subjects[0].kind=="Group") | select(.subjects[0].name=="system:masters") | .metadata.name'
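A variant of the same idea (a sketch using jq) that lists every distinct group referenced by any ClusterRoleBinding instead of filtering on a single name:
kubectl get clusterrolebindings -o json | jq -r '.items[].subjects[]? | select(.kind=="Group") | .name' | sort -u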
Also, you can read across all namespaces at once by adding --all-namespaces=true to the kubectl command.
You should also check all local files for policies that might be applied.
Here is the Kubernetes documentation regarding Using ABAC Authorization.
As for users, I was only able to find a way of checking whether a particular user is able, for example, to create a deployment in a namespace:
$ kubectl auth can-i create deployments --namespace dev
yes
$ kubectl auth can-i create deployments --namespace prod
no
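You can also enumerate everything a user is allowed to do; the user name "alice" below is just a placeholder, and impersonation (--as) requires impersonation rights:
# list all permitted verbs/resources for the current user in a namespace
kubectl auth can-i --list --namespace dev
# check a specific permission on behalf of another user
kubectl auth can-i create deployments --namespace dev --as alice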