How to obtain the list of enabled admission controllers in Kubernetes?

AFAIK, admission controllers are the last step before an object is persisted to the database (etcd).
However, I cannot tell which ones are enabled. Is there a way to find out which admission controllers are in effect?
Thanks.

The kube-apiserver runs in your kube-apiserver-<example.com> container.
The API server does not currently expose a GET endpoint that lists the enabled admission plugins, but you can read them from its startup parameters on the command line:
kubectl -n kube-system describe po kube-apiserver-example.com
Another way is to look inside the container itself: unfortunately there is no ps command in it, but you can read the command-line arguments of the initial process from /proc, for example:
kubectl -n kube-system exec kube-apiserver-example.com -- sed 's/--/\n/g' /proc/1/cmdline
The output will probably contain something like:
enable-admission-plugins=NodeRestriction
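If you only need the admission-related flags, you can also just filter the describe output from above; a minimal convenience sketch (same example pod name as before):
kubectl -n kube-system describe po kube-apiserver-example.com | grep admission-plugins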

There isn't an admission controller object exposed directly through kubectl.
To get a list of admission controllers (webhook configurations), you have to hit the Kubernetes API directly, using the API version supported by your installation:
kubectl get --raw /apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations | jq
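The same webhook configurations are also exposed as regular typed resources, so an equivalent shorthand (if you prefer not to use --raw) is:
kubectl get validatingwebhookconfigurations -o json | jq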
In our environment, we run Open Policy Agent as an admission controller, and we can see its webhook object like so:
kubectl get --raw /apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations | jq '.items[] | select(.metadata.name=="open-policy-agent-latest-helm-opa")'
Which outputs the JSON object:
{
  "metadata": {
    "name": "open-policy-agent-latest-helm-opa",
    "selfLink": "/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/open-policy-agent-latest-helm-opa",
    "uid": "02139b9e-b282-4ef9-8017-d698bb13882c",
    "resourceVersion": "150373119",
    "generation": 93,
    "creationTimestamp": "2021-03-18T06:22:54Z",
    "labels": {
      "app": "open-policy-agent-latest-helm-opa",
      "app.kubernetes.io/managed-by": "Helm",
      "chart": "opa-1.14.6",
      "heritage": "Helm",
      "release": "open-policy-agent-latest-helm-opa"
    },
    "annotations": {
      "meta.helm.sh/release-name": "open-policy-agent-latest-helm-opa",
      "meta.helm.sh/release-namespace": "open-policy-agent-latest"
    },
    "managedFields": [
      {
        "manager": "Go-http-client",
        "operation": "Update",
        "apiVersion": "admissionregistration.k8s.io/v1beta1",
        "time": "2021-03-18T06:22:54Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:metadata": {
            "f:annotations": {
              ".": {},
              "f:meta.helm.sh/release-name": {},
              "f:meta.helm.sh/release-namespace": {}
            },
            "f:labels": {
              ".": {},
              "f:app": {},
              "f:app.kubernetes.io/managed-by": {},
              "f:chart": {},
              "f:heritage": {},
              "f:release": {}
            }
          },
          "f:webhooks": {
            ".": {},
            "k:{\"name\":\"webhook.openpolicyagent.org\"}": {
              ".": {},
              "f:admissionReviewVersions": {},
              "f:clientConfig": {
                ".": {},
                "f:caBundle": {},
                "f:service": {
                  ".": {},
                  "f:name": {},
                  "f:namespace": {},
                  "f:port": {}
                }
              },
              "f:failurePolicy": {},
              "f:matchPolicy": {},
              "f:name": {},
              "f:namespaceSelector": {
                ".": {},
                "f:matchExpressions": {}
              },
              "f:objectSelector": {},
              "f:rules": {},
              "f:sideEffects": {},
              "f:timeoutSeconds": {}
            }
          }
        }
      }
    ]
  },
  "webhooks": [
    {
      "name": "webhook.openpolicyagent.org",
      "clientConfig": {
        "service": {
          "namespace": "open-policy-agent-latest",
          "name": "open-policy-agent-latest-helm-opa",
          "port": 443
        },
        "caBundle": "LS0BLAH="
      },
      "rules": [
        {
          "operations": [
            "*"
          ],
          "apiGroups": [
            "*"
          ],
          "apiVersions": [
            "*"
          ],
          "resources": [
            "namespaces"
          ],
          "scope": "*"
        }
      ],
      "failurePolicy": "Ignore",
      "matchPolicy": "Exact",
      "namespaceSelector": {
        "matchExpressions": [
          {
            "key": "openpolicyagent.org/webhook",
            "operator": "NotIn",
            "values": [
              "ignore"
            ]
          }
        ]
      },
      "objectSelector": {},
      "sideEffects": "Unknown",
      "timeoutSeconds": 20,
      "admissionReviewVersions": [
        "v1beta1"
      ]
    }
  ]
}
From the output above you can see the clientConfig service endpoint in the cluster, which is where the admission payload is sent. Tail the logs of the pods that serve that endpoint and you'll see your admission requests being processed.
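For example, to tail the OPA pods behind that service (the namespace comes from the clientConfig above; the label selector is an assumption based on the Helm release name, so adjust it to whatever labels your webhook pods actually carry):
kubectl -n open-policy-agent-latest logs -l app=open-policy-agent-latest-helm-opa -f --tail=50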
To get mutating webhooks, hit the version of the API of interest again:
# get v1 mutating webhook configurations
kubectl get --raw /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations | jq

This is the official explanation:
https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#which-plugins-are-enabled-by-default
Note: you can get this output by exec'ing into the kube-apiserver container and asking the binary for its help text; the help for --enable-admission-plugins also lists the plugins that are enabled by default:
kubectl exec -it kube-apiserver-your-machine-name -n kube-system -- kube-apiserver -h | grep enable-admission-plugins

You may find the list of default enabled admission controllers in doc:
https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/#options, search for "--enable-admission-plugins";
or equivalently in code:
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubeapiserver/options/plugins.go#L131-L145
For customized ones, you can run this command on any master (control-plane) node:
cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep -E "(enable|disable)-admission-plugins"
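The same check without the extra cat, plus an illustrative result (the exact flags depend on how the cluster was provisioned, so treat the output line as an example only):
grep -E "(enable|disable)-admission-plugins" /etc/kubernetes/manifests/kube-apiserver.yaml
# e.g. on a default kubeadm cluster:
#     - --enable-admission-plugins=NodeRestriction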

ImagePolicyWebhook uses a configuration file to set options for the behavior of the backend.
Create one of the example pods by running kubectl create -f examples/<name>.yaml. You can then verify the user ID the pod ran as by inspecting its logs, for example:
$ kubectl create -f examples/pod-with-defaults.yaml
$ kubectl logs pod-with-defaults

Not sure why it was not stated before, but it's even in the kubernetes docs:
kubectl exec -it kube-apiserver-<your-machine-name> -n kube-system -- kube-apiserver -h | grep enable-admission-plugins
It does exactly what you want.

Related

Istio -- Delete istio-control-plane Process Is Frozen

I was trying to uninstall and reinstall Istio from a k8s cluster following the uninstall steps.
But I made the mistake of deleting the namespace before deleting the istio-control-plane resource (kubectl delete istiooperator istio-control-plane -n istio-system). Now, when I try to delete the istio-control-plane again, it hangs.
I tried to remove the finalizer using the following steps, but it said Error from server (NotFound): istiooperators.install.istio.io "istio-control-plane" not found:
kubectl get istiooperator -n istio-system -o json > output.json
nano output.json # and remove finalizer
kubectl replace --raw "/apis/install.istio.io/v1alpha1/namespaces/istio-system/istiooperators/istio-control-plane/finalize" -f output.json
Here is the content of kubectl get istiooperator -n istio-system -o json:
{
  "apiVersion": "v1",
  "items": [
    {
      "apiVersion": "install.istio.io/v1alpha1",
      "kind": "IstioOperator",
      "metadata": {
        "annotations": {
          "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"install.istio.io/v1alpha1\",\"kind\":\"IstioOperator\",\"metadata\":{\"annotations\":{},\"name\":\"istio-control-plane\",\"namespace\":\"istio-system\"},\"spec\":{\"addonComponents\":{\"prometheus\":{\"enabled\":false},\"tracing\":{\"enabled\":false}},\"hub\":\"hub.docker.prod.walmart.com/istio\",\"profile\":\"default\",\"values\":{\"global\":{\"defaultNodeSelector\":{\"beta.kubernetes.io/os\":\"linux\"}}}}}\n"
        },
        "creationTimestamp": "2020-12-05T23:39:34Z",
        "deletionGracePeriodSeconds": 0,
        "deletionTimestamp": "2020-12-07T16:41:41Z",
        "finalizers": [
        ],
        "generation": 2,
        "name": "istio-control-plane",
        "namespace": "istio-system",
        "resourceVersion": "11750055",
        "selfLink": "/apis/install.istio.io/v1alpha1/namespaces/istio-system/istiooperators/istio-control-plane",
        "uid": "fda8ee4f-54e7-45e8-91ec-c328fad1a86f"
      },
      "spec": {
        "addonComponents": {
          "prometheus": {
            "enabled": false
          },
          "tracing": {
            "enabled": false
          }
        },
        "hub": "hub.docker.prod.walmart.com/istio",
        "profile": "default",
        "values": {
          "global": {
            "defaultNodeSelector": {
              "beta.kubernetes.io/os": "linux"
            }
          }
        }
      },
      "status": {
        "componentStatus": {
          "Base": {
            "status": "HEALTHY"
          },
          "IngressGateways": {
            "status": "HEALTHY"
          },
          "Pilot": {
            "status": "HEALTHY"
          }
        },
        "status": "HEALTHY"
      }
    }
  ],
  "kind": "List",
  "metadata": {
    "resourceVersion": "",
    "selfLink": ""
  }
}
Any ideas on how I can uninstall istio-control-plane manually?
You can use the command below to clear the IstioOperator finalizer so the object can be deleted; it's a jq/kubectl one-liner made by @Rico here. I also tried kubectl patch, but it didn't work.
kubectl get istiooperator -n istio-system istio-control-plane -o=json | \
jq '.metadata.finalizers = null' | kubectl apply -f -
Additionally, I used istioctl operator remove:
istioctl operator remove
Removing Istio operator...
Removed Deployment:istio-operator:istio-operator.
Removed Service:istio-operator:istio-operator.
Removed ServiceAccount:istio-operator:istio-operator.
Removed ClusterRole::istio-operator.
Removed ClusterRoleBinding::istio-operator.
✔ Removal complete
Results from kubectl get
kubectl get istiooperator istio-control-plane -n istio-system
Error from server (NotFound): namespaces "istio-system" not found

How to mount volume inside pod using "kubectl" CLI

I want to create a pod using the kubectl CLI that mounts the hostPath /etc/os-release inside the pod's container and displays the content of the /etc/os-release file.
Is it possible to do this with a one-liner kubectl command?
kubectl run -i --rm busybox --image=busybox --overrides='{
  "apiVersion": "v1",
  "spec": {
    "containers": [
      {
        "image": "busybox",
        "name": "busybox",
        "command": ["cat", "/etc/os-release"],
        "resources": {},
        "volumeMounts": [
          {
            "mountPath": "/etc/os-release",
            "name": "release"
          }
        ]
      }
    ],
    "volumes": [
      {
        "name": "release",
        "hostPath": {
          "path": "/etc/os-release",
          "type": "File"
        }
      }
    ],
    "dnsPolicy": "ClusterFirst",
    "restartPolicy": "Never"
  },
  "status": {}
}'
NAME=Buildroot
VERSION=2019.02.10
ID=buildroot
VERSION_ID=2019.02.10
PRETTY_NAME="Buildroot 2019.02.10"
pod "busybox" deleted

Telepresence fails, saying my namespace doesn't exist, pointing to problems with my k8s context

I've been working with a bunch of k8s clusters for a while, using kubectl from the command line to examine information. I don't actually call kubectl directly, I wrap it in multiple scripting layers. I also don't use contexts, as it's much easier for me to specify different clusters in a different way. The resulting kubectl command line has explicit --server, --namespace, and --token parameters (and one other flag to disable tls verify).
This all works fine. I have no trouble with this.
However, I'm now trying to use telepresence, which doesn't give me a choice (yet) of not using contexts to configure this. So, I now have to figure out how to use contexts.
I ran the following (approximate) command:
kubectl config set-context mycontext --server=https://host:port --namespace=abc-def-ghi --insecure-skip-tls-verify=true --token=mytoken
And it said: "Context "mycontext " modified."
I then ran "kubectl config view -o json" and got this:
{
  "kind": "Config",
  "apiVersion": "v1",
  "preferences": {},
  "clusters": [],
  "users": [],
  "contexts": [
    {
      "name": "mycontext",
      "context": {
        "cluster": "",
        "user": "",
        "namespace": "abc-def-ghi"
      }
    }
  ],
  "current-context": "mycontext"
}
That doesn't look right to me.
I then ran something like this:
telepresence --verbose --swap-deployment mydeployment --expose 8080 --run java -jar target/my.jar -Xdebug -Xrunjdwp:transport=dt_socket,address=5000,server=y,suspend=n
And it said this:
T: Error: Namespace 'abc-def-ghi' does not exist
Update:
And I can confirm that this isn't a problem with telepresence. If I just run "kubectl get pods", it fails, saying "The connection to the server localhost:8080 was refused". That tells me it obviously can't connect to the k8s server. The key is my "set-context" command. It's obviously not working, and I don't understand what I'm missing.
You don't have any clusters or credentials defined in your configuration. First, you need to define a cluster:
$ kubectl config set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file
Then something like this for the user:
$ kubectl config set-credentials developer --client-certificate=fake-cert-file --client-key=fake-key-seefile
Then you define your context based on your cluster, user and namespace:
$ kubectl config set-context dev-frontend --cluster=development --namespace=frontend --user=developer
More information here
Your config should look something like this:
$ kubectl config view -o json
{
  "kind": "Config",
  "apiVersion": "v1",
  "preferences": {},
  "clusters": [
    {
      "name": "development",
      "cluster": {
        "server": "https://1.2.3.4",
        "certificate-authority-data": "DATA+OMITTED"
      }
    }
  ],
  "users": [
    {
      "name": "developer",
      "user": {
        "client-certificate": "fake-cert-file",
        "client-key": "fake-key-seefile"
      }
    }
  ],
  "contexts": [
    {
      "name": "dev-frontend",
      "context": {
        "cluster": "development",
        "user": "developer",
        "namespace": "frontend"
      }
    }
  ],
  "current-context": "dev-frontend"
}
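Since the question connects with a bearer token rather than client certificates, the user entry can equally be defined with a token; a minimal sketch using the values from the question (all of them placeholders):
kubectl config set-cluster mycluster --server=https://host:port --insecure-skip-tls-verify=true
kubectl config set-credentials myuser --token=mytoken
kubectl config set-context mycontext --cluster=mycluster --user=myuser --namespace=abc-def-ghi
kubectl config use-context mycontext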

kubernetes - volume mapping via command

I need to map a volume while starting the container; I am able to do so with a YAML file.
Is there a way volume mapping can be done via the command line without using a YAML file, just like the -v option in Docker?
"without using yaml file"
Technically, yes: you would need JSON (passed via --overrides), as illustrated in "Create kubernetes pod with volume using kubectl run".
See kubectl run.
kubectl run -i --rm --tty ubuntu --overrides='
{
  "apiVersion": "batch/v1",
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "ubuntu",
            "image": "ubuntu:14.04",
            "args": [
              "bash"
            ],
            "stdin": true,
            "stdinOnce": true,
            "tty": true,
            "volumeMounts": [{
              "mountPath": "/home/store",
              "name": "store"
            }]
          }
        ],
        "volumes": [{
          "name": "store",
          "emptyDir": {}
        }]
      }
    }
  }
}
' --image=ubuntu:14.04 --restart=Never -- bash
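Note that this override targets the old kubectl run behaviour that generated a controller with a pod template. Current kubectl versions create a bare Pod, so the override uses the Pod schema directly (apiVersion v1 and a top-level spec), as in the busybox example earlier; a minimal, untested sketch:
kubectl run -i --rm --tty ubuntu --image=ubuntu:22.04 --restart=Never --overrides='
{
  "apiVersion": "v1",
  "spec": {
    "containers": [
      {
        "name": "ubuntu",
        "image": "ubuntu:22.04",
        "args": ["bash"],
        "stdin": true,
        "stdinOnce": true,
        "tty": true,
        "volumeMounts": [{"mountPath": "/home/store", "name": "store"}]
      }
    ],
    "volumes": [{"name": "store", "emptyDir": {}}]
  }
}
'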

Call a service from any POD

I would like to know how to call a service from any pod, inside or outside the node.
I have 3 nodes with deployments and services. I already have kube-proxy running.
I exec bash in another pod:
kubectl exec --namespace=develop myotherdpod-78c6bfd876-6zvh2 -i -t -- /bin/bash
And inside that pod I tried to run curl:
curl -v http://myservice.develop.svc.cluster.local/user
This is my created service:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "myservice",
    "namespace": "develop",
    "selfLink": "/api/v1/namespaces/develop/services/mydeployment-svc",
    "uid": "1b5fb4ae-ecd1-11e7-8599-02cc6a4bf8be",
    "resourceVersion": "10660278",
    "creationTimestamp": "2017-12-29T19:47:30Z",
    "labels": {
      "app": "mydeployment-deployment"
    }
  },
  "spec": {
    "ports": [
      {
        "name": "http",
        "protocol": "TCP",
        "port": 80,
        "targetPort": 8080
      }
    ],
    "selector": {
      "app": "mydeployment-deployment"
    },
    "clusterIP": "100.99.99.140",
    "type": "ClusterIP",
    "sessionAffinity": "None"
  },
  "status": {
    "loadBalancer": {}
  }
}
It looks to me like something may be incorrect with the network overlay you deployed. First of all, I would double-check that the pod can reach kube-dns and resolve the proper IP of the service.
nslookup myservice.develop.svc.cluster.local
nslookup myservice # If they are in the same namespace it should work as well
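If the application image doesn't ship nslookup, a throwaway debug pod can be used instead (a sketch; busybox's nslookup is limited but usually enough for this check):
kubectl run -i --rm --restart=Never dnstest --image=busybox -- nslookup myservice.develop.svc.cluster.local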
If you are able to confirm that, I would then check whether services like kube-proxy are working correctly. You can do that with:
systemctl status kube-proxy
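systemctl only applies if kube-proxy runs as a systemd unit; on many clusters (e.g. kubeadm-based ones) it runs as a DaemonSet instead, in which case something like the following is the equivalent check (the k8s-app=kube-proxy label is the kubeadm convention and may differ in your setup):
kubectl -n kube-system get pods -l k8s-app=kube-proxy
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=20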
If that does not work, I would also check the pods of the overlay network by executing:
kubectl get pods --namespace=kube-system
If they are all ok, I would try using a different network overlay: https://kubernetes.io/docs/concepts/cluster-administration/networking/
If that does not work either, I would check whether there are firewall rules preventing communication between the nodes.