How to retrieve the API server URL with kubeadm? - kubernetes

Is there a kubectl command which returns the API server URL?
What I want is the code to put in place of the ... below:
API_SERVER_URL=$(kubectl ...)
echo $API_SERVER_URL
http://<API_SERVER_IP>:<API_SERVER_PORT>
API_SERVER_IP should be the very same as the one in my .kube/config.

Try this:
kubectl proxy --port=8090 &
curl http://localhost:8090/api/
This returns something like this:
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "10.165.39.165:16443"
    }
  ]
}
Without proxying you can use:
https://10.165.39.165:16443/api
but you need to pass authorization in the request.
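For example, here is a minimal sketch using a service account token; the token path is the standard in-pod mount, and -k (skipping TLS verification) is only for illustration, prefer --cacert with the cluster CA:
# read the service account token mounted into a pod (standard path)
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# call the API server directly, passing the token as a Bearer header
curl -k -H "Authorization: Bearer $TOKEN" https://10.165.39.165:16443/api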
In the response you see the array with the supported versions.
From here you can inspect each version and see which resources are available under it.
curl http://localhost:8090/api/v1
{
  "kind": "APIResourceList",
  "groupVersion": "v1",
  "resources": [
    ....
      "shortNames": [
        "cs"
      ]
    },
    {
      "name": "configmaps",
      "singularName": "",
      "namespaced": true,
      "kind": "ConfigMap",
      "verbs": [
        "create",
        "delete",
        "deletecollection",
        "get",
        "list",
        "patch",
        "update",
        "watch"
      ],
      "shortNames": [
        "cm"
      ],
    .....

This one works fine:
PROTOCOL=$(kubectl get endpoints -n default kubernetes -o=jsonpath="{.subsets[0].ports[0].name}")
IP=$(kubectl get endpoints -n default kubernetes -o=jsonpath="{.subsets[0].addresses[0].ip}")
PORT=$(kubectl get endpoints -n default kubernetes -o=jsonpath="{.subsets[0].ports[0].port}")
API_SERVER_URL="${PROTOCOL}://${IP}:${PORT}"
and its result is something like this:
echo $API_SERVER_URL
https://172.18.0.3:6443
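Alternatively, since API_SERVER_IP should match .kube/config anyway, you can read the server field straight from the kubeconfig of the current context (a standard approach, though not kubeadm-specific):
# print the API server URL of the current context from the kubeconfig
API_SERVER_URL=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
echo $API_SERVER_URL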

Related

Istio -- Delete istio-control-plane Process Is Frozen

I was trying to uninstall and reinstall Istio from a k8s cluster following the steps:
But I made a mistake: I deleted the namespace before deleting the istio-control-plane with kubectl delete istiooperator istio-control-plane -n istio-system. Then when I tried to delete the istio-control-plane again, it froze.
I tried to remove the finalizer using the following steps, but it said Error from server (NotFound): istiooperators.install.istio.io "istio-control-plane" not found
kubectl get istiooperator -n istio-system -o json > output.json
nano output.json # and remove finalizer
kubectl replace --raw "/apis/install.istio.io/v1alpha1/namespaces/istio-system/istiooperators/istio-control-plane/finalize" -f output.json
Here is the content of kubectl get istiooperator -n istio-system -o json:
{
  "apiVersion": "v1",
  "items": [
    {
      "apiVersion": "install.istio.io/v1alpha1",
      "kind": "IstioOperator",
      "metadata": {
        "annotations": {
          "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"install.istio.io/v1alpha1\",\"kind\":\"IstioOperator\",\"metadata\":{\"annotations\":{},\"name\":\"istio-control-plane\",\"namespace\":\"istio-system\"},\"spec\":{\"addonComponents\":{\"prometheus\":{\"enabled\":false},\"tracing\":{\"enabled\":false}},\"hub\":\"hub.docker.prod.walmart.com/istio\",\"profile\":\"default\",\"values\":{\"global\":{\"defaultNodeSelector\":{\"beta.kubernetes.io/os\":\"linux\"}}}}}\n"
        },
        "creationTimestamp": "2020-12-05T23:39:34Z",
        "deletionGracePeriodSeconds": 0,
        "deletionTimestamp": "2020-12-07T16:41:41Z",
        "finalizers": [
        ],
        "generation": 2,
        "name": "istio-control-plane",
        "namespace": "istio-system",
        "resourceVersion": "11750055",
        "selfLink": "/apis/install.istio.io/v1alpha1/namespaces/istio-system/istiooperators/istio-control-plane",
        "uid": "fda8ee4f-54e7-45e8-91ec-c328fad1a86f"
      },
      "spec": {
        "addonComponents": {
          "prometheus": {
            "enabled": false
          },
          "tracing": {
            "enabled": false
          }
        },
        "hub": "hub.docker.prod.walmart.com/istio",
        "profile": "default",
        "values": {
          "global": {
            "defaultNodeSelector": {
              "beta.kubernetes.io/os": "linux"
            }
          }
        }
      },
      "status": {
        "componentStatus": {
          "Base": {
            "status": "HEALTHY"
          },
          "IngressGateways": {
            "status": "HEALTHY"
          },
          "Pilot": {
            "status": "HEALTHY"
          }
        },
        "status": "HEALTHY"
      }
    }
  ],
  "kind": "List",
  "metadata": {
    "resourceVersion": "",
    "selfLink": ""
  }
}
Any ideas on how I can uninstall istio-control-plane manually?
You can use the command below to clear the istio operator finalizer and delete it; it's a jq/kubectl one-liner made by #Rico here. I also tried kubectl patch, but it didn't work.
kubectl get istiooperator -n istio-system istio-control-plane -o=json | \
jq '.metadata.finalizers = null' | kubectl apply -f -
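For reference, a merge-patch one-liner with the same intent (hedged: as mentioned above, kubectl patch did not work in this particular case, so treat it as a sketch):
# clear the finalizers on the stuck IstioOperator so the pending delete can complete
kubectl patch istiooperator istio-control-plane -n istio-system --type=merge -p '{"metadata":{"finalizers":null}}'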
Additionally I used istioctl operator remove:
istioctl operator remove
Removing Istio operator...
Removed Deployment:istio-operator:istio-operator.
Removed Service:istio-operator:istio-operator.
Removed ServiceAccount:istio-operator:istio-operator.
Removed ClusterRole::istio-operator.
Removed ClusterRoleBinding::istio-operator.
✔ Removal complete
Results from kubectl get:
kubectl get istiooperator istio-control-plane -n istio-system
Error from server (NotFound): namespaces "istio-system" not found

How to check external metrics data in Kubernetes?

I am using DirectXMan12/k8s-prometheus-adapter to push the external metric from Prometheus to Kubernetes.
After pushing the external metric, how can I verify the data is in k8s?
When I hit kubectl get --raw /apis/external.metrics.k8s.io/v1beta1 | jq I got the following result, but after that I have no idea how to fetch the actual metric value:
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "external.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "subscription_back_log",
      "singularName": "",
      "namespaced": true,
      "kind": "ExternalMetricValueList",
      "verbs": [
        "get"
      ]
    }
  ]
}
The actual metric value is fetched per instance. For example, the metric you attached is namespaced: true; assuming the metric is for pods, you can access the actual data at:
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/wanted_namespace/pods/*/subscription_back_log" | jq '.'
(or specify the pod name instead of *)
If you want the HPA to read your metric, the configuration is (for example):
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: your-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: your-pod
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - pods:
      metricName: subscription_back_log
      targetAverageValue: 10000
    type: Pods
The metric is namespaced, so you will need to add the namespace into the URL. Contrary to what the other answer suggests, I believe you don't need to include pods in the URL. This is an external metric, and external metrics are not associated with any kubernetes object, so only the namespace should suffice:
/apis/external.metrics.k8s.io/v1beta1/namespaces/<namespace>/<metric_name>
Here's an example that works for me, using an external metric in my setup:
$ kubectl get --raw /apis/external.metrics.k8s.io/v1beta1 | jq
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "external.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "redis_key_size",
      "singularName": "",
      "namespaced": true,
      "kind": "ExternalMetricValueList",
      "verbs": [
        "get"
      ]
    }
  ]
}
$ kubectl get --raw /apis/external.metrics.k8s.io/v1beta1/namespaces/default/redis_key_size
{
  "kind": "ExternalMetricValueList",
  "apiVersion": "external.metrics.k8s.io/v1beta1",
  "metadata": {},
  "items": [
    {
      "metricName": "redis_key_size",
      "metricLabels": {
        "key": "..."
      },
      "timestamp": "2021-10-07T09:00:01Z",
      "value": "0"
    },
    ...
  ]
}
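If the goal is to drive an HPA from such an external metric, here is a minimal sketch using the External metric type (the metric name is taken from the example above; the HPA name and target Deployment are hypothetical, and apiVersion autoscaling/v2 assumes a reasonably recent cluster, use v2beta2 on older ones):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: redis-hpa              # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: your-deployment      # hypothetical target
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: External
    external:
      metric:
        name: redis_key_size
      target:
        type: AverageValue
        averageValue: "10"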

How to update secret with "kubectl patch --type='json'"

I created a secret like this:
kubectl create secret generic test --from-literal=username=testuser --from-literal=password=12345
I want to update the username to testuser2 but I want to do it only with kubectl patch --type='json'.
This is how I tried to do it:
kubectl patch secret test --type='json' -p='[{"data":{"username": "testuser 2"}}]' -v=1
But I received:
The "" is invalid
Remember, I want to do it with the option of --type='json', no other workarounds.
I found out how to do it after I read here, which referred me to this great article.
This is the JSON secret:
{
  "apiVersion": "v1",
  "data": {
    "password": "aWx1dnRlc3Rz",
    "username": "dGVzdHVzZXI="
  },
  "kind": "Secret",
  "metadata": {
    "creationTimestamp": "2019-04-18T11:37:09Z",
    "name": "test",
    "namespace": "default",
    "resourceVersion": "3017",
    "selfLink": "/api/v1/namespaces/default/secrets/test",
    "uid": "4d0a763e-61ce-11e9-92b6-0242ac110015"
  },
  "type": "Opaque"
}
Therefore, to update the username field I needed to use the JSON Patch format:
[
  {
    "op": "replace",
    "path": "/data/username",
    "value": "dGVzdHVzZXIy"  # testuser2 in base64
  }
]
Notice that the value should be in base64.
The result is:
kubectl patch secret test --type='json' -p='[{"op" : "replace" ,"path" : "/data/username" ,"value" : "dGVzdHVzZXIy"}]'
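To avoid encoding by hand, the base64 value can be generated inline; this is a sketch combining the patch above with the echo/base64 trick used in the answer below:
# encode the new value on the fly and splice it into the JSON Patch
kubectl patch secret test --type='json' -p="[{\"op\":\"replace\",\"path\":\"/data/username\",\"value\":\"$(echo -n testuser2 | base64 -w0)\"}]"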
This is what I do in order to replace the secret:
kubectl patch secret my-secret --patch="{\"data\": { \"NEW_PASSWORD\": \"$(echo -n mypassword |base64 -w0)\" }}" -oyaml
This command solved my issue on version 1.24.x:
kubectl patch secret app-sec --patch="{\"data\": { \"license-id\": \"TEST\" }}" -oyaml
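Whichever variant you use, you can verify that the stored value decodes back to what you expect (secret and key names here are the ones from the question):
# read the patched key and decode it from base64
kubectl get secret test -o jsonpath='{.data.username}' | base64 -d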

How to obtain the enable admission controller list in kubernetes?

AFAIK, the admission controllers are the last pass before an object is persisted to the database.
However, I cannot tell which ones are enabled. Is there a way to know which ones are taking effect?
Thanks.
The kube-apiserver is running in your kube-apiserver-<example.com> container.
The application does not have a get method at the moment to obtain the enabled admission plugins, but you can get the startup parameters from its command line.
kubectl -n kube-system describe po kube-apiserver-example.com
Another way to see what is in the container: unfortunately there is no "ps" command in the container, but you can get the initial process's command-line parameters from /proc, something like this:
kubectl -n kube-system exec kube-apiserver-example.com -- sed 's/--/\n/g' /proc/1/cmdline
It will probably contain something like:
enable-admission-plugins=NodeRestriction
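A proxy-free alternative (a sketch, assuming the static pod name used above) is to read the flags straight out of the pod spec with jsonpath:
# print the kube-apiserver command from the static pod spec, one flag per line, and filter for admission flags
kubectl -n kube-system get pod kube-apiserver-example.com -o jsonpath='{.spec.containers[0].command[*]}' | tr ' ' '\n' | grep admission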
There isn't an admission controller k8s object exposed directly in kubectl.
To get a list of admission controllers, you have to hit the k8s master API directly, with the right versions supported by your k8s installation:
kubectl get --raw /apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations | jq
For our environment, we run open policy agent as an admissions controller and we can see the webhook object here:
kubectl get --raw /apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations | jq '.items[] | select(.metadata.name=="open-policy-agent-latest-helm-opa")'
Which outputs the JSON object:
{
  "metadata": {
    "name": "open-policy-agent-latest-helm-opa",
    "selfLink": "/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/open-policy-agent-latest-helm-opa",
    "uid": "02139b9e-b282-4ef9-8017-d698bb13882c",
    "resourceVersion": "150373119",
    "generation": 93,
    "creationTimestamp": "2021-03-18T06:22:54Z",
    "labels": {
      "app": "open-policy-agent-latest-helm-opa",
      "app.kubernetes.io/managed-by": "Helm",
      "chart": "opa-1.14.6",
      "heritage": "Helm",
      "release": "open-policy-agent-latest-helm-opa"
    },
    "annotations": {
      "meta.helm.sh/release-name": "open-policy-agent-latest-helm-opa",
      "meta.helm.sh/release-namespace": "open-policy-agent-latest"
    },
    "managedFields": [
      {
        "manager": "Go-http-client",
        "operation": "Update",
        "apiVersion": "admissionregistration.k8s.io/v1beta1",
        "time": "2021-03-18T06:22:54Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:metadata": {
            "f:annotations": {
              ".": {},
              "f:meta.helm.sh/release-name": {},
              "f:meta.helm.sh/release-namespace": {}
            },
            "f:labels": {
              ".": {},
              "f:app": {},
              "f:app.kubernetes.io/managed-by": {},
              "f:chart": {},
              "f:heritage": {},
              "f:release": {}
            }
          },
          "f:webhooks": {
            ".": {},
            "k:{\"name\":\"webhook.openpolicyagent.org\"}": {
              ".": {},
              "f:admissionReviewVersions": {},
              "f:clientConfig": {
                ".": {},
                "f:caBundle": {},
                "f:service": {
                  ".": {},
                  "f:name": {},
                  "f:namespace": {},
                  "f:port": {}
                }
              },
              "f:failurePolicy": {},
              "f:matchPolicy": {},
              "f:name": {},
              "f:namespaceSelector": {
                ".": {},
                "f:matchExpressions": {}
              },
              "f:objectSelector": {},
              "f:rules": {},
              "f:sideEffects": {},
              "f:timeoutSeconds": {}
            }
          }
        }
      }
    ]
  },
  "webhooks": [
    {
      "name": "webhook.openpolicyagent.org",
      "clientConfig": {
        "service": {
          "namespace": "open-policy-agent-latest",
          "name": "open-policy-agent-latest-helm-opa",
          "port": 443
        },
        "caBundle": "LS0BLAH="
      },
      "rules": [
        {
          "operations": [
            "*"
          ],
          "apiGroups": [
            "*"
          ],
          "apiVersions": [
            "*"
          ],
          "resources": [
            "namespaces"
          ],
          "scope": "*"
        }
      ],
      "failurePolicy": "Ignore",
      "matchPolicy": "Exact",
      "namespaceSelector": {
        "matchExpressions": [
          {
            "key": "openpolicyagent.org/webhook",
            "operator": "NotIn",
            "values": [
              "ignore"
            ]
          }
        ]
      },
      "objectSelector": {},
      "sideEffects": "Unknown",
      "timeoutSeconds": 20,
      "admissionReviewVersions": [
        "v1beta1"
      ]
    }
  ]
}
You can see above the clientConfig endpoint in k8s, which is where the admission payload is sent. Tail the logs of the pods that serve that endpoint and you'll see your admission requests being processed.
To get mutating webhooks, hit the version of the API of interest again:
# get v1 mutating webhook configurations
kubectl get --raw /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations | jq
This is the official explanation:
https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#which-plugins-are-enabled-by-default
Note: you can get the output by exec'ing into the container:
kubectl exec -it kube-apiserver-your-machine-name -n kube-system -- kube-apiserver -h | grep enable-admission-plugins
You may find the list of default enabled admission controllers in the docs:
https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/#options, search for "--enable-admission-plugins";
or equivalently in code:
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubeapiserver/options/plugins.go#L131-L145
For customized ones, you may run this command on any master node:
cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep -E "(enable|disable)-admission-plugins"
ImagePolicyWebhook uses a configuration file to set options for the behavior of the backend.
Create one of these pods by running kubectl create -f examples/<name>.yaml. You can then verify the user ID under which the pod ran by inspecting the logs, for example:
$ kubectl create -f examples/pod-with-defaults.yaml
$ kubectl logs pod-with-defaults
Not sure why it was not stated before, but it's even in the kubernetes docs:
kubectl exec -it kube-apiserver-<your-machine-name> -n kube-system -- kube-apiserver -h | grep enable-admission-plugins
It does exactly what you want.

Failed to negotiate an API version when using Kubernetes RBAC authorizer plugin

I followed the instructions in the reference document. I created a ClusterRole called 'admins-role' granting admin privileges, and bound the role to user 'tester'.
On the k8s master:
# curl localhost:8080/apis/rbac.authorization.k8s.io/v1alpha1/clusterroles
{
  "kind": "ClusterRoleList",
  "apiVersion": "rbac.authorization.k8s.io/v1alpha1",
  "metadata": {
    "selfLink": "/apis/rbac.authorization.k8s.io/v1alpha1/clusterroles",
    "resourceVersion": "480750"
  },
  "items": [
    {
      "metadata": {
        "name": "admins-role",
        "selfLink": "/apis/rbac.authorization.k8s.io/v1alpha1/clusterroles/admins-role",
        "uid": "88a58ac6-471a-11e6-9ad4-52545f942a3b",
        "resourceVersion": "479484",
        "creationTimestamp": "2016-07-11T03:49:56Z"
      },
      "rules": [
        {
          "verbs": [
            "*"
          ],
          "attributeRestrictions": null,
          "apiGroups": [
            "*"
          ],
          "resources": [
            "*"
          ]
        }
      ]
    }
  ]
}
# curl localhost:8080/apis/rbac.authorization.k8s.io/v1alpha1/clusterrolebindings
{
  "kind": "ClusterRoleBindingList",
  "apiVersion": "rbac.authorization.k8s.io/v1alpha1",
  "metadata": {
    "selfLink": "/apis/rbac.authorization.k8s.io/v1alpha1/clusterrolebindings",
    "resourceVersion": "480952"
  },
  "items": [
    {
      "metadata": {
        "name": "bind-admin",
        "selfLink": "/apis/rbac.authorization.k8s.io/v1alpha1/clusterrolebindings/bind-admin",
        "uid": "c53bbc34-471a-11e6-9ad4-52545f942a3b",
        "resourceVersion": "479632",
        "creationTimestamp": "2016-07-11T03:51:38Z"
      },
      "subjects": [
        {
          "kind": "User",
          "name": "tester"
        }
      ],
      "roleRef": {
        "kind": "ClusterRole",
        "name": "admins-role",
        "apiVersion": "rbac.authorization.k8s.io/v1alpha1"
      }
    }
  ]
}
But when running kubectl get pods as user 'tester':
error: failed to negotiate an api version; server supports: map[], client supports: map[extensions/v1beta1:{} authentication.k8s.io/v1beta1:{} autoscaling/v1:{} batch/v1:{} federation/v1alpha1:{} v1:{} apps/v1alpha1:{} componentconfig/v1alpha1:{} policy/v1alpha1:{} rbac.authorization.k8s.io/v1alpha1:{} authorization.k8s.io/v1beta1:{} batch/v2alpha1:{}]
You can't hit the discovery API. Update your ClusterRole to include "nonResourceURLs": ["*"].
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admins-role
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
  nonResourceURLs: ["*"]
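After updating the role, a quick sanity check on newer clusters (kubectl auth can-i and impersonation postdate this question, so this is a hedged suggestion) is:
# check whether 'tester' may list pods, without switching kubeconfig contexts
kubectl auth can-i get pods --as=tester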