I was trying to uninstall and reinstall Istio from a k8s cluster following the steps:
But I made a mistake: I deleted the namespace before deleting the istio-control-plane with kubectl delete istiooperator istio-control-plane -n istio-system. Then, when I tried to delete the istio-control-plane again, the command hung.
I tried to remove the finalizer using the following steps, but it said Error from server (NotFound): istiooperators.install.istio.io "istio-control-plane" not found:
kubectl get istiooperator -n istio-system -o json > output.json
nano output.json # and remove finalizer
kubectl replace --raw "/apis/install.istio.io/v1alpha1/namespaces/istio-system/istiooperators/istio-control-plane/finalize" -f output.json
Here is the content of kubectl get istiooperator -n istio-system -o json:
{
"apiVersion": "v1",
"items": [
{
"apiVersion": "install.istio.io/v1alpha1",
"kind": "IstioOperator",
"metadata": {
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"install.istio.io/v1alpha1\",\"kind\":\"IstioOperator\",\"metadata\":{\"annotations\":{},\"name\":\"istio-control-plane\",\"namespace\":\"istio-system\"},\"spec\":{\"addonComponents\":{\"prometheus\":{\"enabled\":false},\"tracing\":{\"enabled\":false}},\"hub\":\"hub.docker.prod.walmart.com/istio\",\"profile\":\"default\",\"values\":{\"global\":{\"defaultNodeSelector\":{\"beta.kubernetes.io/os\":\"linux\"}}}}}\n"
},
"creationTimestamp": "2020-12-05T23:39:34Z",
"deletionGracePeriodSeconds": 0,
"deletionTimestamp": "2020-12-07T16:41:41Z",
"finalizers": [
],
"generation": 2,
"name": "istio-control-plane",
"namespace": "istio-system",
"resourceVersion": "11750055",
"selfLink": "/apis/install.istio.io/v1alpha1/namespaces/istio-system/istiooperators/istio-control-plane",
"uid": "fda8ee4f-54e7-45e8-91ec-c328fad1a86f"
},
"spec": {
"addonComponents": {
"prometheus": {
"enabled": false
},
"tracing": {
"enabled": false
}
},
"hub": "hub.docker.prod.walmart.com/istio",
"profile": "default",
"values": {
"global": {
"defaultNodeSelector": {
"beta.kubernetes.io/os": "linux"
}
}
}
},
"status": {
"componentStatus": {
"Base": {
"status": "HEALTHY"
},
"IngressGateways": {
"status": "HEALTHY"
},
"Pilot": {
"status": "HEALTHY"
}
},
"status": "HEALTHY"
}
}
],
"kind": "List",
"metadata": {
"resourceVersion": "",
"selfLink": ""
}
}
Any ideas on how I can uninstall istio-control-plane manually?
You can use the command below to change the Istio operator finalizer and delete it; it's a jq/kubectl one-liner made by @Rico here. I also tried with kubectl patch, but it didn't work.
kubectl get istiooperator -n istio-system istio-control-plane -o=json | \
jq '.metadata.finalizers = null' | kubectl apply -f -
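If the istio-system namespace itself gets stuck in Terminating because of other leftover namespaced resources, a generic troubleshooting sketch like the one below (not part of the original answer) lists everything still present in it without deleting anything:
# read-only: lists every namespaced resource type still present in istio-system
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get -n istio-system --ignore-not-found --show-kind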
Additionally, I used istioctl operator remove:
istioctl operator remove
Removing Istio operator...
Removed Deployment:istio-operator:istio-operator.
Removed Service:istio-operator:istio-operator.
Removed ServiceAccount:istio-operator:istio-operator.
Removed ClusterRole::istio-operator.
Removed ClusterRoleBinding::istio-operator.
✔ Removal complete
Results from kubectl get
kubectl get istiooperator istio-control-plane -n istio-system
Error from server (NotFound): namespaces "istio-system" not found
Below is the output of kubectl get deploy --all-namespaces (shown as JSON):
{
"apiVersion": "v1",
"items": [
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {
"downscaler/uptime": "Mon-Fri 07:00-23:59 Australia/Sydney"
},
"name": "actiontest-v2.0.9",
"namespace": "actiontest"
},
"spec": {
......
......
}
},
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {
"downscaler/uptime": "Mon-Fri 07:00-21:00 Australia/Sydney"
},
"name": "anotherapp-v0.1.10",
"namespace": "anotherapp"
},
"spec": {
......
......
}
}
]
}
I need to find the name of the deployment and its namespace if the annotation "downscaler/uptime" matches the value "Mon-Fri 07:00-21:00 Australia/Sydney". I am expecting an output like below:
deployment_name,namespace
If I am running below query against a single deployment, I get the required output.
#kubectl get deploy -n anotherapp -o jsonpath='{range .[*]}{.items[?(#.metadata.annotations.downscaler/uptime=="Mon-Fri 07:00-21:00 Australia/Sydney")].metadata.name}{","}{.items[?(#.metadata.annotations.downscaler/uptime=="Mon-Fri 07:00-21:00 Australia/Sydney")].metadata.namespace}{"\n"}'
anotherapp-v0.1.10,anotherapp
But when I run it against all namespaces, I am getting an output like below:
#kubectl get deploy --all-namespaces -o jsonpath='{range .[*]}{.items[?(#.metadata.annotations.downscaler/uptime=="Mon-Fri 07:00-21:00 Australia/Sydney")].metadata.name}{","}{.items[?(#.metadata.annotations.downscaler/uptime=="Mon-Fri 07:00-21:00 Australia/Sydney")].metadata.namespace}{"\n"}'
actiontest-v2.0.9 anotherapp-v0.1.10, actiontest anotherapp
This is a quite short answer; however, you can use this option:
kubectl get deploy --all-namespaces -o jsonpath='{range .items[?(.metadata.annotations.downscaler/uptime=="Mon-Fri 07:00-21:00 Australia/Sydney")]}{.metadata.name}{"\t"}{.metadata.namespace}{"\n"}'
What I changed is the logic of how the data is processed:
First, the range takes only the list of elements we need to work on, not everything. I used a filter expression for this - see JSONPath notation - syntax elements.
Once the entities in the list are already filtered, we can easily retrieve the other fields we need.
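If jq is available, a hedged alternative that produces exactly the name,namespace output asked for in the question would be:
kubectl get deploy --all-namespaces -o json | \
jq -r '.items[] | select(.metadata.annotations["downscaler/uptime"] == "Mon-Fri 07:00-21:00 Australia/Sydney") | .metadata.name + "," + .metadata.namespace'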
With the below yml file:
apiVersion: v1
kind: Pod
metadata:
name: my-nginx
spec:
containers:
- name: my-nginx
image: nginx:alpine
After running kubectl create -f nginx.pod.yml --save-config, then, as per the documentation: "If true, the configuration of current object will be saved in its annotation."
Where exactly is this annotation saved? How to view this annotation?
The below command prints all the annotations present on the pod my-nginx:
kubectl get pod my-nginx -o jsonpath='{.metadata.annotations}'
Your configuration is stored under the kubectl.kubernetes.io/last-applied-configuration key of the above output.
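Since the object was created with --save-config, the same annotation can also be read back with the kubectl apply view-last-applied subcommand (shown here as an optional sketch):
kubectl apply view-last-applied pod my-nginx
It prints only the last-applied configuration, already unwrapped from the annotation.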
Here is an example showing the usage:
Original manifest for my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: my-deploy
name: my-deploy
spec:
replicas: 1
selector:
matchLabels:
app: my-deploy
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: my-deploy
spec:
containers:
- image: nginx
name: nginx
resources: {}
status: {}
Created the deployment as follows:
k create -f x.yml --save-config
deployment.apps/my-deploy created
kubectl get deployments.apps my-deploy -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io\/last-applied-configuration}' |jq .
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {},
"creationTimestamp": null,
"labels": {
"app": "my-deploy"
},
"name": "my-deploy",
"namespace": "default"
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app": "my-deploy"
}
},
"strategy": {},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "my-deploy"
}
},
"spec": {
"containers": [
{
"image": "nginx",
"name": "nginx",
"resources": {}
}
]
}
}
},
"status": {}
}
kubectl get deployments.apps my-deploy -o jsonpath='{.spec.template.spec.containers[*].image}'
nginx
Now suppose some user comes and changes the image from nginx to httpd using an imperative command:
k set image deployment/my-deploy nginx=httpd --record
deployment.apps/my-deploy image updated
kubectl get deployments.apps my-deploy -o jsonpath='{.spec.template.spec.containers[*].image}'
httpd
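As a side note, because --record was used, that imperative change is captured in the rollout history (this is separate from the last-applied annotation; a quick optional check):
kubectl rollout history deployment/my-deploy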
However, we can check that the last-applied declarative configuration was not updated:
kubectl get deployments.apps my-deploy -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io\/last-applied-configuration}' |jq .
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {},
"creationTimestamp": null,
"labels": {
"app": "my-deploy"
},
"name": "my-deploy",
"namespace": "default"
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app": "my-deploy"
}
},
"strategy": {},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "my-deploy"
}
},
"spec": {
"containers": [
{
"image": "nginx",
"name": "nginx",
"resources": {}
}
]
}
}
},
"status": {}
}
Now, change the image name in the original manifest file from nginx to flask, then run kubectl apply (a declarative command):
kubectl apply -f orig.yml
deployment.apps/my-deploy configured
kubectl get deployments.apps my-deploy -o jsonpath='{.spec.template.spec.containers[*].image}'
flask
Now check the last-applied-configuration annotation; it will have flask in it. Remember, the image change was not reflected there when the kubectl set image command was used.
kubectl get deployments.apps my-deploy -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io\/last-applied-configuration}' |jq .
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {},
"creationTimestamp": null,
"labels": {
"app": "my-deploy"
},
"name": "my-deploy",
"namespace": "default"
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app": "my-deploy"
}
},
"strategy": {},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "my-deploy"
}
},
"spec": {
"containers": [
{
"image": "flask",
"name": "nginx",
"resources": {}
}
]
}
}
},
"status": {}
}
Where is the "last-applied" annotation saved?
Just like everything else, it's saved in etcd. I created the pod using the manifest provided in the question and ran a raw etcd command to print the content (in this dev environment, etcd was not encrypted).
ETCDCTL_API=3 etcdctl --cert /etc/kubernetes/pki/apiserver-etcd-client.crt --key /etc/kubernetes/pki/apiserver-etcd-client.key --cacert /etc/kubernetes/pki/etcd/ca.crt get /registry/pods/default/my-nginx
/registry/pods/default/my-nginx
k8s
v1Pod⚌
⚌
my-nginxdefault"*$a3s4b729-c96a-40f7-8de9-5d5f4ag21gfa2⚌⚌⚌b⚌
0kubectl.kubernetes.io/last-applied-configuration⚌{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"my-nginx","namespace":"default"},"spec":{"containers":[{"image":"nginx:alpine","name":"my-nginx"}]}}
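The value is stored in the Kubernetes protobuf storage encoding, which is why most of it renders as binary above. If you only want to confirm the key exists without dumping the binary value, a hedged variant of the same command is:
# --keys-only prints just the key, skipping the binary value
ETCDCTL_API=3 etcdctl --cert /etc/kubernetes/pki/apiserver-etcd-client.crt --key /etc/kubernetes/pki/apiserver-etcd-client.key --cacert /etc/kubernetes/pki/etcd/ca.crt get /registry/pods/default/my-nginx --keys-only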
I am trying to get the pod name from the pod JSON using the command below, which is returning the following error.
kgp -o jsonpath="{.items[*].metadata[?(#.labels.module=='ddvv-script')].name}"
Error
is not array or slice and cannot be filtered. Printing more information for debugging the template:
template was:
{.items[*].metadata[?(#.labels.module=='ddvv-script')].name}
object given to jsonpath engine was:
Sample file
{
"apiVersion": "v1",
"items": [
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"creationTimestamp": "2020-09-18T17:42:50Z",
"generateName": "ddvv-script-6b784db6bd-",
"labels": {
"app": "my-configs",
"lf.module": "ddvv-script",
"module": "ddvv-script",
"pod-template-hash": "6b784db6bd",
"release": "config"
},
"name": "ddvv-script-6b784db6bd-rjtgh",
What is wrong with this command?
You can use the below command. It gets the names of the pods which have the label module=ddvv-script:
kubectl get pods --selector=module=ddvv-script --output=jsonpath={.items..metadata.name}
kgp -o jsonpath="{.items[*].metadata[?(#.labels.module=='ddvv-script')].name}"
should be
kgp -o jsonpath="{.items[?(#.metadata.labels.module=='ddvv-script')].metadata.name}"
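Since the filter here is on a label, a simpler hedged alternative is to let a label selector do the filtering and print both columns with custom-columns:
kubectl get pods --selector=module=ddvv-script -o custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace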
I created a secret like this:
kubectl create secret generic test --from-literal=username=testuser --from-literal=password=12345
I want to update the username to testuser2 but I want to do it only with kubectl patch --type='json'.
This is how I tried to do it:
kubectl patch secret test --type='json' -p='[{"data":{"username": "testuser 2"}}]' -v=1
But I received:
The "" is invalid
Remember, I want to do it with the option of --type='json', no other workarounds.
I found out how to do it after reading here, which referred me to this great article.
This is the JSON secret:
{
"apiVersion": "v1",
"data": {
"password": "aWx1dnRlc3Rz",
"username": "dGVzdHVzZXI="
},
"kind": "Secret",
"metadata": {
"creationTimestamp": "2019-04-18T11:37:09Z",
"name": "test",
"namespace": "default",
"resourceVersion": "3017",
"selfLink": "/api/v1/namespaces/default/secrets/test",
"uid": "4d0a763e-61ce-11e9-92b6-0242ac110015"
},
"type": "Opaque"
}
Therefore, to update the username field I needed to use the JSON Patch format:
[
{
"op" : "replace" ,
"path" : "/data/username" ,
"value" : "dGVzdHVzZXIy" # testuser2 in base64
}
]
Notice that the value should be in base64.
The result is:
kubectl patch secret test --type='json' -p='[{"op" : "replace" ,"path" : "/data/username" ,"value" : "dGVzdHVzZXIy"}]'
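To avoid hand-encoding the value, the base64 string can also be generated inline (a sketch; echo -n matters so that no trailing newline gets encoded):
kubectl patch secret test --type='json' -p="[{\"op\":\"replace\",\"path\":\"/data/username\",\"value\":\"$(echo -n testuser2 | base64)\"}]"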
This is what I do in order to replace the secret:
kubectl patch secret my-secret --patch="{\"data\": { \"NEW_PASSWORD\": \"$(echo -n mypassword |base64 -w0)\" }}" -oyaml
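A quick hedged way to verify the patched value afterwards:
kubectl get secret my-secret -o jsonpath='{.data.NEW_PASSWORD}' | base64 -d
which should print mypassword.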
This command solved my issue on version 1.24.x:
kubectl patch secret app-sec --patch="{\"data\": { \"license-id\": \"TEST\" }}" -oyaml
AFAIK, admission controllers are the last pass before submission to the database.
However, I cannot tell which ones are enabled. Is there a way to know which ones are taking effect?
Thanks.
The kube-apiserver is running in your kube-apiserver-<example.com> container.
The API does not currently expose a way to list the enabled admission plugins, but you can get the startup parameters from its command line:
kubectl -n kube-system describe po kube-apiserver-example.com
Another way is to look at what is in the container: unfortunately there is no "ps" command in the container, but you can get the initial process's command-line parameters from /proc, something like this:
kubectl -n kube-system exec kube-apiserver-example.com -- sed 's/--/\n/g' /proc/1/cmdline
The output will probably contain something like:
enable-admission-plugins=NodeRestriction
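Another hedged option that avoids exec'ing into the container is to read the container command straight from the pod spec and filter it:
# assumes the control-plane pod is named kube-apiserver-example.com, as above
kubectl -n kube-system get pod kube-apiserver-example.com -o jsonpath='{.spec.containers[0].command}' | tr ',' '\n' | grep admission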
There isn't an admissions controller k8s object exposed directly in kubectl.
To get a list of admissions controllers, you have to hit the k8s master API directly with the right versions supported by your k8s installation:
kubectl get --raw /apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations | jq
For our environment, we run open policy agent as an admissions controller and we can see the webhook object here:
kubectl get --raw /apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations | jq '.items[] | select(.metadata.name=="open-policy-agent-latest-helm-opa")'
Which outputs the JSON object:
{
"metadata": {
"name": "open-policy-agent-latest-helm-opa",
"selfLink": "/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/open-policy-agent-latest-helm-opa",
"uid": "02139b9e-b282-4ef9-8017-d698bb13882c",
"resourceVersion": "150373119",
"generation": 93,
"creationTimestamp": "2021-03-18T06:22:54Z",
"labels": {
"app": "open-policy-agent-latest-helm-opa",
"app.kubernetes.io/managed-by": "Helm",
"chart": "opa-1.14.6",
"heritage": "Helm",
"release": "open-policy-agent-latest-helm-opa"
},
"annotations": {
"meta.helm.sh/release-name": "open-policy-agent-latest-helm-opa",
"meta.helm.sh/release-namespace": "open-policy-agent-latest"
},
"managedFields": [
{
"manager": "Go-http-client",
"operation": "Update",
"apiVersion": "admissionregistration.k8s.io/v1beta1",
"time": "2021-03-18T06:22:54Z",
"fieldsType": "FieldsV1",
"fieldsV1": {
"f:metadata": {
"f:annotations": {
".": {},
"f:meta.helm.sh/release-name": {},
"f:meta.helm.sh/release-namespace": {}
},
"f:labels": {
".": {},
"f:app": {},
"f:app.kubernetes.io/managed-by": {},
"f:chart": {},
"f:heritage": {},
"f:release": {}
}
},
"f:webhooks": {
".": {},
"k:{\"name\":\"webhook.openpolicyagent.org\"}": {
".": {},
"f:admissionReviewVersions": {},
"f:clientConfig": {
".": {},
"f:caBundle": {},
"f:service": {
".": {},
"f:name": {},
"f:namespace": {},
"f:port": {}
}
},
"f:failurePolicy": {},
"f:matchPolicy": {},
"f:name": {},
"f:namespaceSelector": {
".": {},
"f:matchExpressions": {}
},
"f:objectSelector": {},
"f:rules": {},
"f:sideEffects": {},
"f:timeoutSeconds": {}
}
}
}
}
]
},
"webhooks": [
{
"name": "webhook.openpolicyagent.org",
"clientConfig": {
"service": {
"namespace": "open-policy-agent-latest",
"name": "open-policy-agent-latest-helm-opa",
"port": 443
},
"caBundle": "LS0BLAH="
},
"rules": [
{
"operations": [
"*"
],
"apiGroups": [
"*"
],
"apiVersions": [
"*"
],
"resources": [
"namespaces"
],
"scope": "*"
}
],
"failurePolicy": "Ignore",
"matchPolicy": "Exact",
"namespaceSelector": {
"matchExpressions": [
{
"key": "openpolicyagent.org/webhook",
"operator": "NotIn",
"values": [
"ignore"
]
}
]
},
"objectSelector": {},
"sideEffects": "Unknown",
"timeoutSeconds": 20,
"admissionReviewVersions": [
"v1beta1"
]
}
]
}
You can see above the clientConfig endpoint in k8s, which is where the admission payload is sent. Tail the logs of the pods that serve that endpoint and you'll see your admission requests being processed.
To get mutating webhooks, hit the version of the API of interest again:
# get v1 mutating webhook configurations
kubectl get --raw /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations | jq
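To list only the names of the registered webhooks instead of the full objects, a short jq sketch:
kubectl get --raw /apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations | jq -r '.items[].metadata.name'
kubectl get --raw /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations | jq -r '.items[].metadata.name'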
This is the official explanation:
https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#which-plugins-are-enabled-by-default
Note: you can get this from stdout by exec'ing into the container:
kubectl exec -it kube-apiserver-your-machine-name -n kube-system -- kube-apiserver -h | grep enable-admission-plugins
You may find the list of admission controllers enabled by default in the docs:
https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/#options, search for "--enable-admission-plugins";
or equivalently in code:
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubeapiserver/options/plugins.go#L131-L145
For customized ones, you may run this command on any master node:
cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep -E "(enable|disable)-admission-plugins"
ImagePolicyWebhook uses a configuration file to set options for the behavior of the backend.
Create one of these pods by running kubectl create -f examples/<name>.yaml. You can then verify the user ID under which the pod ran by inspecting the logs, for example:
$ kubectl create -f examples/pod-with-defaults.yaml
$ kubectl logs pod-with-defaults
Not sure why it was not stated before, but it's even in the kubernetes docs:
kubectl exec -it kube-apiserver-<your-machine-name> -n kube-system -- kube-apiserver -h | grep enable-admission-plugins
It does exactly what you want.