Below is the output of my kubectl get deploy --all-namespaces:
{
"apiVersion": "v1",
"items": [
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {
"downscaler/uptime": "Mon-Fri 07:00-23:59 Australia/Sydney",
"name": "actiontest-v2.0.9",
"namespace": "actiontest",
},
"spec": {
......
......
}
},
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {
"downscaler/uptime": "Mon-Fri 07:00-21:00 Australia/Sydney",
"name": "anotherapp-v0.1.10",
"namespace": "anotherapp",
},
"spec": {
......
......
}
}
]
}
I need to find the name of the deployment and its namespace if the annotation "downscaler/uptime" matches the value "Mon-Fri 07:00-21:00 Australia/Sydney". I am expecting an output like below:
deployment_name,namespace
If I run the below query against a single deployment, I get the required output.
#kubectl get deploy -n anotherapp -o jsonpath='{range .[*]}{.items[?(@.metadata.annotations.downscaler/uptime=="Mon-Fri 07:00-21:00 Australia/Sydney")].metadata.name}{","}{.items[?(@.metadata.annotations.downscaler/uptime=="Mon-Fri 07:00-21:00 Australia/Sydney")].metadata.namespace}{"\n"}'
anotherapp-v0.1.10,anotherapp
But when I run it against all namespaces, I am getting an output like below:
#kubectl get deploy --all-namespaces -o jsonpath='{range .[*]}{.items[?(@.metadata.annotations.downscaler/uptime=="Mon-Fri 07:00-21:00 Australia/Sydney")].metadata.name}{","}{.items[?(@.metadata.annotations.downscaler/uptime=="Mon-Fri 07:00-21:00 Australia/Sydney")].metadata.namespace}{"\n"}'
actiontest-v2.0.9 anotherapp-v0.1.10, actiontest anotherapp
This is quite a short answer, but you can use this option:
kubectl get deploy --all-namespaces -o jsonpath='{range .items[?(@.metadata.annotations.downscaler/uptime=="Mon-Fri 07:00-21:00 Australia/Sydney")]}{.metadata.name}{"\t"}{.metadata.namespace}{"\n"}{end}'
What I changed is the logic of how the data is handled:
The first thing that happens is that range receives only the list of elements we need to work on, not everything. I used a filter expression for this - see JSONPath notation - syntax elements.
Once the entities in the list are already filtered, we can easily retrieve the other fields we need.
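If the jsonpath filter ever gets awkward because of the slash in the annotation key, a jq-based alternative produces the same name,namespace pairs. This is only a sketch and assumes jq is available; the annotation value is the one from the question:
kubectl get deploy --all-namespaces -o json \
  | jq -r '.items[]
      | select(.metadata.annotations["downscaler/uptime"] == "Mon-Fri 07:00-21:00 Australia/Sydney")
      | "\(.metadata.name),\(.metadata.namespace)"'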
I was trying to uninstall and reinstall Istio from a k8s cluster following the steps.
But I made the mistake of deleting the namespace before deleting the istio-control-plane with kubectl delete istiooperator istio-control-plane -n istio-system. Now when I try to delete the istio-control-plane again, it freezes.
I tried to remove the finalizer using the following steps, but it said Error from server (NotFound): istiooperators.install.istio.io "istio-control-plane" not found
kubectl get istiooperator -n istio-system -o json > output.json
nano output.json # and remove finalizer
kubectl replace --raw "/apis/install.istio.io/v1alpha1/namespaces/istio-system/istiooperators/istio-control-plane/finalize" -f output.json
Here is the content of kubectl get istiooperator -n istio-system -o json:
{
"apiVersion": "v1",
"items": [
{
"apiVersion": "install.istio.io/v1alpha1",
"kind": "IstioOperator",
"metadata": {
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"install.istio.io/v1alpha1\",\"kind\":\"IstioOperator\",\"metadata\":{\"annotations\":{},\"name\":\"istio-control-plane\",\"namespace\":\"istio-system\"},\"spec\":{\"addonComponents\":{\"prometheus\":{\"enabled\":false},\"tracing\":{\"enabled\":false}},\"hub\":\"hub.docker.prod.walmart.com/istio\",\"profile\":\"default\",\"values\":{\"global\":{\"defaultNodeSelector\":{\"beta.kubernetes.io/os\":\"linux\"}}}}}\n"
},
"creationTimestamp": "2020-12-05T23:39:34Z",
"deletionGracePeriodSeconds": 0,
"deletionTimestamp": "2020-12-07T16:41:41Z",
"finalizers": [
],
"generation": 2,
"name": "istio-control-plane",
"namespace": "istio-system",
"resourceVersion": "11750055",
"selfLink": "/apis/install.istio.io/v1alpha1/namespaces/istio-system/istiooperators/istio-control-plane",
"uid": "fda8ee4f-54e7-45e8-91ec-c328fad1a86f"
},
"spec": {
"addonComponents": {
"prometheus": {
"enabled": false
},
"tracing": {
"enabled": false
}
},
"hub": "hub.docker.prod.walmart.com/istio",
"profile": "default",
"values": {
"global": {
"defaultNodeSelector": {
"beta.kubernetes.io/os": "linux"
}
}
}
},
"status": {
"componentStatus": {
"Base": {
"status": "HEALTHY"
},
"IngressGateways": {
"status": "HEALTHY"
},
"Pilot": {
"status": "HEALTHY"
}
},
"status": "HEALTHY"
}
}
],
"kind": "List",
"metadata": {
"resourceVersion": "",
"selfLink": ""
}
}
Any ideas on how I can uninstall istio-control-plane manually?
You can use the command below to change the Istio operator finalizer and delete it; it's a jq/kubectl one-liner made by @Rico here. I have also tried with kubectl patch, but it didn't work.
kubectl get istiooperator -n istio-system istio-control-plane -o=json | \
jq '.metadata.finalizers = null' | kubectl apply -f -
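If you want to double-check what is actually blocking the deletion, you can print just the finalizers field first (a small optional check, not part of the original one-liner):
kubectl get istiooperator istio-control-plane -n istio-system -o jsonpath='{.metadata.finalizers}'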
Additionally, I have used istioctl operator remove:
istioctl operator remove
Removing Istio operator...
Removed Deployment:istio-operator:istio-operator.
Removed Service:istio-operator:istio-operator.
Removed ServiceAccount:istio-operator:istio-operator.
Removed ClusterRole::istio-operator.
Removed ClusterRoleBinding::istio-operator.
✔ Removal complete
Results from kubectl get
kubectl get istiooperator istio-control-plane -n istio-system
Error from server (NotFound): namespaces "istio-system" not found
I am trying to get the pod name from the pod JSON using the command below, which is returning an error.
kgp -o jsonpath="{.items[*].metadata[?(#.labels.module=='ddvv-script')].name}"
Error
is not array or slice and cannot be filtered. Printing more information for debugging the template:
template was:
{.items[*].metadata[?(@.labels.module=='ddvv-script')].name}
object given to jsonpath engine was:
Sample file
{
"apiVersion": "v1",
"items": [
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"creationTimestamp": "2020-09-18T17:42:50Z",
"generateName": "ddvv-script-6b784db6bd-",
"labels": {
"app": "my-configs",
"lf.module": "ddvv-script",
"module": "ddvv-script",
"pod-template-hash": "6b784db6bd",
"release": "config"
},
"name": "ddvv-script-6b784db6bd-rjtgh",
What is wrong with this command?
You can use the below command. It gets the pod name of pods which have the label module=ddvv-script:
kubectl get pods --selector=module=ddvv-script --output=jsonpath={.items..metadata.name}
kgp -o jsonpath="{.items[*].metadata[?(#.labels.module=='ddvv-script')].name}"
should be
kgp -o jsonpath="{.items[?(#.metadata.labels.module=='ddvv-script')].metadata.name}"
I can get the full contents of a list of objects, such as Secrets and ConfigMaps.
{
"kind": "SecretList",
"apiVersion": "v1",
"metadata": {
"selfLink": "/api/v1/namespaces/kube-system/secrets",
"resourceVersion": "499638"
},
"items": [{
"metadata": {
"name": "aaa",
"namespace": "kube-system",
"selfLink": "/api/v1/namespaces/kube-system/secrets/aaa",
"uid": "96b0fbee-f14c-423d-9734-53fed20ae9f9",
"resourceVersion": "1354",
"creationTimestamp": "2020-02-24T11:20:23Z"
},
"data": "aaa"
}]
}
but I only want the list of names, which for this example is "aaa". Is there any way?
Yes, you can achieve it by using jsonpath output. Note that the specification you posted will look quite different once applied. It will create one Secret object in your kube-system namespace, and when you run:
$ kubectl get secret -n kube-system aaa -o json
the output will look similar to the following:
{
"apiVersion": "v1",
"kind": "Secret",
"metadata": {
"creationTimestamp": "2020-02-25T11:08:21Z",
"name": "aaa",
"namespace": "kube-system",
"resourceVersion": "34488887",
"selfLink": "/api/v1/namespaces/kube-system/secrets/aaa",
"uid": "229edeb3-57bf-11ea-b366-42010a9c0093"
},
"type": "Opaque"
}
To get only the name of your Secret you need to run:
kubectl get secret aaa -n kube-system -o jsonpath='{.metadata.name}'
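To list the names of all Secrets in the namespace rather than a single one (a small extension of the same idea, not part of the original question), you can iterate over the items list:
kubectl get secrets -n kube-system -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'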
I think this should work:
kubectl get secrets -o=jsonpath="{.items[*].metadata.name}" | grep -v HEAD | head -n1
Check the link below for more info:
https://kubernetes.io/docs/reference/kubectl/jsonpath/
I am following the KubeEdge v1.0.0 deployment on Katacoda, and I am executing the following command:
kubectl apply -f $GOPATH/src/github.com/kubeedge/kubeedge/build/node.json -s <kubedge-node-ip-address>:8080
It gives me an error
error: unable to recognize "/root/kubeedge/src/github.com/kubeedge/kubeedge/build/node.json": no matches for kind "Node" in version "v1"
I tried searching for this error but found no relevant answers. Does anyone have an idea on how to get through this?
Below is the content of my node.json file:
{
"kind": "Node",
"apiVersion": "v1",
"metadata": {
"name": "edge-node-1",
"labels": {
"name": "edge-node",
"node-role.kubernetes.io/edge": ""
}
}
}
I have reproduced it on Katacoda and in my case it works perfectly. I recommend you go through the tutorial once again and take each step carefully.
You need to pay attention to step 7, where you change metadata.name to the name of the edge node:
vim $GOPATH/src/github.com/kubeedge/kubeedge/build/node.json
{
"kind": "Node",
"apiVersion": "v1",
"metadata": {
"name": "edge-node",
"labels": {
"name": "edge-node",
"node-role.kubernetes.io/edge": ""
}
}
}
Then, execute the following command, where you need to change the IP address:
kubectl apply -f $GOPATH/src/github.com/kubeedge/kubeedge/build/node.json -s <kubedge-node-ip-address>:8080
Another command to check whether the correct API version was used is:
kubectl explain node -s <kubedge-node-ip-address>:8080
After successful creation of the node, you should see:
node/edge-node created
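You can also list the registered nodes against the same API server to confirm that the edge node shows up (a quick optional check, reusing the address placeholder from above):
kubectl get nodes -s <kubedge-node-ip-address>:8080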
I've been working with a bunch of k8s clusters for a while, using kubectl from the command line to examine information. I don't actually call kubectl directly, I wrap it in multiple scripting layers. I also don't use contexts, as it's much easier for me to specify different clusters in a different way. The resulting kubectl command line has explicit --server, --namespace, and --token parameters (and one other flag to disable tls verify).
This all works fine. I have no trouble with this.
However, I'm now trying to use telepresence, which doesn't give me a choice (yet) of not using contexts to configure this. So, I now have to figure out how to use contexts.
I ran the following (approximate) command:
kubectl config set-context mycontext --server=https://host:port --namespace=abc-def-ghi --insecure-skip-tls-verify=true --token=mytoken
And it said: "Context "mycontext" modified."
I then ran "kubectl config view -o json" and got this:
{
"kind": "Config",
"apiVersion": "v1",
"preferences": {},
"clusters": [],
"users": [],
"contexts": [
{
"name": "mycontext",
"context": {
"cluster": "",
"user": "",
"namespace": "abc-def-ghi"
}
}
],
"current-context": "mycontext"
}
That doesn't look right to me.
I then ran something like this:
telepresence --verbose --swap-deployment mydeployment --expose 8080 --run java -jar target/my.jar -Xdebug -Xrunjdwp:transport=dt_socket,address=5000,server=y,suspend=n
And it said this:
T: Error: Namespace 'abc-def-ghi' does not exist
Update:
And I can confirm that this isn't a problem with telepresence. If I just run "kubectl get pods", it fails, saying "The connection to the server localhost:8080 was refused". That tells me it obviously can't connect to the k8s server. The key is my "set-context" command. It's obviously not working, and I don't understand what I'm missing.
You don't have any clusters or credentials defined in your configuration. First, you need to define a cluster:
$ kubectl config set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file
Then something like this for the user:
$ kubectl config set-credentials developer --client-certificate=fake-cert-file --client-key=fake-key-file
Then you define your context based on your cluster, user and namespace:
$ kubectl config set-context dev-frontend --cluster=development --namespace=frontend --user=developer
More information here
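You can check which context is active and what it points to at any time with the standard config subcommands (shown here just as a quick sanity check):
kubectl config current-context
kubectl config get-contexts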
Your config should look something like this:
$ kubectl config view -o json
{
"kind": "Config",
"apiVersion": "v1",
"preferences": {},
"clusters": [
{
"name": "development",
"cluster": {
"server": "https://1.2.3.4",
"certificate-authority-data": "DATA+OMITTED"
}
}
],
"users": [
{
"name": "developer",
"user": {
"client-certificate": "fake-cert-file",
"client-key": "fake-key-seefile"
}
}
],
"contexts": [
{
"name": "dev-frontend",
"context": {
"cluster": "development",
"user": "developer",
"namespace": "frontend"
}
}
],
"current-context": "dev-frontend"
}
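Since your original wrapper passes --server, --token and --insecure-skip-tls-verify rather than certificate files, the same three steps can also be done with those options. This is only a sketch with placeholder names (mycluster and myuser are made up; the host, port, token and namespace are the ones from your command):
kubectl config set-cluster mycluster --server=https://host:port --insecure-skip-tls-verify=true
kubectl config set-credentials myuser --token=mytoken
kubectl config set-context mycontext --cluster=mycluster --user=myuser --namespace=abc-def-ghi
kubectl config use-context mycontext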