Kubernetes API server: filtering by field at request time

I'm trying to get all the secrets in the cluster of type helm.sh/release.v1:
$ curl -X GET $APISERVER/api/v1/secrets --header "Authorization: Bearer $TOKEN" --insecure
{
  "kind": "SecretList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/secrets",
    "resourceVersion": "442181"
  },
  "items": [
    {
      "metadata": {
        ...
      },
      "data": {
        ...
      },
      "type": "helm.sh/release.v1"
    },
    {
      "metadata": {
        ...
      },
      "data": {
        ...
      },
      "type": "kubernetes.io/service-account-token"
    },
    {
      "metadata": {
        ...
      },
      "data": {
        ...
      },
      "type": "kubernetes.io/service-account-token"
    },
    ...
  ]
}
I can use the command above and then filter the output myself (with jq or whatever), but I wonder if there's an option to filter on the API side by adding query parameters or something, for example (this didn't work):
curl -X GET $APISERVER/api/v1/secrets?type=<value>
Any idea how to filter by a specific field (type)? Can I also request only specific fields in the response (if I don't care about the data, for instance)?

I'm going to use HTTP requests from my application (Python) that runs within a pod in the cluster. I am trying to be more efficient and ask only for what I need (only a specific type, not all secrets in the cluster).
If your application is written in Python, maybe it's a good idea to use the Kubernetes Python client library to get the secrets?
If you want to get all the secrets in the cluster of type helm.sh/release.v1, you can do it with the following Python code:
from kubernetes import client, config

# Since the application runs inside a pod, config.load_incluster_config() may be
# the better fit; load_kube_config() reads a local kubeconfig instead.
config.load_kube_config()

v1 = client.CoreV1Api()
list_secrets = v1.list_secret_for_all_namespaces(field_selector='type=helm.sh/release.v1')
If you also want to count them, use:
print(len(list_secrets.items))
To print the first secret's name, use:
print(list_secrets.items[0].metadata.name)
To retrieve its data:
print(list_secrets.items[0].data)
and so on...
More details, including the arguments that can be used with this method, can be found in the CoreV1Api documentation of the Kubernetes Python client (just search for list_secret_for_all_namespaces):
list_secret_for_all_namespaces
V1SecretList list_secret_for_all_namespaces(allow_watch_bookmarks=allow_watch_bookmarks, _continue=_continue, field_selector=field_selector, label_selector=label_selector, limit=limit, pretty=pretty, resource_version=resource_version, timeout_seconds=timeout_seconds, watch=watch)
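If you'd rather stay with plain HTTP requests from your application, the same filtering is available on the API itself via the fieldSelector query parameter (URL-encoded below). A minimal sketch, reusing the $APISERVER and $TOKEN variables from the question:
curl -X GET "$APISERVER/api/v1/secrets?fieldSelector=type%3Dhelm.sh%2Frelease.v1" \
--header "Authorization: Bearer $TOKEN" --insecure
As far as I know, the core API does not support requesting only specific fields of each object in the response, so dropping unwanted fields such as data still has to happen client-side.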

Related

PUT k8s deployment returns 404

According to the Replacement section of the Kubernetes API reference v1.24, I should be able to create a deployment with a PUT /apis/apps/v1/namespaces/{namespace}/deployments/{name} HTTP request; the success response here is 201 Created. However, when I try the following, I get a 404 Not Found, which is of course accurate but unwanted: as documented, PUT requests should be treated as create operations if the resource does not yet exist. Updating an existing deployment does work (and returns the expected 200 OK HTTP response). Is there any documentation regarding this? Or is the request somehow incorrect? Thanks.
➜ ~ curl --request PUT \
--url http://localhost:8080/apis/apps/v1/namespaces/ns/deployments/nginx-deployment \
--header 'content-type: application/json' \
--data '{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": {
    "name": "nginx-deployment",
    "labels": {
      "app": "nginx"
    }
  },
  "spec": {
    "replicas": 3,
    "selector": {
      "matchLabels": {
        "app": "nginx"
      }
    },
    "template": {
      "metadata": {
        "labels": {
          "app": "nginx"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "nginx",
            "image": "nginx:1.7.9",
            "ports": [
              {
                "containerPort": 80
              }
            ]
          }
        ]
      }
    }
  }
}'
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "deployments.apps \"nginx-deployment\" not found",
  "reason": "NotFound",
  "details": {
    "name": "nginx-deployment",
    "group": "apps",
    "kind": "deployments"
  },
  "code": 404
}
According to the documentation you provided,
PUT /apis/apps/v1/namespaces/{namespace}/deployments/{name}
is meant to "replace the specified Deployment", while a Deployment is created with a POST:
create a Deployment
HTTP Request
POST /apis/apps/v1/namespaces/{namespace}/deployments
You are correct that the documentation also states:
For PUT requests, Kubernetes internally classifies these as either create or update based on the state of the existing object
so there seems to be a contradiction, but the Deployment API spec states that POST should be used to create a deployment and PUT to update it.
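In practice, then, the Deployment has to be created by POSTing the same body to the collection endpoint (note: no name in the path). A sketch, assuming the JSON payload from the question has been saved to deployment.json:
curl --request POST \
--url http://localhost:8080/apis/apps/v1/namespaces/ns/deployments \
--header 'content-type: application/json' \
--data @deployment.json
Once the Deployment exists, the original PUT request should work as an update and return 200 OK.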

Using REST API to create alerting rule in Kibana fails on 400 "Invalid action groups: default"

I have Elastic Cloud (ELK) v7.13.2 and am trying to create an alert rule with a Slack action via the REST API. This is my curl invocation:
curl -u ****** -s -H 'kbn-xsrf: true' -H 'Content-Type: application/json' https://***********.westeurope.azure.elastic-cloud.com:9243/api/alerting/rule -X POST -d @src/rules/cpu_utilization.json
I am expecting a new rule to be created, but unfortunately I am getting the following error:
{
  "statusCode": 400,
  "error": "Bad Request",
  "message": "Invalid action groups: default"
}
The contents of src/rules/cpu_utilization.json are:
{
  "params": {
    "nodeType": "host",
    "criteria": [
      {
        "comparator": ">",
        "timeSize": 1,
        "metric": "cpu",
        "threshold": [
          80
        ],
        "timeUnit": "m"
      }
    ],
    "sourceId": "default"
  },
  "consumer": "alerts",
  "schedule": {
    "interval": "1m"
  },
  "tags": [],
  "name": "CPU2",
  "throttle": "1000d",
  "enabled": true,
  "rule_type_id": "metrics.alert.inventory.threshold",
  "notify_when": "onThrottleInterval",
  "actions": [
    {
      "group": "default",
      "id": "fce4c27f-d22a-4209-858c-253a06511c1b",
      "params": {
        "message": "{{alertName}} - {{context.group}} is in a state of {{context.alertState}}\n\nReason:\n{{context.reason}}"
      }
    }
  ]
}
The documentation clearly says:
Properties of the action objects:
group
(Required, string) Grouping actions is recommended for escalations for different types of alerts. If you don’t need this, set this value to default.
Is this a bug in ELK or am I doing something wrong? I am able to use the API for other purposes, like listing rules, deleting rules, etc. I am also capable of creating a rule without an action, but this doesn't seem to be too useful...
OK, I got an answer from Elastic support. Apparently, you can use another endpoint, GET /api/alerting/rule_types, to list all rule types. Then you need to find your rule type and look up its default_action_group_id property; it holds the correct value. E.g., in the above example it was:
"default_action_group_id": "metrics.inventory_threshold.fired"

Telepresence fails, saying my namespace doesn't exist, pointing to problems with my k8s context

I've been working with a bunch of k8s clusters for a while, using kubectl from the command line to examine information. I don't actually call kubectl directly; I wrap it in multiple scripting layers. I also don't use contexts, as it's much easier for me to specify different clusters in a different way. The resulting kubectl command line has explicit --server, --namespace, and --token parameters (and one other flag to disable TLS verification).
This all works fine. I have no trouble with this.
However, I'm now trying to use telepresence, which doesn't give me a choice (yet) of not using contexts to configure this. So, I now have to figure out how to use contexts.
I ran the following (approximate) command:
kubectl config set-context mycontext --server=https://host:port --namespace=abc-def-ghi --insecure-skip-tls-verify=true --token=mytoken
And it said: "Context "mycontext " modified."
I then ran "kubectl config view -o json" and got this:
{
  "kind": "Config",
  "apiVersion": "v1",
  "preferences": {},
  "clusters": [],
  "users": [],
  "contexts": [
    {
      "name": "mycontext",
      "context": {
        "cluster": "",
        "user": "",
        "namespace": "abc-def-ghi"
      }
    }
  ],
  "current-context": "mycontext"
}
That doesn't look right to me.
I then ran something like this:
telepresence --verbose --swap-deployment mydeployment --expose 8080 --run java -jar target/my.jar -Xdebug -Xrunjdwp:transport=dt_socket,address=5000,server=y,suspend=n
And it said this:
T: Error: Namespace 'abc-def-ghi' does not exist
Update:
And I can confirm that this isn't a problem with telepresence. If I just run "kubectl get pods", it fails, saying "The connection to the server localhost:8080 was refused". That tells me it can't connect to the k8s server at all. The key is my "set-context" command: it's obviously not working, and I don't understand what I'm missing.
You don't have any clusters or credentials defined in your configuration. First, you need to define a cluster:
$ kubectl config set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file
Then something like this for the user:
$ kubectl config set-credentials developer --client-certificate=fake-cert-file --client-key=fake-key-file
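Since you authenticate with a token rather than client certificates, set-credentials can also take a token directly; a sketch, assuming mytoken is the same token you already pass to kubectl:
$ kubectl config set-credentials developer --token=mytoken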
Then you define your context based on your cluster, user and namespace:
$ kubectl config set-context dev-frontend --cluster=development --namespace=frontend --user=developer
More information can be found in the Kubernetes documentation on configuring access to multiple clusters.
Your config should look something like this:
$ kubectl config view -o json
{
  "kind": "Config",
  "apiVersion": "v1",
  "preferences": {},
  "clusters": [
    {
      "name": "development",
      "cluster": {
        "server": "https://1.2.3.4",
        "certificate-authority-data": "DATA+OMITTED"
      }
    }
  ],
  "users": [
    {
      "name": "developer",
      "user": {
        "client-certificate": "fake-cert-file",
        "client-key": "fake-key-file"
      }
    }
  ],
  "contexts": [
    {
      "name": "dev-frontend",
      "context": {
        "cluster": "development",
        "user": "developer",
        "namespace": "frontend"
      }
    }
  ],
  "current-context": "dev-frontend"
}

How can I use the BigQuery REST API from the command line?

Attempting to make a plain GET request to one of the BigQuery REST APIs gives an error that looks like this:
curl https://www.googleapis.com/bigquery/v2/projects/$PROJECT_ID/jobs/$JOBID
Output:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "required",
        "message": "Login Required",
        "locationType": "header",
        "location": "Authorization",
        ...
What is the correct way to invoke one of the REST APIs from the command-line, such as the query or insert APIs? The API reference has a "Try this API", but the examples don't translate directly to something you can run from the command-line.
As a disclaimer: when working from the command line, the bq tool is usually sufficient, and for more complex use cases the BigQuery client libraries enable programming with BigQuery from multiple languages. It can still be useful to make plain requests to the REST APIs to see how certain APIs work at a low level, however.
First, make sure that you have installed the Google Cloud SDK. This should include the gcloud and bq command-line tools. If you haven't already, authorize your account by running this command from your terminal:
gcloud auth login
This should prompt you to log in and then give you an access code that you can paste into your terminal. (The exact process may change over time).
Now let's try a query using the BigQuery REST API, calling the jobs.query method. Modify this script with your own project name, which you can find from the Google Cloud Console, then paste the script into your terminal:
PROJECT="YOUR_PROJECT_NAME"
QUERY="\"SELECT 1 AS x, 'foo' AS y;\""
REQUEST="{\"kind\":\"bigquery#queryRequest\",\"useLegacySql\":false,\"query\":$QUERY}"
echo $REQUEST | \
curl -X POST -d @- -H "Content-Type: application/json" \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://www.googleapis.com/bigquery/v2/projects/$PROJECT/queries
If it worked, you should see output that looks like this:
{
  "kind": "bigquery#queryResponse",
  "schema": {
    "fields": [
      {
        "name": "x",
        "type": "INTEGER",
        "mode": "NULLABLE"
      },
      {
        "name": "y",
        "type": "STRING",
        "mode": "NULLABLE"
      }
    ]
  },
  "jobReference": {
    "projectId": "<your project ID>",
    "jobId": "<your job ID>"
  },
  "totalRows": "1",
  "rows": [
    {
      "f": [
        {
          "v": "1"
        },
        {
          "v": "foo"
        }
      ]
    }
  ],
  "totalBytesProcessed": "0",
  "jobComplete": true,
  "cacheHit": false
}
If you haven't set up the bq command-line tool, you can use bq init from your terminal to do so. Once you have, you can try running the same query using it:
bq query --use_legacy_sql=False "SELECT 1 AS x, 'foo' AS y;"
You can also see the REST API requests that the bq tool makes by passing the --apilog= option:
bq --apilog= query --use_legacy_sql=False "SELECT [1, 2, 3] AS x;"
Now let's try an example using the jobs.insert method instead of the query API. Run this script, replacing YOUR_PROJECT_NAME with your project name:
PROJECT="YOUR_PROJECT_NAME"
QUERY="\"SELECT 1 AS x, 'foo' AS y;\""
REQUEST="{\"configuration\":{\"query\":{\"useLegacySql\":false,\"query\":${QUERY}}}}"
echo $REQUEST | \
curl -X POST -d @- -H "Content-Type: application/json" \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://www.googleapis.com/bigquery/v2/projects/$PROJECT/jobs
Unlike the query API, which returned the query results immediately, here you will see a response describing the newly created job, which may still be running:
{
  "kind": "bigquery#job",
  "etag": "\"<etag string>\"",
  "id": "<project name>:<job ID>",
  "selfLink": "https://www.googleapis.com/bigquery/v2/projects/<project name>/jobs/<job ID>",
  "jobReference": {
    "projectId": "<project name>",
    "jobId": "<job ID>"
  },
  "configuration": {
    "query": {
      "query": "SELECT 1 AS x, 'foo' AS y;",
      "destinationTable": {
        "projectId": "<project name>",
        "datasetId": "<anonymous dataset>",
        "tableId": "<anonymous table>"
      },
      "createDisposition": "CREATE_IF_NEEDED",
      "writeDisposition": "WRITE_TRUNCATE",
      "useLegacySql": false
    }
  },
  "status": {
    "state": "RUNNING"
  },
  "statistics": {
    "creationTime": "<timestamp millis>",
    "startTime": "<timestamp millis>"
  },
  "user_email": "<your email address>"
}
Notice the status:
"status": {
"state": "RUNNING"
},
If you want to check on the job now, you can use the jobs.get method. Similar to before, run this from your terminal, using the job ID from the output in the previous step:
PROJECT="YOUR_PROJECT_NAME"
JOB_ID="YOUR_JOB_ID"
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://www.googleapis.com/bigquery/v2/projects/$PROJECT/jobs/$JOB_ID
If the query is done, you'll get a response that indicates as much:
...
"status": {
"state": "DONE"
},
...
Finally, we can make a request to fetch the query results, also using the REST API.
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://www.googleapis.com/bigquery/v2/projects/$PROJECT/queries/$JOB_ID
The output will look similar to when we used the jobs.query method above:
{
  "kind": "bigquery#getQueryResultsResponse",
  "etag": "\"<etag string>\"",
  "schema": {
    "fields": [
      {
        "name": "x",
        "type": "INTEGER",
        "mode": "NULLABLE"
      },
      {
        "name": "y",
        "type": "STRING",
        "mode": "NULLABLE"
      }
    ]
  },
  "jobReference": {
    "projectId": "<project ID>",
    "jobId": "<job ID>"
  },
  "totalRows": "1",
  "rows": [
    {
      "f": [
        {
          "v": "1"
        },
        {
          "v": "foo"
        }
      ]
    }
  ],
  "totalBytesProcessed": "0",
  "jobComplete": true,
  "cacheHit": true
}

How can I set a node to unschedulable status via the Kubernetes api?

I am attempting to emulate the behavior of kubectl patch. I'm sending an HTTP PATCH with a JSON payload of the following:
{
  "apiVersion": "v1",
  "kind": "Node",
  "metadata": {
    "name": "my-node-hostname"
  },
  "spec": {
    "unschedulable": true
  }
}
However, no matter how I seem to tweak this JSON, I keep getting a 415 and the following JSON status back:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "the server responded with the status code 415 but did not return more information",
  "details": {},
  "code": 415
}
Even with the debug verbosity on kube-apiserver set to 1000, I get no feedback about why the payload is wrong!
Is there a particular format that one should use in the JSON payload sent via PATCH to enable this to work?
After a helpful member of the Kubernetes Slack channel mentioned that I could see the payload kubectl patch sends by turning up its verbosity, it turns out that Kubernetes expects "Content-Type: application/strategic-merge-patch+json" when you send a PATCH payload like this.
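For reference, a minimal sketch of a working request, assuming the same $APISERVER and $TOKEN style of authentication as in the first question (with a strategic merge patch, only the fields being changed need to be sent):
curl -X PATCH \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/strategic-merge-patch+json" \
--data '{"spec":{"unschedulable":true}}' \
"$APISERVER/api/v1/nodes/my-node-hostname"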