K8s API: Query a single container's log within a Pod of several containers

I am trying to query a specific container's logs within a pod of several containers:
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/namespaces/demo/pods/mypod-fgsardg4-dfsdf/log
How do I specify a particular container within this pod? I see that the container name is part of the query per https://kubernetes.io/docs/reference/kubernetes-api/workloads-resources/pod-v1/, but I am not sure what it means by "in query".
This type of request fails:
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/namespaces/demo/pods/mypod-fgsardg4-dfsdf/containername/log

Figured it out:
GET /api/v1/namespaces/{namespace}/pods/{name}/log?container=test
Just didn't quite have the API syntax all the way there.
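For completeness, here is the original curl call with the container query parameter added (the container name below is a placeholder):
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET "${APISERVER}/api/v1/namespaces/demo/pods/mypod-fgsardg4-dfsdf/log?container=containername"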

Related

kubectl patch doesn't update status subresource

I am trying to update the status subresource for a Custom Resource, and I see a discrepancy between the curl and kubectl patch commands. When I use the curl call it works perfectly fine, but when I use the kubectl patch command it says patched but makes no change. Here are the commands that I used:
Using Curl:
When I connect to kubectl proxy and run the curl call below, it succeeds and updates the status subresource on my CR.
curl -XPATCH -H "Accept: application/json" -H "Content-Type: application/json-patch+json" --data '[{"op": "replace", "path": "/status/state", "value": "newState"}]' 'http://127.0.0.1:8001/apis/acme.com/v1alpha1/namespaces/acme/myresource/default/status'
Kubectl patch command:
Using kubectl patch reports that the CR is patched but with no change, and the status subresource is not updated.
$ kubectl -n acme patch myresource default --type='json' -p='[{"op": "replace", "path": "/status/state", "value":"newState"}]'
myresource.acme.com/default patched (no change)
However, when I kubectl patch other parts of the resource, like spec, it works fine. Am I missing something here?
As of kubectl v1.24, it is possible to patch subresources with an additional flag, e.g. --subresource=status. This flag is considered "Alpha" but does not require enabling a feature gate.
As an example, with a yaml merge:
kubectl patch MyCrd myresource --type=merge --subresource status --patch 'status: {healthState: InSync}'
The Sysdig "What's New?" for v1.24 includes some more words about this flag:
Some kubectl commands like get, patch, edit, and replace will now contain a new flag --subresource=[subresource-name], which will allow fetching and updating status and scale subresources for all API resources.
You now can stop using complex curl commands to directly update subresources.
The --subresource flag is scheduled for promotion to "Beta" in Kubernetes v1.27 through KEP-2590: graduate kubectl subresource support to beta. The lifecycle of this feature can be tracked in #2590 Add subresource support to kubectl.
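Applied to the original command, the same JSON patch should then update the status (a sketch, assuming kubectl v1.24+):
kubectl -n acme patch myresource default --subresource=status --type='json' -p='[{"op": "replace", "path": "/status/state", "value":"newState"}]'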

Kubernetes : How to scale a deployment from another service/pod?

I have 2 services. Service A and Service B. They correspond to deployments dA and dB.
I set up my cluster and start both services/deployments. Service A is reachable from the external world. External World --> Service A <--> Service B.
How can I scale dB (change replicaCount and run kubectl apply, or kubectl scale) from within Service A while it is responding to a user request?
For example, if a user being served by Service A wants some extra resource in my app, I would like to provide that by adding an extra pod to dB. How do I do this programmatically?
Every Pod, unless it opts out, has a ServiceAccount token injected into it, which enables it to interact with the Kubernetes API according to the Role associated with the ServiceAccount.
Thus, one can use any number of Kubernetes libraries -- most of which are "in cluster" aware, meaning they don't need any further configuration to know about that injected ServiceAccount token and how to use it -- to issue scale events against any resource the ServiceAccount's Role is authorized to use.
You can make it as simple or as complex as you'd like, but the tl;dr is akin to:
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
--header "Accept: application/json" \
--header "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}/api/v1/namespaces
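To actually scale dB rather than just list namespaces, the same token can PATCH the Deployment's scale subresource. A minimal sketch, assuming dB lives in the default namespace and the ServiceAccount's Role allows patching deployments/scale:
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
--header "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
--header "Content-Type: application/merge-patch+json" \
--request PATCH \
--data '{"spec": {"replicas": 3}}' \
https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}/apis/apps/v1/namespaces/default/deployments/dB/scale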

Proxy K8S app delegating authentication of requests from other pods

Background
I have a K8S cluster with a number of different pods that have their own specific service accounts, cluster roles, and cluster role bindings, so that they can execute various read/write requests directly with the K8S REST API. There are some complicated requests that can be issued, and I'd like to make a function to wrap the complex logic. However, the various services in the cluster are written in multiple (i.e. 6+) programming languages, and there does not (yet) seem to be a trivial way to allow all these services to directly re-use this code.
I'm considering creating a "proxy" micro-service, that exposes its own REST API, and issues the necessary requests and handles the "complex logic" on behalf of the client.
Problem
The only problem is that, with the current deployment model, a client could request that the proxy micro-service execute an HTTP request that the client itself isn't authorized to make.
Question
Is there a trivial/straightforward way for one pod, for example, to identify the client pod, and execute some kind of query/result-of-policy operation (i.e. by delegating the authentication to the K8S cluster authentication mechanism itself) to determine if it should honor the request from the client pod?
The Kubernetes authentication model defines how a particular user or service account identifies itself to the cluster, while the authorization methods determine whether a request from that identity has sufficient permissions to act on cluster resources/objects.
Since you already use a dedicated service account per Pod across the cluster and grant each one specific RBAC rules, you might be able to use the SelfSubjectAccessReview API to inspect requests to the k8s REST API and determine whether the client Pod's service account has the appropriate permission to perform an action in the target Pod's namespace.
The same check can be made with the kubectl auth can-i subcommand by supplying the identity to impersonate.
I assume you might also be able to query the k8s authorization API group via an HTTP request and then parse the structured JSON/YAML response, as in the example below.
A regular kubectl auth can-i command to check whether the default SA can retrieve data about Pods in the default namespace:
kubectl auth can-i get pod --as system:serviceaccount:default:default
The equivalent method via an HTTP call to the k8s REST API, using JSON content with Bearer token authentication:
curl -k \
-X POST \
-d @- \
-H "Authorization: Bearer $MY_TOKEN" \
-H 'Accept: application/json' \
-H "Impersonate-User: system:serviceaccount:default:default" \
-H 'Content-Type: application/json' \
https://<API-Server>/apis/authorization.k8s.io/v1/selfsubjectaccessreviews <<'EOF'
{
"kind": "SelfSubjectAccessReview",
"apiVersion": "authorization.k8s.io/v1",
"spec":{"resourceAttributes":{"namespace":"default","verb":"get","resource":"pods"}}
}
EOF
Output:
.... "status": {
"allowed": true,
"reason": "RBAC: allowed by RoleBinding ....

How to get running pod status via Rest API

Any idea how to get a pod's status via the Kubernetes REST API, for a pod with a known name?
I can do it via kubectl by just typing "kubectl get pods --all-namespaces", since the output lists STATUS as a separate column, but I'm not sure which REST API to use to get the STATUS of a running pod.
Thank you
You can just query the API server:
curl -k -X GET -H "Authorization: Bearer [REDACTED]" \
https://127.0.0.1:6443/api/v1/pods
If you want to get the status you can pipe them through something like jq:
curl -k -X GET -H "Authorization: Bearer [REDACTED]" \
https://127.0.0.1:6443/api/v1/pods \
| jq '.items[] | .metadata.name + " " + .status.phase'
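Since the pod name is known, you can also fetch that pod directly and extract just the phase (namespace and pod name below are placeholders):
curl -k -X GET -H "Authorization: Bearer [REDACTED]" \
https://127.0.0.1:6443/api/v1/namespaces/default/pods/mypod \
| jq -r '.status.phase'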
When the kubectl command is known but the corresponding REST API call is not, run the command with the -v=9 option as below. Note that kubectl supports only a subset of operations imperatively (get, delete, create, etc.), so it's better to get familiar with the REST API.
kubectl -v=9 get pods
The above will output the REST API call it makes. This can be modified as appropriate, and the output can be piped to jq to extract a subset of the data.

Kubernetes REST API - Create deployment

I was looking at the Kubernetes API endpoints listed here. I'm trying to create a deployment, which can be done from the terminal using kubectl run NAME --image=IMAGE --port=PORT. However, I can't seem to find any endpoint for this command in the link I posted above. I can create a pod using curl -X POST /api/v1/namespaces/{namespace}/pods and then delete it using curl -X DELETE http://localhost:8080/api/v1/namespaces/default/pods/pod-name, where pod-name has to be a single pod (if there are 100 pods, each has to be deleted individually). Is there an API endpoint for creating and deleting deployments?
To make it easier to eliminate fields or restructure resource representations, Kubernetes supports multiple API versions, each at a different API path, such as /api/v1 or /apis/extensions/v1beta1; to extend the Kubernetes API, API groups are implemented.
Currently there are several API groups in use:
the core (oftentimes called legacy, due to not having an explicit group name) group, which is at REST path /api/v1 and is not specified as part of the apiVersion field, e.g. apiVersion: v1.
the named groups are at REST path /apis/$GROUP_NAME/$VERSION, and use apiVersion: $GROUP_NAME/$VERSION (e.g. apiVersion: batch/v1). Full list of supported API groups can be seen in Kubernetes API reference.
To manage extensions resources such as Ingress, Deployments, and ReplicaSets refer to Extensions API reference.
As described in the reference, to create a Deployment:
POST /apis/extensions/v1beta1/namespaces/{namespace}/deployments
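Note that extensions/v1beta1 has since been deprecated and removed; on current clusters, Deployments live in the apps group, so the equivalent call is:
POST /apis/apps/v1/namespaces/{namespace}/deployments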
I debugged this by running kubectl with verbose logging: kubectl --v=9 update -f dev_inventory.yaml.
It showed the use of an API call like this one:
curl -i http://localhost:8001/apis/extensions/v1beta1/namespaces/default/deployments
Note that the first path element is apis, not the normal api. I don't know why it's like this, but the command above works.
I might be too late to help in this question, but here is what I tried on v1.9 to deploy a StatefulSet:
curl -kL -XPOST -H "Accept: application/json" -H "Content-Type: application/json" \
-H "Authorization: Bearer <*token*>" --data @statefulset.json \
https://<*ip*>:6443/apis/apps/v1/namespaces/eng-pst/statefulsets
I converted statefulset.yaml to JSON because I saw that the data format the API used for the POST was JSON.
I ran this command to find out the API call i need to make for my k8s object:
kubectl --v=10 apply -f statefulset.yaml
(you might not need the v=10 level, but I wanted as much info as I could get)
The Kubernetes REST API documentation is quite sophisticated, but unfortunately the Deployment documentation is missing.
Since the REST schema is identical to other resources, you can figure out the REST calls:
GET retrieve a deployment by name:
curl -H "Authorization: Bearer ${KEY}" ${API_URL}/apis/extensions/v1beta1/namespaces/${namespace}/deployments/${NAME}
POST create a new deployment
curl -X POST -d #deployment-definition.json -H "Content-Type: application/json" -H "Authorization: Bearer ${KEY}" ${API_URL}/apis/extensions/v1beta1/namespaces/${namespace}/deployments
You should be able to use these calls right away once you provide values for the placeholders:
API key ${KEY}
API url ${API_URL}
Deployment name ${NAME}
Namespace ${namespace}
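If you don't already have one, a minimal deployment-definition.json might look like the sketch below (this uses the newer apps/v1 schema, which requires a selector; adjust the URL to /apis/apps/v1/... to match; all names and the image are placeholders):
{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": { "name": "nginx-deployment" },
  "spec": {
    "replicas": 1,
    "selector": { "matchLabels": { "app": "nginx" } },
    "template": {
      "metadata": { "labels": { "app": "nginx" } },
      "spec": {
        "containers": [
          { "name": "nginx", "image": "nginx:1.21", "ports": [ { "containerPort": 80 } ] }
        ]
      }
    }
  }
}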
Have you tried the analogous URL? Note that Deployments are not part of the core /api/v1 group and the resource name is plural, so the path is:
http://localhost:8080/apis/extensions/v1beta1/namespaces/default/deployments/deployment-name