As you probably know, the Kubernetes Dashboard lets the user upload any YAML file to the API server.
I browsed the API with Swagger UI but I cannot find any API server endpoint to which I could PUT/POST/DELETE a generic (and potentially multi-resource) Kubernetes YAML file, the way the dashboard does.
In other words, I need the API server endpoint used by kubectl when the command is
kubectl create -f myResources.yaml
No such endpoint exists. Adding --v=6 to the kubectl command reveals the API calls it makes: it iterates over the objects in the file and POSTs each one individually.
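If you need a programmatic equivalent of kubectl create -f for a multi-resource YAML file, one option is the helper in the official Python client, which performs the same per-object iteration. A minimal sketch (the file name is just a placeholder, and this assumes the kubernetes package is installed):

from kubernetes import client, config, utils

# Roughly what "kubectl create -f myResources.yaml" does: parse the file and
# POST each object in it individually.
config.load_kube_config()
k8s_client = client.ApiClient()
utils.create_from_yaml(k8s_client, "myResources.yaml")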
Related
I have a bunch of cron jobs that sit in an EKS cluster and would like to trigger them via an HTTP call. Does such an API exist in Kubernetes? If not, what else can be done?
Every action in Kubernetes can be invoked via a REST API call. This is also stated as such in the docs.
There is a full API reference for the Kubernetes API you can review.
In fact, kubectl uses HTTP under the hood. You can see those HTTP calls by using the -v flag with some verbosity level. For example:
$ kubectl get pods -v=6
I1206 00:06:33.591871 19308 loader.go:372] Config loaded from file: /home/blue/.kube/config
I1206 00:06:33.826009 19308 round_trippers.go:454] GET https://mycluster.azmk8s.io:443/api?timeout=32s 200 OK in 233 milliseconds
...
So you could work out the call you need by looking at how kubectl does it. But given that kubectl already uses HTTP, it may be easier to just use kubectl directly.
A cron job by definition is triggered by a time event (every hour, every month).
If you want to force a trigger you can use:
kubectl create job --from=cronjob/<cron-job-name> <job-name> -n <namespace>
This uses the Kubernetes API, which is a RESTful service, so I think it fulfills your request.
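If you want to do the same thing through the API rather than the CLI, a rough sketch with the Python client is to read the CronJob, copy its job template, and POST a new Job from it. The names and namespace below are placeholders, and this assumes CronJobs are served from batch/v1 (Kubernetes 1.21+):

from kubernetes import client, config

config.load_kube_config()          # or load_incluster_config() inside the cluster
batch = client.BatchV1Api()

# Read the CronJob and reuse its job template, mimicking
# "kubectl create job --from=cronjob/<cron-job-name> <job-name>"
cron = batch.read_namespaced_cron_job(name="my-cronjob", namespace="default")
manual_job = client.V1Job(
    metadata=client.V1ObjectMeta(name="my-cronjob-manual-run"),
    spec=cron.spec.job_template.spec,
)
batch.create_namespaced_job(namespace="default", body=manual_job)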
How can we obtain the count of pods running in a GKE cluster? I found there are ways to get the node count, but we need the pod count as well. It would be better if we can use something that does not need logging in GCP Operations.
You can do it with the Kubernetes Python client library, as shown in this question posted by Pradeep Padmanaban C, where he was looking for a more effective way of doing it. His example is actually the best you can do for such an operation, as there is no specific method that would let you just count pods without retrieving their entire manifests:
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# List pods in all namespaces and count them
ret_pod = v1.list_pod_for_all_namespaces(watch=False)
print(len(ret_pod.items))
You can also use a different method, which retrieves pods only from a specific namespace, e.g.:
list_namespaced_pod("default")
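A minimal sketch of that namespaced variant (the namespace here is just an example):

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Count only the pods in the "default" namespace
pods = v1.list_namespaced_pod("default", watch=False)
print(len(pods.items))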
With kubectl you can do it as follows (as proposed here by RammusXu):
kubectl get pods --all-namespaces --no-headers | wc -l
You can directly access the Kubernetes API using a RESTful API call. You will need to make sure you authenticate the call by including a bearer token.
Once you are able to query the API server directly, you can use GET <master_endpoint>/api/v1/pods to list all the pods in the cluster. You can also scope the query to a specific namespace with /api/v1/namespaces/<namespace>/pods.
Keep in mind that the kubectl CLI tool is just a wrapper for API calls; each kubectl command forms a RESTful API call in a format similar to the one listed above, so any interaction you have with the cluster through kubectl can also be achieved through RESTful API calls.
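As a rough sketch of such a direct call in Python (the API server URL is a placeholder; the token and CA paths are the ones mounted into a pod by default, so adjust them if you run this elsewhere):

import requests

API_SERVER = "https://<master_endpoint>"   # placeholder for your API server URL
TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"
CA_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"

with open(TOKEN_PATH) as f:
    token = f.read().strip()

# GET /api/v1/pods lists pods across all namespaces
resp = requests.get(
    f"{API_SERVER}/api/v1/pods",
    headers={"Authorization": f"Bearer {token}"},
    verify=CA_PATH,
)
print(len(resp.json()["items"]))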
Using the command-line utility kubectl we can list custom resource instances as follows:
kubectl get <customresource_kind>
In a similar fashion, do we have a REST API to achieve the same? i.e. an API that takes the kind of the CustomResource and lists all the instances created?
I am referring to this API Reference :
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/
You can list custom resources like every other API resource using the Kubernetes REST API.
The final URL path will differ depending on the object's scope: Cluster or Namespaced.
The general rule for constructing the URL path is described here in official documentation.
Just to give you an example based on Calico's clusterinformations.crd.projectcalico.org (v1):
kubectl proxy --port=8080 &
curl http://localhost:8080/apis/crd.projectcalico.org/v1/clusterinformations | jq '.items[].metadata.name'
"default" <- I have only one instance of this type of custom resource
Is it possible to invoke a Kubernetes CronJob from inside a pod? I have to run this job from the application running in the pod.
Do I have to use kubectl inside the pod to execute the job?
Appreciate your help
Use the Default Service Account to access the API server. When you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace. If you get the raw json or yaml for a pod you have created (for example, kubectl get pods/<podname> -o yaml), you can see the spec.serviceAccountName field has been automatically set.
You can access the API from inside a pod using automatically mounted service account credentials, as described in Accessing the Cluster. The API permissions of the service account depend on the authorization plugin and policy in use.
In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
So the first task is either to grant the default service account of the pod the permissions it needs, or to create a custom service account and use it inside the pod.
Then programmatically access the API server using that service account to create the job you need.
It could be just a simple curl POST to the API server from inside the pod with the JSON for the job creation.
How do I access the Kubernetes api from within a pod container?
You can also use the SDK for your application's language; for example, if you have a Python application, you can import kubernetes and run the job.
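A minimal sketch of that SDK approach from inside a pod, assuming the kubernetes package is installed in the image and the pod's service account has permission to read CronJobs and create Jobs (names and namespace are placeholders):

from kubernetes import client, config

# Inside a pod, authenticate with the automatically mounted service account
config.load_incluster_config()
batch = client.BatchV1Api()

# Trigger the CronJob now by creating a Job from its template
cron = batch.read_namespaced_cron_job(name="my-cronjob", namespace="default")
job = client.V1Job(
    metadata=client.V1ObjectMeta(name="my-cronjob-triggered"),
    spec=cron.spec.job_template.spec,
)
batch.create_namespaced_job(namespace="default", body=job)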
My objective is to fetch the time series of a metric for a pod running on a kubernetes cluster on GKE using the Stackdriver TimeSeries REST API.
I have ensured that Stackdriver monitoring and logging are enabled on the kubernetes cluster.
Currently, I am able to fetch the time series of all the resources available in a cluster using the following filter:
metric.type="container.googleapis.com/container/cpu/usage_time" AND resource.labels.cluster_name="<MY_CLUSTER_NAME>"
In order to fetch the time series of a given pod id, I am using the following filter:
metric.type="container.googleapis.com/container/cpu/usage_time" AND resource.labels.cluster_name="<MY_CLUSTER_NAME>" AND resource.labels.pod_id="<POD_ID>"
This filter returns an HTTP 200 OK with an empty response body. I have found the pod ID from the metadata.uid field received in the response of the following kubectl command:
kubectl get deploy -n default <SERVICE_NAME> -o yaml
However, when I use the Pod ID of a background container spawned by GKE/Stackdriver, I do get the time series values.
Since I am able to see Stackdriver metrics of my pod on the GKE UI, I believe I should also get the metric values using the REST API.
My doubts/questions are:
Am I fetching the Pod ID of my pod correctly using kubectl?
Could there be some issue with my cluster setup/service deployment due to which I'm unable to fetch the metrics?
Is there some other way in which I can get the time series of my pod using the REST APIs?
I wouldn't rely on kubectl get deploy for pod ids. I would get them with something like kubectl -n default get pods | grep <prefix-for-your-pod> | awk '{print $1}'
I don't think so, but the best way to find out is opening a support ticket with GCP if you have any doubts.
Not that I'm aware of; Stackdriver is the monitoring solution in GCP. Again, you can check with GCP support. There are other tools you can use to get metrics from Kubernetes, like Prometheus. There are multiple guides on the web on how to set it up with Grafana on k8s; this is one example.
Hope it helps!
Am I fetching the Pod ID of my pod correctly using kubectl?
You could use JSONPath as the output format with kubectl, in this case iterating over the pods and fetching the metadata.name and metadata.uid fields:
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.uid}{"\n"}{end}'
which will output something like this:
nginx-65899c769f-2j775 d4fr5t6-bc2f-11e8-81e8-42010a84011f
nginx2-77b5c9d48c-7qlps 4f5gh6r-bc37-11e8-81e8-42010a84011f
Could there be some issue with my cluster setup/service deployment due to which I'm unable to fetch the metrics?
As #Rico mentioned in his answer, contacting the GCP support could be a way forward if you don't get further with the troubleshooting, see below.
Is there some other way in which I can get the time series of my pod using the REST APIs?
You could use the APIs Explorer or the Metrics Explorer from within the Stackdriver portal. There are some good troubleshooting tips here with a link to the APIs Explorer. In the Stackdriver Metrics Explorer it's fairly easy to reassemble the filter you've used, with dropdown lists to choose e.g. a particular pod_id.
Taken from the Troubleshooting the Monitoring guide (linked above) regarding an empty HTTP 200 response on filtered queries:
If your API call returns status code 200 and an empty response, there are several possibilities:
If your call uses a filter, then the filter might not have matched anything. The filter match is case-sensitive. To resolve filter problems, start by specifying only one filter component, such as metric.type, and see if you get results. Add the other filter components one-by-one.
If you are working with a custom metric, you might not have specified the project where your custom metric is defined.
I found this link when reading through the documentation of the Monitoring API. That link will take you to the APIs Explorer with some pre-filled fields; change these accordingly and add your own filter.
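To make that concrete, here is a hedged sketch of the underlying REST call (projects.timeSeries.list) in Python; the project, cluster and pod_id values are placeholders, and it assumes application-default credentials with monitoring read access are available:

import datetime
import google.auth
from google.auth.transport.requests import AuthorizedSession

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/monitoring.read"])
session = AuthorizedSession(credentials)

PROJECT_ID = "my-project"          # placeholder
now = datetime.datetime.utcnow()
params = {
    "filter": (
        'metric.type="container.googleapis.com/container/cpu/usage_time" '
        'AND resource.labels.cluster_name="<MY_CLUSTER_NAME>" '
        'AND resource.labels.pod_id="<POD_ID>"'
    ),
    "interval.endTime": now.isoformat("T") + "Z",
    "interval.startTime": (now - datetime.timedelta(hours=1)).isoformat("T") + "Z",
}
resp = session.get(
    f"https://monitoring.googleapis.com/v3/projects/{PROJECT_ID}/timeSeries",
    params=params,
)
print(resp.json())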
I have not tested more using the REST API at the moment but hopefully this could get you forward.