I have a Kubernetes config with resource limits for each container, something similar to this:
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
Is it possible to retrieve the requests and limits configuration through the Kubernetes API, or is there any other way to access it?
Yes, everything in Kubernetes can be accessed via APIs. You can use the REST-API directly, but it is easiest to use a Kubernetes client library for your favorite programming language, because authentication can be a bit tricky otherwise.
Access Kubernetes API with curl using proxy
An example of accessing the API using curl is documented in Using kubectl proxy.
First, use kubectl proxy to access the API:
kubectl proxy --port=8080 &
Then use that port, e.g.:
curl http://localhost:8080/api/
with output:
{
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "10.0.1.149:443"
    }
  ]
}
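To answer the original question: with the proxy running, you can fetch a pod's full spec, which includes the requests and limits, straight from the API. A minimal sketch, assuming a pod named frontend in the default namespace and jq installed for filtering:

# Fetch the pod and pull out each container's name and resources (requests/limits)
curl -s http://localhost:8080/api/v1/namespaces/default/pods/frontend \
  | jq '.spec.containers[] | {name: .name, resources: .resources}'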
Looking for some ideas on how to expose an HTTP endpoint from a Kubernetes cluster that shows the Docker image tag for each service that is live, kept up to date as services are updated with newer tags.
For example, something like this: GET endpoint.com/api/metadata
{
  "foo-service": "registry.com/foo-service:1.0.1",
  "bar-service": "registry.com/bar-service:2.0.1"
}
When foo-service is deployed with a new tag registry.com/foo-service:1.0.2, I want the endpoint to reflect that change.
I can't just store the values as environment variables as it is not guaranteed the service that exposes that endpoint will be updated on each deploy.
Some previous ideas I had, but they don't seem clean:
- Update an external file in S3 to keep track of image tags on each deployment, and cache/load the data on each request to the endpoint.
- Update a key in Redis within the cluster and read from that.
Following up on @DavidMaze's suggestion, I ended up using the Kubernetes API to display a formatted version of the deployed services via a REST API endpoint.
Steps:
Attach a role with relevant permissions to a service account.
# This creates a Role that can be bound to a service account
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: service-role
  namespace: dev
rules:
- verbs:
  - list
  apiGroups:
  - apps
  resources:
  - deployments
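Note that the Role by itself grants nothing until it is bound to the service account; a minimal RoleBinding sketch, where app-service-account is a hypothetical name for the service account your pod runs as:

# Binds the Role above to the pod's service account
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: service-role-binding
  namespace: dev
subjects:
- kind: ServiceAccount
  name: app-service-account   # hypothetical
  namespace: dev
roleRef:
  kind: Role
  name: service-role
  apiGroup: rbac.authorization.k8s.io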
Use one of the available Kubernetes client libraries to connect to the Kubernetes API from within the cluster (Java in my case) and retrieve the deployment list:
implementation 'io.kubernetes:client-java:15.0.1'
Output the result of the Kubernetes API call and expose a cached version of it through an endpoint (Java snippet):
public Map<String, String> getDeployedServiceVersions() throws IOException, ApiException {
    ApiClient client = Config.defaultClient();
    Configuration.setDefaultApiClient(client);
    AppsV1Api api = new AppsV1Api();
    // releaseNamespace is the namespace the Role above grants list access to, e.g. "dev"
    V1DeploymentList v1DeploymentList = api.listNamespacedDeployment(
            releaseNamespace, null, false, null, null, null, null, null, null, 10, false);
    // Helper method to map deployments to the result
    return getServiceVersions(v1DeploymentList);
}
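The getServiceVersions helper is not shown in the original answer; a minimal sketch of what it might look like, mapping each Deployment name to its first container's image:

import io.kubernetes.client.openapi.models.V1Deployment;
import io.kubernetes.client.openapi.models.V1DeploymentList;
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper: maps each Deployment to its first container's image,
// e.g. {"foo-service": "registry.com/foo-service:1.0.2"}
private Map<String, String> getServiceVersions(V1DeploymentList deployments) {
    Map<String, String> versions = new HashMap<>();
    for (V1Deployment d : deployments.getItems()) {
        versions.put(
                d.getMetadata().getName(),
                d.getSpec().getTemplate().getSpec().getContainers().get(0).getImage());
    }
    return versions;
}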
I am trying to create a BackendConfig resource on a GKE cluster v1.16.13-gke.401 but it gives me the following error:
unable to recognize "backendconfig.yaml": no matches for kind "BackendConfig" in version "cloud.google.com/v1"
I have checked the available apis with the kubectl api-versions command and cloud.google.com is not available. How can I enable it?
I want to create a BackendConfig with a custom health check like this:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  healthCheck:
    checkIntervalSec: 8
    timeoutSec: 1
    healthyThreshold: 1
    unhealthyThreshold: 3
    type: HTTP
    requestPath: /health
    port: 10257
And attach this BackendConfig to a Service like this:
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
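For reference, the rest of the Service carries its usual selector and ports; in the sketch below, everything apart from the annotation is a hypothetical placeholder:

apiVersion: v1
kind: Service
metadata:
  name: my-service                # hypothetical
  annotations:
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
spec:
  type: NodePort                  # GKE Ingress backends are typically NodePort (or NEG-enabled ClusterIP)
  selector:
    app: my-app                   # hypothetical
  ports:
  - port: 80
    targetPort: 8080              # hypothetical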
As mentioned in the comments, the issue was caused by the lack of the HTTP Load Balancing add-on in your cluster.
When you create a GKE cluster with all default settings, features like HTTP Load Balancing are enabled.
The HTTP Load Balancing add-on is required to use the Google Cloud Load Balancer with Kubernetes Ingress. If enabled, a controller will be installed to coordinate applying load balancing configuration changes to your GCP project.
More details can be found in the GKE documentation.
For a test, I created Cluster-1 without the HTTP Load Balancing add-on. There was no BackendConfig CRD (Custom Resource Definition).
The CustomResourceDefinition API resource allows you to define custom resources. Defining a CRD object creates a new custom resource with a name and schema that you specify. The Kubernetes API serves and handles the storage of your custom resource. The name of a CRD object must be a valid DNS subdomain name.
Without the BackendConfig CRD and without the cloud.google.com API version, as shown below:
user@cloudshell:~ (k8s-tests-XXX)$ kubectl get crd | grep backend
user@cloudshell:~ (k8s-tests-XXX)$ kubectl api-versions | grep cloud
I was not able to create any BackendConfig.
user@cloudshell:~ (k8s-tests-XXX)$ kubectl apply -f bck.yaml
error: unable to recognize "bck.yaml": no matches for kind "BackendConfig" in version "cloud.google.com/v1"
To make it work, you have to enable HTTP Load Balancing. You can do it via the UI or a command.
Using the UI:
Navigation Menu > Clusters > [Cluster-Name] > Details > click Edit > scroll down to Add-ons and expand > find HTTP load balancing and change it from Disabled to Enabled.
or command:
gcloud beta container clusters update <clustername> --update-addons=HttpLoadBalancing=ENABLED --zone=<your-zone>
$ gcloud beta container clusters update cluster-1 --update-addons=HttpLoadBalancing=ENABLED --zone=us-central1-c
WARNING: Warning: basic authentication is deprecated, and will be removed in GKE control plane versions 1.19 and newer. For a list of recommended authentication methods, see: https://cloud.google.com/kubernetes-engine/docs/how-to/api-server-authentication
After a while, when the add-on was enabled:
$ kubectl get crd | grep backend
backendconfigs.cloud.google.com 2020-10-23T13:09:29Z
$ kubectl api-versions | grep cloud
cloud.google.com/v1
cloud.google.com/v1beta1
$ kubectl apply -f bck.yaml
backendconfig.cloud.google.com/my-backendconfig created
Let's say I define a Service named my-backend in Kubernetes. I would like to intercept every request sent to this service; what is the proper way to do it? For example, another container under the same namespace sends a request through http://my-backend.
I tried to use an Admission Controller with a validating webhook. However, it can intercept CRUD operations on Service resources, but it fails to intercept any connection to a specific service.
There is no direct way to intercept the requests to a service in Kubernetes.
As a workaround, here is what you can do:
- Create a sidecar container just to log each incoming request.
- Run tcpdump -i eth0 -n in your containers and filter out the requests.
- Use Zipkin.
- Services created on cloud providers have their own logging mechanisms; for example, a load balancer Service on AWS can deliver its access logs to S3 (AWS ELB logs).
You can use a service mesh such as Istio. An Istio service mesh deploys an Envoy proxy sidecar along with every pod. Envoy intercepts all incoming requests to the pod and can provide you metrics such as the number of requests. A service mesh also brings in more features such as distributed tracing and rate limiting.
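For example, with Istio already installed, automatic Envoy sidecar injection for a namespace is enabled with a single label, and every pod created in it afterwards gets the intercepting proxy:

# New pods in the default namespace will get an Envoy sidecar injected
kubectl label namespace default istio-injection=enabled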
The Kubernetes NetworkPolicy object will help with this. A network policy controls how groups of pods can communicate with each other and with other network endpoints. You can allow ingress traffic to the my-backend service only from pods matching a pod selector. Below is an example that allows ingress traffic only from specific frontend pods:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-only-from-frontend-to-my-backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      <my-backend pod label>
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          <Frontend web pod label>
The examples in the k8s Java client all use the default client; see here.
ApiClient client = Config.defaultClient();
Configuration.setDefaultApiClient(client);
How can I configure the k8s client so that it can talk to k8s CRDs (say, sparkoperator) from a pod in the k8s cluster? How should I configure this client (basePath, authentication)? And what basePath should I use within a pod in the same k8s cluster?
You can use the defaultClient for that as well.
The defaultClient() method will create an in-cluster client if the application is running inside the cluster and has the correct service account.
You can see the rules for defaultClient from comments on the method here:
/**
* Easy client creation, follows this plan
*
* <ul>
* <li>If $KUBECONFIG is defined, use that config file.
* <li>If $HOME/.kube/config can be found, use that.
* <li>If the in-cluster service account can be found, assume in cluster config.
* <li>Default to localhost:8080 as a last resort.
* </ul>
*
* @return The best APIClient given the previously described rules
*/
So if the application using the k8s Java client runs on the cluster itself, it can access resources on the cluster as long as it has the correct permissions.
You need to allow your client application to access the CRDs, as in this example of a ClusterRole for the Prometheus Operator CRDs:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: prometheus-crd-view
  labels:
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups: ["monitoring.coreos.com"]
  resources: ["alertmanagers", "prometheuses", "prometheusrules", "servicemonitors"]
  verbs: ["get", "list", "watch"]
You can use the Kubernetes API; you just need to have curl installed.
curl http://localhost:8080/api/v1/namespaces/default/pods
Just change localhost to the apiserver IP address or DNS name.
You should read the Kubernetes API documentation.
Also, you will need to configure RBAC for access and permissions.
Containers inside a cluster are populated with a token that is used to authenticate to the API server.
You can verify that by executing cat /var/run/secrets/kubernetes.io/serviceaccount/token inside the pod.
With that, your request to the apiserver from inside the container might look like the following:
curl -ik \
-H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods
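Instead of -k, which skips TLS verification, you can validate the apiserver using the CA certificate mounted next to the token:

curl -i \
  --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods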
You can also install kubectl inside the container and set the needed permissions; see this for more details.
I recommend following reads Installing kubectl in a Kubernetes Pod and The Kubernetes API call is coming from inside the cluster!
As for other Java clients, there are also unofficial client libraries such as Java (OSGi) and Java (Fabric8, OSGi).
I use the Kubernetes ServiceAccount plugin to automatically inject a ca.crt and token in to my pods. This is useful for applications such as kube2sky which need to access the API Server.
However, I run many hundreds of other pods that don't need this token. Is there a way to stop the ServiceAccount plugin from injecting the default-token in to these pods (or, even better, have it off by default and turn it on explicitly for a pod)?
As of Kubernetes 1.6+ you can disable automounting API credentials for a particular pod, as stated in the Kubernetes Service Accounts documentation:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
  ...
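You can also disable it for every pod that uses a given service account by setting the same field on the ServiceAccount object itself (a pod-level setting takes precedence over the service account's):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false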
Right now there isn't a way to enable a service account for some pods but not others, although you can use ABAC to restrict apiserver access for some service accounts.
This issue is being discussed in https://github.com/kubernetes/kubernetes/issues/16779 and I'd encourage you to add your use case to that issue and see when it will be implemented.