I want to limit the number of Job creations in my Kubernetes namespace per specific service account. The Jobs are created by another pod using this service account. Do you think this is possible?
My Kubernetes version: v1.21.11
I tried to do it with a ResourceQuota, but I can't apply one to a particular service account.
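For reference, the namespace-wide object-count quota referred to above might look like the sketch below (the name, namespace, and limit are placeholders). ResourceQuota scopes can match things like priority classes, but not service accounts, which is why this approach falls short:
apiVersion: v1
kind: ResourceQuota
metadata:
  # Hypothetical names; count/jobs.batch caps the total number of Job objects
  # in the namespace, regardless of which service account created them.
  name: job-quota
  namespace: my-namespace
spec:
  hard:
    count/jobs.batch: "10"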
I have been struggling for some time to figure out how to accomplish the following:
I want to delete a running pod on an Azure Kubernetes Service cluster on a scheduled basis, so that it respawns from its deployment. This is required so that the application re-reads configuration files stored on shared storage and shared with another application.
I have found out that Kubernetes Jobs might be handy to accomplish this, but there is a catch.
I can't figure out how to select the pod corresponding to my deployment, as Kubernetes appends a random string to the deployment name, e.g.
deployment-name-546fcbf44f-wckh4
Using selectors to get my pod doesn't succeed, as there is no LIKE-style operator:
kubectl get pods --field-selector metadata.name=deployment-name
No resources found
Looking at the official docs, one way of doing this would be:
pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}')
echo $pods
You'd need to modify job-name to match your job's name.
https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#running-an-example-job
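If the goal is to delete the deployment's pods so they respawn, you don't need the exact generated pod name at all: pods created by a Deployment carry the pod-template labels. A minimal sketch, assuming the template carries a label like app=deployment-name (the first command shows the real labels to use):
# Find the actual pod-template labels of the deployment
kubectl get deployment deployment-name -o jsonpath='{.spec.selector.matchLabels}'
# Delete its pods by label; the Deployment controller respawns them
kubectl delete pods --selector=app=deployment-name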
I am spinning up a Kubernetes Job as a Helm pre-install hook on GKE.
The job uses the google/cloud-sdk image, and I want it to create a Compute Engine persistent disk.
Here is its spec:
spec:
  restartPolicy: OnFailure
  containers:
    - name: create-db-hook-container
      image: google/cloud-sdk:latest
      command: ["gcloud"]
      args: ["compute", "disks", "create", "--size={{ .Values.volumeMounts.gceDiskSize }}", "--zone={{ .Values.volumeMounts.gceDiskZone }}", "{{ .Values.volumeMounts.gceDiskName }}"]
However this fails with the following error:
brazen-lobster-create-pd-hook-nc2v9 create-db-hook-container ERROR: (gcloud.compute.disks.create) Could not fetch resource:
brazen-lobster-create-pd-hook-nc2v9 create-db-hook-container  - Insufficient Permission: Request had insufficient authentication scopes.
Apparently I have to grant the compute.disks.create permission.
My question is: to whom do I have to grant this permission?
This is a GCP IAM permission, therefore I assume it cannot be granted on a specific k8s resource, so it cannot be dealt with within the context of k8s RBAC, right?
Edit: I have created a ComputeDiskCreate custom role that encompasses two permissions:
compute.disks.create
compute.disks.list
I have attached it to the service account service-2340842080428@container-engine-robot.iam.gserviceaccount.com, which the IAM page of my Google Cloud console names
Kubernetes Engine Service Agent
but the outcome is still the same.
In GKE, all nodes in a cluster are actually Compute Engine VM instances. They're assigned a service account at creation time to authenticate them to other services. You can check the service account assigned to nodes by checking the corresponding node pool.
By default, GKE nodes are assigned the Compute Engine default service account, which looks like PROJECT_NUMBER-compute@developer.gserviceaccount.com, unless you set a different one at cluster/node pool creation time.
Calls to other Google services (like the compute.disks.create endpoint in this case) will come from the node and be authenticated with the corresponding service account credentials.
You should therefore add the compute.disks.create permission to your nodes' service account (likely PROJECT_NUMBER-compute@developer.gserviceaccount.com) in your Developer Console's IAM page.
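For example, from the CLI; PROJECT_ID and PROJECT_NUMBER are placeholders, and roles/compute.storageAdmin is just one predefined role that includes compute.disks.create (a narrower custom role works too):
# Grant the nodes' service account a role containing compute.disks.create
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
  --role="roles/compute.storageAdmin"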
EDIT: Prior to any authentication, the mere ability of a node to access a given Google service is defined by its access scopes. These are defined at node pool creation time and can't be edited. You'll need to create a new node pool and ensure you grant it the https://www.googleapis.com/auth/compute access scope, which covers Compute Engine methods. You can then instruct your particular pod to run on those specific nodes.
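A sketch of both steps; the cluster, zone, and pool names are placeholders. GKE labels each node with its pool name, so a nodeSelector can pin the hook pod to the new pool:
# Create a node pool whose nodes carry the Compute Engine access scope
gcloud container node-pools create compute-scope-pool \
  --cluster=my-cluster \
  --zone=us-central1-a \
  --scopes=https://www.googleapis.com/auth/compute
Then, in the hook pod's spec:
# cloud.google.com/gke-nodepool is set automatically by GKE on every node
nodeSelector:
  cloud.google.com/gke-nodepool: compute-scope-pool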
Is it possible to invoke a Kubernetes CronJob from inside a pod? That is, I have to run the job from the application running in the pod.
Do I have to use kubectl inside the pod to execute the job?
Appreciate your help.
Use the Default Service Account to access the API server. When you
create a pod, if you do not specify a service account, it is
automatically assigned the default service account in the same
namespace. If you get the raw json or yaml for a pod you have created
(for example, kubectl get pods/<podname> -o yaml), you can see the
spec.serviceAccountName field has been automatically set.
You can access the API from inside a pod using automatically mounted
service account credentials, as described in Accessing the Cluster.
The API permissions of the service account depend on the authorization
plugin and policy in use.
In version 1.6+, you can opt out of automounting API credentials for a
service account by setting automountServiceAccountToken: false on the
service account
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
So the first task is to either grant the default service account of the pod permission to create what you need (see the RBAC sketch below), OR create a custom service account and use it inside the pod.
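For the RBAC part, a minimal sketch granting the default service account just enough to create Jobs; the names and namespace are placeholders:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  # Hypothetical names; scope to the namespace where Jobs will be created
  name: job-creator
  namespace: default
rules:
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["create", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: job-creator-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: job-creator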
Then programmatically access the API server using that service account to create the job you need.
It could be just a simple curl POST to the API server from inside the pod, with the JSON for the job creation:
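A minimal sketch run from inside the pod, using the automatically mounted service account credentials; job.json is a placeholder for your Job manifest serialized as JSON:
# Read the mounted service account credentials
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
NS=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# POST the Job manifest to the batch/v1 API
curl --cacert "$CACERT" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -X POST "https://kubernetes.default.svc/apis/batch/v1/namespaces/$NS/jobs" \
  -d @job.json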
How do I access the Kubernetes api from within a pod container?
You can also use an application-specific SDK; for example, if you have a Python application, you can import the kubernetes client and create the job.
I'm trying to run a deployment on a Kubernetes cluster at work through a GitLab CI/CD process (i.e. I don't control most of the configs). I'm also new to Kubernetes, so please forgive me if this is basic and obvious.
I have created my rolebindings:
kubectl create rolebinding [foo] --clusterrole=edit --serviceaccount=[bar]:default
And added my tokens and all settings to GitLab
When the deployment kicks off, however, it always fails with:
Error from server (Forbidden): error when creating "/builds/bar/baz/deployment.yml": service is forbidden: User "system:serviceaccount:bar:bar-service-account" cannot create services in namespace "bar"
I thought I should be working as system:serviceaccount:bar:default. Why is :default being replaced with :bar-service-account, and how do I fix this?
Many many thanks in advance
You are granting permissions to the default service account with the rolebinding you are creating. However, the deployment is not using that service account. If you look at the deployment manifest, it will have a serviceAccountName of bar-service-account.
Either change the deployment to use the default service account or change the rolebinding to grant permissions to the service account being used.
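For the second option, a sketch mirroring the names from the error message; the binding name is a placeholder:
# Grant edit to the service account the deployment actually runs as
kubectl create rolebinding gitlab-deployer \
  --clusterrole=edit \
  --serviceaccount=bar:bar-service-account \
  --namespace=bar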
Is it possible to specify the Pod creation time as part of the k8s Pod Name?
Scenario:
I have many pods with the same name prefix (and a uniquely generated tail-end of the name), and these pod names are used as log group names.
I wish to distinguish between log groups by creation time.
Unfortunately AWS CloudWatch Logs console does not sort by log group creation time.
No, not with a Deployment at least. A StatefulSet would give you stable, ordinal pod names, but you should really be using labels here.
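If you only need to tell the pods apart by creation time on the Kubernetes side, a sketch:
# List pods oldest-first, with their creation timestamps alongside the names
kubectl get pods --sort-by=.metadata.creationTimestamp \
  -o custom-columns=NAME:.metadata.name,CREATED:.metadata.creationTimestamp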