I used the following guide to set up my chaostoolkit cluster: https://chaostoolkit.org/deployment/k8s/operator/
I am attempting to kill a pod using Kubernetes, but I get the following error:
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods is forbidden: User \"system:serviceaccount:chaostoolkit-run:chaostoolkit-b3af262edb\" cannot list resource \"pods\" in API group \"\" in the namespace \"task-dispatcher\"","reason":"Forbidden","details":{"kind":"pods"},"code":403}
I set my serviceAccountName to a service account bound to an RBAC role that I created, but for some reason Kubernetes defaults to "system:serviceaccount:chaostoolkit-run:chaostoolkit-b3af262edb".
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-chaos-exp
  namespace: chaostoolkit-run
data:
  experiment.yaml: |
    ---
    version: 1.0.0
    title: Terminate Pod Experiment
    description: If a pod gets terminated, a new one should be created in its place in a reasonable amount of time.
    tags: ["kubernetes"]
    secrets:
      k8s:
        KUBERNETES_CONTEXT: "docker-desktop"
    method:
    - type: action
      name: terminate-k8s-pod
      provider:
        type: python
        module: chaosk8s.pod.actions
        func: terminate_pods
        arguments:
          label_selector: ''
          name_pattern: my-release-rabbitmq-[0-9]$
          rand: true
          ns: default
---
apiVersion: chaostoolkit.org/v1
kind: ChaosToolkitExperiment
metadata:
  name: my-chaos-exp
  namespace: chaostoolkit-crd
spec:
  serviceAccountName: test-user
  automountServiceAccountToken: false
  pod:
    image: chaostoolkit/chaostoolkit:full
    imagePullPolicy: IfNotPresent
    experiment:
      configMapName: my-chaos-exp
      configMapExperimentFileName: experiment.yaml
    restartPolicy: Never
The error shown is from the default service account "chaostoolkit". It looks like the role associated with it does not have the proper permissions.
The service account "test-user" that is used in the ChaosToolkitExperiment definition should have a role with permission to list and delete pods.
Please specify a service account with proper role access.
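For example, the following should grant it (a minimal sketch, not tested against your cluster: the role name pod-terminator is made up, the namespace task-dispatcher comes from the error message, and the subject matches the experiment definition):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-terminator
  namespace: task-dispatcher
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-terminator-binding
  namespace: task-dispatcher
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-terminator
subjects:
- kind: ServiceAccount
  name: test-user
  namespace: chaostoolkit-run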
I am new to Argo and following the Quickstart templates and would like to deploy the HTTP template as a workflow.
I create my cluster as so:
minikube start --driver=docker --cpus='2' --memory='8g'
kubectl create ns argo
kubectl apply -n argo -f https://raw.githubusercontent.com/argoproj/argo-workflows/master/manifests/quick-start-postgres.yaml
I then apply the HTTP template http_template.yaml from the docs:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: http-template-
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: get-google-homepage
            template: http
            arguments:
              parameters: [ { name: url, value: "https://www.google.com" } ]
    - name: http
      inputs:
        parameters:
          - name: url
      http:
        timeoutSeconds: 20 # Default 30
        url: "{{inputs.parameters.url}}"
        method: "GET" # Default GET
        headers:
          - name: "x-header-name"
            value: "test-value"
        # Template will succeed if evaluated to true, otherwise will fail
        # Available variables:
        #  request.body: string, the request body
        #  request.headers: map[string][]string, the request headers
        #  response.url: string, the request url
        #  response.method: string, the request method
        #  response.statusCode: int, the response status code
        #  response.body: string, the response body
        #  response.headers: map[string][]string, the response headers
        successCondition: "response.body contains \"google\"" # available since v3.3
        body: "test body" # Change request body
argo submit -n argo http_template.yaml --watch
However I get the following error:
Name: http-template-564qp
Namespace: argo
ServiceAccount: unset (will run with the default ServiceAccount)
Status: Error
Message: failed to get token volumes: service account argo/default does not have any secrets
I'm not clear on why this doesn't work given it's straight from the Quickstart documentation. Help would be appreciated.
It seems your default serviceaccount is missing a credential (a Kubernetes secret).
You can verify which credential it needs by running:
kubectl get serviceaccount -n default default -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-02-10T10:48:54Z"
  name: default
  namespace: default
  resourceVersion: "*******"
  uid: ********************
secrets:
- name: default-token-*****
Now you should be able to find the secret that is attached to the serviceaccount:
kubectl get secret -n default default-token-***** -o yaml
Or you can just run
kubectl get secret -n default
to see all secrets in the respective namespace (in this example, default).
Argo Workflows does not yet work with Kubernetes v1.24+. See this issue:
https://github.com/argoproj/argo-workflows/issues/8320
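On v1.24+ Kubernetes no longer auto-creates a token secret for each service account, which is what the error above is pointing at. A known workaround is to create the token secret manually and let Kubernetes populate it (a sketch; the secret name default-token is an arbitrary choice here):
apiVersion: v1
kind: Secret
metadata:
  name: default-token
  namespace: argo
  annotations:
    kubernetes.io/service-account.name: default
type: kubernetes.io/service-account-token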
I am trying to establish the namespace "sandbox" in Kubernetes and have been using it for several days without issue. Today I got the below error.
I have checked to make sure that I have all of the requisite configmaps in place.
Is there a log or something where I can find what this is referring to?
panic: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
I did find this (MountVolume.SetUp failed for volume "kube-api-access-fcz9j" : object "default"/"kube-root-ca.crt" not registered) thread and have applied the below patch to my service account, but I am still getting the same error.
automountServiceAccountToken: false
UPDATE:
In answer to @p10l: I am working in a bare-metal cluster, version 1.23.0. No Terraform.
I am getting closer, but still not there.
This appears to be another RBAC problem, but the error does not make sense to me.
I have a user "dma". I am running workflows in the "sandbox" namespace using the context dma@kubernetes.
The error now is
Create request failed: workflows.argoproj.io is forbidden: User "dma" cannot create resource "workflows" in API group "argoproj.io" in the namespace "sandbox"
but that user indeed appears to have the correct permissions.
This is the output of
kubectl get role dma -n sandbox -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"name":"dma","namespace":"sandbox"},"rules":[{"apiGroups":["","apps","autoscaling","batch","extensions","policy","rbac.authorization.k8s.io","argoproj.io"],"resources":["pods","configmaps","deployments","events","pods","persistentvolumes","persistentvolumeclaims","services","workflows"],"verbs":["get","list","watch","create","update","patch","delete"]}]}
  creationTimestamp: "2021-12-21T19:41:38Z"
  name: dma
  namespace: sandbox
  resourceVersion: "1055045"
  uid: 94191881-895d-4457-9764-5db9b54cdb3f
rules:
- apiGroups:
  - ""
  - apps
  - autoscaling
  - batch
  - extensions
  - policy
  - rbac.authorization.k8s.io
  - argoproj.io
  - workflows.argoproj.io
  resources:
  - pods
  - configmaps
  - deployments
  - events
  - pods
  - persistentvolumes
  - persistentvolumeclaims
  - services
  - workflows
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
This is the output of kubectl get rolebinding -n sandbox dma-sandbox-rolebinding -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"annotations":{},"name":"dma-sandbox-rolebinding","namespace":"sandbox"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"dma"},"subjects":[{"kind":"ServiceAccount","name":"dma","namespace":"sandbox"}]}
  creationTimestamp: "2021-12-21T19:56:06Z"
  name: dma-sandbox-rolebinding
  namespace: sandbox
  resourceVersion: "1050593"
  uid: d4d53855-b5fc-4f29-8dbd-17f682cc91dd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dma
subjects:
- kind: ServiceAccount
  name: dma
  namespace: sandbox
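For reference, the same check can be run with impersonation (kubectl auth can-i; --as takes the user name from the error message):
kubectl auth can-i create workflows.argoproj.io -n sandbox --as=dma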
The issue you are describing is a recurring one, described here and here, where your cluster lacks the KUBECONFIG environment variable.
First, run echo $KUBECONFIG on all your nodes to see if it's empty.
If it is, look for the config file in your cluster, copy it to all the nodes, then export this variable by running export KUBECONFIG=/path/to/config. This file can usually be found at ~/.kube/config or /etc/kubernetes/admin.conf on master nodes.
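Putting it together (a sketch; /etc/kubernetes/admin.conf is the typical kubeadm location, so adjust the path to your cluster):
# run on each node
echo $KUBECONFIG                              # empty output means the variable is not set
export KUBECONFIG=/etc/kubernetes/admin.conf  # or ~/.kube/config
kubectl cluster-info                          # should now reach the API server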
Let me know if this solution worked in your case.
I'm going through this post, where we bind a Role to a Service Account and then query the API Server using said Service Account. The role only has list permission to the pods resource.
I did an experiment where I mounted a random Secret into a Pod that is using the above Service Account and my expectation was that the Pod would attempt to query the Secret and fail the creation process, but the pod is actually running successfully with the secret mounted in place.
So I'm left wondering when a pod actually needs to query the API Server for resources, or whether the pod creation process is special and gets the resources through other means.
Here is the actual list of resources I used for my test:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: example-role
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: example-rb
subjects:
  - kind: ServiceAccount
    name: example-sa
roleRef:
  kind: Role
  name: example-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Secret
metadata:
  name: example-secret
data:
  password: c3RhY2tvdmVyZmxvdw==
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  serviceAccountName: example-sa
  containers:
    - name: webserver
      image: nginx
      volumeMounts:
        - name: secret-volume
          mountPath: /mysecrets
  volumes:
    - name: secret-volume
      secret:
        secretName: example-secret
...
I must admit that at first I didn't quite get your point, but when I read your question again I think I can now see what it's all about. First of all, I must say that your initial interpretation is wrong. Let me explain.
You wrote:
I did an experiment where I mounted a random Secret into a Pod that is using the above Service Account
Actually, the key word here is "I". The question is: who creates the Pod, and who mounts a random Secret into this Pod? The answer to that question, from your perspective, is simple: me. When you create a Pod you don't use the above-mentioned ServiceAccount; you authorize your access to the Kubernetes API through entries in your .kube/config file. During the whole Pod creation process the ServiceAccount you created is not used a single time.
and my expectation was that the Pod would attempt to query the Secret and fail the creation process, but the pod is actually running successfully with the secret mounted in place.
Why would it query the Secret if it doesn't use it?
You can test it in a very simple way: kubectl exec into your running Pod and try to run kubectl, query the Kubernetes API directly, or use one of the officially supported Kubernetes client libraries. Then you will see that you're allowed to perform only the specific operations listed in your Role, i.e. list Pods. If you attempt to run kubectl get secrets from within your Pod, it will fail.
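For example, from inside the Pod (a sketch: the token path is the standard service account mount, the default namespace matches the manifests above, and it assumes curl is available in the image):
kubectl exec -it example-pod -- sh
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# allowed by the Role: list pods
curl -sSk -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/default/pods
# not granted by the Role: returns a 403 Forbidden Status object
curl -sSk -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/default/secrets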
The result you get is totally expected, and there is nothing surprising in the fact that a random Secret is successfully mounted and the Pod is created successfully every time. It's you who queries the Kubernetes API and requests creation of a Pod with a Secret mounted; it's not the Pod's ServiceAccount.
So I'm left wondering when a pod actually needs to query the API Server for resources, or whether the pod creation process is special and gets the resources through other means.
Unless your Pod runs specific queries, e.g. Python code that uses the Kubernetes Python client library, or you use the kubectl command from within such a Pod, you won't see it making any queries to the Kubernetes API, as all the queries needed for its creation are performed by you, with the permissions given to your user.
I have an application that I deploy to Kubernetes (Google Kubernetes Engine), to which I'm trying to add Google's CDN. For this I'm adding a BackendConfig, but when my GitLab pipeline tries to apply it, it returns the following error.
$ kubectl apply -f backend-config.yaml
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "cloud.google.com/v1beta1, Resource=backendconfigs", GroupVersionKind: "cloud.google.com/v1beta1, Kind=BackendConfig"
I strongly suspect that the account the pipeline is running under does not have enough privileges to access backend configs. Being new to k8s and GKE, I'm not sure how to fix this, especially as I cannot find which permission is needed for this.
Edit
I added a kubectl get backendconfigs step to my pipeline and that fails with the same error. Running the same command from my gcloud SDK environment works.
Note the cluster is managed by GitLab and uses RBAC. My understanding is that GitLab creates service accounts per namespace in k8s with the edit role.
Edit 2
Added ClusterRole and ClusterRoleBinding based on Arghya's answer.
Output of $ kubectl get crd
NAME CREATED AT
backendconfigs.cloud.google.com 2020-01-09T15:37:27Z
capacityrequests.internal.autoscaling.k8s.io 2020-04-28T11:15:26Z
certificaterequests.cert-manager.io 2020-01-15T06:53:47Z
certificates.cert-manager.io 2020-01-15T06:53:48Z
challenges.acme.cert-manager.io 2020-01-15T06:53:48Z
challenges.certmanager.k8s.io 2020-01-09T15:47:01Z
clusterissuers.cert-manager.io 2020-01-15T06:53:48Z
clusterissuers.certmanager.k8s.io 2020-01-09T15:47:01Z
issuers.cert-manager.io 2020-01-15T06:53:48Z
issuers.certmanager.k8s.io 2020-01-09T15:47:01Z
managedcertificates.networking.gke.io 2020-01-09T15:37:53Z
orders.acme.cert-manager.io 2020-01-15T06:53:48Z
orders.certmanager.k8s.io 2020-01-09T15:47:01Z
scalingpolicies.scalingpolicy.kope.io 2020-01-09T15:37:53Z
updateinfos.nodemanagement.gke.io 2020-01-09T15:37:53Z
Output of kubectl describe crd backendconfigs.cloud.google.com
Name:         backendconfigs.cloud.google.com
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  apiextensions.k8s.io/v1beta1
Kind:         CustomResourceDefinition
Metadata:
  Creation Timestamp:  2020-01-09T15:37:27Z
  Generation:          1
  Resource Version:    198
  Self Link:           /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/backendconfigs.cloud.google.com
  UID:                 f0bc780a-32f5-11ea-b7bd-42010aa40111
Spec:
  Conversion:
    Strategy:  None
  Group:   cloud.google.com
  Names:
    Kind:       BackendConfig
    List Kind:  BackendConfigList
    Plural:     backendconfigs
    Singular:   backendconfig
  Scope:  Namespaced
  Validation:
    Open APIV 3 Schema:
      Properties:
        API Version:
          Description:  APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources
          Type:         string
        Kind:
          Description:  Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
          Type:         string
        Metadata:
          Type:  object
        Spec:
          Description:  BackendConfigSpec is the spec for a BackendConfig resource
          Properties:
            Cdn:
              Description:  CDNConfig contains configuration for CDN-enabled backends.
              Properties:
                Cache Policy:
                  Description:  CacheKeyPolicy contains configuration for how requests to a CDN-enabled backend are cached.
                  Properties:
                    Include Host:
                      Description:  If true, requests to different hosts will be cached separately.
                      Type:         boolean
                    Include Protocol:
                      Description:  If true, http and https requests will be cached separately.
                      Type:         boolean
                    Include Query String:
                      Description:  If true, query string parameters are included in the cache key according to QueryStringBlacklist and QueryStringWhitelist. If neither is set, the entire query string is included and if false the entire query string is excluded.
                      Type:         boolean
                    Query String Blacklist:
                      Description:  Names of query strint parameters to exclude from cache keys. All other parameters are included. Either specify QueryStringBlacklist or QueryStringWhitelist, but not both.
                      Items:
                        Type:  string
                      Type:    array
                    Query String Whitelist:
                      Description:  Names of query string parameters to include in cache keys. All other parameters are excluded. Either specify QueryStringBlacklist or QueryStringWhitelist, but not both.
                      Items:
                        Type:  string
                      Type:    array
                  Type:  object
                Enabled:
                  Type:  boolean
              Required:
                enabled
              Type:  object
            Connection Draining:
              Description:  ConnectionDrainingConfig contains configuration for connection draining. For now the draining timeout. May manage more settings in the future.
              Properties:
                Draining Timeout Sec:
                  Description:  Draining timeout in seconds.
                  Format:       int64
                  Type:         integer
              Type:  object
            Iap:
              Description:  IAPConfig contains configuration for IAP-enabled backends.
              Properties:
                Enabled:
                  Type:  boolean
                Oauthclient Credentials:
                  Description:  OAuthClientCredentials contains credentials for a single IAP-enabled backend.
                  Properties:
                    Client ID:
                      Description:  Direct reference to OAuth client id.
                      Type:         string
                    Client Secret:
                      Description:  Direct reference to OAuth client secret.
                      Type:         string
                    Secret Name:
                      Description:  The name of a k8s secret which stores the OAuth client id & secret.
                      Type:         string
                  Required:
                    secretName
                  Type:  object
              Required:
                enabled
                oauthclientCredentials
              Type:  object
            Security Policy:
              Type:  object
            Session Affinity:
              Description:  SessionAffinityConfig contains configuration for stickyness parameters.
              Properties:
                Affinity Cookie Ttl Sec:
                  Format:  int64
                  Type:    integer
                Affinity Type:
                  Type:  string
              Type:  object
            Timeout Sec:
              Format:  int64
              Type:    integer
          Type:  object
        Status:
          Type:  object
  Version:  v1beta1
  Versions:
    Name:     v1beta1
    Served:   true
    Storage:  true
Status:
  Accepted Names:
    Kind:       BackendConfig
    List Kind:  BackendConfigList
    Plural:     backendconfigs
    Singular:   backendconfig
  Conditions:
    Last Transition Time:  2020-01-09T15:37:27Z
    Message:               no conflicts found
    Reason:                NoConflicts
    Status:                True
    Type:                  NamesAccepted
    Last Transition Time:  <nil>
    Message:               the initial names have been accepted
    Reason:                InitialNamesAccepted
    Status:                True
    Type:                  Established
  Stored Versions:
    v1beta1
Events:  <none>
Create a ClusterRole and ClusterRoleBinding for the service account example-sa in namespace example-namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backendconfig-role
rules:
- apiGroups: ["cloud.google.com"]
  resources: ["backendconfigs"]
  verbs: ["get", "watch", "list", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: backendconfig-rolebinding
subjects:
- kind: ServiceAccount
  name: example-sa
  namespace: example-namespace
roleRef:
  kind: ClusterRole
  name: backendconfig-role
  apiGroup: rbac.authorization.k8s.io
To check that the permission is applied:
kubectl auth can-i get backendconfigs --as=system:serviceaccount:example-namespace:example-sa -n example-namespace
I have the following pod definition, (notice the explicitly set service account and secret):
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-example
  labels:
    name: pod-service-account-example
spec:
  serviceAccountName: example-sa
  containers:
    - name: busybox
      image: busybox:latest
      command: ["sleep", "10000000"]
      env:
        - name: SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: example-secret
              key: secret-key-123
It successfully runs. However, if I use the same service account example-sa and try to retrieve the example-secret, it fails:
kubectl get secret example-secret
Error from server (Forbidden): secrets "example-secret" is forbidden: User "system:serviceaccount:default:example-sa" cannot get resource "secrets" in API group "" in the namespace "default"
Does RBAC not apply for pods? Why is the pod able to retrieve the secret if not?
RBAC applies to service accounts, groups, and users, not to pods. When you reference a secret in the env of a pod, the service account is not being used to get the secret. The kubelet gets the secret using its own Kubernetes client credential. Since the kubelet uses its own credential to fetch the secret, it does not matter whether the service account has RBAC permission to get secrets; it is simply not used.
The service account is used when you want to invoke the Kubernetes API from a pod using a standard Kubernetes client library or kubectl.
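You can see the distinction with impersonation (a quick check; the default namespace comes from the error message above):
kubectl auth can-i get secrets -n default --as=system:serviceaccount:default:example-sa
# prints "no" - the service account itself cannot read the secret,
# yet the kubelet still injects it into the pod's environment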
Code snippet of Kubelet for reference.