When I create an Ingress it comes up with no address, and when I describe the Ingress I see this message:
Failed build model due to WebIdentityErr: failed to retrieve credentials
caused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity
status code: 403, request id: 5423ee08-9a72-47fe-8389-3f50ce78b0e5
When I check the pod logs for the AWS Load Balancer Controller, I see a similar error:
{"level":"error","ts":1674658664.611337,"logger":"controller-runtime.manager.controller.ingress","msg":"Reconciler error","name":"catch","namespace":"sa-backup","error":"WebIdentityErr: failed to retrieve credentials\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\n\tstatus code: 403, request id: b4a791b1-f56b-4d4b-84b4-a7b6bc5ff8b9"}
I can confirm that the classic load balancer is created fine and IngressRoutes are working. The problem is only with the ingress controller.
Your AWS Load Balancer Controller needs access to the AWS API. The standard way to give API access to a pod in EKS is IRSA (https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html), which allows a Pod to assume AWS IAM roles through the cluster's OIDC provider and a trust relationship with the AWS API. This trust relationship needs to be set up before IRSA will work (https://docs.aws.amazon.com/emr/latest/EMR-on-EKS-DevelopmentGuide/setting-up-enable-IAM.html).
The AWS IAM role that your AWS LB Controller is trying to assume needs a trust policy that allows the OIDC endpoint of your EKS cluster and references the Namespace and ServiceAccount name used by the pod.
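If you use eksctl, it can create the IAM role with the correct trust policy and the annotated ServiceAccount in one step. A sketch, assuming a cluster named my-cluster and an already-created IAM policy (both placeholders):
eksctl create iamserviceaccount \
  --cluster my-cluster --region us-east-2 \
  --namespace aws-lbc --name aws-load-balancer-controller \
  --attach-policy-arn arn:aws:iam::<account number>:policy/<lbc policy name> \
  --override-existing-serviceaccounts \
  --approve
If you configure it by hand instead, the two pieces below are needed.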
Assuming you are in us-east-2 and the service account is in the aws-lbc namespace and named aws-load-balancer-controller, the ServiceAccount that the pod is using will need an annotation specifying which AWS IAM role to use:
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<account number>:role/<iam role name>
  name: aws-load-balancer-controller
  namespace: aws-lbc
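If the annotation is in place (note the controller pod must be restarted after it changes), the EKS pod identity webhook injects the AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE environment variables into the pod. A quick check, with the pod name as a placeholder:
kubectl -n aws-lbc exec <controller pod> -- env | grep -E 'AWS_ROLE_ARN|AWS_WEB_IDENTITY_TOKEN_FILE'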
The trust policy on the AWS IAM role should look something like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<account number>:oidc-provider/oidc.eks.us-east-2.amazonaws.com/id/<OIDC endpoint ID>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-2.amazonaws.com/id/<OIDC endpoint ID>:sub": "system:serviceaccount:aws-lbc:aws-load-balancer-controller"
        }
      }
    }
  ]
}
The <OIDC endpoint ID> can be retrieved from the AWS console under your EKS cluster: Overview -> Details -> OpenID Connect provider URL (it's the 32-character string after /id/).
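Assuming you have the AWS CLI configured, the same value is the last path segment of the cluster's OIDC issuer URL:
aws eks describe-cluster --name <cluster name> --region us-east-2 \
  --query "cluster.identity.oidc.issuer" --output text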
I have a kubernetes config with resource limits for each container, something similar to this:
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
Is it possible to retrieve requests and limits configuration through kubernetes api or any other way to access it?
Yes, everything in Kubernetes can be accessed via the API. You can use the REST API directly, but it is easiest to use a Kubernetes client library for your favorite programming language, because authentication can be tricky otherwise.
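For a one-off check you don't need a client library at all; a quick sketch with kubectl, assuming the frontend pod from the question:
kubectl get pod frontend -o jsonpath='{.spec.containers[0].resources}'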
Access Kubernetes API with curl using proxy
An example of accessing the API using curl is documented in Using kubectl proxy.
First, use kubectl proxy to access the API:
kubectl proxy --port=8080 &
Then use that port, e.g.:
curl http://localhost:8080/api/
with output:
{
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "10.0.1.149:443"
    }
  ]
}
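Through the same proxy you can also read the requests and limits directly from the pod object; a sketch assuming the frontend pod lives in the default namespace and jq is installed:
curl -s http://localhost:8080/api/v1/namespaces/default/pods/frontend | jq '.spec.containers[].resources'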
I am trying to assume a role from an EKS container that has an IRSA role attached to it. However, when I assume a role, I can see that the container is using the EC2 IAM role instead of the IRSA role.
[I] ✦2 ➜ kubectl get sa web -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::xxxxxxxx:role/web
Assuming the role from inside the container:
CREDENTIALS=$(aws sts assume-role --role-arn "$ROLE_ARN" --role-session-name "$ROLE_SESSION_NAME")
An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:sts::xxxxxxxxxx:assumed-role/eks-instance/role/i-02xxxxxxxxxx is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::yyyyyyyyyy:role/app
aws sts get-caller-identity returns the ec2 iam role.
If not specified, pods are run under a default service account.
How can I check what the default service account is authorized to do?
Do we need it to be mounted there with every pod?
If not, how can we disable this behavior at the namespace level or cluster level?
What other use cases should the default service account handle?
Can we use it as a service account to create and manage the Kubernetes deployments in a namespace? For example we will not use real user accounts to create things in the cluster because users come and go.
Environment: Kubernetes 1.12, with RBAC
A default service account is automatically created for each namespace.
kubectl get serviceaccount
NAME      SECRETS   AGE
default   1         1d
Service accounts can be added when required. Each pod is associated with exactly one service account, but multiple pods can use the same service account.
A pod can only use a service account from its own namespace.
A service account is assigned to a pod by specifying the account's name in the pod manifest. If you don't assign one explicitly, the pod uses the default service account.
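For example, a minimal pod manifest that runs under a hypothetical custom service account named viewer-sa:
apiVersion: v1
kind: Pod
metadata:
  name: test
  namespace: foo
spec:
  serviceAccountName: viewer-sa  # hypothetical; must exist in the same namespace
  containers:
  - name: main
    image: alpine
    command: ["sleep", "infinity"]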
By default, a service account has no permissions beyond those of an unauthenticated user: the default service account isn't allowed to view cluster state, let alone modify it in any way. Therefore pods by default can't even view cluster state; it's up to you to grant them the appropriate permissions to do that.
kubectl exec -it test -n foo sh
/ # curl localhost:8001/api/v1/namespaces/foo/services
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "services is forbidden: User \"system:serviceaccount:foo:default\" cannot list resource \"services\" in API group \"\" in the namespace \"foo\"",
  "reason": "Forbidden",
  "details": {
    "kind": "services"
  },
  "code": 403
}
As can be seen above, the default service account cannot list services, but when given a proper Role and RoleBinding like the ones below:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: foo-role
  namespace: foo
rules:
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: test-foo
  namespace: foo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: foo-role
subjects:
- kind: ServiceAccount
  name: default
  namespace: foo
I am now able to list services:
kubectl exec -it test -n foo sh
/ # curl localhost:8001/api/v1/namespaces/foo/services
{
  "kind": "ServiceList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/namespaces/foo/services",
    "resourceVersion": "457324"
  },
  "items": []
}
Giving all your service accounts the cluster-admin ClusterRole is a bad idea. It is best to give everyone only the permissions they need to do their job and not a single permission more.
It's a good idea to create a specific service account for each pod and then associate it with a tailor-made Role or ClusterRole through a RoleBinding.
If one of your pods only needs to read pods while another also needs to modify them, create two different service accounts and make those pods use them by specifying the serviceAccountName property in the pod spec.
You can refer to the link below for an in-depth explanation:
Service account example with roles
You can check kubectl explain serviceaccount.automountServiceAccountToken and edit the service account
kubectl edit serviceaccount default -o yaml
apiVersion: v1
automountServiceAccountToken: false
kind: ServiceAccount
metadata:
  creationTimestamp: 2018-10-14T08:26:37Z
  name: default
  namespace: default
  resourceVersion: "459688"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: de71e624-cf8a-11e8-abce-0642c77524e8
secrets:
- name: default-token-q66j4
Once this change is made, any pod you spawn doesn't have a service account token, as can be seen below:
kubectl exec -it tp -- bash
root@tp:/# cd /var/run/secrets/kubernetes.io/serviceaccount
bash: cd: /var/run/secrets/kubernetes.io/serviceaccount: No such file or directory
An application/deployment can run with a service account other than default by specifying it in the serviceAccountName field of a deployment configuration.
What a service account, or any other user, can do is determined by the roles it is bound to; see RoleBindings and ClusterRoleBindings. The allowed verbs are listed per apiGroups and resources under a role's rules definitions.
The default service account doesn't seem to be given any roles by default. It is possible to grant a role to the default service account as described in #2 here.
According to this, "...In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account".
HTH
How can I check what the default service account is authorized to do?
There isn't an easy way, but auth can-i may be helpful, e.g.:
$ kubectl auth can-i get pods --as=system:serviceaccount:default:default
no
For users there is auth can-i --list, but this does not seem to work with --as, which I suspect is a bug. In any case, you can run the above command with various verbs, and the answer will be no in all cases (I only tried a few). Conclusion: it seems that the default service account has no permissions by default (at least in the cluster where I checked, where we have not configured it, AFAICT).
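To spot-check several verbs at once, a small shell sketch:
for verb in get list watch create delete; do
  echo -n "$verb pods: "
  kubectl auth can-i "$verb" pods --as=system:serviceaccount:default:default
done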
Do we need it to be mounted there with every pod?
Not sure what the question means.
If not, how can we disable this behavior at the namespace level or cluster level?
You can set automountServiceAccountToken: false on a service account or on an individual pod. Service accounts are per namespace, so when done on a service account, any pods in that namespace that use this account will be affected by that setting.
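Both forms look like this (names are placeholders); note that the pod-level field takes precedence over the service account's setting:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: my-namespace
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: my-namespace
spec:
  automountServiceAccountToken: false
  containers:
  - name: main
    image: alpine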
What other use cases should the default service account handle?
The default service account is a fallback; it is the SA that gets used if a pod does not specify one. So the default service account should have no privileges whatsoever. Why would a pod need to talk to the kube API by default?
Can we use it as a service account to create and manage the Kubernetes deployments in a namespace?
I don't recommend that; see the previous answer. Instead, you should create a service account (bound to an appropriate Role or ClusterRole) for each pod type that needs access to the API, following the principle of least privilege. All other pod types can use the default service account, which should not mount the SA token automatically and should not be bound to any role.
kubectl auth can-i --list --as=system:serviceaccount:<namespace>:<serviceaccount> -n <namespace>
As a simple example, to check the default service account in the testns namespace:
kubectl auth can-i --list --as=system:serviceaccount:testns:default -n testns
Resources                                       Non-Resource URLs                     Resource Names   Verbs
selfsubjectaccessreviews.authorization.k8s.io   []                                    []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                                    []               [create]
                                                [/.well-known/openid-configuration]   []               [get]
                                                [/api/*]                              []               [get]
                                                [/api]                                []               [get]
                                                [ ... ]
                                                [/readyz]                             []               [get]
                                                [/version/]                           []               [get]
                                                [/version/]                           []               [get]
                                                [/version]                            []               [get]
                                                [/version]                            []               [get]
I'm configuring a highly available kubernetes cluster using GKE and terraform. Multiple teams will be running multiple deployments on the cluster and I anticipate most deployments will be in a custom namespace, mainly for isolation reasons.
One of our open questions is how to manage GCP service accounts on the cluster.
I can create the cluster with a custom GCP service account, and adjust the permissions so it can pull images from GCR, log to stackdriver, etc. I think this custom service account will be used by the GKE nodes, instead of the default compute engine service account. Please correct me if I'm wrong on this front!
Each deployment needs to access a different set of GCP resources (Cloud Storage, Datastore, Cloud SQL, etc.) and I'd like each deployment to have its own GCP service account so we can control permissions. I'd also like running pods to have no access to the GCP service account that's available to the node running the pods.
Is that possible?
I've considered some options, but I'm not confident on the feasibility or desirability:
A GCP service account for a deployment could be added to the cluster as a Kubernetes secret, deployments could mount it as a file and set GOOGLE_APPLICATION_CREDENTIALS to point to it
Maybe access to the metadata API for the instance can be denied to pods, or can the service account returned by the metadata API be changed?
Maybe there's a GKE (or kubernetes) native way to control the service account presented to pods?
You are on the right track. GCP service accounts can be used in GKE to grant pods permissions on GCP resources.
Create an account:
gcloud iam service-accounts create ${SERVICE_ACCOUNT_NAME}
Add IAM permissions to the service account:
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member="serviceAccount:${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role="roles/${ROLE_ID}"
Generate a JSON file for the service account:
gcloud iam service-accounts keys create \
  --iam-account "${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
  service-account.json
Create a secret with that JSON:
kubectl create secret generic echo --from-file service-account.json
Create a deployment for your application using that secret:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      name: echo
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: "gcr.io/hightowerlabs/echo"
        env:
        - name: "GOOGLE_APPLICATION_CREDENTIALS"
          value: "/var/run/secret/cloud.google.com/service-account.json"
        - name: "PROJECT_ID"
          valueFrom:
            configMapKeyRef:
              name: echo
              key: project-id
        - name: "TOPIC"
          value: "echo"
        volumeMounts:
        - name: "service-account"
          mountPath: "/var/run/secret/cloud.google.com"
      volumes:
      - name: "service-account"
        secret:
          secretName: "echo"
If you want different permissions for separate deployments, create several GCP service accounts with different permissions, generate JSON keys for them, and assign them to the deployments according to your plans. Pods will have access according to the mounted service accounts, not to the service account assigned to the node.
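For example, a sketch of a second, hypothetical service account for a deployment that only needs to read Cloud Storage:
gcloud iam service-accounts create storage-reader
gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member="serviceAccount:storage-reader@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"
gcloud iam service-accounts keys create storage-reader.json \
  --iam-account "storage-reader@${PROJECT_ID}.iam.gserviceaccount.com"
kubectl create secret generic storage-reader --from-file=service-account.json=storage-reader.json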
For more information, you can look through the links:
Authenticating to Cloud Platform with Service Accounts
Google Cloud Service Accounts with Google Container Engine (GKE) - Tutorial
Hi, I installed a fresh Kubernetes cluster on Ubuntu 16.04 using this tutorial: https://blog.alexellis.io/kubernetes-in-10-minutes/
However, as soon as I try to access my API (for example: https://[server-ip]:6443/api/v1/namespaces) I get the following message:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "namespaces is forbidden: User \"system:bootstrap:a916af\" cannot list namespaces at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "kind": "namespaces"
  },
  "code": 403
}
Does anyone know how to fix this or what I am doing wrong?
While I haven't run through that tutorial, the service account with which you're making the request doesn't have access to cluster-level information, like listing namespaces. RBAC (Role-Based Access Control) binds users to either a Role or a ClusterRole, which grant them different permissions. My guess is that this service account shouldn't ever need to know what other namespaces exist, and therefore doesn't have access to list them.
In terms of "fixing" this, aside from creating a service account/user with the correct permissions, that tutorial makes several references to a config file stored at $HOME/.kube/config, which stores the credentials for a user that should have access to cluster-level resources, including listing namespaces. You could start there.
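A minimal check, assuming the tutorial's kubeadm setup left the admin kubeconfig at /etc/kubernetes/admin.conf (the kubeadm default):
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get namespaces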
I solved it by binding the service account system:serviceaccount:default:default (the default account bound to pods) to the cluster-admin ClusterRole. Just create a YAML file (named e.g. fabric8-rbac.yaml) with the following contents:
# NOTE: The service account `default:default` already exists in the k8s cluster.
# You can create a new account like this:
#---
#apiVersion: v1
#kind: ServiceAccount
#metadata:
#  name: <new-account-name>
#  namespace: <namespace>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fabric8-rbac
subjects:
- kind: ServiceAccount
  # Reference to upper's `metadata.name`
  name: default
  # Reference to upper's `metadata.namespace`
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
kubectl apply -f fabric8-rbac.yaml
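You can then verify that the binding took effect, e.g.:
kubectl auth can-i list namespaces --as=system:serviceaccount:default:default
which should now answer yes.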