Implementing RBAC, the default user still retains access even after applying roleBinding - kubernetes

I want to implement RBAC for each user. I already have OIDC running and I can see my user credentials being saved in my kubeconfig.
But to check my RoleBindings, I have to run commands as
kubectl get pods --as=user@email.com, even though I am logged in as user@email.com (through gcloud init).
I am an Owner account in our cloud, but I was assuming the RBAC limitations should still apply.

Apart from credentials, you should configure a kubectl context to associate these credentials with the cluster, and set it as the default context:
First, list the clusters in your kubeconfig with kubectl config get-clusters
Then create a new context:
kubectl config set-context my-new-context --cluster <CLUSTER NAME> --user="user@email.com"
And finally configure the new context as default:
kubectl config use-context my-new-context

I am an Owner account in our cloud, but I was assuming the RBAC limitations should still apply.
RBAC is additive only. If you have permissions via another configured authorizer (on GKE, Cloud IAM), you will still have those permissions even if RBAC grants you less.
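One way to observe this is to compare what RBAC alone would grant with what your combined authorizers grant. A rough sketch, using the hypothetical identity from the question (impersonation requires that your own account may impersonate users, and on GKE the comparison is only approximate since IAM is consulted as well):

```shell
# Hypothetical identity from the question above -- substitute your own.
USER_EMAIL="user@email.com"

# Impersonation evaluates the request as that user, so mainly the
# RoleBindings/ClusterRoleBindings bound to it apply...
kubectl auth can-i list pods --as="${USER_EMAIL}"

# ...whereas a request as yourself also passes through GKE's Cloud IAM
# authorizer: a project Owner gets "yes" even with no RoleBindings.
kubectl auth can-i list pods
```

If the first command says "no" while the second says "yes", the extra access is coming from an authorizer other than RBAC.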


CLI command ordering for toggling between two GKE/kubectl accounts w/ two emails

Several weeks ago I asked this question and received a very helpful answer. The gist of that question was: "how do I switch back and forth between two different K8s/GCP accounts on the same machine?" I have 2 different K8s projects with 2 different emails (gmails) that live on 2 different GKE clusters in 2 different GCP accounts. And I wanted to know how to switch back and forth between them so that when I run kubectl and gcloud commands, I don't inadvertently apply them to the wrong project/account.
The answer was to basically leverage kubectl config set-context along with a script.
This question (today) is an extension of that question, a "Part 2" so to speak.
I am confused about the order in which I:
Set the K8s context (again via kubectl config set-context ...); and
Run gcloud init; and
Run gcloud auth; and
Can safely run kubectl and gcloud commands and be sure that I am hitting the right GKE cluster
My understanding is that gcloud init only has to be run once to initialize the gcloud CLI on your system. Which I have already done.
So my thinking here is that I should be able to do the following:
# 1. switch K8s context to Project 1
kubectl config set-context <context for GKE project 1>
# 2. authenticate w/ GCP so that now gcloud commands will only hit the GCP
# resources associated with Project 1 (and GCP Account 1)
gcloud auth
# 3. run a bunch of kubectl and gcloud commands for Project/GCP Account 1
# 4. switch K8s context to Project 2
kubectl config set-context <context for GKE project 2>
# 5. authenticate w/ GCP so that now gcloud commands will only hit the GCP
# resources associated with Project 2 (and GCP Account 2)
gcloud auth
# 6. run a bunch of kubectl and gcloud commands for Project/GCP Account 2
Is my understanding here correct or is it more involved/complicated than this (and if so, why)?
I'll assume familiarity with the earlier answer.
gcloud
gcloud init need only be run once per machine, and again only if you really want to re-initialize the CLI (gcloud).
gcloud auth login ${ACCOUNT} authenticates a (Google) (user or service) account, persists its credentials (on Linux, by default in ${HOME}/.config/gcloud) and renews them as needed.
gcloud auth list lists the accounts that have been authenticated with gcloud auth login. The results show which account is used by default (ACTIVE, marked with *).
Somewhat inconveniently, one way to switch the currently ACTIVE account is to change gcloud's global (machine-wide) configuration using gcloud config set account ${ACCOUNT}.
kubectl
To facilitate using previously authenticated (i.e. gcloud auth login ${ACCOUNT}) credentials with Kubernetes Engine, Google provides the command gcloud container clusters get-credentials. This uses the currently ACTIVE gcloud account to create a kubectl context that joins a Kubernetes Cluster with a User and possibly with a Kubernetes Namespace too. gcloud container clusters get-credentials makes changes to kubectl config (on Linux by default in ${HOME}/.kube/config).
What is a User? See Users in Kubernetes. Kubernetes Engine (via kubectl) wants (OpenID Connect) Tokens. And, conveniently, gcloud can provide these tokens for us.
How? Per previous answer
user:
  auth-provider:
    config:
      access-token: [[redacted]]
      cmd-args: config config-helper --format=json
      cmd-path: path/to/google-cloud-sdk/bin/gcloud
      expiry: "2022-02-22T22:22:22Z"
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
kubectl uses this configuration to invoke gcloud config config-helper --format=json and extracts the access_token and token_expiry from the result. GKE can then use the access_token to authenticate the user and, if necessary, renew the token using Google's token endpoint after expiry (token_expiry).
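To make the extraction concrete, here is a sketch against a fabricated sample of the helper's output (the token value is made up for illustration; the real command is gcloud config config-helper --format=json, whose output has more fields):

```shell
# Fabricated sample of what the config-helper prints.
cat > /tmp/config-helper-sample.json <<'EOF'
{
  "credential": {
    "access_token": "ya29.sample-token",
    "token_expiry": "2022-02-22T22:22:22Z"
  }
}
EOF

# kubectl applies the JSONPaths from the kubeconfig above:
# token-key '{.credential.access_token}', expiry-key '{.credential.token_expiry}'.
# Equivalent extraction of the token:
TOKEN="$(python3 -c 'import json; print(json.load(open("/tmp/config-helper-sample.json"))["credential"]["access_token"])')"
echo "${TOKEN}"   # ya29.sample-token
```

The extracted token is then sent as a bearer token to the GKE API server.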
Scenario
So, how do you combine all of the above?
Authenticate gcloud with all your Google accounts
ACCOUNT="client1@gmail.com"
gcloud auth login ${ACCOUNT}
ACCOUNT="client2@gmail.com"
gcloud auth login ${ACCOUNT} # Last will be the `ACTIVE` account
Enumerate these
gcloud auth list
Yields:
ACTIVE  ACCOUNT
        client1@gmail.com
*       client2@gmail.com # This is ACTIVE

To set the active account, run:
  $ gcloud config set account `ACCOUNT`
Switch between users for gcloud commands
NOTE This doesn't affect kubectl
Either
gcloud config set account client1@gmail.com
gcloud auth list
Yields:
ACTIVE  ACCOUNT
*       client1@gmail.com # This is ACTIVE
        client2@gmail.com
Or you can explicitly add --account=${ACCOUNT} to any gcloud command, e.g.:
# Explicitly unset your account
gcloud config unset account
# This will work and show projects accessible to client1
gcloud projects list --account=client1@gmail.com
# This will work and show projects accessible to client2
gcloud projects list --account=client2@gmail.com
Create kubectl contexts for any or all of your Google accounts (via gcloud)
Either
ACCOUNT="client1@gmail.com"
PROJECT="..." # Project accessible to ${ACCOUNT}
gcloud container clusters get-credentials ${CLUSTER} \
--account=${ACCOUNT} \
--project=${PROJECT} \
...
Or you can use kubectl config set-context directly:
kubectl config set-context ${CONTEXT} \
--cluster=${CLUSTER} \
--user=${USER}
But the gcloud command avoids having to look up the names first with kubectl config get-clusters, kubectl config get-users etc.
NOTE gcloud container clusters get-credentials uses derived names for contexts, and GKE uses derived names for clusters. If you're confident, you can edit the kubectl config directly (or use the kubectl config commands) to rename these cluster, context and user references to suit your needs.
List kubectl contexts
kubectl config get-contexts
Yields:
CURRENT   NAME      CLUSTER     AUTHINFO   NAMESPACE
*         client1   a-cluster   client1
          client2   b-cluster   client2
Switch between kubectl contexts (cluster × user pairs)
NOTE This doesn't affect gcloud
Either
kubectl config use-context ${CONTEXT}
Or you can explicitly add the --context flag to any kubectl command:
# Explicitly unset default|current context
kubectl config unset current-context
# This will work and list deployments accessible to ${CONTEXT}
kubectl get deployments --context=${CONTEXT}
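Since the two switches are independent, a small helper that performs both at once keeps gcloud and kubectl pointed at the same client. A bash sketch; the profile names, accounts and context names are placeholders taken from the examples above, so substitute your own:

```shell
# Map a profile name to its gcloud account and kubectl context.
profile_to_account_context() {
  case "${1}" in
    client1) echo "client1@gmail.com client1" ;;
    client2) echo "client2@gmail.com client2" ;;
    *) echo "unknown profile: ${1}" >&2; return 1 ;;
  esac
}

# Pick a profile and split the pair into two variables.
set -- $(profile_to_account_context client1)
ACCOUNT="${1}"
CONTEXT="${2}"

# Apply both switches so gcloud and kubectl agree:
gcloud config set account "${ACCOUNT}"
kubectl config use-context "${CONTEXT}"
```

Invoking this as a script (e.g. `./switch.sh client2`) with the profile name as `$1` gives you the one-command toggle the earlier question asked about.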

How to change K8S default service account permissions?

My team recently found out that the default service account, managed by K8S and associated by default with pods, had full read and write permissions in the cluster. We could list secrets from the running pods, create new pods, and so on.
We found this strange, as we thought the default service account had no permissions whatsoever or even just read permissions. So we decided to search through the cluster for role bindings or cluster role bindings associated with that service account, but we could find none.
In a K8S cluster, doesn't the default service account have a basic role binding associated with it? Why don't we have any? And if we don't have any, why does the service account have full permissions on the cluster, instead of none at all? Lastly, how can we modify it so it has no permissions in the cluster?
Just to make it clear: we have multiple namespaces in our cluster, each one having its own default service account. However, none of them have any role bindings associated with them and they all have full cluster permissions.
Apparently, by default, kops sets up clusters with the K8S API server authorization mode set to AlwaysAllow, meaning any request, as long as it is successfully authenticated, has global admin permissions.
In order to fix this, we had to change the authorization mode to RBAC and manually tweak the permissions.
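For reference, here is what that change might look like in the kops cluster spec (edited via kops edit cluster). The field names follow kops' authorization settings, so treat this as a sketch:

```yaml
# kops cluster spec excerpt: replace the permissive default
#   authorization:
#     alwaysAllow: {}
# with RBAC:
spec:
  authorization:
    rbac: {}
```

After applying the change (kops update cluster --yes followed by a rolling update), only what RBAC explicitly grants is authorized, so the default service accounts drop to no permissions until you bind roles to them.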
Thank you to @ArthurBusser for pointing it out!
You just have to look through your RoleBindings/ClusterRoleBindings. There's probably a default SA bound somewhere.
Unfortunately there is no built-in way to search for the ClusterRoles of a user, but you can use the script below:
function getRoles() {
  local kind="${1}"
  local name="${2}"
  local namespace="${3:-}"

  kubectl get clusterrolebinding -o json | jq -r "
    .items[]
    |
    select(
      .subjects[]?
      |
      select(
        .kind == \"${kind}\"
        and
        .name == \"${name}\"
        and
        (if .namespace then .namespace else \"\" end) == \"${namespace}\"
      )
    )
    |
    (.roleRef.kind + \"/\" + .roleRef.name)
  "
}
$ getRoles Group system:authenticated
ClusterRole/system:basic-user
ClusterRole/system:discovery
$ getRoles ServiceAccount attachdetach-controller kube-system
ClusterRole/system:controller:attachdetach-controller

How to change users in kubectl?

On my machine I have two kubectl users: my company's account and my personal account. I can confirm that by running kubectl config view.
I'm trying to access my company's cluster, but kubectl is using my personal credentials to authenticate, which causes an error, as expected.
How do I change to my company's account?
Users and clusters are tied to a context and you can change users and clusters by changing the context.
kubectl config use-context my-context-name
The above command sets the current context to my-context-name. Now when kubectl is used, the user and cluster tied to the my-context-name context will be used.
Check the docs for more details and various other available options.

How to easily switch gcloud / kubectl credentials

At work we use Kubernetes hosted in GCP. I also have a side project hosted in my personal GCP account using Google App Engine (deploy using gcloud app deploy).
Often when I try to run a command such as kubectl logs -f service-name, I get an error like "Error from server (Forbidden): pods is forbidden: User "my_personal_email@gmail.com" cannot list resource "pods" in API group "" in the namespace "WORK_NAMESPACE": Required "container.pods.list" permission." and then I have to fight with kubectl for hours trying to get it to work.
Can somebody please break it down for a slow person like me, how gcloud and kubectl work together, and how I can easily switch accounts so I can use gcloud commands for my personal projects and kubectl commands for my work projects? I'm happy to nuke my whole config and start from scratch if that's what it takes. I've found various kubectl and gcloud documentation but it doesn't make much sense or talks in circles.
Edit: this is on Linux.
Had the same problem and doing all of the:
gcloud auth login
gcloud auth list
gcloud config set account
gcloud projects list
didn't help. I knew gcloud switched fine as I was able to list other resources with it directly.
But it seems kubectl can't pick up those changes automatically, as the kubectl/gcloud integration relies on a pre-generated key, which has a one-hour expiration (not sure if that's the default, but it's what it is on my machine right now).
So, on top of setting the right user/project/account with gcloud, you should re-generate the credentials:
gcloud container clusters get-credentials <my-cluster> --zone <clusters-zone>
I'm in the same boat as you - apps deployed in GKE for work and personal projects deployed in my personal GCP account.
gcloud stores a list of logged in accounts that you can switch between to communicate with associated projects. Take a look at these commands:
gcloud auth login
gcloud auth list
gcloud config set account
gcloud projects list
To work with a specific project under one of your accounts you need to set that configuration via gcloud config set project PROJECT_ID
kubectl has a list of "contexts" on your local machine in ~/.kube/config. Your current context is the cluster you want to run commands against - similar to the active account/project in gcloud.
Unlike gcloud, these are cluster-specific and store info on the cluster endpoint, default namespace, the current context, etc. You can have contexts from GCP, AWS, on-prem... anywhere you have a cluster. We have different clusters for dev, qa, and prod (thus different contexts) and switch between them a ton. Take a look at the kubectx project (https://github.com/ahmetb/kubectx) for an easier way to switch between contexts and namespaces.
kubectl will use the keys from whatever GCP account you are logged in with against the cluster that is set as your current context. i.e., from your error above, if your active account for gcloud is your personal but try to list pods from a cluster at work you will get an error. You either need to set the active account/project for gcloud to your work email or change the kubectl context to a cluster that is hosted in your personal GCP account/project.
For me, updating ~/.kube/config and setting the expiry to a date in the past fixed it.
TL;DR
Use gcloud config configurations to manage your separate profiles with Google Cloud Platform.
Add an explicit configuration argument to the cmd-args of your kubeconfig's user to prevent gcloud from producing an access token for an unrelated profile.
users:
- user:
    auth-provider:
      config:
        cmd-args: config --configuration=work config-helper --format=json
Can somebody please break it down for a slow person like me, how gcloud and kubectl work together, and how I can easily switch accounts so I can use gcloud commands for my personal projects and kubectl commands for my work projects?
Sure! By following Google's suggested instructions that lead to running gcloud container clusters get-credentials ... when configuring a kubernetes cluster, you will end up with a section of your kubeconfig that contains information on what kubectl should do to acquire an access token when communicating with a cluster that is configured with a given user. That will look something like this:
users:
- name: gke_project-name_cluster-zone_cluster-name
  user:
    auth-provider:
      config:
        access-token: [[redacted]]
        cmd-args: config config-helper --format=json
        cmd-path: /path/to/google-cloud-sdk/bin/gcloud
        expiry: "2022-12-25T01:02:03Z"
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
Basically, this tells kubectl to run gcloud config config-helper --format=json when it needs a new token, and to parse the access_token out of the response using the JSONPath .credential.access_token. This is the crux of how kubectl communicates with gcloud.
Like you, I use Google Cloud both personally and at work. The issue is that this user configuration block does not account for the fact that it shouldn't use the currently active gcloud account when generating a credential. Even if you aren't actively using kubernetes in one of your two projects, extensions in VS Code, for example, might run a kubectl command while you're working on something in a different project. If this happens after your current token has expired, gcloud config config-helper might get invoked and generate a token using a personal account.
To prevent this from happening, I suggest using gcloud config configurations. Configurations are global profiles that you can quickly switch between. For example, I have two configurations that look like:
> gcloud config configurations list
NAME      IS_ACTIVE  ACCOUNT             PROJECT           COMPUTE_DEFAULT_ZONE       COMPUTE_DEFAULT_REGION
work      False      zev@work.email      work-project      us-west1-a                 us-west1
personal  True       zev@personal.email  personal-project  northamerica-northeast1-a  northamerica-northeast1
With configurations you can alter your kubeconfig to specify which configuration to always use when creating an access token for a given kubernetes user by altering the kubeconfig user's auth-provider.config.cmd-args to include one of your gcloud configurations. With a value like config --configuration=work config-helper --format=json, whenever kubectl needs a new access token, it will use the account from your work configuration regardless of which account is currently active with the gcloud tool.
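Creating such a profile might look like the following sketch (the account and project values mirror the hypothetical listing above, so substitute your own):

```shell
# Create a "work" profile; while it is active, gcloud config set
# writes into it. Values below are placeholders.
WORK_CONFIG="work"
gcloud config configurations create "${WORK_CONFIG}"
gcloud config set account zev@work.email
gcloud config set project work-project

# Switch the gcloud default back to a "personal" profile; a kubeconfig
# user pinned with --configuration=work keeps using the work profile.
gcloud config configurations activate personal
```

The key design point is that --configuration on a gcloud invocation overrides whichever profile is currently active, which is exactly what the cmd-args pinning above relies on.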

AWS EKS: How is the first user added to system:masters group by EKS

EKS documentation says
"When you create an Amazon EKS cluster, the IAM entity (user or role) is automatically granted system:masters permissions in the cluster's RBAC configuration."
But after the EKS cluster creation, if you check the aws-auth ConfigMap, it does NOT have the ARN mapped to the system:masters group. Yet I am able to access the cluster via kubectl. So if the aws-auth (heptio) ConfigMap DOES NOT have my ARN (I was the one who created the EKS cluster) mapped to the system:masters group, how does the heptio aws authenticator authenticate me?
I got to know the answer. Basically, on the heptio server-side component, the static mapping for system:masters is done under /etc/kubernetes/aws-iam-authenticator/ (https://github.com/kubernetes-sigs/aws-iam-authenticator#3-configure-your-api-server-to-talk-to-the-server), which is mounted into the heptio authenticator pod. Since you do not have access to this in EKS, you can't see it. However, if you invoke /authenticate yourself with the pre-signed request, you should get a TokenReviewStatus response from the heptio authenticator showing the mapping from the ARN (of whoever created the cluster) to the system:masters group!
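You can inspect the client half of that handshake yourself by generating the pre-signed STS token kubectl would send. A sketch; the cluster name is a placeholder, and both commands assume a configured AWS CLI / authenticator:

```shell
CLUSTER_NAME="my-eks-cluster"   # placeholder -- use your cluster's name

# Either command produces the token kubectl presents as a bearer token;
# the server-side authenticator verifies it and maps the caller's ARN
# (for the cluster creator, via the static system:masters mapping).
aws eks get-token --cluster-name "${CLUSTER_NAME}"
aws-iam-authenticator token -i "${CLUSTER_NAME}"
```

The JSON output contains a token of the form k8s-aws-v1.<base64>, which decodes to a pre-signed sts:GetCallerIdentity URL.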
When you create your cluster, you also install aws-iam-authenticator,
and since you created the cluster, I'm sure you have ~/.aws/credentials.
If you check the aws-auth ConfigMap, you can see it references aws-iam-authenticator.
You also have a ~/.kube/config file, where you can see that the authenticator is invoked with your AWS profile.
So whenever you run a kubectl command, it reads the kubeconfig file to authenticate with your cluster.