What is the difference between a service account and a context in Kubernetes?

What are the practical differences between the two? When should I choose one over the other?
For example, if I'd like to give a developer in my project access to just view the logs of a pod, it seems either a service account or a context could be assigned these permissions via a RoleBinding.

What is a Service Account?
From Docs
User accounts are for humans. Service accounts are for processes,
which run in pods.
User accounts are intended to be global...Service
accounts are namespaced.
Context
A context relates to the kubeconfig file (~/.kube/config). As you know, the kubeconfig file is a YAML file; its contexts section holds your user/token and cluster references. A context is really useful when you have multiple clusters: you can define all your clusters and users in a single kubeconfig file, then switch between them with the help of contexts (example: kubectl config --kubeconfig=config-demo use-context dev-frontend).
From Docs
apiVersion: v1
clusters:
- cluster:
    certificate-authority: fake-ca-file
    server: https://1.2.3.4
  name: development
- cluster:
    insecure-skip-tls-verify: true
    server: https://5.6.7.8
  name: scratch
contexts:
- context:
    cluster: development
    namespace: frontend
    user: developer
  name: dev-frontend
- context:
    cluster: development
    namespace: storage
    user: developer
  name: dev-storage
- context:
    cluster: scratch
    namespace: default
    user: experimenter
  name: exp-scratch
current-context: ""
kind: Config
preferences: {}
users:
- name: developer
  user:
    client-certificate: fake-cert-file
    client-key: fake-key-file
- name: experimenter
  user:
    password: some-password
    username: exp
As you can see above, there are 3 contexts, each holding references to a cluster and a user.
..if I'd like to give a developer in my project access to just view the
logs of a pod. It seems both a service account or a context could be
assigned these permissions via a RoleBinding.
That's correct: you need to create a ServiceAccount, a Role (or ClusterRole), and a RoleBinding (or ClusterRoleBinding), then generate a kubeconfig file that contains the service account token and give it to your developer.
I have a script to generate a kubeconfig file; it takes the service account name as an argument. Feel free to check it out.
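As a rough sketch of what such a script does (the names demo-sa, dev and demo.kubeconfig are assumptions, and kubectl create token requires Kubernetes 1.24 or newer; on older clusters read the token from the service account's auto-created Secret instead):
SA_NAME=demo-sa
NAMESPACE=dev
# Reuse the API server address of the cluster from the current kubeconfig
SERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
TOKEN=$(kubectl -n "$NAMESPACE" create token "$SA_NAME")

# Build a standalone kubeconfig for the developer
kubectl config --kubeconfig=demo.kubeconfig set-cluster my-cluster \
  --server="$SERVER" --insecure-skip-tls-verify=true   # embed the real cluster CA instead in production
kubectl config --kubeconfig=demo.kubeconfig set-credentials "$SA_NAME" --token="$TOKEN"
kubectl config --kubeconfig=demo.kubeconfig set-context "$SA_NAME-context" \
  --cluster=my-cluster --namespace="$NAMESPACE" --user="$SA_NAME"
kubectl config --kubeconfig=demo.kubeconfig use-context "$SA_NAME-context"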
UPDATE:
If you want to create a Role and RoleBinding, this might help.
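For the pod-logs case from the question, a minimal sketch of a Role and RoleBinding that only allow reading pods and their logs (the namespace dev and the names pod-log-reader and demo-sa are assumptions):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-log-reader
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]   # pods/log is the subresource served by kubectl logs
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-log-reader-binding
  namespace: dev
subjects:
- kind: ServiceAccount
  name: demo-sa
  namespace: dev
roleRef:
  kind: Role
  name: pod-log-reader
  apiGroup: rbac.authorization.k8s.io
Bind it to the service account whose token you put in the developer's kubeconfig, and the developer will be able to run kubectl logs but nothing else in that namespace.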

Service Account: A service account represents an identity for processes that run in a pod. When a process is authenticated through a service account, it can contact the API server and access cluster resources. If a pod doesn’t have an assigned service account, it gets the default service account.
When you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace, and you can access the API from inside the pod using the automatically mounted service account credentials.
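To run a pod under a different service account, you set spec.serviceAccountName; a small sketch (the names demo-pod and build-robot and the nginx image are just illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  serviceAccountName: build-robot   # instead of the namespace's "default" service account
  containers:
  - name: app
    image: nginx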
Context: A context is just a set of access parameters that contains a Kubernetes cluster, a user, and a namespace.
The current context is the one that kubectl currently uses by default; all kubectl commands run against that context's cluster (and namespace).
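A few kubectl commands for inspecting and switching contexts (the context name dev-frontend is taken from the example above):
kubectl config get-contexts              # list all contexts in the kubeconfig
kubectl config current-context           # show which context is active
kubectl config use-context dev-frontend  # switch to another context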

They are two different concepts. A context most probably refers to an abstraction relating to the kubectl configuration, where a context can be associated with a service account.
For some reason I assumed a context was just another method of authentication.

Related

How to add more nodes in the self signed certificate of Kubernetes Dashboard

I finally managed to resolve my earlier question about how to add more nodes to the CAs of the master nodes (How to add extra nodes to the certificate-authority-data from a self signed k8s cluster?).
Now the problem I am facing is that I want to use the kubeconfig file, e.g. ~/.kube/config, to access the Dashboard.
I managed to figure it out and ended up with the following kubeconfig:
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://IP:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: REDACTED
The problem that I am having is that I need to use the IP of one of the Master nodes in order to be able to reach the Dashboard. I would like to be able to use the LB IP to reach the Dashboard.
I assume this is related to the same problem that I had before as I can see from the file that the CAs are autogenerated.
args:
  - --auto-generate-certificates
  - etc etc
  ...
Apart from creating the CAs yourself in order to use them, is there any option to pass e.g. IP1 / IP2 etc. in a flag within the file?
Update: I am deploying the Dashboard the recommended way, kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml (Deploying the Dashboard UI). The deployment is on-prem, but I have configured the cluster with an external load balancer (HAProxy) in front of the API, plus Ingress and type: LoadBalancer on the Ingress. Everything seems to be working as expected apart from the Dashboard UI (through the LB IP). I am also using the authentication mode authorization-mode: Node,RBAC (if relevant).
I am accessing the Dashboard through the Ingress over HTTPS, e.g. https://dashboard.example.com.
I get the error Not enough data to create auth info structure. I found the token: xxx solution in this question: Kubernetes Dashboard access using config file Not enough data to create auth info structure.
If I swap the LB IP for one of the master node IPs, then I can access the UI with the kubeconfig file.
I just updated to the latest version of the Dashboard, v2.0.5: it is not working with the kubeconfig button / file, but it works with the token directly (kubernetes/Dashboard-v2.0.5). With the previous version everything works as described above. There are no errors in the pod logs.

Kubernetes read-only context

I have full admin access to a GKE cluster, but I want to be able to create a kubernetes context with just read only privileges. This way I can prevent myself from accidentally messing with the cluster. However, I still want to be able to switch into a mode with full admin access temporarily when I need to make changes (I would probably use cloud shell for this to fully distinguish the two)
I haven't found much documentation about this - it seems I can set up roles based on my email, but not have two roles for one user.
Is there any way to do this? Or any other way to prevent fat-finger deleting prod?
There are a few ways to do this with GKE. A context in your KUBECONFIG consists of a cluster and a user. Since you want to be pointing at the same cluster, it's the user that needs to change. Permissions for what actions users can perform on various resources can be controlled in a couple ways, namely via Cloud IAM policies or via Kubernetes RBAC. The former applies project-wide, so unless you want to create a subject that has read-only access to all clusters in your project, rather than a specific cluster, it's preferable to use the more narrowly-scoped Kubernetes RBAC.
The following types of subjects can authenticate with a GKE cluster and have Kubernetes RBAC policies applied to them (see here):
a registered (human) GCP user
a Kubernetes ServiceAccount
a GCloud IAM service account
a member of a G Suite Google Group
Since you're not going to register another human to accomplish this read-only access pattern and G Suite Google Groups are probably overkill, your options are a Kubernetes ServiceAccount or a GCloud IAM service account. For this answer, we'll go with the latter.
Here are the steps:
Create a GCloud IAM service account in the same project as your Kubernetes cluster.
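For example (the service account name gke-read-only is an assumption):
gcloud iam service-accounts create gke-read-only \
  --display-name="GKE read-only access" \
  --project=<project-id>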
Create a local gcloud configuration to avoid cluttering your default one. Just as you want to create a new KUBECONFIG context rather than modifying the user of your current context, this does the equivalent thing but for gcloud itself rather than kubectl. Run the command gcloud config configurations create <configuration-name>.
Associate this configuration with your GCloud IAM service account: gcloud auth activate-service-account <service_account_email> --key-file=</path/to/service/key.json>.
Add a context and user to your KUBECONFIG file so that you can authenticate to your GKE cluster as this GCloud IAM service account as follows:
contexts:
- ...
- ...
- name: <cluster-name>-read-only
  context:
    cluster: <cluster-name>
    user: <service-account-name>
users:
- ...
- ...
- name: <service-account-name>
  user:
    auth-provider:
      name: gcp
      config:
        cmd-args: config config-helper --format=json --configuration=<configuration-name>
        cmd-path: </path/to/gcloud/cli>
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
Add a ClusterRoleBinding so that this subject has read-only access to the cluster:
$ cat <<EOF | kubectl apply -f -
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: <any-name>
subjects:
- kind: User
  name: <service-account-email>
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
EOF
Try it out:
$ kubectl config use-context <cluster-name>-read-only
$ kubectl get all --all-namespaces
# see all the pods and stuff
$ kubectl create namespace foo
Error from server (Forbidden): namespaces is forbidden: User "<service-account-email>" cannot create resource "namespaces" in API group "" at the cluster scope: Required "container.namespaces.create" permission.
$ kubectl config use-context <original-context>
$ kubectl get all --all-namespaces
# see all the pods and stuff
$ kubectl create namespace foo
namespace/foo created

Can kubectl work from an assumed role from AWS

I'm using Amazon EKS for Kubernetes deployment (initially created by an AWS admin user), and am currently having difficulty using the AWS credentials from AWS STS assume-role to execute kubectl commands against the stack.
I have 2 EKS stacks on 2 different AWS accounts (PROD & NONPROD), and I'm trying to get the CI/CD tool to deploy to both Kubernetes stacks with the credentials provided by AWS STS assume-role, but I'm constantly getting errors such as error: You must be logged in to the server (the server has asked for the client to provide credentials).
I have followed the following link to add additional AWS IAM role to the config:
https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html
But I'm not sure what I'm not doing right.
I ran "aws eks update-kubeconfig" to update the local .kube/config file, contents populated as below:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: [hidden]
    server: https://[hidden].eu-west-1.eks.amazonaws.com
  name: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
contexts:
- context:
    cluster: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
    user: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
  name: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
current-context: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
kind: Config
preferences: {}
users:
- name: arn:aws:eks:eu-west-1:[hidden]:cluster/demo-eks
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - triage-eks
      command: aws-iam-authenticator
and had previously updated Kubernetes aws-auth ConfigMap with an additional role as below:
data:
  mapRoles: |
    - rolearn: arn:aws:iam::[hidden]:role/ci_deployer
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:masters
My CI/CD EC2 instance can assume the ci_deployer role for either AWS accounts.
Expected: I can call "kubectl version" to see both Client and Server versions
Actual: I get "the server has asked for the client to provide credentials"
What is still missing?
After further testing, I can confirm kubectl will only work from an environment (e.g. my CI EC2 instance with an AWS instance role) in the same AWS account where the EKS stack is created. This means that my CI instance from account A will not be able to communicate with EKS in account B, even if the CI instance can assume a role in account B and the account B role is included in the aws-auth ConfigMap of the account B EKS cluster. I hope it's due to missing configuration, as I find it rather undesirable if a CI tool can't deploy to multiple EKS clusters in multiple AWS accounts using role assumption.
Look forward to further #Kubernetes support on this
Can kubectl work from an assumed role from AWS
Yes, it can work. A good way to troubleshoot it is to run from the same command line where you are running kubectl:
$ aws sts get-caller-identity
You can see the Arn for the role (or user) and then make sure there's a trust relationship in IAM between that and the role that you specify here in your kubeconfig:
command: aws-iam-authenticator
args:
- "token"
- "-i"
- "<cluster-name>"
- "-r"
- "<role-you-want-to-assume-arn>"
or with the newer option:
command: aws
args:
- eks
- get-token
- --cluster-name
- <cluster-name>
- --role
- <role-you-want-to-assume-arn>
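For the trust relationship itself, the role you assume needs a trust policy that allows your CI identity (the ARN that aws sts get-caller-identity returns) to call sts:AssumeRole; a minimal sketch with placeholder ARNs:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<account-id>:role/<ci-instance-role>"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}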
Note that if you are using aws eks update-kubeconfig you can pass in the --role-arn flag to generate the above in your kubeconfig.
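For example (region, cluster name and role ARN are placeholders):
aws eks update-kubeconfig \
  --region eu-west-1 \
  --name <cluster-name> \
  --role-arn <role-you-want-to-assume-arn>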
In your case, some things that you can look at:
The credential environment variables may not be set in your CI:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
Your ~/.aws/credentials file may not be populated correctly in your CI. It should look something like this:
[default]
aws_access_key_id = xxxx
aws_secret_access_key = xxxx
Generally, the environment variables take precedence, so you could have different credentials altogether in those environment variables.
It could also be the AWS_PROFILE env variable or the AWS_PROFILE config in ~/.kube/config
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "<cluster-name>"
        - "-r"
        - "<role-arn>"
      env:
        - name: AWS_PROFILE   # <== is this value set?
          value: "<aws-profile>"
Is the profile set correctly under ~/.aws/config?
From Step 1: Create Your Amazon Cluster
When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions). Initially, only that IAM user can make calls to the Kubernetes API server using kubectl.
As you have discovered you can only access the cluster with the same user/role that created the EKS cluster in the first place.
There is a way to add additional roles to the cluster after creation by editing the aws-auth ConfigMap that has been created.
Add User Role
By editing the aws-auth ConfigMap you can add different levels of access based on the role of the user.
First you MUST have the "system:node:{{EC2PrivateDNSName}}" user
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
This is required for Kubernetes to even work, giving the nodes the ability to join the cluster. The "ARN of instance role" is the role that includes the required policies AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, AmazonEC2ContainerRegistryReadOnly etc.
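If those managed policies still need to be attached to the node role, that might look like this (the role name is a placeholder):
aws iam attach-role-policy --role-name <node-instance-role> \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
aws iam attach-role-policy --role-name <node-instance-role> \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
aws iam attach-role-policy --role-name <node-instance-role> \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly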
Below that, still under mapRoles in the aws-auth ConfigMap, add your own role:
    - rolearn: arn:aws:iam::[hidden]:role/ci_deployer
      username: ci-deployer
      groups:
        - system:masters
The 'username' can actually be set to about anything. It appears to only be important if there are custom roles and bindings added to your EKS cluster.
Also, use the command 'aws sts get-caller-identity' to validate that the environment/shell and the AWS credentials are properly configured. When correctly configured, 'get-caller-identity' should return the same role ARN specified in aws-auth.

How do I present a custom GCP service account to kubernetes workloads?

I'm configuring a highly available kubernetes cluster using GKE and terraform. Multiple teams will be running multiple deployments on the cluster and I anticipate most deployments will be in a custom namespace, mainly for isolation reasons.
One of our open questions is how to manage GCP service accounts on the cluster.
I can create the cluster with a custom GCP service account, and adjust the permissions so it can pull images from GCR, log to stackdriver, etc. I think this custom service account will be used by the GKE nodes, instead of the default compute engine service account. Please correct me if I'm wrong on this front!
Each deployment needs to access a different set of GCP resources (Cloud Storage, Datastore, Cloud SQL, etc.) and I'd like each deployment to have its own GCP service account so we can control permissions. I'd also like running pods to have no access to the GCP service account that's available to the node running the pods.
Is that possible?
I've considered some options, but I'm not confident on the feasibility or desirability:
A GCP service account for a deployment could be added to the cluster as a Kubernetes secret, deployments could mount it as a file, and set GOOGLE_APPLICATION_CREDENTIALS to point to it
Maybe access to the metadata API for the instance can be denied to pods, or can the service account returned by the metadata API be changed?
Maybe there's a GKE (or kubernetes) native way to control the service account presented to pods?
You are on the right track. GCP service accounts can be used in GKE to give Pods permissions on GCP resources.
Create an account:
gcloud iam service-accounts create ${SERVICE_ACCOUNT_NAME}
Add IAM permissions to the service account:
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member="serviceAccount:${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role="roles/${ROLE_ID}"
Generate a JSON file for the service account:
gcloud iam service-accounts keys create \
  --iam-account "${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
  service-account.json
Create a secret with that JSON:
kubectl create secret generic echo --from-file service-account.json
Create a deployment for your application using that secret:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      name: echo
      labels:
        app: echo   # must match the selector above
    spec:
      containers:
      - name: echo
        image: "gcr.io/hightowerlabs/echo"
        env:
        - name: "GOOGLE_APPLICATION_CREDENTIALS"
          value: "/var/run/secret/cloud.google.com/service-account.json"
        - name: "PROJECT_ID"
          valueFrom:
            configMapKeyRef:
              name: echo
              key: project-id
        - name: "TOPIC"
          value: "echo"
        volumeMounts:
        - name: "service-account"
          mountPath: "/var/run/secret/cloud.google.com"
      volumes:
      - name: "service-account"
        secret:
          secretName: "echo"
If you want to use different permissions for separate deployments, you need to create several GCP service accounts with different permissions, generate JSON keys for them, and assign them to the deployments according to your plans. Pods will have access according to the mounted service account, not to the service account assigned to the node.
For more information, you can look through the links:
Authenticating to Cloud Platform with Service Accounts
Google Cloud Service Accounts with Google Container Engine (GKE) - Tutorial

kubernetes: Authentication to ui with default config file fails

I have successfully set up a kubernetes cluster on AWS using kops and the following commands:
$ kops create cluster --name=<my_cluster_name> --state=s3://<my-state-bucket> --zones=eu-west-1a --node-count=2 --node-size=t2.micro --master-size=t2.small --dns-zone=<my-cluster-dns>
$ kops update cluster <my-cluster-name> --yes
When accessing the dashboard, I am prompted to either enter a token or
Please select the kubeconfig file that you have created to configure access to the cluster.
When creating the cluster, ~/.kube/config was created that has the following form:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <some_key_or_token_here>
    server: https://api.<my_cluster_url>
  name: <my_cluster_name>
contexts:
- context:
    cluster: <my_cluster_name>
    user: <my_cluster_name>
  name: <my_cluster_name>
current-context: <my_cluster_name>
kind: Config
preferences: {}
users:
- name: <my_cluster_name>
  user:
    as-user-extra: {}
    client-certificate-data: <some_key_or_certificate>
    client-key-data: <some_key_or_certificate>
    password: <password>
    username: admin
- name: <my-cluster-url>-basic-auth
  user:
    as-user-extra: {}
    password: <password>
    username: admin
Why, when pointing the Kubernetes UI to the above file, do I get
Authentication failed. Please try again.
I tried the same and had the same problem. It turns out that kops sets up certificate-based authentication, and certificate-based authentication can't be used with the web UI. Instead, I tried token-based authentication. Next question: where do you find the token?
kubectl describe secret
This will show you the default token for the cluster. I assume this is very bad security practice but if you're using the UI to improve your learning and understanding then it will get you moving in the right direction.
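For example, to print just the token of the default service account in the current namespace (a sketch; the default-token-* Secret naming is the pre-1.24 convention):
kubectl get secret $(kubectl get secrets | grep default-token | awk '{print $1}') \
  -o jsonpath='{.data.token}' | base64 --decode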
This Dashboard wiki page is about authentication. That's where I discovered how to do it.
In order to enable basic auth in the Dashboard, the --authentication-mode=basic flag has to be provided. By default it is set to --authentication-mode=token.
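That flag goes on the kubernetes-dashboard container in the Dashboard Deployment; a sketch of the relevant args (the image tag is illustrative, the rest of the stock manifest stays unchanged):
containers:
- name: kubernetes-dashboard
  image: kubernetesui/dashboard:v2.0.0
  args:
    - --auto-generate-certificates
    - --authentication-mode=basic   # default is "token"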
To get the token or understand more about access control please refer here