Can't modify (patch) Docker image ID of an existing Kubernetes deployment definition on EKS (AWS) - kubernetes

I created a CodePipeline definition on AWS. At the beginning I build a Docker image and push it to ECR (the container registry on AWS). Once the image has been pushed to the registry, I call a Lambda function that should update the definition of an existing deployment by replacing the Docker image ID in that deployment definition. The Lambda function is implemented in Node.js; it takes the recently pushed image ID and tries to patch the deployment definition. When it tries to patch the deployment, I receive a response like the one below.
body: {
  kind: 'Status',
  apiVersion: 'v1',
  metadata: {},
  status: 'Failure',
  message: 'deployments.apps "arch-app" is forbidden:
    User "system:serviceaccount:arch-user:default" cannot patch resource "deployments"
    in API group "apps" in the namespace "arch-ns"',
  reason: 'Forbidden',
  details: [Object],
  code: 403
}
This user account belongs to AWS IAM and I used it to create the test cluster, so it is the owner of the cluster. Every operation I perform on the cluster uses this account and works fine (I can create resources and apply changes to them without any problems).
I created an additional role in this namespace and a role binding for the AWS user account I use, but it didn't resolve the issue (and was probably redundant). The Lambda function has full permissions to all resources on ECR and EKS.
Does/did anybody have a similar issue with such deployment patching on EKS using a Lambda function?

You can check whether the service account has RBAC permission to patch deployments in the namespace arch-ns:
kubectl auth can-i patch deployment --as=system:serviceaccount:arch-user:default -n arch-ns
If the above command returns no, then add the necessary Role and RoleBinding for the service account.
One thing to notice here is that it's the default service account in the arch-user namespace, but it is trying to perform the operation in a different namespace, arch-ns.
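A minimal Role and RoleBinding granting this could look like the following sketch. The role and binding names are illustrative; the subject matches the service account from the error message, and the Role lives in the target namespace arch-ns while the service account lives in arch-user:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-patcher        # illustrative name
  namespace: arch-ns              # namespace where the deployment lives
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-patcher-binding  # illustrative name
  namespace: arch-ns
subjects:
- kind: ServiceAccount
  name: default
  namespace: arch-user              # SA namespace differs from the Role's namespace
roleRef:
  kind: Role
  name: deployment-patcher
  apiGroup: rbac.authorization.k8s.io
```

After applying this, the kubectl auth can-i check above should return yes.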

Related

How do I grant permission to my Kubernetes cluster to pull images from gcr.io?

In my container registry I have the permission set to Private:
When I create a pod on my cluster I get the pod status ending in ImagePullBackOff, and when I describe the pod I see:
Failed to pull image "gcr.io/REDACTED": rpc error: code = Unknown desc = Error response from daemon: pull access denied for gcr.io/REDACTED, repository does not exist or may require 'docker login': denied: Permission denied for "v11" from request "/v2/REDACTED/manifests/v11".
I am certainly logged in.
docker login
Authenticating with existing credentials...
Login Succeeded
Now if I enable public access (top image) on my Container Repository, things work fine and the pod deploys correctly. But I don't want my repository to be public. What is the correct way to keep my container repository private and still be able to deploy? I'm pretty sure this used to work a couple of weeks ago, unless I messed up something with my service account, although I don't know how to find out which service account is being used for these permissions.
If your GKE version is > 1.15, the Container Registry is in the same project, and GKE uses the default Compute Engine service account (SA), it should work out of the box.
If you are running the registry in another project, or using a different service account, you should grant the SA the right permissions (e.g., roles/artifactregistry.reader).
A step-by-step tutorial, covering all the different cases, is present in the official documentation: https://cloud.google.com/artifact-registry/docs/access-control#gcp
To use gcr.io or any other private artifact registry, you'll need to create a Secret of type docker-registry in the k8s cluster. The secret will contain the credential details of your registry:
kubectl create secret docker-registry <secret-name> \
--docker-server=<server-name> \
--docker-username=<user-name> \
--docker-password=<user-password> \
--docker-email=<user-email-id>
After this, you will need to specify the above secret in the imagePullSecrets property of your manifest so that k8s is able to authenticate and pull the image.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: default
spec:
  containers:
  - name: pod1
    image: gcr.io/pod1:latest
  imagePullSecrets:
  - name: myregistrykey
Check out this tutorial from container-solutions and official k8s doc.
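For GCR specifically, the docker-registry secret above is commonly created from a service-account JSON key, using the special username _json_key. This is a sketch assuming a key file key.json for a service account with registry read access; the secret name and email are illustrative:

```shell
kubectl create secret docker-registry gcr-json-key \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat key.json)" \
  --docker-email=user@example.com
```

Then reference gcr-json-key in the pod's imagePullSecrets as shown above.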
GKE uses the service account attached to the node pools to grant access to the registry; however, you must also make sure that the OAuth scope for your cluster is set to https://www.googleapis.com/auth/devstorage.read_only.

Azure AKS: how to avoid resource creation in "default" namespace during cluster creation

I am trying to create a K8s cluster in Azure AKS, and when the cluster is ready I can see that a couple of resources are created within the default namespace, for example a Secret and a ConfigMap:
As a security recommendation, no k8s resources should be created under the default namespace, so how do I avoid this? They are created by default during cluster creation.
I have found the same question asked here:
User srbose-msft (Microsoft employee) explained the principle of operation very well:
In Kubernetes, a ServiceAccount controller manages the ServiceAccounts inside namespaces, and ensures a ServiceAccount named "default" exists in every active namespace. [Reference]
TokenController runs as part of kube-controller-manager. It acts asynchronously. It watches ServiceAccount creation and creates a corresponding ServiceAccount token Secret to allow API access. [Reference] Thus, the secret for the default ServiceAccount token is also created.
Trusting the custom CA from an application running as a pod usually requires some extra application configuration. You will need to add the CA certificate bundle to the list of CA certificates that the TLS client or server trusts. For example, you would do this with a golang TLS config by parsing the certificate chain and adding the parsed certificates to the RootCAs field in the tls.Config struct.
You can distribute the CA certificate as a ConfigMap that your pods have access to use. [Reference] AKS implements this in all active namespaces through ConfigMaps named kube-root-ca.crt in these namespaces.
You shall also find a Service named kubernetes in the default namespace. It has a ServiceType of ClusterIP and exposes the API Server Endpoint also named kubernetes internally to the cluster in the default namespace.
All the resources mentioned above will be created by design at the time of cluster creation and their creation cannot be prevented. If you try to remove these resources manually, they will be recreated to ensure desired goal state by the kube-controller-manager.
Additionally:
The "Kubernetes clusters should not use the default namespace" Policy is still in Preview. Currently the schema does not explicitly allow Kubernetes resources in the default namespace to be excluded during policy evaluation. However, at the time of writing, the schema allows for labelSelector.matchExpressions[].operator, which can be set to NotIn with appropriate labelSelector.matchExpressions[].values for the Service default/kubernetes with label:
component=apiserver
The default ServiceAccount, the default ServiceAccount token Secret, and the RootCA ConfigMap themselves are not created with any labels and hence cannot be added to this list. If this is impeding your use-case, I would urge you to share your feedback at https://techcommunity.microsoft.com/t5/azure/ct-p/Azure
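As a sketch, the exclusion described above would be expressed with a labelSelector fragment like the following (illustrative only; check the current policy parameter schema before relying on it):

```yaml
labelSelector:
  matchExpressions:
  - key: component      # matches the default/kubernetes Service's label
    operator: NotIn
    values:
    - apiserver
```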

How is Kubernetes RBAC Actually Enforced for Service Accounts?

We're trying to create different Kubernetes secrets and offer access to specific secrets through specific service accounts that are assigned to pods. For example:
Secrets
- User-Service-Secret
- Transaction-Service-Secret
Service Account
- User-Service
- Transaction-Service
Pods
- User-Service-Pod
- Transaction-Service-Pod
The idea is to restrict access to the User-Service-Secret secret to the User-Service service account that is assigned to User-Service-Pod. We can set this all up with the relevant Kubernetes resources (i.e. ServiceAccount, Role, RoleBinding), but we realize that this may not actually be enforced, because Transaction-Service-Pod can just as easily read the User-Service-Secret secret when the pod starts up, even though the service account it's assigned to doesn't have get permission on the User-Service-Secret.
How do we actually enforce the RBAC system?
FYI we are using EKS
First, it is important to distinguish between API access to a secret and consuming the secret as an environment variable or a mounted volume.
TL;DR:
RBAC controls who can access a secret (or any other resource) using K8s API requests.
Namespaces and the service account's secrets attribute control whether a pod can consume a secret as an environment variable or through a volume mount.
API access
RBAC is used to control whether an identity (in your example, the service account) is allowed to access a resource via the K8s API. You control this by creating a RoleBinding (namespaced) or a ClusterRoleBinding (cluster-wide) that binds a Role (namespaced) or a ClusterRole (not namespaced) to your identity (the service account). Then, when you assign the service account to a pod by setting the serviceAccountName attribute, running kubectl get secret in that pod (or the equivalent call from one of the client libraries) has credentials available to make the API request.
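For the example in the question, the API-access side could be restricted with a Role and RoleBinding like the following sketch (all names are illustrative and assume everything lives in the default namespace; resourceNames limits the get to one specific secret):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: user-service-secret-reader   # illustrative name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["user-service-secret"]  # only this secret
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-service-secret-reader-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: user-service
  namespace: default
roleRef:
  kind: Role
  name: user-service-secret-reader
  apiGroup: rbac.authorization.k8s.io
```

Note that, as explained below, this only restricts API requests; it does not stop another pod in the same namespace from mounting the secret.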
Consuming Secrets
This, however, is independent of configuring the pod to consume the secret as an environment variable or a volume mount. If the container spec in a pod spec references the secret, it is made available inside that container; note, per container, not per pod. You can limit which secrets a pod can mount by putting pods in different namespaces, because a pod can only refer to a secret in the same namespace. Additionally, you can use the service account's secrets attribute to limit which secrets a pod with that service account can refer to.
$ kubectl explain sa.secrets
KIND:     ServiceAccount
VERSION:  v1

RESOURCE: secrets <[]Object>

DESCRIPTION:
     Secrets is the list of secrets allowed to be used by pods running using
     this ServiceAccount. More info:
     https://kubernetes.io/docs/concepts/configuration/secret

     ObjectReference contains enough information to let you inspect or modify
     the referred object.
You can learn more about the security implications of Kubernetes secrets in the secret documentation.
The idea is to restrict access to the User-Service-Secret secret to the User-Service service account that is assigned to User-Service-Pod. We can set this all up with the relevant Kubernetes resources (i.e. ServiceAccount, Role, RoleBinding), but we realize that this may not actually be enforced, because Transaction-Service-Pod can just as easily read the User-Service-Secret secret when the pod starts up, even though the service account it's assigned to doesn't have get permission on the User-Service-Secret.
Yes, this is correct.
This is documented for Kubernetes on privilege escalation via pod creation - within a namespace.
Users who have the ability to create pods in a namespace can potentially escalate their privileges within that namespace. They can create pods that access secrets the user cannot themselves read, or that run under a service account with different/greater permissions.
To actually enforce this kind of security policy, you probably have to add an extra layer of policies via an admission controller. The Open Policy Agent, in the form of OPA Gatekeeper, is most likely a good fit for this kind of policy enforcement.

How to restrict default Service account from creating/deleting kubernetes resources

I am using Google cloud's GKE for my kubernetes operations.
I am trying to restrict access for the users that access the clusters using the command line. I have applied IAM roles in Google Cloud and given the view role to the service accounts and users. It all works fine if we use it through the API or with "--as" in kubectl commands, but when someone does a kubectl create of an object without specifying "--as", the object still gets created with the "default" service account of that particular namespace.
To overcome this problem we gave restricted access to the "default" service account, but we were still able to create objects.
$ kubectl auth can-i create deploy --as default -n test-rbac
no
$ kubectl run nginx-test-24 -n test-rbac --image=nginx
deployment.apps "nginx-test-24" created
$ kubectl describe rolebinding default-view -n test-rbac
Name:         default-view
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  view
Subjects:
  Kind            Name     Namespace
  ----            ----     ---------
  ServiceAccount  default  test-rbac
I expect users who access the cluster through the CLI not to be able to create objects if they don't have permissions; even if they don't use the "--as" flag, they should be restricted.
Please take into account that first you need to review the prerequisites to use RBAC in GKE.
Also, please note that IAM roles apply to the entire Google Cloud project and all clusters within that project, while RBAC enables fine-grained authorization at the namespace level. So, with GKE, these approaches to authorization work in parallel.
For more references, please take a look at this document: RBAC in GKE
For all the haters of this question, I wish you could've tried pointing to this:
there is a file at:
~/.config/gcloud/configurations/config_default
In this file there is an option under the [container] section:
use_application_default_credentials
Set it to true.
Here you go, you learnt something new. Enjoy. I wish you could have tried helping instead of down-voting.
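For reference, the relevant fragment of ~/.config/gcloud/configurations/config_default described above would look something like this:

```ini
[container]
use_application_default_credentials = true
```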

Kubernetes RBAC authentication for default user

I am using kops in AWS to create my Kubernetes cluster.
I have created a cluster with RBAC enabled via --authorization=RBAC as described here.
I am trying to use the default service account token to interact with the cluster and getting this error:
Error from server (Forbidden): User "system:serviceaccount:default:default" cannot list pods in the namespace "default". (get pods)
Am I missing a role or binding somewhere?
I think it is not a good idea to give the cluster-admin role to the default service account in the default namespace.
If you give cluster-admin access to the default service account in the default namespace, every app (pod) deployed in the cluster in the default namespace will be able to manipulate the cluster (delete system pods/deployments or do other bad things).
By default, the cluster-admin ClusterRole is bound to the default service account in the kube-system namespace.
You can use that one for interacting with the cluster.
Try giving it the admin role:
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=default:default
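If cluster-admin is more access than the pods in the default namespace should have, a narrower alternative is to bind only the built-in view ClusterRole to that service account. This is a sketch; the binding name is illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default-viewer   # illustrative name
  namespace: default     # grants access only within this namespace
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: view             # built-in read-only role
  apiGroup: rbac.authorization.k8s.io
```

With this in place, the service account can list pods (resolving the Forbidden error) without being able to modify cluster resources.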