I have one master node, and it looks like everything is OK. But when I open the dashboard I see many errors.
The system:anonymous user is not authorized to perform list actions within your cluster.
You can solve this case in 2 ways:
1 - Using RBAC Authorization, as described in the Kubernetes documentation here.
2 - A NOT recommended way is explained in this GitHub thread:
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
At the end of the GitHub thread, the Kubernetes team explains why this is not a recommended approach:
[...] granting anonymous clients full access to the Kubernetes API [...] should not be considered as solutions to permission issues
But if you are not in production and would just like to check whether it works first, that can help.
You can double-check and create a proper ClusterRole and ClusterRoleBinding afterwards.
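For reference, a minimal sketch of that proper route (the dashboard-admin ServiceAccount name and the cluster-admin binding are illustrative choices, not taken from the thread): create a dedicated identity for the dashboard and log in with its token instead of leaving system:anonymous open.
```
# Illustrative names; pick a narrower ClusterRole than cluster-admin
# if you only need read access to the dashboard.
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:dashboard-admin
# Print a login token for the dashboard (kubectl >= 1.24; on older
# clusters, read the token from the ServiceAccount's secret instead).
kubectl -n kube-system create token dashboard-admin
```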
For example: system:masters, system:anonymous, system:unauthenticated.
Is there a way to list all the built-in system groups (the ones not created externally), e.g. with a kubectl command or from a documented list?
I searched the Kubernetes documentation but didn't find such a list or a way to get one.
There is no built-in command to list all the default user groups in a Kubernetes cluster.
However, you can work around this in a couple of ways:
You can create your own script (e.g. in Bash) based on kubectl output, such as kubectl get clusterrole or kubectl get clusterrolebindings; see the sketch below.
You can try installing a plugin. The rakkess plugin could help you:
Have you ever wondered what access rights you have on a provided kubernetes cluster? For single resources you can use kubectl auth can-i list deployments, but maybe you are looking for a complete overview? This is what rakkess is for. It lists access rights for the current user and all server resources, similar to kubectl auth can-i --list.
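As a rough illustration of the script idea (this assumes kubectl and jq are available, and inspects ClusterRoleBindings rather than ClusterRoles, since group names show up in the binding subjects):
```
# List the system: groups referenced by the cluster's ClusterRoleBindings.
kubectl get clusterrolebindings -o json \
  | jq -r '.items[].subjects[]? | select(.kind == "Group") | .name' \
  | grep '^system:' \
  | sort -u
```
This is only a workaround: it lists the groups that some binding refers to, not an authoritative catalogue of every group the authenticator can assign.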
See also more information about:
kubelet authentication / authorization
anonymous requests
I understand that I can copy my .kube/config to my CI/CD server, or just use a ServiceAccount to allow my CD pipeline to use Helm for deployment.
However, what if I want to allow deployment via Helm, but restrict a lot of other access, like:
reading data from pods or a deployed database
port-forward services
... so basically accessing all data in the cluster, except for stateless Docker containers deployed via Helm.
Would it be possible to create a new ClusterRole with limited rights? What verbs in a ClusterRole does Helm need at least to function properly?
What rights does Helm need at a minimum?
It comes down to what your Helm chart is doing to Kubernetes.
ClusterRoles can be bound within a particular namespace by referencing them in a RoleBinding. The admin, edit and view default ClusterRoles are commonly used in this manner; for more detailed info see this description. For example, edit is a default ClusterRole which allows read/write access to most objects in a namespace, but does not allow viewing or modifying Roles or RoleBindings. Granting a user cluster-admin at the namespace scope (through a RoleBinding) provides full control over every resource in the namespace, including the namespace itself.
You can also restrict a user's access to a particular namespace by using either the edit or the admin role. See this example.
The permissions strategy could also depend on what objects will be created by the installation: the user will need full access to the API objects that the Helm installations will manage. Using RBAC Authorization explains this concept in more detail, with several examples you could use as a reference. Also, this source would be helpful.
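As a hedged sketch of the namespace-scoped approach above (the my-app namespace and helm-deployer ServiceAccount are made-up names), you could bind the built-in edit ClusterRole to the pipeline's ServiceAccount with a RoleBinding and then verify the boundaries with kubectl auth can-i:
```
# Illustrative names only.
kubectl create namespace my-app
kubectl create serviceaccount helm-deployer -n my-app
# Bind the default "edit" ClusterRole at namespace scope only.
kubectl create rolebinding helm-deployer-edit \
  --clusterrole=edit \
  --serviceaccount=my-app:helm-deployer \
  -n my-app
# Check what the pipeline identity can and cannot do.
kubectl auth can-i create deployments -n my-app \
  --as=system:serviceaccount:my-app:helm-deployer      # yes
kubectl auth can-i get secrets -n kube-system \
  --as=system:serviceaccount:my-app:helm-deployer      # no
```
If edit is still broader than you want (it can read pod logs, for instance), swap it for a custom Role that only covers the API objects your charts actually create.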
I am studying about CI/CD on AWS (CodePipeline/CodeBuild/CodeDeploy) and found it to be a very good tool for managing a pipeline on the cloud with everything managed (don't even need to install Jenkins on EC2).
I am now reading about container building and deployment. For the build phase, CodeBuild supports building container images. For the deploy phase, while I could find a CodeDeploy solution for an ECS cluster, it seems there is no direct CodeDeploy solution for EKS (kindly correct me if I am wrong).
May I know if there is a solution to integrate an EKS cluster (i.e. the deploy phase can fetch the docker image from ECR or Docker Hub and deploy to EKS)? I have come across some ideas using lambda functions to trigger the cluster to perform a rolling update of the container image, but I could not find a step-by-step guide on this.
=========================
(Update 17 Sep 2020)
I somehow managed to create a lambda function that triggers EKS to perform a rolling update of the k8s deployment. Thanks to Prashanna for the source base.
Just want to share the key setups in the process.
(1) Update the lambda execution role to include permission to describe EKS clusters
Create a policy with describe EKS cluster access, and attach to the role:
Policy snippet (the Resource is shown as "*" here; you can scope it down to your cluster's ARN):
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "eks:Describe*",
      "Resource": "*"
    }
  ]
}
```
Or you can create an "EKSFullAccess" policy and attach it to the lambda execution role.
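For example (the role and policy names here are placeholders), the snippet above could be attached to the execution role as an inline policy with the AWS CLI:
```
# Assumes the JSON above was saved as eks-describe.json and that the
# Lambda's execution role is named my-lambda-exec-role.
aws iam put-role-policy \
  --role-name my-lambda-exec-role \
  --policy-name eks-describe \
  --policy-document file://eks-describe.json
```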
(2) Update the aws-auth ConfigMap in k8s, adding the lambda execution role's ARN to the mapRoles section. The corresponding k8s role/group should be one that has permission to update the container images used by the k8s deployment (say system:masters).
You can edit the map with command like below:
kubectl edit -n kube-system configmap/aws-auth
You don't have to add/update another ConfigMap even if your deployment is in another namespace; the mapping will take effect there as well.
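If you prefer not to hand-edit the ConfigMap, eksctl (an alternative tool, not part of the steps above) can add the same mapRoles entry; the cluster name, role ARN, username and group below are placeholders:
```
eksctl create iamidentitymapping \
  --cluster my-cluster \
  --arn arn:aws:iam::<account-id>:role/<lambda-execution-role> \
  --username lambda-deployer \
  --group system:masters
```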
Sample lambda function call request and response:
GitLab provides built-in integration with EKS and deployment with the help of Helm charts. If you plan to use other tools, using an AWS lambda to update the image is the best bet!
I've added my GitHub project.
Set up a lambda with the code below and give this lambda RBAC access in your EKS cluster. Try invoking the lambda by passing the required information like namespace, deployment, image, etc.
Lambda for Kubernetes image update
The lambda's execution role requires the eks:DescribeCluster permission.
The lambda must also be granted at least an RBAC role in the EKS cluster's RBAC setup that allows updating the deployment image.
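If you'd rather not map the lambda to system:masters, a tighter, purely illustrative setup could look like this (the default namespace, the Role name and the lambda-deployer username mapped in aws-auth are all assumptions):
```
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-image-updater
  namespace: default
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "patch"]     # enough for a "set image" rolling update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-image-updater
  namespace: default
subjects:
  - kind: User
    name: lambda-deployer               # the username mapped in aws-auth
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployment-image-updater
  apiGroup: rbac.authorization.k8s.io
EOF
```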
Since there's no built-in CI/CD for EKS at the moment, this is going to be a showcase of success/failure stories of 3rd-party CI/CDs in EKS :) My take: https://github.com/fluxcd/flux
Pros:
Quick to set up initially (until you get into multiple teams/environments)
Tracks and deploys image releases out of box
Possibility to split what to auto-deploy in dev/prod using regex. E.g. all versions to dev, only minor to prod. Or separate tag prefixes for dev/prod.
All state is in git - a good practice to start with
Cons:
Getting complex for further pipeline expansion, e.g. blue-green, canary, auto-rollbacks, etc.
The dashboard is proprietary (a Weaveworks product)
Not for on-demand parametrized job runs like traditional CIs.
Setup:
Set up an automated image build (it looks like you've already figured that out)
Set up flux and helm-operator in the cluster, point them at your "gitops repo"
For each app, create a HelmRelease object that describes a regex/glob of image tags to track (see the example after this list)
Done. A newly published image tag that matches the regex will be auto-deployed to the cluster and the new version is committed to the gitops repo.
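To make step 3 concrete, here is roughly what such a HelmRelease looked like in the Flux v1 / helm-operator era. All names are invented and the annotation keys changed between versions, so treat it as a shape sketch rather than something to copy verbatim:
```
# Commit something like this to the gitops repo that flux watches.
cat > releases/my-app.yaml <<'EOF'
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: my-app
  namespace: default
  annotations:
    fluxcd.io/automated: "true"                 # let flux bump the tag
    filter.fluxcd.io/chart-image: glob:dev-*    # which tags to track
spec:
  releaseName: my-app
  chart:
    repository: https://charts.example.com/
    name: my-app
    version: 1.0.0
  values:
    image:
      repository: registry.example.com/my-app
      tag: dev-initial
EOF
```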
I'm setting up Spinnaker in K8s with aws-ecr. My setup and steps are:
on AWS side:
Added policies ecr-pull, ecr-push, and ecr-generate-token
Attached the policy to a role
Spinnaker setup:
Modified values.yaml with the settings below:
```
accounts:
  - name: my-ecr
    address: https://123456xxx.dkr.ecr.my-region.amazonaws.com
    repositories:
      - 123456xxx.dkr.ecr..amazonaws.com/spinnaker-test-project
```
Annotated the clouddriver deployment to use the created role (using the IAM role in a pod by referencing the role name in an annotation on the pod specification)
But it doesn't work, and the error on the clouddriver side is:
.d.r.p.a.DockerRegistryImageCachingAgent : Could not load tags for 1234xxxxx.dkr.ecr.<my_region>.amazonaws.com/spinnaker-test-project in https://1234xxxxx.dkr.ecr.<my_region>.amazonaws.com
I would like to get some help or advice on what I'm missing, thank you.
I got the answer from the official Spinnaker Slack channel: adding an IAM policy to the clouddriver pod unfortunately won't work, since it uses the Docker client instead of the AWS client. The workaround to make it work can be found here.
Note: ECR support is currently broken in Halyard. This might get fixed in the future after Halyard migrates from the Kubernetes provider v1 to v2, or earlier, so please verify with the community or the docs.
Kubernetes 1.6+ sees a lot of changes revolving around RBAC and ABAC. However, what is a little quirky is not being able to access the dashboard etc. by default, as was previously possible.
Access will result in
User "system:anonymous" cannot proxy services in the namespace "kube-system".: "No policy matched."
There is plenty of documentation in the k8s docs, but it doesn't really state how, as the creator of a cluster, to practically gain access and become cluster-admin.
What is a practical way to authenticate myself as cluster-admin?
By far the easiest method is to use the credentials from /etc/kubernetes/admin.conf (this is on your master if you used kubeadm). Run kubectl proxy --kubeconfig=admin.conf on your client and then you can visit http://127.0.0.1:8001/ui from your browser.
You might need to change the master address in admin.conf after you copy it to your client machine.
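A minimal sketch of that flow, assuming a kubeadm-built cluster and SSH access to the master (the IP and paths are examples):
```
# Copy the admin kubeconfig from the master to the client.
scp root@<master-ip>:/etc/kubernetes/admin.conf .
# If admin.conf points at an address the client cannot reach, repoint it
# (kubeadm names the cluster "kubernetes" by default).
kubectl --kubeconfig ./admin.conf config set-cluster kubernetes \
  --server=https://<master-ip>:6443
# Proxy the API locally and open the dashboard.
kubectl proxy --kubeconfig ./admin.conf
# then visit http://127.0.0.1:8001/ui
```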